Refer to the exhibit.
Which two options describe how the Update Asset and Identity Database playbook is configured? (Choose two.)
Understanding the Playbook Configuration:
The playbook named 'Update Asset and Identity Database' is designed to update the FortiAnalyzer Asset and Identity database with endpoint and user information.
The exhibit shows the playbook with three main components: ON_SCHEDULE STARTER, GET_ENDPOINTS, and UPDATE_ASSET_AND_IDENTITY.
Analyzing the Components:
ON_SCHEDULE STARTER: This component indicates that the playbook is triggered on a schedule, not on-demand.
GET_ENDPOINTS: This action retrieves information about endpoints, suggesting it interacts with an endpoint management system.
UPDATE_ASSET_AND_IDENTITY: This action updates the FortiAnalyzer Asset and Identity database with the retrieved information.
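The three components above can be sketched as ordinary code. The sketch below is a hypothetical illustration of the playbook's flow only; none of the function or field names are real FortiAnalyzer or FortiClient EMS APIs.

```python
# Hypothetical sketch of the playbook flow; all names are invented
# for illustration, not actual FortiAnalyzer/EMS APIs.

def get_endpoints():
    # Stand-in for the GET_ENDPOINTS action, which would query the
    # endpoint management system (e.g., FortiClient EMS) via a connector.
    return [
        {"host": "WIN10-01", "user": "alice", "ip": "10.0.1.10"},
        {"host": "WIN10-02", "user": "bob", "ip": "10.0.1.11"},
    ]

def update_asset_and_identity(db, endpoints):
    # Stand-in for UPDATE_ASSET_AND_IDENTITY: upsert each endpoint
    # record into the asset/identity table, keyed by hostname.
    for ep in endpoints:
        db[ep["host"]] = {"user": ep["user"], "ip": ep["ip"]}
    return db

def run_playbook(db):
    # The ON_SCHEDULE starter would invoke this on a timer, not on demand.
    return update_asset_and_identity(db, get_endpoints())

asset_db = run_playbook({})
```

The key point the sketch captures: the run is timer-driven, and the database update is fed directly by the endpoint query.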
Evaluating the Options:
Option A: The actions shown in the playbook are standard local actions that can be executed by the FortiAnalyzer, indicating the use of a local connector.
Option B: There is no indication that the playbook uses a FortiMail connector, as the tasks involve endpoint and identity management, not email.
Option C: The playbook uses an ON_SCHEDULE trigger, which rules out an on-demand trigger.
Option D: The action 'GET_ENDPOINTS' suggests integration with an endpoint management system, likely FortiClient EMS, which manages endpoints and retrieves information from them.
Conclusion:
The playbook is configured to use a local connector for its actions.
It interacts with FortiClient EMS to get endpoint information and update the FortiAnalyzer Asset and Identity database.
Fortinet Documentation on Playbook Actions and Connectors.
FortiAnalyzer and FortiClient EMS Integration Guides.
Refer to the exhibits.
What can you conclude from analyzing the data using the threat hunting module?
Understanding the Threat Hunting Data:
The Threat Hunting Monitor in the provided exhibits shows various application services, their usage counts, and data metrics such as sent bytes, average sent bytes, and maximum sent bytes.
The second part of the exhibit lists connection attempts from a specific source IP (10.0.1.10) to a destination IP (8.8.8.8), with repeated 'Connection Failed' messages.
Analyzing the Application Services:
DNS is the top application service with a significantly high count (251,400) and notable sent bytes (9.1 MB).
This volume of DNS traffic far exceeds what routine name resolution typically generates and can indicate DNS tunneling.
DNS Tunneling:
DNS tunneling is a technique attackers use to bypass security controls by encoding data within DNS queries and responses, allowing them to exfiltrate data from the local network without detection.
The high volume of DNS traffic, combined with the detailed byte metrics, suggests that DNS tunneling may be in use.
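As a rough illustration of how such traffic can be spotted, the sketch below flags DNS query names whose leading label is unusually long or high-entropy, two common tunneling heuristics. The function names and thresholds are illustrative assumptions, not part of any Fortinet product.

```python
import math
from collections import Counter

def entropy(s):
    # Shannon entropy in bits per character; encoded payloads score high.
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunneling(qname, max_label_len=40, entropy_threshold=3.5):
    # Heuristic only: long, near-random leading labels are typical of
    # data encoded into DNS query names. Thresholds are illustrative.
    label = qname.split(".")[0]
    return len(label) > max_label_len or entropy(label) > entropy_threshold

benign = looks_like_tunneling("www.example.com")
suspicious = looks_like_tunneling("aGVsbG8gd29ybGQgZXhmaWx0cmF0ZWQgZGF0YQ.evil.example")
```

In practice, per-host query volume (like the 251,400 count in the exhibit) would be combined with per-query heuristics like these.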
Connection Failures to 8.8.8.8:
The repeated connection attempts from the source IP (10.0.1.10) to the destination IP (8.8.8.8) with connection failures can indicate an attempt to communicate with an external server.
Google Public DNS (8.8.8.8) is a common destination for DNS tunneling traffic because it is reliable and reachable from almost anywhere.
Conclusion:
Given the significant DNS traffic and the nature of the connection attempts, it is reasonable to conclude that DNS tunneling is being used to extract confidential data from the local network.
Why Other Options are Less Likely:
Spearphishing (A): There is no evidence from the provided data that points to spearphishing attempts, such as email logs or phishing indicators.
Reconnaissance (C): The data does not indicate typical reconnaissance activities, such as scanning or probing mail servers.
FTP C&C (D): There is no evidence of FTP traffic or command-and-control communications using FTP in the provided data.
SANS Institute: 'DNS Tunneling: How to Detect Data Exfiltration and Tunneling Through DNS Queries'
OWASP: 'DNS Tunneling'
By analyzing the provided threat hunting data, it is evident that DNS tunneling is being used to exfiltrate data, indicating a sophisticated method of extracting confidential information from the network.
Which three end user logs does FortiAnalyzer use to identify possible IOC compromised hosts? (Choose three.)
Overview of Indicators of Compromise (IoCs): IoCs are pieces of evidence that suggest a system may have been compromised, such as unusual network traffic patterns, the presence of known malicious files, or other suspicious activities.
FortiAnalyzer's Role: FortiAnalyzer aggregates logs from various Fortinet devices to provide comprehensive visibility and analysis of network events. It uses these logs to identify potential IoCs and compromised hosts.
Relevant Log Types:
DNS Filter Logs:
DNS requests are a common vector for malware communication. Analyzing DNS filter logs helps in identifying suspicious domain queries, which can indicate malware attempting to communicate with command and control (C2) servers.
IPS Logs:
Intrusion Prevention System (IPS) logs detect and block exploit attempts and malicious activities. These logs are critical for identifying compromised hosts based on detected intrusion attempts or behaviors matching known attack patterns.
Web Filter Logs:
Web filtering logs monitor and control access to web content. These logs can reveal access to malicious websites, download of malware, or other web-based threats, indicating a compromised host.
Why Not Other Log Types:
Email Filter Logs:
While important for detecting phishing and email-based threats, they are not as directly indicative of compromised hosts as DNS, IPS, and Web filter logs.
Application Filter Logs:
These logs control application usage but are less likely to directly indicate compromised hosts compared to the selected logs.
Detailed Process:
Step 1: FortiAnalyzer collects logs from FortiGate and other Fortinet devices.
Step 2: DNS filter logs are analyzed to detect unusual or malicious domain queries.
Step 3: IPS logs are reviewed for any intrusion attempts or suspicious activities.
Step 4: Web filter logs are checked for access to malicious websites or downloads.
Step 5: FortiAnalyzer correlates the information from these logs to identify potential IoCs and compromised hosts.
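The five steps above amount to a simple correlation across log sources. The sketch below shows the idea with invented field names and thresholds; it is not the actual FortiAnalyzer IoC engine.

```python
def flag_compromised_hosts(dns_logs, ips_logs, web_logs):
    # Collect which log types implicate each host, then treat a host
    # seen in two or more sources as a stronger IoC candidate.
    # Field names ("verdict", "severity", "category") are illustrative.
    suspects = {}
    for log in dns_logs:
        if log.get("verdict") == "blocked":        # suspicious domain query
            suspects.setdefault(log["host"], set()).add("dns")
    for log in ips_logs:
        if log.get("severity") in ("high", "critical"):
            suspects.setdefault(log["host"], set()).add("ips")
    for log in web_logs:
        if log.get("category") == "malicious":     # bad site or download
            suspects.setdefault(log["host"], set()).add("web")
    return {h for h, sources in suspects.items() if len(sources) >= 2}

hosts = flag_compromised_hosts(
    dns_logs=[{"host": "10.0.1.10", "verdict": "blocked"}],
    ips_logs=[{"host": "10.0.1.10", "severity": "high"}],
    web_logs=[{"host": "10.0.1.99", "category": "news"}],
)
```

Here 10.0.1.10 is flagged because it appears in both the DNS filter and IPS logs, while 10.0.1.99 is not.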
Fortinet Documentation: FortiOS DNS Filter, IPS, and Web Filter administration guides.
FortiAnalyzer Administration Guide: Details on log analysis and IoC identification.
By using DNS filter logs, IPS logs, and Web filter logs, FortiAnalyzer effectively identifies possible compromised hosts, providing critical insights for threat detection and response.
When does FortiAnalyzer generate an event?
Understanding Event Generation in FortiAnalyzer:
FortiAnalyzer generates events based on predefined rules and conditions to help in monitoring and responding to security incidents.
Analyzing the Options:
Option A: Data selectors filter logs based on specific criteria but do not generate events on their own.
Option B: Connectors facilitate integrations with other systems but do not generate events based on log matches.
Option C: Event handlers are configured with rules that define the conditions under which events are generated. When a log matches a rule in an event handler, FortiAnalyzer generates an event.
Option D: Tasks in playbooks execute actions based on predefined workflows but do not directly generate events based on log matches.
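The distinction in option C can be reduced to a few lines: an event exists only when a log record satisfies a rule in an event handler. The structures below are invented for illustration and do not mirror FortiAnalyzer's internal schema.

```python
def event_from_log(log, handlers):
    # An event is generated only when the log matches a rule in an
    # event handler; otherwise no event is produced.
    for handler in handlers:
        for rule in handler["rules"]:
            if all(log.get(k) == v for k, v in rule["match"].items()):
                return {"event": rule["event_name"], "handler": handler["name"]}
    return None

# Hypothetical handler: generate an event for blocked DNS filter logs.
handlers = [{
    "name": "Compromised-Host-Handler",
    "rules": [{"match": {"logtype": "dnsfilter", "action": "blocked"},
               "event_name": "Malicious DNS Query"}],
}]

hit = event_from_log({"logtype": "dnsfilter", "action": "blocked"}, handlers)
miss = event_from_log({"logtype": "traffic", "action": "accept"}, handlers)
```

The non-matching log produces nothing, which is exactly why data selectors, connectors, and playbook tasks (options A, B, D) are not the event-generating mechanism.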
Conclusion:
FortiAnalyzer generates an event when a log matches a rule in an event handler.
Fortinet Documentation on Event Handlers and Event Generation in FortiAnalyzer.
Best Practices for Configuring Event Handlers in FortiAnalyzer.
Refer to the exhibit.
You are tasked with reviewing a new FortiAnalyzer deployment in a network with multiple registered logging devices. There is only one FortiAnalyzer in the topology.
Which potential problem do you observe?
Understanding FortiAnalyzer Data Policy and Disk Utilization:
FortiAnalyzer uses data policies to manage log storage, retention, and disk utilization.
The Data Policy section indicates how long logs are kept for analytics and archive purposes.
The Disk Utilization section specifies the allocated disk space and the proportions used for analytics and archive, as well as when alerts should be triggered based on disk usage.
Analyzing the Provided Exhibit:
Keep Logs for Analytics: 60 Days
Keep Logs for Archive: 120 Days
Disk Allocation: 300 GB (with a maximum of 441 GB available)
Analytics-to-Archive Ratio: 30% : 70%
Alert and Delete When Usage Reaches: 90%
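The ratio and threshold above translate into concrete sizes. A quick check (plain arithmetic, not output from any Fortinet tool):

```python
total_gb = 300                        # allocated disk space from the exhibit

analytics_gb = total_gb * 30 // 100   # 30% for analytics -> 90 GB
archive_gb = total_gb * 70 // 100     # 70% for archive   -> 210 GB
alert_gb = total_gb * 90 // 100       # alert/delete threshold -> 270 GB
```

So only 90 GB of the 300 GB is available for analytics, which frames the ratio problem discussed below.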
Potential Problems Identification:
Disk Space Allocation: The allocated disk space is 300 GB out of a possible 441 GB, which could be insufficient if the log volume is high, but it is not the primary concern based on the given data.
Analytics-to-Archive Ratio: The ratio of 30% for analytics and 70% for archive is unconventional. Typically, a higher percentage is allocated for analytics since real-time or recent data analysis is often prioritized. A common configuration might be a 70% analytics and 30% archive ratio. The misconfigured ratio can lead to insufficient space for analytics, causing issues with real-time monitoring and analysis.
Retention Periods: While the retention periods could be seen as lengthy, they are not necessarily indicative of a problem without knowing the specific log volume and compliance requirements. The length of these periods can vary based on organizational needs and legal requirements.
Conclusion:
Based on the analysis, the primary issue observed is the analytics-to-archive ratio being misconfigured. This misconfiguration can significantly impact the effectiveness of the FortiAnalyzer in real-time log analysis, potentially leading to delayed threat detection and response.
Fortinet Documentation on FortiAnalyzer Data Policies and Disk Management.
Best Practices for FortiAnalyzer Log Management and Disk Utilization.