In Splunk IT Service Intelligence (ITSI), notable events are created and managed within the Event Analytics framework. These notable events are stored in the itsi_tracked_alerts index, which is specifically designed to hold the active notable events generated by ITSI's correlation searches based on the conditions defined for services and their KPIs. Notable events are essentially alerts or issues that need to be investigated and resolved, and the itsi_tracked_alerts index enables efficient storage, querying, and management of these events, supporting ITSI's event management and review process. The other options, such as itsi_notable_archive and itsi_notable_audit, serve different purposes: archiving resolved notable events and auditing changes made to notable events, respectively. Therefore, active notable events are stored in the itsi_tracked_alerts index.
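As a quick illustration, active notable events can be inspected with an ordinary search against that index. This is a minimal sketch; the grouping by source assumes, as is typical for ITSI tracked alerts, that the source field reflects the correlation search that generated the event, so confirm the field names in your own deployment:

    index=itsi_tracked_alerts earliest=-24h
    | stats count BY source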
Question 2
Which of the following is a recommended best practice for ITSI installation?
Options:
A.
ITSI should not be installed on search heads that have Enterprise Security installed.
B.
Before installing ITSI, make sure the Common Information Model (CIM) is installed.
C.
Install the Machine Learning Toolkit app if anomaly detection must be configured.
D.
Install ITSI on one search head in a search head cluster and migrate the configuration bundle to other search heads.
Answer:
A
Explanation:
One recommended best practice for Splunk IT Service Intelligence (ITSI) installation is to avoid installing ITSI on search heads that already have Splunk Enterprise Security (ES) installed. This recommendation stems from the resource conflicts and performance issues that can arise when two resource-intensive applications are deployed on the same instance. Both ITSI and ES require significant system resources to function effectively, and running them on the same search head can lead to degraded performance, contention for resources, and stability problems. The general guidance is to deploy these applications on separate Splunk instances so that each can perform optimally.
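As an illustrative check rather than an official procedure, you can list the apps installed on a search head with the rest command. The app IDs shown here, itsi and SplunkEnterpriseSecuritySuite, are the usual folder names for ITSI and ES, but treat them as assumptions and verify them in your environment:

    | rest /services/apps/local splunk_server=local
    | search title="itsi" OR title="SplunkEnterpriseSecuritySuite"
    | table title label version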
Question 3
Which of the following is a best practice when configuring maintenance windows?
Options:
A.
Disable any glass tables that reference a KPI that is part of an open maintenance window.
B.
Develop a strategy for configuring a service’s notable event generation when the service’s maintenance window is open.
C.
Give the maintenance window a buffer, for example, 15 minutes before and after actual maintenance work.
D.
Change the color of services and entities that are part of an open maintenance window in the service analyzer.
Answer:
C
Explanation:
It's a best practice to schedule maintenance windows with a 15- to 30-minute time buffer before and after you start and stop your maintenance work. This gives the system an opportunity to catch up with the maintenance state and reduces the chance that ITSI generates false positives during maintenance operations. For example, if a server will be shut down for maintenance at 1:00 PM and restarted at 5:00 PM, the ideal maintenance window runs from 12:30 PM to 5:30 PM. The 15- to 30-minute buffer is a rough estimate based on 15 minutes being the period over which most KPIs are configured to search data and identify alert triggers.
Reference: Overview of maintenance windows in ITSI, https://docs.splunk.com/Documentation/ITSI/4.10.2/Configure/AboutMW
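For reference, existing maintenance windows can also be reviewed programmatically. The sketch below assumes ITSI's maintenance_services_interface REST endpoint under the SA-ITOA namespace and the title/start_time/end_time field names; check the ITSI REST API reference for the exact path and schema in your version:

    | rest /servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar
    | table title, start_time, end_time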