My primary use cases for Splunk Observability Cloud include creating dashboards for metrics, detecting incidents, and ensuring overall observability of applications, service connections, and integrations, along with reporting and Slack integrations.
Updated: September 2025.
Systems Monitoring Engineer II at a government with 10,001+ employees
Real User
Top 20
Sep 10, 2025
My main use cases for Splunk Observability Cloud include Application Performance Monitoring, synthetic monitoring, and some infrastructure monitoring along with what comes with it; however, we already have a separate tool for infrastructure, and we are debating whether to switch it all over to Observability.
My primary use cases for Splunk Observability Cloud include alerting for the business and for the integrations app team, which are the largest Splunk users in our company. They account for most of our ingest and have many alerts set up, along with log analysis and event analysis. Those are our biggest team users, and alerting in general plays a crucial role in incident creation across multiple teams, regardless of who the stakeholders are.
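As an illustration of how such alerting detectors are typically defined in Splunk Observability Cloud, here is a minimal SignalFlow sketch; the metric name, threshold, and duration are example values chosen for illustration, not details from this review:

```
# Illustrative SignalFlow program for a simple static-threshold detector.
# 'cpu.utilization', the 80% threshold, and the 5-minute duration are
# assumed example values.
A = data('cpu.utilization').mean(by=['host'])
detect(when(A > 80, lasting='5m')).publish('CPU utilization high')
```

Detectors like this one feed incident creation: when the condition fires, the published alert can be routed to the relevant team, for example via a Slack integration.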
We are using Splunk Observability Cloud for real-time monitoring and troubleshooting, covering infrastructure monitoring, application monitoring, Log Observer, and RUM/Synthetic monitoring. For troubleshooting purposes, we install the OpenTelemetry Collector agent on some of the servers, including Intel, Windows, and UNIX servers. I have also worked on the agent upgrade from version 0.103 to 0.1113, which is ongoing right now.
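For reference, a minimal sketch of what an agent configuration for the Splunk distribution of the OpenTelemetry Collector might look like; the realm, token placeholder, and scraper selection here are assumptions for illustration, and the actual agent_config.yaml shipped with the collector package is considerably more extensive:

```yaml
# Minimal illustrative agent config for the Splunk OpenTelemetry Collector.
# Real deployments use the full agent_config.yaml shipped with the package.
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:          # host-level metrics to collect
      cpu:
      memory:
      disk:
      filesystem:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"  # ingest token (placeholder)
    realm: us0                              # assumed realm

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [signalfx]
```

Upgrading the agent generally amounts to installing the newer collector package while keeping a configuration of this shape in place.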
The solution involves observability in general, such as Application Performance Monitoring, and generally addresses digital applications, web applications, sites, and mobile applications. I worked with it in two companies: one in the energy sector and one in the hotel sector. The Splunk teams helped us with data collection, instrumentation, and many other options.
We primarily use Splunk Real User Monitoring to analyze performance bottlenecks and application transactions. It allows us to see how applications are experienced on the user side, making it easy to capture any bottlenecks or performance issues.
Splunk is primarily used for log monitoring, where I collect all my security logs, system logs, and application logs into a centralized place. This helps me customize my monitoring models.
Typically, the standard approach for Splunk sizing involves gathering data from the entire IT environment, regardless of whether it's hardware, virtualized, or application-based. This data is then collected and monitored through Splunk as a comprehensive security solution. We also work with Splunk-related platforms like Application Performance Monitoring to provide a holistic view of system performance. Recently, we implemented this solution for a bank in Jetar.

Splunk excels at collecting high-volume data from networks, making it ideal for performance monitoring and scaling. During the sizing process, it's crucial to calculate the daily data ingestion rate, which determines the amount of data Splunk Enterprise needs to process and visualize for security purposes. Several factors need consideration when sizing Splunk: the tier structure (hot and cold buckets), customer use cases for frequent data access, and storage choices based on data access frequency. Hot buckets typically use all-flash storage for optimal performance and low latency, while less frequently accessed data resides in cold or frozen buckets for archival purposes. In essence, the goal is to tailor the Splunk solution to the specific needs and usage patterns of each customer.

One challenge our customers face is slow data retrieval. Customers may experience delays in retrieving cold data due to complex search queries within Splunk Enterprise Security; these queries can sometimes take up to an hour and a half to execute. Our architecture incorporates optimized query strategies and customization options to significantly reduce data retrieval times, enabling faster access to both hot and cold data.

Another challenge is scalability constraints. Traditional solutions may have limitations in scaling to accommodate increasing data volumes, which can be a significant concern for customers who anticipate future growth. Our certified architecture is designed for easy and flexible scalability.
It allows customers to seamlessly scale their infrastructure as their needs evolve, without encountering the limitations often faced with other vendors' solutions.

The final challenge is complex sizing and management. Traditional solutions often require extensive hardware configuration and sizing expertise, which can be a challenge for many organizations; this reliance on hardware expertise can hinder scalability and adaptability. Our architecture focuses on software and application administration, minimizing the dependence on specific hardware configurations. This simplifies deployment and ongoing management, making it more accessible to organizations with varying levels of technical expertise.

Our architecture leverages Splunk's native deployment features, including:
- Index and bucket configuration: data is categorized into hot, warm, and cold buckets for efficient storage and retrieval.
- Active/passive or active/active clustering: this ensures high availability and redundancy for critical data.
- Resource allocation: data, compute, and memory resources are distributed evenly across clusters for optimal performance.

For high-volume data ingestion exceeding 8 terabytes per day, we recommend deploying critical components on dedicated physical hardware rather than virtual machines. Virtualization can introduce overhead and latency, potentially impacting performance; utilizing physical hardware for these components can help mitigate these bottlenecks and ensure optimal performance for large data volumes.
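As a rough illustration of the sizing arithmetic described above, here is a minimal Python sketch. The 50% on-disk compression ratio and the replication factor of 2 are illustrative assumptions, not official Splunk guidance; actual ratios depend on data type and cluster configuration:

```python
# Back-of-the-envelope Splunk index storage sizing.
# Assumptions (illustrative only):
# - raw data lands on disk at roughly 50% of its original size
#   (compressed rawdata plus index files),
# - replication factor of 2 across clustered indexers.
def storage_needed_gb(daily_ingest_gb, hot_days, cold_days,
                      compression=0.5, replication=2):
    """Return (hot_gb, cold_gb) storage estimates for the two tiers."""
    daily_on_disk = daily_ingest_gb * compression * replication
    return daily_on_disk * hot_days, daily_on_disk * cold_days

# Example: 1 TB/day ingest, 7 days in hot buckets, 83 days in cold buckets.
hot, cold = storage_needed_gb(1000, hot_days=7, cold_days=83)
print(hot, cold)  # 7000.0 GB hot, 83000.0 GB cold
```

The same calculation also flags when the 8 TB/day threshold mentioned above is approached, which is where dedicated physical hardware becomes the recommendation.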
Splunk Observability Cloud offers sophisticated log searching, data integration, and customizable dashboards. With rapid deployment and ease of use, this cloud service enhances monitoring capabilities across IT infrastructures for comprehensive end-to-end visibility. Focused on enhancing performance management and security, Splunk Observability Cloud supports environments through its data visualization and analysis tools.
My main use cases for Splunk Observability Cloud include retail analytics.
My main use cases for Splunk Observability Cloud are indexing, dashboards, alerts, and reports.
My main use case is end-to-end monitoring for the application.
For the retail sector, we are building a solution for customers' stores in order to track how products are selling.
Our main use cases for Splunk Observability Cloud are to observe our application, our websites, and our infrastructure metrics.
I use Splunk Observability Cloud for network logging analysis.
My main use case for Splunk Observability Cloud is end-to-end tracing of business processes.
My main use cases for Splunk Observability Cloud include Application Performance Monitoring, Real User Monitoring, and Synthetic Monitoring.
My main use case for Splunk Observability Cloud is application monitoring.
Our main use cases include synthetic monitoring, APM, RUM, alerting, detectors, dashboards, and all related functionality.