In terms of pricing, I have one issue with Splunk Observability Cloud. For a large-scale organization, it does not have features such as cost optimization or budgeting for observability spend. I think they need to improve that so that I can optimize our observability costs. For instance, if thousands of our server applications are running, I should be able to set a budget, such as spending only $100 per month for a specific environment. They need to introduce that feature because it is very important for budgeting. In terms of areas for improvement in Splunk Observability Cloud, the first is cost budgeting. The second is that they have many integrations, but if you are new to Splunk or to observability, you must dive deep into many concepts. They can improve the user-friendly features so that new users can set up observability in their environment more smoothly. I think they need to improve the integration part so that end users can onboard their infrastructure or applications very effectively. I would appreciate more simplicity in the platform.
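Until the platform offers native budgeting, the kind of guardrail the reviewer describes can be approximated outside the product. The sketch below is a hypothetical illustration, not a Splunk feature: it projects month-end spend for one environment from month-to-date daily costs and flags a projected overrun of a monthly budget.

```python
from datetime import date
import calendar

def projected_overspend(daily_costs, budget, today=None):
    """Project month-end observability spend from month-to-date daily costs.

    daily_costs: list of per-day spend (USD) so far this month.
    budget: monthly budget (USD) for the environment.
    Returns (projected_total, over_budget_flag).
    """
    today = today or date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    spent = sum(daily_costs)
    avg_per_day = spent / len(daily_costs) if daily_costs else 0.0
    projected = spent + avg_per_day * (days_in_month - len(daily_costs))
    return projected, projected > budget

# Example: $4/day for the first 10 days of a 30-day month against a $100 budget
# projects $120, which exceeds the budget.
projected, over = projected_overspend([4.0] * 10, budget=100.0, today=date(2025, 6, 15))
```

In practice the per-day cost inputs would come from whatever usage or billing export is available; the function itself is deliberately source-agnostic.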
Telemetry And Observability Architect at a pharma/biotech company with 10,001+ employees
Real User
Top 10
Mar 11, 2026
As an integrator, I think the biggest advantage of Splunk Observability Cloud is that, because it is part of the Splunk ecosystem, it is good at correlating logs with application data through traces and metrics. Overall, it is an evolving product, not top class, but it is getting there. I see many good things about the product and many advantages. On the negative side, I think the licensing could be much better, because it is based upon host units and there is additional licensing for the number of traces that I can bring in. A simplified licensing model, similar to what other tools offer, would be much better. Pricing could be based either upon ingestion or directly upon host units, rather than multiple different trackers. There are licenses for custom metrics, licenses for the number of traces that I can ingest, and host unit licensing. A better licensing plan would be beneficial.
To improve Splunk Observability Cloud, I wish they could develop more in the area of pricing and cost transparency, provide a smoother learning curve, and enhance the log management experience, ensuring that log navigation is not solely focused on metrics and tracing but also has good search performance for understanding larger data sets. I would also like to see a very good user interface and an onboarding experience that is smoother for new users. Before we wrap up, I want to emphasize the need for improvements in log search performance and a smarter alerting system.
AWS DevOps Engineer at a consultancy with 10,001+ employees
Real User
Top 20
Jan 30, 2026
I want to address a disadvantage regarding the service map showing incorrect latency information, which relates to the reliability of data pulled from AWS or on-premises servers. We saw issues with latency because the Splunk APM app shows different data than Prometheus and Grafana. We engaged premium support and on-call support with Splunk, and they were helpful in troubleshooting, but they ended up with no solution. Performance with Splunk Observability Cloud is acceptable to me, but the modifications required of users are problematic. I had to build the complete alerting and monitoring system, which then had to be changed. The way they designed this is not optimal. If I compare with Prometheus, we can import and export dashboards there, but here we face errors in the dialog boxes. We raised technical support calls about this, but they were unable to solve it, so I do not understand why export and import are not functioning. My overall impression of the no-sample tracing feature in Splunk Observability Cloud, specifically in terms of eliminating blind spots in data collection, is that it needs improvement because the data is not adequate compared to other third parties. We get disturbances in the dashboards and charts while trying to correlate data. The mechanism functions differently when configured manually than it does with a SignalFlow query, and the two should be equal. We are unable to replicate manual configurations with the automated method, which is the issue. The SignalFlow query feature in Splunk Observability Cloud needs improvement because it should function the same as the manual process. When we configure queries manually and then configure them via SignalFlow, they give different outputs. We raised this with on-call support, but they were unable to address it, indicating there is a bug in the queries that needs fixing.
For enhancements, I would like to see improvements in the OTEL agents, OTEL collectors, and other features in Splunk Observability Cloud. The guidelines in the official documentation frequently do not work as written. We have to deploy processes in our own way; the documentation works in only about 60 percent of the conditions, leaving the remaining 40 percent problematic and in need of improvement.
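One way to reduce the manual-versus-SignalFlow drift the reviewer describes is to define the query once, as a SignalFlow program string, and reuse that single definition for both charts and detectors instead of rebuilding it in the UI. The sketch below only constructs the program text; the metric name and `env` dimension are hypothetical placeholders, and the syntax follows published SignalFlow conventions (`data(...)`, `detect(when(...))`).

```python
def detector_program(metric, env, threshold):
    """Build a SignalFlow program string for a simple threshold detector.

    The metric name and the 'env' filter dimension are hypothetical examples;
    substitute whatever your environment actually emits.
    """
    return (
        f"A = data('{metric}', filter=filter('env', '{env}'))"
        f".mean(by=['host']).publish(label='A')\n"
        f"detect(when(A > {threshold})).publish('high-{metric}')"
    )

# One canonical definition, reusable wherever the query is needed
program = detector_program('cpu.utilization', 'prod', 85)
```

Keeping the program text in version control also makes it possible to diff what the UI-built chart runs against what the detector runs when outputs disagree.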
Senior Software Engineer at a consultancy with 10,001+ employees
Real User
Top 5
Jan 12, 2026
When we have too many detectors in place for one particular app, such as when I have created 50+ detectors through my account, the page becomes heavy and takes time to load when creating the 51st detector. Additionally, it throws random errors; for example, when we try to save a detector, it might throw an error that is not even related, where the underlying root cause is something else entirely. Sometimes the error is just "some problem occurred," and we are not able to pinpoint the real cause. This mainly happens when we have too many detectors or alerts in place rather than a standard number. One more thing is the alert rules: if we have a main general alert and, instead of creating a new detector, we add a new rule under the existing detector, the same problem appears as the number of rules increases. With 10 or 15 rules under one generic detector, saving a newly added rule takes time, and at times it does not save at all and just keeps spinning. Those are the two drawbacks I spotted recently; other than that, everything looks perfect.
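Flaky saves like the ones described above are often worked around client-side with a retry-and-backoff wrapper while the vendor issue is open. This is a generic sketch, not a Splunk API: `save_fn` stands in for whatever hypothetical call actually performs the detector save.

```python
import time

def save_with_retry(save_fn, payload, attempts=4, base_delay=0.5):
    """Retry a flaky save call with exponential backoff.

    save_fn: callable performing the save (hypothetical, e.g. a thin wrapper
    around a detector-save request) that raises on failure.
    Re-raises the last error once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return save_fn(payload)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulated flaky endpoint: fails twice, then succeeds on the third call
calls = {"n": 0}
def flaky_save(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("some problem occurred")
    return {"saved": payload["name"]}

result = save_with_retry(flaky_save, {"name": "detector-51"}, base_delay=0.01)
```

A wrapper like this papers over transient failures but obviously does not fix the underlying scaling problem with 50+ detectors.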
The dashboards are good, but the one limitation I see currently is that they require particular formats to create a dashboard, such as a specific JSON format or time series format. This sometimes creates additional work for me, because the logs I ingest into Splunk Observability Cloud must be in a specific format. Either Splunk Observability Cloud should support multiple formats, or it should offer multiple dashboards for different kinds of formats. Splunk already has everything available at the platform level. They could do some analysis, see what the top ten or twenty types of logs they receive are, and provide dashboards for those logs. Instead of forcing customers to design their logs the Splunk Observability Cloud way, it could create dashboards based on customer requirements. That would ease things for end users. The current dashboards are good; the feedback is that Splunk Observability Cloud forces me to modify the logs I ingest into a specific format. If it could accept any format, that would be great. If that is not feasible, at least the top ten or twenty log types it receives should be readable without any changes. That is one of the major pieces of feedback I can give that would ease the life of end users or any layman. As a newcomer to Splunk Observability Cloud, I may not know JSON. I then need to hire or find someone who knows JSON to convert my logs into JSON format before I ingest them, if I want to create a dashboard. If I do not want to create a dashboard, that is okay.
On the other hand, Splunk Observability Cloud gives me a usable, easy-to-use interface, but for a dashboard I need an understanding of JSON so that I can ingest the logs in JSON format. That is a dilemma they have and should work on. Currently, Splunk Observability Cloud is not the only solution that organizations use; there are also Grafana and PagerDuty. If Splunk Observability Cloud could plan some kind of integration with PagerDuty and Grafana, those tools could be controlled from a single place, and if something happens in one location, it could update things at all levels. That would also bring great value to users. Currently, I have to maintain three systems separately, but if integrations could be developed across these three vendors, that would be a great thing, because all three have become industry pillars, or industry standards, for observability and resiliency.
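The JSON conversion the reviewer has to hire for is usually a thin preprocessing step. The sketch below is a hypothetical illustration, assuming a plain-text log shape of `date time LEVEL service message` and invented field names, of turning one line into a JSON event a dashboard could consume.

```python
import json
import re

# Hypothetical plain-text log shape: "2025-06-01 12:00:03 ERROR payment timeout"
LINE = re.compile(r"(?P<date>\S+) (?P<time>\S+) (?P<level>\w+) (?P<service>\w+) (?P<message>.*)")

def to_json_event(line):
    """Convert one plain log line into a JSON event string.

    Field names (timestamp, severity, service, message) are illustrative,
    not a required Splunk schema.
    """
    m = LINE.match(line)
    if not m:
        # Unparseable line: keep it as a raw message rather than dropping it
        return json.dumps({"message": line})
    d = m.groupdict()
    return json.dumps({
        "timestamp": f"{d['date']}T{d['time']}",
        "severity": d["level"],
        "service": d["service"],
        "message": d["message"],
    })

event = to_json_event("2025-06-01 12:00:03 ERROR payment timeout")
```

A step like this typically runs in the log shipper or collector pipeline rather than in application code, so the applications themselves never need to change their log format.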
There are no complexities with the installation of Splunk Observability Cloud, but there are with the configuration of alerts and everything else, because Splunk has its own language in the background. You need to know Splunk in order to configure everything that you want; it requires some in-depth knowledge of the product. It should be more plug-and-play, similar to ScienceLogic. ScienceLogic uses whatever it finds: you can use PowerShell, or scripts that you write yourself. Splunk is more old-style; it uses agents, and you have to deploy the agents. The out-of-the-box customizable dashboards provided by Splunk are okay, but I usually have to create new dashboards because every user wants to see something different. The out-of-the-box dashboards help you get started faster, but in the end, I have to redo them. I would like to see agentless deployment and better integration with ticketing systems such as ServiceNow, which is the biggest one. We utilize the ability to enrich data with custom metrics in Splunk Observability Cloud to create tickets in ServiceNow. It is integrated with ServiceNow, and we enrich the tickets by putting the logs in them and things of that nature, so it helps us. However, even that is a mixed approach. From Splunk Observability Cloud, you cannot put the logs directly into the tickets; instead, it creates a ticket and sends you an email with the logs. That integration could be improved.
Splunk Observability Cloud could be improved by having more integration with Splunk Cloud because at the moment they're two separate products. They're making great moves on what they call unified access; tighter integration is always a good thing.
Systems Engineer at a tech vendor with 1,001-5,000 employees
Real User
Top 5
Sep 10, 2025
The UI of Splunk Observability Cloud is one of the major issues; it's old, has been around for more than 10 years, and was inherited from applications acquired from other companies. It's time to reinvent how the UI works with the AI modules and integrations, making it smoother and cleaner. Splunk Observability Cloud is comprehensive in terms of functionality and features, so user education has to be more functional: users need guidance on the specific views or pages they're working in.
The main improvement I would suggest for Splunk Observability Cloud would be offering the ability to implement custom apps, specifically allowing Python scripts that Splunk Cloud could host. Currently, we cannot create custom apps through Splunk Cloud. Additionally, continuous performance improvements for faster searching and indexing would be beneficial.
AVP at a financial services firm with 5,001-10,000 employees
Real User
Top 10
Sep 10, 2025
The integrations need to be improved for Splunk Observability Cloud. Currently, they do not have great support for Azure. We are on Azure, and I know they invested a lot of time in AWS but not in Azure. I gave feedback to the teams here that the integration from Azure Cloud, and how we supply the logs and metrics, is not clearly documented yet, which the team acknowledged. For example, the OTEL collector has a thousand parameters, and we have a very specific use case requiring only 10 of them for our integration. We can't realistically go through a thousand parameters, which is basically why I think some integrations need to get better for Azure. There's a lot of talk about AI-powered analytics and guidance in Splunk Observability Cloud. I didn't get a great sense of how much of it actually works; there are a lot of AI hallucinations. It probably needs much more improvement to contextualize its answers so that they are clear and precise rather than speculative; it needs to match the context better. Customer service and technical support need some improvement. We had issues with technical support, and the professional services were struggling as well.
Splunk Observability Cloud can be improved. In terms of additional features I would want to see in future releases, since Cisco acquired Splunk, more Cisco integration could be beneficial.
Software Developer And Engineer at a retailer with 5,001-10,000 employees
Real User
Top 20
Sep 10, 2025
The RUM part of Splunk Observability Cloud can be improved significantly. We are currently struggling to use it since our application is mixed mobile and non-mobile. Some AI features in the search functionality could be beneficial in the next release of Splunk Observability Cloud. In GCP, Cloud Run is not natively supported by Splunk, and we are challenged with bringing data from Cloud Run to Splunk. Native support of it in the future would be great for us.
Systems Monitoring Engineer II at a government with 10,001+ employees
Real User
Top 10
Sep 10, 2025
The user interface of Splunk Observability Cloud needs a lot of work. I have been known to describe it as slapping lipstick on a pig. The pretty colors draw everybody in; however, there is a lot you cannot actually do, and the way the user interface is organized makes it very difficult to navigate. This is a driving factor in us not using the product. The next release of Splunk Observability Cloud should make the chosen environment persistent: when looking at charts and dashboards, the environment selected when you first sign in should stay in effect regardless of the product area you're in, whether APM, infrastructure, or RUM. There's no reason a user should have to reset all of their filters and reselect their environment every time they switch to a different area of the tool.
The only strain point we've encountered with Splunk Observability Cloud is that the search times can be lengthy for some things. We have a large environment, so that's expected. That's the only complaint I've had so far.
Systems Administrator at an insurance company with 1,001-5,000 employees
Real User
Top 20
Sep 9, 2025
Splunk Observability Cloud could be improved with better integration with AppDynamics, and we know that's coming; however, there has been an issue for us between the OpenTelemetry collector and the AppDynamics collector. We saw a complete difference in what data was being brought in. We know that issue is being resolved, and that's a big one for us.
Splunk Observability Cloud could be improved in terms of integrations with more technical add-ons, such as Zoom. Although they have an add-on for Zoom, it's not available in the cloud, so having that would be beneficial. Essentially, Splunk should continue expanding to create easier ways to ingest logs from different products. The out-of-the-box customizable dashboards in Splunk Observability Cloud are very effective in showcasing IT performance to business leaders. However, there are aspects that could be improved, such as linking dashboards to one another. While IT leaders may not drill down, it's crucial to create levels of dashboards so technical users can find root causes, making it effective for all stakeholders.
To improve Splunk Observability Cloud, we need more applications to be included in the observability so that more applications can have agents to monitor them and bring that information to the cloud. Splunk Observability Cloud has not yet completely improved our operational performance for our company's resilience as we are just starting out, however, it will help us ultimately to reduce incident time.
For potential areas of improvement, I find that while Synthetics, APM, and infrastructure management models are fine, an enhancement could be seamless integration with some third-party tools. It should better support interactions within Splunk tools. If a customer utilizes third-party tools and wants to forward data from Splunk Observability Cloud, seamless integration would be beneficial. This is crucial for passing data to tools such as Dynatrace or Grafana, as integrating some third-party add-ons can be challenging, involving many implementation and configuration steps.
Administrator at a tech vendor with 10,001+ employees
Real User
Top 10
Apr 30, 2025
In Splunk Observability Cloud, I notice room for improvement in synthetic monitoring. It does not provide output based on server names. It only gives a response when we input a URL. I'm not sure if this issue is specific to my organization, but it would be beneficial if server details could be retrieved directly in synthetic monitoring.
Regarding dashboard customization, while Splunk has many dashboard building options, customers sometimes need to create specific dashboards, particularly for applicative metrics such as Java and process terms. These categories of dashboards would be very helpful for customers.
It would be beneficial to have more enhanced features with capabilities to adapt more integrated applications. Improvements in dashboard configuration, customization, and artificial intelligence functionalities are desired. There is room for improvement in customer support due to delays and standard feedback responses.
I'm still exploring some features of the product. However, in future updates, I would like to see more predefined monitoring query solutions, which could be more effective.
I'd like a dashboard that allows me to connect elements through drag-and-drop functionality. Additionally, I want the ability to view the automatically generated queries behind the scenes, including recommendations for optimization. This is just a preliminary idea, but I envision the possibility of using intelligent software to further customize my queries. For example, imagine I could train my queries to be more specific through an AI-powered interface. This would allow me to perform complex searches efficiently. For instance, an initial search might take an hour and a half, but by refining the parameters through drag-and-drop and AI suggestions, I could achieve the same result in just five minutes. Overall, I'm interested in exploring ways to customize queries for faster and more efficient data retrieval. Ideally, the dashboard would provide additional guidance and suggestions to further enhance my workflow through customization and optimization.
In Splunk Observability Cloud, the areas that have room for improvement include usability enhancements to make it even better.
When we have too many detectors in place for one particular app, such as when I have created 50+ detectors through my account, the entire page becomes a bit loaded when creating the 51st detector, feeling heavy and taking time to load. Additionally, it throws random errors; for example, when we try to save one detector, it might throw some random error which is not even related, with something else being wrong, not that particular error, but the underlying root cause might be different. Sometimes the error is just "some problem occurred," and we are not able to point out what the real cause is. This mainly happens when we have too many detectors or too many alerts in place rather than a standard number. One more thing is in the alert rules; if we have a main general alert, and instead of creating a new detector, we are adding a new rule under one detector, when the number of rules also increases, such as when we have 10 or 15 rules under one generic detector, that again creates the same kind of problem, taking some time to save that particular newly added rule, and it might not save at times, just keeps on spinning. Those are the two drawbacks which I spotted recently; other than that, everything looks perfect.
The dashboards are good, but the only limitation I see currently is that they need particular formats only to create a dashboard. They need to have a particular JSON format or time series format. This sometimes creates additional work for me so that when I am ingesting logs in Splunk Observability Cloud, it should be in a specific format. Either Splunk Observability Cloud should have multiple formats available or multiple dashboards available for different kinds of formats. At least Splunk Observability Cloud has everything available at a Splunk level. They can do some kind of analysis and see what are the major top ten or top twenty types of logs they are getting and they can have dashboards according to those logs. Instead of forcing customers to design their logs in the way of Splunk Observability Cloud, Splunk Observability Cloud can create dashboards based on the customer requirement. This will actually ease things up for the end users. The current dashboards are good. The feedback is that Splunk Observability Cloud is forcing me to modify my logs that I am ingesting in Splunk Observability Cloud in a specific format. If Splunk Observability Cloud can leverage it and make it open for any format, that would be great. If that is not feasible, at least the top ten or top twenty logs that Splunk Observability Cloud is getting should be readable by Splunk Observability Cloud without any changes. That actually is one of the major feedback items I can provide which can actually ease the life of the end users or any layman. As a newcomer to Splunk Observability Cloud, I may not know JSON. I now need to hire someone or I need to look for someone who knows JSON and who can convert my logs into JSON format and then I will ingest them into the logs if I want to create a dashboard. If I do not want to create a dashboard, that is okay. 
On the other hand, Splunk Observability Cloud is giving me a usability and easy to go interface, but for a dashboard, I need to have an understanding of JSON so that I can ingest the log in JSON format. That is a dilemma that they have and they should work on. Currently, Splunk Observability Cloud is not the only solution which any organization is using. There is also Grafana and PagerDuty. If Splunk Observability Cloud can plan some kind of integration with PagerDuty and Grafana, then those things can be controlled from a single position and if something else is happening at one location, it can update things at all levels. That can also bring great value to the users. Currently, I have to maintain three systems separately, but if some kind of integrations can be developed with these three vendors, then that can be a great thing because all these three have now become the industry pillars or industry standards for observability and resiliency.
There are not complexities with the installation of Splunk Observability Cloud, but with the configuration of alerts and everything because Splunk has its own language in the background. You need to know Splunk in order to configure everything that you want. It requires some in-depth knowledge of the product. It should be more plug-and-play, similar to ScienceLogic. ScienceLogic uses whatever it finds. You can use PowerShell, you can use scripts that you make. Splunk is more on the old style. It uses agents, and you have to deploy the agents. The out-of-the-box customizable dashboards provided by Splunk are okay, but usually, I have to create new dashboards because every user wants to see something else. The out-of-the-box dashboards help to get started faster, but in the end, I will have to redo them. I would like to see agentless deployment and better integration with ticketing systems such as ServiceNow, which is the biggest. We utilize the ability to enrich data with custom metrics in Splunk Observability Cloud to create tickets in ServiceNow. It is integrated with ServiceNow, but we enrich the tickets by putting the logs in the tickets and things of that nature, so it helps us. However, even that is a mixed approach. From Splunk Observability Cloud, you cannot put the logs directly in the tickets. Instead, it will create a ticket and send you an email with the logs. That integration could be improved.
Splunk Observability Cloud could be improved by having more integration with Splunk Cloud because at the moment they're two separate products. They're making great moves on what they call unified access; tighter integration is always a good thing.
The UI of Splunk Observability Cloud is one of the major issues; it's old and has been there for more than 10 years, acquired by other applications from other companies. It's time to reinvent how the UI is going to work with the AI modules and integrations, making it softer and cleaner. Splunk Observability Cloud is comprehensive in terms of functionality and features, so educating users has to be more functional. Users need to know how to be educated about certain views or pages they're working on.
The main improvement I would suggest for Splunk Observability Cloud would be offering the ability to implement custom apps, specifically allowing Python scripts that Splunk Cloud could host. Currently, we cannot create custom apps through Splunk Cloud. Additionally, continuous performance improvements for faster searching and indexing would be beneficial.
The integrations need to be improved for Splunk Observability Cloud. Currently, they do not have great support for Azure. We are on Azure, and I know they invested a lot of time in AWS yet not in Azure. I had given feedback to the teams here, as the integration from Azure Cloud, how we supply the logs and the metrics, is not clearly documented yet, which was acknowledged by the team. For example, the OTEL collector has a thousand parameters, and we need a very specific use case with 10 parameters required for our integration. We can't go through the thousand parameters; we can, however, that is basically why I think some integrations need to get better for Azure. There's a lot of talk about AI-powered analytics and guidance in Splunk Observability Cloud. I didn't get a great sense of how much of it is actually working; there are a lot of AI hallucinations. I think it probably needs much more improvement to contextualize it so that it is very clear and precise about what it randomly thinks, but it needs to match the context better. Customer service and technical support need some improvement. We had issues with technical support, and the professional services were struggling as well.
Splunk Observability Cloud can be improved. In terms of additional features I would want to see in future releases, since Cisco acquired Splunk, more Cisco integration could be beneficial.
The RUM part of Splunk Observability Cloud can be improved significantly. We are currently struggling to use it since our application is mixed mobile and non-mobile. Some AI features in the search functionality could be beneficial in the next release of Splunk Observability Cloud. In GCP, Cloud Run is not natively supported by Splunk, and we are challenged with bringing data from Cloud Run to Splunk. Native support of it in the future would be great for us.
The user interface of Splunk Observability Cloud needs a lot of work. I have been known to describe it as slapping lipstick on a pig. The pretty colors draw everybody in; however, there is a lot you cannot actually do with it, and the way the user interface is organized makes it very difficult to navigate. This is a driving factor in our not using the product. The next release of Splunk Observability Cloud should make the selected environment persistent across charts, dashboards, and every product area, whether you are in APM, infrastructure, or RUM; the environment chosen when you first sign in to Splunk Observability Cloud should stay in effect throughout. There is no reason a user should have to keep resetting all of their filters and reselecting their environment every time they switch to a different area of the tool.
The only strain point we've encountered with Splunk Observability Cloud is that the search times can be lengthy for some things. We have a large environment, so that's expected. That's the only complaint I've had so far.
Splunk Observability Cloud could be improved with better integration with AppDynamics. We know that's coming; however, we have had an issue between the OpenTelemetry collector and the AppDynamics collector, where we saw a complete difference in the data being brought in. We know that issue is being resolved, and that's a big one for us.
Splunk Observability Cloud can be optimized to its full potential.
Splunk Observability Cloud could be improved in terms of integrations with more technical add-ons, such as Zoom. Although they have one with Zoom, it's not available in the cloud, so having that would be beneficial. Essentially, Splunk should continue expanding to create easier ways to ingest logs from different products. The out-of-the-box customizable dashboards in Splunk Observability Cloud are very effective in showcasing IT performance to business leaders. However, there are aspects that could be improved, such as linking dashboards to one another. IT leaders may not drill down themselves, but it's crucial to build layered dashboards so that technical users can find root causes, which makes the dashboards effective for all stakeholders.
To improve Splunk Observability Cloud, more applications need to be supported, so that more of them can have agents monitoring them and bringing that data into the cloud. Splunk Observability Cloud has not yet fully improved our company's operational performance and resilience, as we are just starting out; however, it will ultimately help us reduce incident time.
It can be improved through the integration of AI, which is either coming or already available.
For potential areas of improvement, I find that while the Synthetics, APM, and infrastructure management modules are fine, an enhancement could be seamless integration with third-party tools. Splunk should better support interactions between its own tools and external ones. If a customer utilizes third-party tools and wants to forward data out of Splunk Observability Cloud, seamless integration would be beneficial; this is crucial for passing data to tools such as Dynatrace or Grafana, since integrating some third-party add-ons can be challenging, involving many implementation and configuration steps.
In Splunk Observability Cloud, I notice room for improvement in synthetic monitoring. It does not provide output based on server names. It only gives a response when we input a URL. I'm not sure if this issue is specific to my organization, but it would be beneficial if server details could be retrieved directly in synthetic monitoring.
Regarding dashboard customization, while Splunk has many dashboard-building options, customers sometimes need to create specific dashboards, particularly for application-level metrics such as Java and process metrics. Out-of-the-box dashboards in these categories would be very helpful for customers.
It would be beneficial to have more enhanced features with capabilities to adapt more integrated applications. Improvements in dashboard configuration, customization, and artificial intelligence functionalities are desired. There is room for improvement in customer support due to delays and standard feedback responses.
I'm still exploring some features of the product. However, in future updates, I would like to see more predefined monitoring query solutions, which could be more effective.
I'd like a dashboard that allows me to connect elements through drag-and-drop functionality. Additionally, I want the ability to view the automatically generated queries behind the scenes, including recommendations for optimization. This is just a preliminary idea, but I envision the possibility of using intelligent software to further customize my queries. For example, imagine I could train my queries to be more specific through an AI-powered interface. This would allow me to perform complex searches efficiently. For instance, an initial search might take an hour and a half, but by refining the parameters through drag-and-drop and AI suggestions, I could achieve the same result in just five minutes. Overall, I'm interested in exploring ways to customize queries for faster and more efficient data retrieval. Ideally, the dashboard would provide additional guidance and suggestions to further enhance my workflow through customization and optimization.