Lead Engineer at a tech vendor with 51-200 employees
Real User
Top 5
Mar 31, 2026
I received information from your team regarding a peer review of Honeycomb Enterprise. As an observability engineer using Honeycomb Enterprise extensively, I can provide substantial input. My primary tasks involve OpenTelemetry: I send numerous traces and metrics to Honeycomb Enterprise and use it to visualize how our code is functioning and how data moves through our systems. We have identified many problems using Honeycomb Enterprise, such as delays in the system where latency shows up in Honeycomb Enterprise traces, which we then investigate further and resolve. We have our SLAs and SLOs configured in Honeycomb Enterprise, allowing us to monitor our reliability score, and we send many metrics to the platform as well. In my previous organization, we used it only for traces, but in my current organization, I use it extensively for traces, metrics, and SLA and SLO work. We also use SigNoz for infrastructure-related APM metrics and other things, but it cannot cover what Honeycomb Enterprise does for us. Honeycomb Enterprise has genuinely helped us.

We are not using the Beelines, Honeycomb Enterprise's native instrumentation libraries. Instead, we use OpenTelemetry so that we can switch vendors easily. Our complete codebase now sends proper telemetry data into Honeycomb Enterprise, and we have traces for each of the functions we use. If we need more in-depth insight into what happens inside the code, we set up additional telemetry data points there; once those instrumentation points are in place, the traces flow into Honeycomb Enterprise, where we can examine them.

In terms of customer usage, observing how customers use our solution is very helpful for debugging and also for cost optimization. If a customer is not using our solution extensively, we can scale down the servers our solution runs on. This has given us a better approach, particularly where Lambdas are deployed, and we can save considerable costs. If traffic is not very high and the metrics show that few traces are coming in, we can scale down our instances and save on our AWS bill. If there is a sudden spike, the SLIs and SLOs I have configured catch it, and if there are sudden errors, I can scale the instances up.

If there is a genuine bug in the code, Honeycomb Enterprise is very useful for debugging the issue. Even a data-driven issue can be fixed with it, because we get all the data a user provides with a request. We actually see the input data in Honeycomb Enterprise, so we can solve problems very fast by seeing which query they hit or which workflow they went through.
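Because this review's instrumentation path is plain OpenTelemetry rather than a vendor SDK, the setup is easy to illustrate. Below is a minimal Python sketch of sending traces to Honeycomb over OTLP, using Honeycomb's documented api.honeycomb.io endpoint and x-honeycomb-team API-key header; the service name, span name, and attribute are illustrative placeholders, not the reviewer's actual code:

```python
# Minimal OpenTelemetry tracing setup that exports to Honeycomb over OTLP/gRPC.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Honeycomb accepts OTLP; the API key travels in the x-honeycomb-team header.
provider = TracerProvider(
    resource=Resource.create({"service.name": "order-service"})  # placeholder name
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="api.honeycomb.io:443",
            headers={"x-honeycomb-team": "YOUR_API_KEY"},  # placeholder key
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# An extra "telemetry data point": a custom span carrying the request's input
# data as an attribute, so the query or workflow is visible in Honeycomb.
def handle_request(user_query: str) -> None:
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("request.query", user_query)  # illustrative attribute
        # ... business logic ...
```

With this in place, switching vendors means pointing the exporter at a different OTLP endpoint; the instrumentation itself does not change.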
I am part of the performance engineering practice, which I lead at my current employer. We use Honeycomb Enterprise for tracing, which is application performance management in short. Our client has several APM tools, such as Datadog, but with Datadog we only have monitoring at the level of counters. We do not have the agent-level monitoring that Honeycomb Enterprise provides, where we can see a trace for each call the software makes and where it spends its time. To fill that gap, they have Honeycomb Enterprise in addition to Datadog. We use Honeycomb Enterprise for exactly that purpose: Honeycomb hooks into our applications and shows us, through traces, where each request is spending its time.

With Datadog, we often run into cardinality restrictions because of its billing model; it has cardinality limitations, and that is the basis on which we compare the two. Another thing I want to add is that the team here had tried Honeycomb Enterprise for tracing earlier but faced issues; they could not get proper tracing with it at the time. That is the feedback I have been given.

My main focus is the tracing part. We have a microservices architecture, and when a request flows across multiple microservices, we want to see where the time gets spent. Honeycomb Enterprise captures the trace and its spans, and from there we look at the time, the milliseconds or seconds, spent on a particular request. That is what we look at and what we are interested in.
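For the cross-service timing this review describes, each service has to hand the trace context to the next one so the spans line up as a single trace. The sketch below uses OpenTelemetry's W3C propagation API in Python, under the same assumptions as the sketch above; the function names and header dictionaries are hypothetical:

```python
# Sketch: carrying trace context between two microservices so their spans
# join into one end-to-end trace, revealing where the request spends time.
from opentelemetry import trace, propagate

tracer = trace.get_tracer(__name__)

# Service A: open a span, then inject the W3C "traceparent" header into the
# outgoing request's headers before calling service B.
def call_service_b(http_headers: dict) -> None:
    with tracer.start_as_current_span("service-a.call-b"):
        propagate.inject(http_headers)  # writes traceparent into the carrier
        # ... send the HTTP request to service B with http_headers ...

# Service B: extract the parent context from the incoming headers, so this
# span is recorded as a child of service A's span in the same trace.
def handle_incoming(http_headers: dict) -> None:
    ctx = propagate.extract(http_headers)
    with tracer.start_as_current_span("service-b.handle", context=ctx):
        pass  # the work here; its span duration shows the time spent
```

The per-hop durations the reviewer looks at are exactly these span timings, viewed together once the trace is stitched.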
Software Engineer at a financial services firm with 11-50 employees
Real User
Top 20
Feb 6, 2026
We were building a product for one of the biggest wealth management platforms in the world, an American wealth management platform. For them, it was really important for the product to be reliable and to have KPIs set up, especially for vendors like us who worked for them. The debugging process usually involved Splunk Cloud or Honeycomb Enterprise traces. Whenever I was looking at an issue, I typically went through the traces because it was a microservices architecture. Sometimes it really helped to understand the call chain. For example, if there were 10 microservices calling each other in some order, being able to visualize that and look through it was pretty useful.
Although Grit is a tool for code migration and management of technical debt across large chunks of work, we reviewed Grit for the use case of assisting in faster remediation of vulnerable libraries. We examined three areas in which the synergy of Grit.io with Snyk.io helps overcome Snyk's limitations:
1. Deep scanning and reachability analysis
2. Management of auto-generated Pull Requests (PRs)
3. Reduction of false positives
I am connected with and have interacted with the founder, Mr. Morgante Pell. While designing a comprehensive synergistic solution, I wrote a 35+ page technical paper on this topic.
The solution is mainly used for stack observability: it observes service behavior and any kind of failure that may be happening. The tool is also related to research; my company is working more on that, but I have been working on defining SLOs for the last seven months.
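Since this reviewer's focus is defining SLOs, a quick sketch of the bookkeeping behind one may help. This is generic error-budget arithmetic with made-up numbers, not Honeycomb's own SLO configuration:

```python
# Back-of-the-envelope SLO error-budget math (illustrative numbers only).
# SLI: the fraction of "good" events; SLO: the target fraction over a window.
target = 0.999            # e.g. 99.9% of requests must meet the SLI
total_events = 1_000_000  # events observed in the SLO window
bad_events = 450          # events that missed the SLI

sli = (total_events - bad_events) / total_events  # achieved reliability: 99.955%
error_budget = (1 - target) * total_events        # allowed bad events: 1,000
budget_consumed = bad_events / error_budget       # 0.45 -> 45% of budget burned

print(f"SLI={sli:.4%}, error budget consumed={budget_consumed:.0%}")
```

An SLO tracker effectively runs this comparison over a rolling window and alerts as the budget burns down.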
There aren't any specific use cases for the solution as such. In our company, we use the solution for SLA and SLO-related work.