We've been able to automate a lot of our interactions with it. It also has the ability to monitor arbitrary URLs beyond the single pinger per application (though that costs extra).
I think there have been some questionable product enhancements. Over a year ago, New Relic rolled out a new navigation that really disrupted our workflow: it added many more clicks and was surprisingly frustrating. Luckily, that was mostly reverted, but more recently, around six months ago, they redesigned the error reporting page. That's another example of a page that worked fine but is now very hard to use.
About six months to a year ago, we invested a lot of time automating our interactions with New Relic. However, their API couldn't do a lot of things; even getting a list of errors was impossible without manually going through every application/server and checking health conditions yourself, which seems very basic. While they have released a new API version, we've had difficulty with that as well.
Additionally, I'm told that they will completely deprecate the old API, which means I now need to reimplement in the new version everything that was working.
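For context, a minimal sketch of the kind of scripting this automation involved, against New Relic's REST API v2 (which lists applications at `/v2/applications.json`, authenticated with an `X-Api-Key` header). The helper names and the key below are placeholders of mine, not New Relic's:

```python
import json
import urllib.request

API_ROOT = "https://api.newrelic.com/v2"

def build_apps_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a request that lists all monitored applications."""
    req = urllib.request.Request(f"{API_ROOT}/applications.json")
    req.add_header("X-Api-Key", api_key)
    return req

def list_applications(api_key: str) -> list:
    """Fetch the applications list; needs a valid key and network access."""
    with urllib.request.urlopen(build_apps_request(api_key)) as resp:
        return json.load(resp)["applications"]

if __name__ == "__main__":
    # Placeholder key; the reviewer's point is that you then have to loop
    # over every application returned here and inspect health yourself.
    req = build_apps_request("YOUR_API_KEY")
    print(req.full_url)
```

The pain described above is that there is no single "give me all errors" call: a script like this has to enumerate applications and then check each one's health conditions individually.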
It's been a stable product. We haven't had any issues with instability.
It's scaled for our needs. We haven't had any issues with scalability.
Cost is significant, and many features are paid extras. For instance, another big negative point used to be the inability to monitor arbitrary URLs beyond the single pinger per application. They've since added that capability, which is great, but using it in any real capacity now costs extra.
The website is much more responsive because we are able to quickly pinpoint the worst pages – we can be really targeted with where we put our resources. In a lot of areas, one page takes a minute, the next can take ten, then some take one second, some 30. You have to decide where to focus; this allows us to find the pages that are really painful for users, fix those, and make them a lot better.
It’s absolutely the ability to get a really specific read of what is taking up time. For example, if a webpage takes two minutes to load, it tells you why it’s taking so long.
They instrument from the bottom up – every piece of code – so they have a very precise read of what’s being done and how long it’s taking. And a super nice way of presenting that.
It gives you amazing statistics, but doesn’t give you enough information about what to do with those statistics. The salespeople need to be on board on this end.
There were no issues.
From the point of view of showing you the data, they're fine, but in terms of showing you what to do with that data, they're infuriatingly unhelpful. Very friendly and available, however.
There was no solution in place previously.
We implemented in-house.
It was able to really effectively find the problem and solve it – it sped up the pages considerably.
The cost is about $1,000 per server per month, but it could be even more. We pay about $250 for the server itself, and then New Relic wants over $1,000 to give us statistics on it.
The alternative is to do UNIX profiling. Basically, you do it piecemeal, with browser profiling among those piecemeal solutions, and it’s really hard to justify. We thought about it, since New Relic is expensive for a relatively small company, but New Relic is way better than that approach. The price point is an issue, so we turn it on and off as we need it. Not a great solution for a start-up.
It helps us organize the data from our clients and reason about it. Moreover, with the client JS API, we can report data to New Relic and query it with Insights. It is easy to use and has an understandable graph editor.
This is the real deal.
Tracking and monitoring production is mandatory; however, for years we did it only on the back-end side. As client-side applications grow bigger and more complex, more responsibility shifts to the client. Because of that, we must be able to understand whether our clients are "healthy" – and what "healthy" means is not obvious.
Presently we report data to New Relic Insights, and we've built some really readable and understandable graphs. We monitor every deploy to production with those graphs, and we consult them to resolve current problems with our application, such as errors and problems with load and response times.
New Relic comes with some features out of the box, but not enough. There are some essential features that New Relic needs to implement that their competitors already support, like special treatment for AngularJS/React applications. We had to implement (with the JS API) the ability to query errors through Insights, which is essential. Currently, we don't have a way to send alerts, which is a real pain.
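As a rough sketch of that workaround: errors reported through the JS API as custom events can be queried in Insights with NRQL along these lines. The event reporting via `PageAction` and the `actionName` attribute are standard, but the `'jsError'` name and `browserName` facet here are illustrative, not our exact schema:

```
SELECT count(*) FROM PageAction WHERE actionName = 'jsError' FACET browserName SINCE 1 day ago
```

A query like this can back a dashboard graph per browser; what's missing, per the complaint above, is a way to fire an alert from it.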
We are using it in conjunction with APM and Insights.
We've had no issues with deployment.
We've had no issues with stability.
We've had no issues with scalability.
We also looked at Raygun.io and TrackJS, which are great products, but neither has New Relic's trifecta.
New Relic has a good separation between the data that you report and how you show it, along with the data you get out of the box. My advice is about how to show the data in a way that makes it easy to reason about.
You can build many different graphs; really try them all, and then decide what fits best for your organization. That's what we did. The way you handle data varies between organizations, and even between teams in an organization, and the ability to show the data in different ways is very helpful with that.
New Relic always gets new things done. The system is always changing and in a good way. New features are always coming into the system and we are very happy with it.
With the help of New Relic APM, we managed to deliver an online B2B application with average response times below two seconds, whereas with v6 the average response time was about 30 seconds.
The most valuable feature is the New Relic APM module to deep-dive into the application, to get bottlenecks to the surface, and to improve application performance. Also, the New Relic Insights module creates a real-time dashboard on application performance to create awareness for the DevOps team.
They need to improve alerting and dashboarding, as these are key features in DevOps.
Once, we had a stability issue when the New Relic agent was overwhelming the IIS process, but that was a long time ago. We spoke to New Relic, and they delivered an agent that fixed the problem.
We've not had any issues scaling it. We work with Java, and the agent is easily implemented.
Customer Service:
Many times, they have been of great help even though support is in America and we are in Europe; we get help within eight hours.
Technical Support:
The support department has good technical knowledge and is customer-friendly. Even if you don't answer their follow-up questions, the issue is resolved.
The setup is really straightforward. Install a server agent on the operating system, and install an application agent in the application.
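As an illustration of the application-agent half for a Java stack like ours, the agent is attached with the standard JVM `-javaagent` flag pointing at New Relic's `newrelic.jar`; the paths and jar name of our app below are examples:

```
java -javaagent:/opt/newrelic/newrelic.jar -jar myapp.jar
```

The server-side agent is a separate install on the host; configuration such as the license key lives in the `newrelic.yml` next to the agent jar.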
We developed the application in-house and also maintain it.
I don't have actual numbers, but as we improved the quality of the application, we received fewer incidents compared to applications without New Relic.
New Relic is either free with low retention and minimal functionality, or expensive with full options and retention. I'd suggest a pricing tier in between.
We did investigate other software, such as Ruxit and AppDynamics, but the price and quality of New Relic made us choose New Relic.
I think all online applications need to have APM software implemented to actually know the performance state of the application.
The ability to trace transactions all the way down – database, web services, etc. – with the trace dumps, to find where our software is broken.
When our app passes a critical threshold, we can quickly go to the Transactions and/or Database views and immediately see the code areas causing the issue. It saves so much time in debugging our code and environments.
My developers can find bugs and fix them in one-tenth of the time they used to take. It enables the stability of our product, and it's allowed me to keep human resources at a minimum, so a smaller number of people can do better things.
In Alert History, you can see the trend in response time by Request Queuing, .NET CLR, and Database. It would help, though, to also see which transaction type was the slowest during the timeframe when the critical error occurred, displayed within the same tool-tip hover window that currently gives the time per request and the number of transactions – i.e., the additional correlation information of “StatusCode/403” that you can currently only get from the Events Errors hover. This has the potential to save a lot of analysis time going back and forth between views.
We didn't have any deployment issues.
I haven’t had any issues with stability.
It’s scaled for us. We’re still relatively small with just 16 servers.
When my IT manager did the initial install, they were very responsive.
It was straightforward; the only issue was our IT manager's unfamiliarity with anything outside the Microsoft world.
If you want to save money, go for it. Time is money, and it saves you so much time in finding and fixing issues.
The thing I use the most is the ability to tell at a glance that we’re in a red state. We have dashboards around our office which let me know what I need to pay attention to. I can dig into the error. It also has high throughput.
Mean time to recovery has improved, leading to cost savings and reduced customer dissatisfaction.
One of my issues was with not getting enough insight into errors, as I can only go back seven days. The data collection on it is not a long enough period of time if I want to see some trends. If someone is having some errors, I can’t get historical insight.
We had a problem where our application crashed because of New Relic. They acknowledged the problem and we just had to turn it off for six months.
It’s been scaling along with our growth.
Great tech support, very responsive. Have helped us solve problems.
I wasn't involved.
It’s just so easy to set up and use with little training. The barrier to entry is extremely low, and it adds high value.
We primarily have an API so our front-end apps aren’t a huge part of our business, but Browser allows us to see geo-location to see where requests are coming from.
It also provides us with really valuable information such as which different browsers and versions our website visitors are using.
As a QA manager, it helps me know exactly where to focus our attention, because we can pinpoint specifically where there may be issues – where geographically, which browsers, which browser versions, and other very granular details.
I'd like to see alerting based on custom insight queries. If I set a custom query to give me some value, I want to be able to set an alert for that.
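For example, a custom Insights query of the sort I'd want to alert on might look like the following. `PageView` and `duration` are standard Browser event attributes; the threshold and time window are just illustrative:

```
SELECT average(duration) FROM PageView SINCE 30 minutes ago
```

If that average crosses some value I care about, I want an alert to fire, rather than having to watch the dashboard.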
No issues with deployment at all.
No stability issues.
It’s been scaling along with our growth.
I haven't had to use technical support.
Setup was very simple and straightforward.
It allows flexible queries, allowing me to find answers easily.
It’s been helpful to get a unified understanding of how our application is being used, usage patterns, etc. We get a shared organizational understanding.
Alerting based on custom Insights queries. If I set a custom query to give me some value, I want to be able to set an alert for that.
No issues encountered.
It’s been scaling along with our growth.
Great tech support, very responsive. They've helped us solve some perplexing problems.
Very straightforward. Worked through our account manager.
You’ll get way more data than you thought.
