What is our primary use case?
This solution is a big strategic piece for us. We wanted to replace the legacy performance testing capability with BlazeMeter's performance testing capability. That was our first use case. Now, it's a more strategic platform with GUI testing, API testing, mock services, service virtualization capability, and test data capabilities. We aren't using everything at the moment, but that is the strategic intent.
Getting multiple teams to adopt BlazeMeter was challenging because this was a big transformation for us. We had used LoadRunner for 10 years, so changing to BlazeMeter required organizational change management, and there was resistance from some teams. We were able to use online resources to learn how to use BlazeMeter.
This solution is used by 24 teams across 13 divisions. About 75 engineers use BlazeMeter directly, and overall usage is higher than that. Ours is a hybrid deployment.
How has it helped my organization?
The range of test tools that BlazeMeter provides is amazing. There are more than 18 tools, which gives us freedom of choice.
It's very important to us that BlazeMeter is a cloud-based and open-source testing platform. It's critical for our organization because we are increasingly moving from on-premises application hosting to cloud-native hosting.
BlazeMeter has definitely improved the productivity of our organization, especially because of the integration features. Engineering productivity has improved because people are able to use the tool of their choice. This increases our delivery velocity.
From an operational-benefits perspective, we save infrastructure costs because we don't have to host this massive product on our own infrastructure. We also save operational costs: because it's a SaaS platform, we don't need a big team to run it.
What is most valuable?
It's a great platform because it's a SaaS solution, but it also supports on-premises hosting, so we have implemented a hybrid approach. BlazeMeter serves our traditional hosting platforms and application stack as well as modern cloud-based and SaaS-based application technologies.
The solution is completely built on an open-source stack. For performance testing, we use JMeter, and there's flexibility to choose Gatling, Locust, Taurus, or other open-source technologies. We're able to attract good talent in the market. People like open source because it's lightweight, accessible, and quick. That's been a strong point.
We integrated user access management, so it's easy for consumers to actually use it. It has great reporting features and integrations. It can connect to AppDynamics, Dynatrace, and Splunk.
Another great feature is that it meets the various maturity levels in our organization. We still have manual-based testing, and there are some teams that are very engineering and code focused. BlazeMeter helps meet all those maturity levels.
For example, a manual tester who wants to get into automation can use the scriptless feature. Even business people can use the record and playback function and record the business process. That is captured into JMeter and Selenium scripts, and they can continue executing that.
The solution enables the creation of test data that can be used for both the performance and functional testing of any application, although we aren't currently using the test data feature in BlazeMeter.
It took us a year to realize the benefits because we had to do the design work and the network enablement piece for teams to start using it at that scale.
BlazeMeter helps bridge Agile and CoE teams. We define CoE as the center of enablement, not a center of excellence. We don't have central teams. We use the hub and spoke model. The hub is basically the central enablement team. We provide BlazeMeter as a service in the bank, and we manage, maintain, and govern it, but individual teams have federated autonomy.
The solution helps us implement shift-left testing. We're still in that stage, and we have various maturity levels in our organization. We have between 6,000 and 7,000 engineers. Out of that, around 2,000 are manual testers. The maturity level across those many thousands of engineers is varied. Some teams have definitely embedded shift left, and BlazeMeter is good at that. They can use YAML files and start shifting left. That means the developers are able to have YAML definitions in their code to do smaller performance load tests.
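As an illustration of what such a YAML definition can look like, here is a minimal sketch in the Taurus format (one of the open-source tools in BlazeMeter's stack); the endpoint, concurrency, and durations are invented for the example and would be tuned per team:

```yaml
# Minimal Taurus load-test definition a developer could keep alongside the code.
# All values below are illustrative, not our actual test parameters.
execution:
- concurrency: 20        # virtual users
  ramp-up: 1m
  hold-for: 5m
  scenario: smoke-load

scenarios:
  smoke-load:
    requests:
    - https://staging.example.com/api/health
```

A developer can run a file like this locally with `bzt load.yml`, or hand the same definition to BlazeMeter to execute as part of a pipeline, which is what makes the shift-left pattern practical.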
We use the solution's scriptless testing functionality. We have many testers who use scriptless testing now. The record and playback function is also one of the key aspects.
The manual testers are definitely getting more confident that they can start moving toward automation. People are finding that the existing test automation helps to build their test cases quicker. They struggled with JMeter as a tool. They had to learn various nuances. With scriptless testing, recording, and playback, they don't have to worry about that.
BlazeMeter definitely decreased our test cycle times. During each cycle, we save between one and two hours. We enabled integration between BlazeMeter and AppDynamics, so people don't have to log into multiple tools to do their analysis. BlazeMeter provides a single pane of glass for the analysis.
It essentially saves days in a sprint. Previously, testers would execute a test and then go into AppDynamics, SCOM, or the IIS logs, and to fetch the IIS logs they would have to wait for the operations team to give them access.
What needs improvement?
The seamless integration with mobile devices could be improved. Right now, the UI testing capability provides the browsers. They made Safari available, which is amazing, so we're able to test on Chrome, Firefox, and Safari. We also want the capability to test Chrome on Windows, macOS, Android, and iOS.
For how long have I used the solution?
I have used this solution for more than two years.
What do I think about the stability of the solution?
The solution is quite stable. There were some issues that impacted us a couple of months ago. There was an incident in which they did an upgrade, and we weren't able to execute the load for four hours.
What do I think about the scalability of the solution?
It's absolutely scalable. We have plans to increase usage.
How are customer service and support?
Customer support is excellent. I'm very impressed with them, given the kinds of queries we raise. Usage has also gone up from the initial 100 users to over 800 users. Support responds within two hours because they're based in Israel and the US.
Which solution did I use previously and why did I switch?
We previously used an HP product, LoadRunner, for performance testing.
We chose BlazeMeter because of the open-source technology. LoadRunner Performance Center is a proprietary tool, and we had engineers specialized solely in LoadRunner. They were expensive and difficult to find in the market, and key-person dependency was a risk, so we wanted a platform with flexible tools and scalability. We also wanted a future-proof solution that would still be useful for the existing traditional tool set.
How was the initial setup?
The SaaS account creation was very easy, as was workspace creation; it's self-service, so those aspects were simple. Even the on-premises deployment is all Docker based and pretty advanced. It's cluster based, so having a cluster makes it easier. We didn't have an on-premises clustering solution like Kubernetes, so we went with a bare-bones Docker image implementation. We didn't have to do a lot of engineering because it's all self-service.
It took one year for our internal design to be done, approved, and implemented. The design involved allowing all of the connections from BlazeMeter as a load engine sitting on non-production infrastructure and applications. The network connectivity was done in one year. It was a massive implementation. We have 16 platforms, which can be considered mini-business units. We have our securities, treasury, retail banking, and internal corporate services.
The SaaS deployment took one day. They created the account and gave us access. My team was able to create workspaces. The on-premises deployment took a few hours. We went through all the connectivity and design. To complete the on-premises setup, we had to run a bunch of commands. Running the commands was easy and quick, but downloading hundreds of GBs of images took hours.
Two engineers were involved in the deployment. For the on-premises deployment, their role was to follow the instructions to complete the setup. After that, they had to verify if the setup was correct and then do end-to-end verification. A test was created in the SaaS portal, and we could choose the on-premises location and execute it to get results in the test portal.
Maintenance involves remediating vulnerabilities. BlazeMeter itself was not vulnerable. A Log4j vulnerability came out in December last year. BlazeMeter was pretty quick to respond. We quickly worked with our cyber team and service management teams. We were happy that it wasn't vulnerable because JMeter is used in BlazeMeter. JMeter uses Java, and the older version of JMeter has Log4j binaries. We weren't using those versions.
Our cyber team's direction was to get rid of those binaries if we weren't using them. The BlazeMeter team didn't have that policy, but they understood our stance. To address the risk, they upgraded and removed all of the old versions of JMeter from the platform.
We have auto updates enabled in our system for SaaS and on-premises. Maintenance is very light for us because of the auto-update feature. We have a small team for maintenance, but we're focusing more on addressing the customer knowledge gap because new teams and people are using BlazeMeter within our bank now.
What was our ROI?
We're already seeing a return on investment.
We had a perpetual license for LoadRunner, but the annual maintenance cost was going to increase to the point where we could get BlazeMeter for the same annual cost. With BlazeMeter, we're saving around $50,000 annually on infrastructure costs by reducing our server count by 50.
There's always the opportunity cost because the on-premises LoadRunner infrastructure had limited scope. We could never scale. Now, we're able to scale more and generate more load. For LoadRunner, we couldn't generate load for our SaaS instances or do geolocation testing. If we had to do that, we would have easily spent around $200,000 to $300,000.
With BlazeMeter, our return on investment for the opportunity cost is between $300,000 to $400,000.
What's my experience with pricing, setup cost, and licensing?
It's consumption-based pricing but with a ceiling. They're called CVUs, or consumption variable units. We can use API testing, GUI testing, and test data, but everything gets converted into CVUs, so we are free to use the platform in its entirety without getting bogged down by a license for certain testing areas. We know for sure how much we are going to spend.
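As a rough sketch of how such a consumption model with a ceiling behaves, the following toy calculation converts mixed test activity into a single CVU total and caps spend. The conversion rates, activity names, and ceiling are all invented for illustration, not BlazeMeter's actual pricing:

```python
# Hypothetical model of consumption-based pricing with a ceiling.
# CVU rates and usage numbers below are made up for illustration only.

CVU_RATES = {                      # assumed CVUs consumed per unit of activity
    "performance_test_hour": 10,
    "gui_test_run": 2,
    "api_test_run": 1,
}

def cvus_consumed(usage: dict) -> int:
    """Convert mixed test activity (performance, GUI, API) into one CVU total."""
    return sum(CVU_RATES[kind] * count for kind, count in usage.items())

def billable_cvus(usage: dict, ceiling: int) -> int:
    """Spend is predictable because billable consumption is capped at the ceiling."""
    return min(cvus_consumed(usage), ceiling)

month = {"performance_test_hour": 40, "gui_test_run": 300, "api_test_run": 500}
print(cvus_consumed(month))         # 40*10 + 300*2 + 500*1 = 1500
print(billable_cvus(month, 1200))   # capped at the assumed 1200-CVU ceiling
```

The appeal described above is exactly this shape: any mix of testing types draws from one pool, and the ceiling makes the maximum annual spend known in advance.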
There's one additional component: dedicated IPs. That was added for us because, for SaaS and cloud-based applications, we wanted to use the SaaS load engines without the traffic coming from arbitrary internet IPs. Dedicated IPs are expensive, so they are charged separately.
Which other solutions did I evaluate?
We looked at about seven other solutions, including SmartBear, LoadNinja, and JMeter.
What other advice do I have?
I would rate this solution a 10 out of 10.
I would recommend this solution, though it depends on the need. My advice is that you can start using the tool immediately; there's no need to do POCs.
The intention of this tool is that people should uncover more and more issues in their testing. That means people should be doing more testing in an automated fashion, or they should simply give a command to BlazeMeter and have it execute the test cases and give them insight into whether something is wrong.
We want people to do more automation testing and move away from manual testing. That is the success criteria. That means that we are spending more on BlazeMeter, but that's a sign that we're doing more automation. Our operational expenditure isn't increasing because we still have a team to manage the BlazeMeter account and the on-premises setup.
Our intention isn't to save time. The ultimate goal is to increase the velocity and improve quality, which will happen if we uncover more defects early.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.