Lalit Parkale
Senior Product Owner at a financial services firm with 10,001+ employees
Real User
Nov 3, 2022
Saves us $50,000 annually on infrastructure costs, increases delivery velocity, and improves productivity
Pros and Cons
  • "It's a great platform because it's a SaaS solution, but it also builds the on-premises hosting solutions, so we have implemented a hybrid approach. BlazeMeter sets us up for our traditional hosting platforms and application stack as well as the modern cloud-based or SaaS-based application technologies."
  • "The seamless integration with mobiles could be improved."

What is our primary use case?

This solution is a big strategic piece for us. We wanted to replace the legacy performance testing capability with BlazeMeter's performance testing capability. That was our first use case. Now, it's a more strategic platform with GUI testing, API testing, mock services, service virtualization capability, and test data capabilities. We aren't using everything at the moment, but that is the strategic intent.

We faced some challenges in getting multiple teams to adopt BlazeMeter. This was a big transformation for us. We used LoadRunner for 10 years, so changing to BlazeMeter was definitely a bit challenging. Organizational change management was involved. We were able to use other online resources to learn how to use BlazeMeter. There was resistance from some teams.

This solution is used by 24 teams across 13 divisions. About 75 engineers use BlazeMeter directly, but overall usage is higher. This is a hybrid deployment.

How has it helped my organization?

The range of test tools that BlazeMeter provides is amazing. There are more than 18 tools, which gives us freedom of choice.

It's very important to us that BlazeMeter is a cloud-based and open-source testing platform. It's critical for our organization because we are increasingly moving from on-premises application hosting to cloud-native hosting.

BlazeMeter has definitely improved the productivity of our organization, especially because of the integration features. Engineering productivity has improved because people are able to use the tool of their choice. This increases our delivery velocity.

From the operational-benefits perspective, we save infrastructure costs because we don't have to host this massive product on infrastructure. We also save operational costs. We don't need a big team because it's a SaaS platform.

What is most valuable?

It's a great platform because it's a SaaS solution that also offers on-premises hosting, so we have implemented a hybrid approach. BlazeMeter covers our traditional hosting platforms and application stack as well as our modern cloud-based and SaaS-based application technologies.

The solution is completely built on an open-source stack. For performance testing, we use JMeter, and there's flexibility in choosing Gatling, Locust, Taurus, or other open-source technologies. We're able to attract good talent in the market. Engineers like open-source because it's lightweight, accessible, and quick. That's been a strong point.

We integrated user access management, so it's easy for consumers to actually use it. It has great reporting features and integrations. It can connect to AppDynamics, Dynatrace, and Splunk. 

Another great feature is that it meets the various maturity levels in our organization. We still have manual-based testing, and there are some teams that are very engineering and code focused. BlazeMeter helps meet all those maturity levels.

For example, a manual tester who wants to get into automation can use the scriptless feature. Even business people can use the record and playback function and record the business process. That is captured into JMeter and Selenium scripts, and they can continue executing that.

The solution enables the creation of test data that can be used both for the performance and functional testing of any application. Currently, we aren't using the test data feature in BlazeMeter.

It took us a year to realize the benefits because we had to do the design work and the network enablement piece for teams to start using it at that scale.

BlazeMeter helps bridge Agile and CoE teams. We define CoE as the center of enablement, not a center of excellence. We don't have central teams. We use the hub and spoke model. The hub is basically the central enablement team. We provide BlazeMeter as a service in the bank, and we manage, maintain, and govern it, but individual teams have federated autonomy.

The solution helps us implement shift-left testing. We're still in that stage, and we have various maturity levels in our organization. We have between 6,000 and 7,000 engineers. Out of that, around 2,000 are manual testers. The maturity level across those many thousands of engineers is varied. Some teams have definitely embedded shift left, and BlazeMeter is good at that. They can use YAML files and start shifting left. That means the developers are able to have YAML definitions in their code to do smaller performance load tests.
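
As an illustration, here is a minimal Taurus-style YAML sketch of such a definition (the endpoint, numbers, and threshold are hypothetical, not taken from our pipelines):

```yaml
# smoke-load.yml - a small load test a developer can keep next to the code
execution:
- concurrency: 10        # simulated users
  ramp-up: 30s
  hold-for: 2m
  scenario: api-smoke

scenarios:
  api-smoke:
    requests:
    - url: https://test-env.example.com/api/orders   # hypothetical endpoint
      method: GET

reporting:
- module: passfail
  criteria:
  - avg-rt>500ms for 30s, stop as failed   # fail fast if latency degrades
```

Running `bzt smoke-load.yml` in a pipeline step executes the test and returns a pass/fail exit code, which is what makes this practical inside a sprint.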

We use the solution's scriptless testing functionality. We have many testers who use scriptless testing now. The record and playback function is also one of the key aspects.

The manual testers are definitely getting more confident that they can start moving toward automation. People are finding that the existing test automation helps to build their test cases quicker. They struggled with JMeter as a tool. They had to learn various nuances. With scriptless testing, recording, and playback, they don't have to worry about that.

BlazeMeter definitely decreased our test cycle times. During each cycle, we're saving between one and two hours. We enabled integration between BlazeMeter and AppDynamics, so people don't have to log into multiple tools to do their analysis. BlazeMeter provides a single pane of glass for the analysis.

It essentially saves days in the sprint because previously they would execute a test, then go into AppDynamics, SCOM, or the IIS logs. To fetch the IIS logs, they would have to wait for the operations team to give them access.

What needs improvement?

The seamless integration with mobiles could be improved. Right now, they have the UI testing capability, which provides the browsers. They made Safari available, which is amazing, so we're able to test on Chrome, Firefox, and Safari. We want the capability to test Chrome on Windows, on macOS, and on Android and iOS.

For how long have I used the solution?

I have used this solution for more than two years.

What do I think about the stability of the solution?

The solution is quite stable. There was an incident a couple of months ago that impacted us: they did an upgrade, and we weren't able to execute load for four hours.

What do I think about the scalability of the solution?

It's absolutely scalable. We have plans to increase usage.

How are customer service and support?

Customer support is excellent. I'm very impressed with them, given the kind of queries we're raising. Usage has also gone up from the initial 100 users to over 800 users. Support responds within two hours; they're based in Israel and the US, so coverage is good.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We previously used an HP product, LoadRunner, for performance testing.

We chose BlazeMeter because of the open-source technology. LoadRunner Performance Center is a proprietary tool. We had specialized engineers just on LoadRunner. They were expensive and difficult to get in the market. Key man dependency was a risk, so we wanted a platform with flexible tools and scalability. We also wanted a future-proof solution that would still be useful for the existing traditional tool set.

How was the initial setup?

SaaS account creation was very easy, and so was workspace creation. It's self-service, so those aspects were simple. Even the on-premises deployment is all Docker-based and pretty advanced. It supports clustering, so having a cluster makes it easier; we didn't have an on-premises clustering solution like Kubernetes, so we went with the bare-bones Docker image implementation. We didn't have to do a lot of engineering because it's all self-service.

It took one year for our internal design to be done, approved, and implemented. The design involved allowing all of the connections from BlazeMeter, acting as a load engine sitting on non-production infrastructure, to our applications. The network connectivity work took one year. It was a massive implementation. We have 16 platforms, which can be considered mini business units: securities, treasury, retail banking, internal corporate services, and so on.

The SaaS deployment took one day. They created the account and gave us access. My team was able to create workspaces. The on-premises deployment took a few hours. We went through all the connectivity and design. To complete the on-premises setup, we had to run a bunch of commands. Running the commands was easy and quick, but downloading hundreds of GBs of images took hours.
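
To give a sense of what "a bunch of commands" means in practice, the agent install reduces to a few Docker commands along these lines (the image tag and the token/ID values here are placeholders; the exact command is generated by the BlazeMeter portal when you create a private location, and flags may differ by version):

```bash
# Pull the on-premises agent image (the supporting test-runner images
# are what added up to hundreds of GBs in our case)
docker pull blazemeter/crane:latest

# Start the agent; it registers this machine with the SaaS portal as a
# private load engine and then manages test containers locally.
docker run -d --name bzm-agent \
  -e AUTH_TOKEN="<token-from-portal>" \
  -e HARBOR_ID="<private-location-id>" \
  -e SHIP_ID="<agent-id>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  blazemeter/crane:latest
```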

Two engineers were involved in the deployment. For the on-premises deployment, their role was to follow the instructions to complete the setup. After that, they had to verify if the setup was correct and then do end-to-end verification. A test was created in the SaaS portal, and we could choose the on-premises location and execute it to get results in the test portal.

Maintenance involves remediating vulnerabilities. When the Log4j vulnerability came out in December last year, BlazeMeter was pretty quick to respond, and BlazeMeter itself was not vulnerable. We quickly worked with our cyber team and service management teams. We were happy that it wasn't vulnerable: JMeter is used in BlazeMeter, JMeter uses Java, and older versions of JMeter ship Log4j binaries, but we weren't using those versions.

Our cyber team's direction was to get rid of those binaries if we weren't using them. The BlazeMeter team didn't have that policy, but they understood our stance. To address the risk, they upgraded and removed all of the old versions of JMeter from the platform. 

We have auto updates enabled in our system for SaaS and on-premises. Maintenance is very light for us because of the auto-update feature. We have a small team for maintenance, but we're focusing more on addressing the customer knowledge gap because new teams and people are using BlazeMeter within our bank now.

What was our ROI?

We're already seeing a return on investment.

We had the perpetual license for LoadRunner, but the annual maintenance cost was going to increase to the point where we could get BlazeMeter for the same annual cost. With BlazeMeter, we're saving around $50,000 annually on infrastructure costs by reducing our server count by 50.

There's always the opportunity cost because the on-premises LoadRunner infrastructure had limited scope; we could never scale. Now, we're able to scale more and generate more load. With LoadRunner, we couldn't generate load against our SaaS instances or do geolocation testing. If we had to do that, we would have easily spent around $200,000 to $300,000.

With BlazeMeter, our return on investment for the opportunity cost is between $300,000 to $400,000.

What's my experience with pricing, setup cost, and licensing?

It's consumption-based pricing but with a ceiling. They're called CVUs, or consumption variable units. We can use API testing, GUI testing, and test data, but everything gets converted into CVUs, so we are free to use the platform in its entirety without getting bogged down by a license for certain testing areas. We know for sure how much we are going to spend.

There's one additional component of dedicated IP. That was added for us because for SaaS and cloud-based applications, we wanted to use the SaaS but not the whole internet. Dedicated IPs are expensive, so that is charged separately.

Which other solutions did I evaluate?

We looked at about seven other solutions, including SmartBear, LoadNinja, and JMeter.

What other advice do I have?

I would rate this solution a 10 out of 10.

I would recommend this solution for those who want to use it, but it depends on the need. My advice is that you can start using the tool immediately. There's no need to do POCs.

The intention of this tool is that people should uncover more and more in their testing. That means people should be doing more testing in an automated fashion, or they should just give a command to BlazeMeter, and it should execute test cases and give them some insight into whether something is wrong.

We want people to do more automation testing and move away from manual testing. That is the success criteria. That means that we are spending more on BlazeMeter, but that's a sign that we're doing more automation. Our operational expenditure isn't increasing because we still have a team to manage the BlazeMeter account and the on-premises setup.

Our intention isn't to save time. The ultimate goal is to increase the velocity and improve quality, which will happen if we uncover more defects early.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Bala Maddu
Mobile Network Automation Architect at a comms service provider with 10,001+ employees
MSP
Aug 1, 2022
Reduced our test operating costs, provides quick feedback, and helps us understand how to build better test cases
Pros and Cons
  • "The on-the-fly test data improved our testing productivity a lot. The new test data features changed how we test the applications because there are different things we can do. We can use mock data or real data. We can also build data based on different formats."
  • "Version controlling of the test cases and the information, the ability to compare the current version and the previous version within Runscope would be really nice. The history shows who made the changes, but it doesn't compare the changes."

What is our primary use case?

We use this solution for testing. When it comes to 5G, there are loads of changes because we're trying to build the first 5G core network with the standalone architecture. Everything is based on APIs and API-based communication using the new HTTP/2 protocol. As we build the core network, we constantly change and tweak it.

When it comes to testing, whether it's with Postman or any other tool, normally we run the test, make sure it works, and then move on. I was pretty impressed with [Runscope] because we can keep the test running 24/7 and are able to see feedback at any time.

A proper feedback loop is enabled through their graphical user interface. We can add loads of validation criteria. As a team, if we make changes and something fails on the core service, we can actually find it. 

For example, we had a security patch that was deployed on one of the components. [Runscope] immediately identified that the network mode failed at that API layer. The monitoring capability allows us to provide fast feedback. 

We can also trigger it with Jenkins Pipelines. We can integrate it into our DevOps tooling quite easily, and they have webhooks. The validation criteria are quite simple. Most of the team love it, and the stakeholders love the feedback loop as well. They can look at it, run it, and see what's happening.
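
As a sketch of that kind of pipeline hook: a CI stage only needs a single HTTP call. The trigger ID below is a placeholder for the per-test trigger URL that [Runscope] exposes, and the `env` variable is just an illustration of passing initial variables.

```bash
# Kick off the API test suite from Jenkins (or any CI tool) on demand.
# Initial variables can be passed as query parameters.
curl -s "https://api.runscope.com/radar/<trigger-id>/trigger?env=sandbox"
```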

The final solution will be across four different locations. The performance will run in a specific location. [Runscope] will run across different locations and test different development environments. At the moment, it's only on two environments. One is a sandbox where we experiment, and one is a real environment where we test the core network.

There are around 10 to 15 people using the application, but some of them only view the results. They're not always checking whether it works or not. We have multiple endpoints.

We use the solution on-premises.

How has it helped my organization?

The on-the-fly test data improved our testing productivity a lot. The new test data features changed how we test the applications because there are different things we can do. We can use mock data or real data. We can also build data based on different formats. 

For example, an IMEI number should be a 15-digit number. If you need various combinations of it, BlazeMeter can do it as long as we provide regular expressions and say, "The numbers should be in this format." Mobile subscriber identities, which are pretty common in the telecom world, are easy. This solution has changed how we test things. Fundamentally, it helped us a lot.
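
As a rough sketch of the generation logic involved, in plain Python rather than BlazeMeter's own syntax (the TAC prefix is a made-up value): a 15-digit IMEI is a 14-digit body plus a Luhn check digit.

```python
import random

def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for a string of digits."""
    total = 0
    # Walk the body right to left, doubling every second digit.
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def random_imei(tac: str = "35684810") -> str:
    """8-digit TAC (made-up value) + 6-digit serial + check digit = 15 digits."""
    serial = f"{random.randrange(10**6):06d}"
    body = tac + serial
    return body + luhn_check_digit(body)

print(random_imei())  # prints a fresh Luhn-valid 15-digit IMEI each run
```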

Most of the test projects I delivered before I moved into automation used to take months. Now, the entire API test run is completed within minutes. Because we look at millisecond latency, the tests themselves don't take long at all; less than a minute.

The moment those tests run on schedule, I don't really need to do anything. I just concentrate on what other tests I can add and what other areas I can think of. 

Recently, I have seen BlazeMeter's other products in their roadmap, and they're really cool products. They use some AI and machine learning to build new API level tests. I don't think it's available to the wider market yet, but there are some really cool features they're developing.

BlazeMeter reduced our test operating costs by quite a lot because normally to do the same level of testing, we need loads of resources, which are expensive. Contractors and specialists are expensive, and offshore is quite expensive. However we do it, we have to spend a lot of time. Running those tests manually, managing the data manually, and updating the data manually take a lot of time and effort. With this project, we definitely save a lot in costs, and we give confidence to the stakeholders.

For previous projects, even smaller ones, we used to charge 100K to 200K for testing. We're using BlazeMeter for massive programs, and the cost is a lot less.

What is most valuable?

Scheduling is the most valuable feature. You can run the test 24/7 and then integrate it into the on-premises internal APIs. How it connects to the internal APIs and how it secures the data is very important for us and definitely helped us.

It enables the creation of test data that can be used both for performance and functional testing of any application. Within the performance module of BlazeMeter, they have a different capability that supports performance testing. We have the performance test run on schedule, which is quite nice. It uses something called the Taurus framework. We built our own containers with the Taurus framework, but we moved to BlazeMeter because of the security vulnerabilities with Log4j. 

They've been more proactive in fixing those, but it was quite hard for us: we did it over five days, while they came back with fixes in two days. We realized that their container solutions are much more secure. At the same time, when it comes to [Runscope], they have yet to add the data-driven approach, but they are really good. They support test data creation in their functional module, but a few improvements could be made to test data management in [Runscope].

The ability to create performance and functional test data that can be used for testing any application is very important to our organization because we're looking at big loads of customers moving onto 5G standalone architecture. We're also looking at Narrowband IoT, machine-to-machine communications, and vehicle-to-vehicle communications. 

All of these require the new low latency tests, so that if we ship a piece of telecom equipment and move the customers onto the new 5G architecture, we can be confident enough to say, "Yes, this works perfectly."

Also, running those tests continuously means we can give assurance to our stakeholders and customers that we can build the applications in a way that can support the load. There are more than 20 million customers in the UK, and there's growing traffic on private networks and on IoT. As the technology shifts, we need to give assurance to our customers.

The ease of test data creation using BlazeMeter is the best part of the solution. I worked with them on the test data creation and how they provided feedback in the early days. It was really good. They have implemented it in the performance and mock services. Originally, we managed the test data in CSVs and then ran it with JMeter scripts. That was good, but the way BlazeMeter creates mocks with regular expressions and test data is quite nice. It reduced some of the challenges we had, and managing some of the data in the cloud is really good.

The features are really cool, and it also shifts the testing to the left because even before you have the software, you can build a mock, build the test cases in [Runscope], and work on different API specifications. Then, you can actually test the application before it is deployed and even before any development. That feedback is quite useful.

BlazeMeter provides the functional module. They provide the performance testing, and it's all based on JMeter, which is really nice. JMeter is an open-source tool. You can upload your JMeter scripts into the performance tab and run off them. It's really brilliant and gives us the ability to run the test from anywhere in the world.

[Runscope] provides the capability to run test cases from different locations across the world, but we use it on-premises, which is quite nice. The reporting capability is really good. When the test fails, it sends a message. When it passes again, it sends a message. We know what's happening. The integration back into Teams is interesting because you can put the dashboard on Teams, which is nice.

It's really important that BlazeMeter is a cloud-based and open-source testing platform because for some of the functionalities, we don't always need to rely on BlazeMeter reporting. Their reporting is really good. Having the ability to use open-source tools means we can also publish it to our internal logging mechanisms. We have done loads of integrations. We also worked with them on developing the HTTP/2 plugin, which is now available open-source. 

The way they have collaborated and how they support open-source tools is really brilliant because that's how we came to know that the JMeter HTTP/2 plugin was provided by BlazeMeter, so we contacted them. We already tried that open-source tool and it was working at that stage. We started off with the mocks, using open API specifications. They also provide free trial versions.

With the shift-left, we build a mock and then start to use [Runscope] to validate those test cases. At that stage, even before the application is deployed, we know we can actually get something moving. When the real application becomes available within that sprint, we already have cases that have been validated against mocks, and we immediately configure them with the real applications and real environment variables. A majority of the time it just works; sometimes we have to update the data, and at that stage we get the test cases to work. The moment we do that, we put it on a 24/7 schedule, every hour or every half an hour, depending on the number of changes we make on the specific nodes. We always know whether or not it works.

This solution absolutely helps us implement shift-left testing. We really started building our core network this year. Last year, it was all about the planning phase. We almost got our APIs and everything automated with the mocks. We started to use the feedback loop and knew which ones worked. We did a lot of work around our own automation frameworks and with [Runscope]. 

We stopped some of the work we did on our own automation frameworks and slowly started to move them into BlazeMeter. We knew that as long as the tool supported it, we would continue with that. If we hit a problem, then we would see. At this stage, a majority of the work is done on the BlazeMeter set of tools, which is really nice because we started off with our own JMeter data framework test.

BlazeMeter competes with the tools we have built in-house, and there's no way we can match their efficiency, which is why we slowly moved to BlazeMeter. The team loves it.

We also use BlazeMeter's ability to build test data on the fly. Sometimes when we run the test, we realize that some of the information has to be changed. I just click on it and it opens in a web interface. I update the number in my columns because the CSV is also displayed as a table. For us, it's a lot easier. We don't have to go back into Excel, open a CSV, manipulate the data, do a git check-in, etc.

I like that the on-the-fly test data meets compliance standards because you get that feedback immediately, and it's not like they're holding the data somewhere else. We can also pull in data from our own systems. It's all encrypted, so it's secure.

Generating reports off BlazeMeter is also quite nice. You can just click export or you can click on executed reports.

What needs improvement?

Overall, it's helped our ability to address test data challenges. The test data features on their own are very good, but version control for test data isn't included yet. I think that's an area for improvement.

We can update the test data on the cloud. That's a good feature. There's also test data management, which is good. [Runscope] doesn't have the test data management yet; mock services do, and performance testing has it. We can do the same test through JMeter, validating the same criteria, but the feedback from [Runscope] is quite visible: we can see the request and the response, what data comes back, and add the validation criteria. We can manage the test environments and test data, but running the same API request against multiple sets of test data is missing. We had to clone the test cases multiple times to run them. They need to work on that.

Version controlling of the test cases and the information, the ability to compare the current version and the previous version within [Runscope] would be really nice. The history shows who made the changes, but it doesn't compare the changes.

In the future, I would like to see integrations with GitLab and external Git repos so we could have some sort of version control outside as well. There is no current mechanism for that. The ability to directly import OpenAPI specifications instead of converting them to JSON would be nice. There are some features they could work on.

For how long have I used the solution?

I have been using this solution for more than a year and a half.

I came across BlazeMeter because I was looking for something around mock services. I was also looking for a product or tool that tests HTTP/2, and eventually HTTP/3, because the 5G core network is built on HTTP/2. I couldn't find a tool other than BlazeMeter that supports it.

I tried to build mock services and tested the solution. Once I was happy, I realized they also have BlazeMeter [Runscope], so I wanted to try it.

What do I think about the stability of the solution?

It's stable. I wouldn't say any application is without bugs, but I haven't seen many. We had issues once or twice, but it was mostly with browser caching. There haven't been any major issues, but there were improvements that could be made in a couple of areas. They were always happy to listen to us. They had their product teams, product owners, and product managers listen to our feedback. They would slowly take the right feedback and try to implement some of the features we wanted. They always ask us, "What is your priority? What will make the best impact for you as a customer?" We give our honest feedback. When we say what we need, they know that many other customers will love it.

They were also really good with the Log4j vulnerabilities. They came back with a fix less than two days after it came out. We had to turn off the services, but it was all good because [Runscope] wasn't immediately impacted. It was the performance container that had some vulnerabilities, because the original JMeter uses some of those Log4j packages. They had to fix the Log4j in JMeter and then update their container.

What do I think about the scalability of the solution?

It's very scalable. The solution is built for scalability. I didn't know that we could even move into this sort of API world. I used to think, "We do those tests like this." JMeter provides its own sort of capability, but with BlazeMeter, there's a wow factor.

We plan to increase coverage as much as possible.

How are customer service and support?

I would rate technical support 10 out of 10.

BlazeMeter absolutely helps bridge Agile and CoE teams. We invited some of the BlazeMeter team to our show-and-tell when we started. They saw our work and were quite happy. We showed them how we build our test cases. They also provided a feedback loop and told us what we could improve in different areas.

We also have a regular weekly call with them to say, "These are the things that are working or not working," and they take that feedback. We'll get a response from them within a few weeks, or sometimes in a few days or a few hours, depending on the issue. If it's a new feature, it might take two or three weeks of additional development. If it's a small bug, they get back to us within hours. If it's a problem on our side, they have somebody on their team for support. I was really surprised to see tools provided to do that because I haven't seen anything like that with other tools. When there's a problem, they respond quickly.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We switched because we started off with a BDD framework that was built in-house. We realized that the number of security vulnerabilities coming off the Docker containers was a risk to us.

We still continue with that work because we had to move toward mutual TLS as well. We have that working at the moment, alongside BlazeMeter. We've tried Postman, but it didn't support HTTP/2 when we looked a year and a half ago.

How was the initial setup?

I did most of the initial setup. We had to go through proxies and more when we connected to it. I currently use the Docker-based deployment because they support Docker and Kubernetes. At the moment, it's deployed in one or two locations: one is a sandbox for experimenting, and one is an actual development site, which is really good.

The initial deployment was very easy. It took a few hours. I initially missed the proxy configuration, but once I did that, it was all good.

We can deploy a mock to build against the application, and if we want to do it on-premises, as long as we have a Linux-based server, we can do it in 15 or 20 minutes. The moment it showed the hundreds of API combinations it would generate, I was a bit shocked, but then I understood what it was doing.

I have a team of engineers who work on the solution and the different APIs that we need to support. I have two engineers who are really good with BlazeMeter. They were part of the virtualization team. There are a few engineers who started off with learning JMeter from YouTube and then did BlazeMeter University.

Most of the time, maintenance is done on the cloud. Because we are behind the proxy, we recently had an automated upgrade fail, and it took down the service. We provided that feedback, so the next time they do automated upgrades, we won't have any issues. Other than that, we haven't had any issues.

What was our ROI?

Since deployment, we use this solution every day. We have seen the value from the beginning because it helped us build our automation frameworks. It helped us understand how we can build better test cases, better automation test cases, how the feedback loop is enabled, etc. 

It's saved us a lot of time. It reduces the overall test intervals. We can run the tests quite quickly and provide confidence to stakeholders. When trying to move toward DevOps and new ways of working, the feedback loops need to be fast. When we deploy a change, we want fast feedback. That's very important, and BlazeMeter allows us to do that.

We know that we can always trigger the test through [Runscope] on demand. At any point in time, it'll give us fast feedback immediately. It's quite easy to integrate with tools like Jenkins and Digital.ai, which is an overall orchestrator.

We tried to go the Jenkins route, but we realized that we didn't even need to do that. The solution provides nice APIs that work with this sort of CI/CD. They have webhooks and different ways of triggering it. They have built-in Jenkins plugins for JMeter, BlazeMeter, etc. They understand how automation frameworks and tools work.

Their Taurus framework, which they built for the open-source community, is quite brilliant on its own, but BlazeMeter offers much more. Although it's built on the Taurus framework, you can still have test levels, you can group tests, etc.

What other advice do I have?

I would rate this solution 10 out of 10. 

We try to avoid scripting. We use the scriptless testing functionality about 95% of the time. With JMeter, you don't need a lot of scripting. I don't need to know a lot of automation or programming at this stage to use it.

We haven't faced any challenges in getting multiple teams to adopt BlazeMeter. I created a sandbox for my own team where they can experiment. People really wanted access to it, so I added more and more people, and the designers are now part of it.

For others who are evaluating this solution, my advice is to do the BlazeMeter University course first before you start to use the product. It will give you a general understanding of what it is. It only takes half an hour to an hour.

You don't always need to finish the course or pass the exam, but doing the course itself will definitely help. They have JMeter basic and advanced courses and a Taurus framework course. They have an API monitoring course, which will help with [Runscope], and one for mocks. Most of the courses are quick videos explaining what the product does and how it works. At that stage, you can go back and build your first automation test case in JMeter or [Runscope]. It's brilliant.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Vice President at a financial services firm with 51-200 employees
Real User
Nov 9, 2023
A tool with good reporting functionalities that needs to be made easier to operate from a programming perspective
Pros and Cons
  • "The most valuable features of the solution stem from the fact that BlazeMeter provides easy access to its users while also ensuring that its reporting functionalities are good."
  • "For a new user of BlazeMeter, it might be difficult to understand it from a programming perspective."

What is our primary use case?

Most of my company's use cases for the tool stem from the needs of our customers. Basically, my company uses the tool to mimic contact-center scenarios. When a customer calls an agent, the tool helps check whether the agent answers the call.

How has it helped my organization?

Basically, my company wanted to use BlazeMeter to act as a trigger for around 105,000 users who communicate with each other. For that, the first option was to choose between NeoLoad and LoadRunner, and the second option was BlazeMeter, which runs on the cloud. With BlazeMeter, it was easy for my company to create a script and then trigger it.

What is most valuable?

The most valuable features of the solution stem from the fact that BlazeMeter provides easy access to its users while also ensuring that its reporting functionalities are good. Users can schedule BlazeMeter runs, especially when building a new application, since it allows them to manage and monitor the performance parameters easily.

What needs improvement?

For a new user of BlazeMeter, it might be difficult to understand it from a programming perspective. BlazeMeter should provide its users with a more seamless programming experience; the tool should be built so that new users can apply it to whatever scenario arises without difficulty. It would also be better if BlazeMeter could handle call scenarios using behavior-driven development, allowing both technical and non-technical people to understand the tool.

The technical support team's turnaround time is high, making it one of the product's shortcomings that requires improvement.

For how long have I used the solution?

I have been using BlazeMeter for three and a half years. I am a user of the solution.

What do I think about the stability of the solution?

Stability-wise, I rate the solution a seven out of ten.

What do I think about the scalability of the solution?

The scalability of BlazeMeter is good. As BlazeMeter allows me to trigger over 100,000 users in a deployment, I consider its scalability to be good.

Scalability-wise, I rate the solution a seven out of ten.

Around five or six people in my company use BlazeMeter.

How are customer service and support?

I rate the technical support a six out of ten.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

I have experience with LoadRunner and TAF.

How was the initial setup?

I rate the setup phase of BlazeMeter a seven and a half on a scale of one to ten, where one is a difficult setup process and ten is an easy setup phase.

BlazeMeter can be deployed in three to four minutes, especially if the scripts and artifacts are ready, as users may only need to push the ready artifacts into their environments to trigger the deployment process.

The solution is deployed on the cloud.

What's my experience with pricing, setup cost, and licensing?

My company has opted for a pay-as-you-go model, so we don't make use of the free version of the product. The pricing part of BlazeMeter is fine, in my opinion. BlazeMeter is not a super expensive product for corporate businesses, considering that the product has evolved into a much more stable software.

Which other solutions did I evaluate?

Against BlazeMeter, my company had evaluated other options like NeoLoad and Visual Studio. Though all the options evaluated by my company were okay products in the market, BlazeMeter offers a more stable product. When using BlazeMeter, my company can get support and figure out areas where we lag through Google. BlazeMeter has a strong customer base.

What other advice do I have?

BlazeMeter offers options like test scope that provide visibility into what a user does. Moreover, it gives users a crystal-clear outline of every step, including what the request was for a particular response.

Considering its load-testing capabilities, I recommend BlazeMeter to those who plan to use it. It's a good tool that anyone can use either in their production environment or before entering the production phase. The tool performs well even with real traffic while providing good scalability options to its users.

I rate the overall tool a seven and a half out of ten.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
AVP at a financial services firm with 10,001+ employees
Real User
Top 5
Oct 15, 2024
Great UI and multitask features with very good support
Pros and Cons
  • "The user interface is good."
  • "The scalability features still need improvement."

What is our primary use case?

We use BlazeMeter for performance testing.

How has it helped my organization?

BlazeMeter helps us to easily scale up products for performance testing and increases the scalability of testing for applications that are outside the corporate network.

What is most valuable?

The user interface is good. The multi-user support and cloud-based testing are nice features.

What needs improvement?

The scalability features still need improvement. They have recently added a dynamic user feature that may enhance scalability, so we should evaluate it. Storage capacity should also be increased.

There is a shared file repository with a limit of 999 files, and each payload is capped at 50 MB. Both limits should be increased. When we run JMeter scripts in BlazeMeter, the BlazeMeter user interface does not recognize the property files we use in JMeter. This needs to be addressed.
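
For context, the mechanism in question is JMeter's standard property syntax (the property name `threads` below is just an example):

```bash
# A property file can hold environment-specific values...
#   env.properties:  threads=50
# ...and is loaded on the JMeter command line:
jmeter -n -t testplan.jmx -q env.properties

# Inside the test plan, values are referenced with a default fallback:
#   ${__P(threads,10)}
```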

For how long have I used the solution?

I have been working with BlazeMeter for five years.

What do I think about the scalability of the solution?

BlazeMeter's scalability features need improvement. They have added the dynamic user feature recently, and we should evaluate this feature for better scalability.

How are customer service and support?

The technical support is very good. I would give them ten out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I have worked with LoadRunner and BlazeMeter simultaneously.

What's my experience with pricing, setup cost, and licensing?

BlazeMeter's pricing is competitive but can be negotiable.

Which other solutions did I evaluate?

I have worked with LoadRunner simultaneously with BlazeMeter.

What other advice do I have?

I'd rate the solution eight out of ten. 

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Ryan Mohan
Quality Assurance Manager at a financial services firm with 10,001+ employees
Real User
Aug 10, 2022
Enterprise performance testing platform that gives us a centralized place to execute load tests, do reporting, and have different levels of user access control
Pros and Cons
  • "The orchestration feature is the most valuable. It's like the tourist backend component of BlazeMeter. It allows me to essentially give BlazeMeter multiple JMeter scripts and a YAML file, and it will orchestrate and execute that load test and all those scripts as I define them."
  • "BlazeMeter needs more granular access control. Currently, BlazeMeter controls everything at a workspace level, so a user can view or modify anything inside that workspace depending on their role. It would be nice if there was a more granular control where you could say, "This person can only do A, B, and C," or, "This user only has access to functional testing. This user only has access to mock services." That feature set doesn't currently exist."

What is our primary use case?

Our primary use case for BlazeMeter is performance testing. We leverage BlazeMeter as our enterprise performance testing platform. Multiple teams have access to it, and we execute all of our load tests with BlazeMeter and do all the reporting through it. We also use it for mock services.

We have a hybrid deployment model. The solution is hosted and maintained by BlazeMeter. We also have on-premises locations within our network that allow us to load test applications behind our corporate firewalls. That's for test environments and non-production applications that are not externally available. It's a hybrid model that is mostly SaaS, but the on-premises component allows us to execute those load tests and report the results back to the BlazeMeter SaaS solution.

The cloud provider is GCP. BlazeMeter also grants access to Azure and AWS locations from which you can execute load tests. They engage with all three major cloud providers.

How has it helped my organization?

BlazeMeter gives us a centralized place to execute load tests, do reporting, and have different levels of user access control. BlazeMeter has a full API, which is the feature that's given us a lot of value. It allows us to integrate with BlazeMeter in our CI/CD pipelines, or any other fashion, using their APIs. It helps increase our speed of testing, our reporting, and our reporting consistency, and gives us a central repository for all of our tests, execution artifacts, and results.

BlazeMeter added a mock services portion. We used to leverage a different product for mock services, and now that's all done within BlazeMeter. Mock services help us tremendously with testing efforts and being able to mock out vendor calls or other downstream API calls that might impact our load testing efforts. We can very easily mock them out within the same platform that hosts our load tests. That's been a huge time saver and a great value add.

BlazeMeter absolutely helps bridge Agile and CoE teams. It gives us both options. BlazeMeter is designed so that we can grant access to whoever needs it. We can grant access to developers and anyone else on an Agile team. It allows us to shift left even farther than a traditional center of excellence approach would allow us.

It absolutely helps us implement shift-left testing. One of the biggest features of shifting left is BlazeMeter's full, open API. Regardless of the tools we're leveraging to build and deploy our applications, we can integrate them with BlazeMeter, whether that's Jenkins or some other pipeline technology. Because BlazeMeter has a full API, it lets us start tests, end tests, and edit tests. If we can name it, it can be done via the API. It tremendously helps us shift left, run tests on demand, and test code builds.
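
A sketch of what one of those calls can look like from a pipeline; the test ID is a placeholder, and this assumes BlazeMeter's v4 REST endpoints with API-key basic auth:

```bash
# Start a test run from CI; the JSON response includes a run (master) ID
# that can then be polled for status and results.
curl -s -X POST \
  -u "$BZM_API_KEY_ID:$BZM_API_KEY_SECRET" \
  "https://a.blazemeter.com/api/v4/tests/<test-id>/start"
```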

Overall, using BlazeMeter decreased our test cycle times, particularly because of the mock service availability and the ease with which we can stand up mock services, or in the case of an Agile approach, our development teams can stand up mock services to aid them in their testing.

It's fast, and the ability to integrate with pipelines increases our velocity and allows us to test faster and get results back to the stakeholders even quicker than before.

The overall product is less costly than our past solutions, so we've absolutely saved money.

What is most valuable?

The orchestration feature is the most valuable. It's like the Taurus backend component of BlazeMeter. It allows me to essentially give BlazeMeter multiple JMeter scripts and a YAML file, and it will orchestrate and execute that load test and all those scripts as I define them.
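
A minimal sketch of such a YAML file (the script names and numbers are placeholders), pointing the Taurus backend at two existing JMeter scripts so they run as one orchestrated load test:

```yaml
# orchestrate.yml - run two JMeter scripts together as one load test
execution:
- scenario: checkout-flow
  concurrency: 50
  hold-for: 10m
- scenario: search-flow
  concurrency: 100
  hold-for: 10m

scenarios:
  checkout-flow:
    script: checkout.jmx   # placeholder JMeter scripts
  search-flow:
    script: search.jmx
```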

The reporting feature runs parallel with orchestration. BlazeMeter gives me aggregated reports, automates them, and allows me to execute scheduled tests easily on my on-premise infrastructure.

BlazeMeter's range of test tools is fantastic. BlazeMeter supports all sorts of different open-source tools, like JMeter and Gatling, and different WebDriver options and script formats, like Python and YAML. If it's open-source, BlazeMeter supports it for the most part.

It's very important to me that BlazeMeter is a cloud-based and open-source testing platform because, from a consumer perspective, I don't have to host that infrastructure myself. Everything my end users interact with in the front-end UI is SaaS and cloud-based. We don't have to manage and deploy all of that, which takes a lot of burden off of my company.

The open-source testing platform is fantastic. They support all of the open-source tools, which gives us the latest and greatest that's out there. We don't have to deal with proprietary formats. A secondary bonus of being open-source and so widely used is that there is a tremendous amount of help and support for the tools that BlazeMeter supports.

What needs improvement?

BlazeMeter needs more granular access control. Currently, BlazeMeter controls everything at a workspace level, so a user can view or modify anything inside that workspace depending on their role. It would be nice if there was a more granular control where you could say, "This person can only do A, B, and C," or, "This user only has access to functional testing. This user only has access to mock services." That feature set doesn't currently exist.

For how long have I used the solution?

I have used this solution for almost five years.

What do I think about the stability of the solution?

The stability has absolutely gotten better over the years. They had some challenges when they initially migrated the platform to GCP, but most of those were resolved. Overall, they have very high availability for their platform. If there's an issue, they have a status page where they publish updates to keep customers in the loop. 

If you email their support team or open a ticket through the application, they're always very quick to respond when there's a more global uptime issue or something like that. Overall, they have very high availability.

How are customer service and support?

Technical support is absolutely phenomenal. I've worked with them very closely on many occasions. Whether it's because we found a bug on their side, or an issue we're having with our on-premises infrastructure, they're always there, always willing to support, and are very knowledgeable.

I would rate technical support as nine out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We previously used HP Performance Center, and we used HP Virtual User Generator as a predecessor to JMeter for our scripting needs.

We switched because it's a very outdated tool and toolset. BlazeMeter is a more modern solution. It supports many more tools, and it allows us to solve problems that were blocked by the old solution. 

The BlazeMeter platform is designed for CI/CD: it's continuous-integration- and continuous-delivery-friendly, Agile-friendly, and it supports all of the modern software development methodologies.

Our old solution didn't really cooperate with that. It didn't have the API or any of the test data functionality that we've talked about with generating or pulling test data. It didn't have any of the mock services. BlazeMeter gave us the kind of one-stop-shop option that allows us to accelerate our development and velocity within our Agile space.

How was the initial setup?

From my company's side, I'm the "owner" of BlazeMeter. I worked with a support team to set up the on-premises infrastructure. I still work with them.

Deployment was straightforward and simple. We pulled some Docker images and deployed them. The whole on-premises deployment methodology is containerized, whether it's standalone servers running Docker or a Kubernetes deployment, which allows you to deploy on-premises BlazeMeter agents through a Kubernetes cluster in your own GCP environment or in an on-premises Kubernetes environment.

What about the implementation team?

We worked directly with BlazeMeter.

Which other solutions did I evaluate?

We evaluated Load.io and a couple of other solutions. When we brought on BlazeMeter five years ago, they were absolutely the leader in the pack, and I believe they still are. They have a much more mature solution and an enterprise feel. The whole platform is much more developed and user-friendly than some of the other options we evaluated. 

I don't know if there are any features in other platforms that BlazeMeter didn't have; it was mostly the other way around. There were things BlazeMeter had that other platforms didn't have, and existing relationships with the company that used to own BlazeMeter, Broadcom.

What other advice do I have?

I would rate this solution an eight out of ten. 

It's a fantastic solution and can do so many things. But unless you have a team that's already very experienced with JMeter and BlazeMeter, there will be some ramp-up time to get people used to the new platform. Once you're there, the features and functionality of BlazeMeter will let you do things that were absolutely not feasible on your previous platforms.

We don't really leverage the actual test data integration and creation functionality, but we leverage some of the synthetic data creation. BlazeMeter will let you synthetically generate data for load tests, API, or mock services. We have leveraged that, but we have not leveraged some of the more advanced functionality that ties in with test data management.

The ability to create both performance and functional testing data is not very important to us. A lot of the applications we test are very data-dependent and dependent on multiple downstream systems. We don't leverage a lot of the synthetic data creation, as much as some other organizations might.

We don't extensively use BlazeMeter's ability to build test data on-the-fly. We use it to synthetically generate some test data, but a majority of our applications rely on existing data. We mine that in the traditional sense. We don't generate a lot of synthetic test data or fresh test data for each execution.

BlazeMeter hasn't directly affected our ability to address test data challenges. We don't directly leverage a lot of the test data functionality built into BlazeMeter, but we're trying to move in that direction. We have a lot of other limitations on the consumer side that don't really let us leverage that as much as we could. It certainly seems like a great feature set that would be very valuable for a lot of customers, but so much of our testing is done with existing data.

We haven't had any significant challenges with getting our teams to adopt BlazeMeter. There were just typical obstacles when trying to get people to adopt anything that's new and foreign to them. Once most of our users actually spent time using the platform, they really enjoyed it and continued to use it. 

There were no significant hurdles. Their UI is very well-designed and user-friendly. Perforce puts a lot of effort into designing its features and functionalities to be user-friendly. I've participated in a few sessions with them on upcoming features and wireframes of new functionalities.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Google
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Test Lead at a non-profit with 10,001+ employees
Real User
Oct 27, 2023
Provides the virtual devices you need for realistic testing
Pros and Cons
  • "BlazeMeter's most valuable feature is its cloud-based platform for performance testing."
  • "The only downside of BlazeMeter is that it is a bit expensive."

What is our primary use case?

I use BlazeMeter for web app performance testing. It helps me test web apps, APIs, databases, and mobile apps.

What is most valuable?

BlazeMeter's most valuable feature is its cloud-based platform for performance testing. It means you don't have to worry about having your own devices or servers when testing web applications, as BlazeMeter provides the virtual devices you need for realistic testing.

What needs improvement?

The only downside of BlazeMeter is that it is a bit expensive.

For how long have I used the solution?

I have been using BlazeMeter for three years.

What do I think about the stability of the solution?

BlazeMeter has been stable without downtime, and any performance issues are usually linked to the tested application, not BlazeMeter.

What do I think about the scalability of the solution?

The product is fairly scalable.

How are customer service and support?

BlazeMeter's tech support team has been excellent, providing helpful and responsive assistance through chat and email whenever we needed it. I would rate them as a nine out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I have previously used LoadView. It is pricier and has its own scripting tool, but it is better in some respects: while BlazeMeter primarily uses emulators for testing, LoadView utilizes actual devices and browsers, particularly for web applications.

How was the initial setup?

The initial setup is not too complex. It mainly involves configuring IP addresses and server communication, but it is a basic process similar to other tools.

What's my experience with pricing, setup cost, and licensing?

BlazeMeter is more affordable than some alternatives on the market, but it is still expensive.

What other advice do I have?

I would recommend giving BlazeMeter a try because they offer competitive pricing, and you can negotiate discounts. BlazeMeter is more affordable than other products on the market, but it uses emulators instead of actual devices, which may be acceptable depending on your testing needs and budget. Additionally, it allows you to carry over unused virtual users to the next subscription, which can accumulate and save you money. Overall, I would rate BlazeMeter as an eight out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
QA Automation Engineer with 201-500 employees
Real User
Jun 7, 2022
The action groups allow us to reuse portions of our test and update multiple tests at once
Pros and Cons
  • "The feature that stands out the most is their action groups. They act like functions or methods and code, allowing us to reuse portions of our tests. That also means we have a single point for maintenance when updates are required. Instead of updating a hundred different test cases, we update one action group, and the test cases using that action group will update."
  • "The performance could be better. When reviewing finished cases, it sometimes takes a while for BlazeMeter to load. That has improved recently, but it's still a problem with unusually large test cases. The same goes for editing test cases. When editing test cases, it starts to take a long time to open those action groups and stuff."

What is our primary use case?

We have a couple of use cases for BlazeMeter. One is performance testing. It allows us to aggregate the execution and reporting of our performance tests. We can also create automated functional tests relatively quickly compared to writing tests in a coded platform like Java.

Around 20 people in the QA department are using BlazeMeter to test Mendix-based applications. We're doing regression testing on 22 applications, and we have at least two environments that we interact with regularly: a development environment and a pre-production environment.

How has it helped my organization?

Before BlazeMeter, we didn't have a performance test aggregator. We were running one-off JMeter tests that weren't stored in a repository. JMeter can generate some reporting, but it's nowhere near as nice as what BlazeMeter provides, and BlazeMeter's reporting is more readily understood by the development teams we work with and by management. That part is great.

We initially purchased the tool for performance testing, but we discovered that we had access to functional testing, so we started using that. That's been great for a lot of the same reasons. It increases visibility and gets everybody on the same page about which tests can run and the status of our regression and functional tests.

BlazeMeter can create test data for performance and functional testing. We don't have much use for that currently, but I could see that being useful for individual functional tests in the future. It's nice to have automatic data generation for test cases.

We haven't used BlazeMeter for shift-left testing. The functional testers embedded with the sprint teams don't do automation. That's all kicked down the road, and the automation is done outside of the sprint. While there is a desire to start attacking things that way, it never really got any traction.

I believe BlazeMeter has also reduced our test times, but I can't quantify that.

BlazeMeter has helped with our test data challenges. The implementation is great, and I don't want to detract from that, but our applications have some custom quirks, and we work on a different platform than many other people do, so it hasn't been as beneficial for us as it probably would be for many other organizations.

What is most valuable?

The feature that stands out the most is their action groups. They act like functions or methods in code, allowing us to reuse portions of our tests. That also means we have a single point of maintenance when updates are required. Instead of updating a hundred different test cases, we update one action group, and the test cases using that action group will update.
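
Action groups are built in BlazeMeter's scriptless UI rather than in code, but the maintenance benefit is the same one you get from extracting a shared helper in coded tests. A rough Java/Selenium analogy, with hypothetical names and locators:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Analogy only: an action group behaves like this shared helper. Many
// tests can call login(); when the login flow changes, editing this one
// method updates all of them, which is the single point of maintenance
// described above.
public final class LoginActions {
    private LoginActions() {}

    public static void login(WebDriver driver, String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }
}
```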

The process is pretty straightforward. You can enter data into spreadsheets or use their test data generation feature. You can create thousands of data points if you want. We aren't currently using it to create that much data, but it could easily be used to scale to that. The solution includes a broad range of test tools, including functional tests, performance tests, API testing, etc. They're continuously expanding their features. 

I also like that it's a cloud-based solution, which gives me a single point of execution and reporting. That's great because we can take links to executed test cases and send those to developers. If they have questions, the developers can follow that link to the test and duplicate it or run the test for themselves.

A cloud solution can be a little slower than an on-premises client or maintaining test cases locally on our machines. However, local maintenance has its own issues: sometimes people mess up when pushing the latest changes to the repository. That's not a problem with BlazeMeter because we do all the work in the cloud.

Out of all the functional testing features, scriptless testing has been the standout piece for my team because it's cloud-based. It's easy for everybody to pick up the navigation, and it's pretty intuitive. There's a recorder already built in, so it's easy to get started writing test cases with scriptless testing.

BlazeMeter's object repository provides a single point of update for us with regard to locators or selectors for our web elements. It's the same with the action groups. It's incredibly valuable to have reusable action groups that give us a single point for maintenance. It saves a ton of maintenance time.
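
The object repository itself is a UI feature, but the concept mirrors the page-object pattern familiar from coded Selenium suites: keep every locator in one place so a changed selector means a single edit. A minimal Java sketch, with hypothetical locators:

```java
import org.openqa.selenium.By;

// Analogy only: like BlazeMeter's object repository, this class is the one
// place locators live. If the checkout button's id changes, only this
// constant is updated, and every test that references it picks up the fix.
public final class CheckoutLocators {
    private CheckoutLocators() {}

    public static final By CART_LINK = By.cssSelector("a.cart");
    public static final By CHECKOUT_BUTTON = By.id("checkout");
    public static final By ORDER_TOTAL = By.xpath("//span[@class='total']");
}
```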

What needs improvement?

The performance could be better. When reviewing finished cases, it sometimes takes a while for BlazeMeter to load. That has improved recently, but it's still a problem with unusually large test cases. The same goes for editing test cases. When editing test cases, it starts to take a long time to open those action groups. 

For how long have I used the solution?

We've been using BlazeMeter for a little more than a year now.

What do I think about the stability of the solution?

BlazeMeter is pretty solid. The only complaint is performance. When we get massive tests, we run into some issues.

What do I think about the scalability of the solution?

We've never had issues with scalability. We've got hundreds of tests in BlazeMeter now, and we haven't had a problem aside from some performance problems with reporting. 

How are customer service and support?

I rate BlazeMeter support ten out of ten. The BlazeMeter team has been fantastic. Anytime we need something, they're always on it fast. We have regular meetings with the team where we have an opportunity to raise issues, so they help us find solutions in real-time. That's been great.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We were previously using Java and Selenium. We implemented BlazeMeter for the performance testing. When we discovered the functional test features, it was easy to pick up and start using. It was an accident that we stumbled into. Our use grew out of an initial curiosity of, "Let's see if we can create this test." And, "Oh, wow. That was really quick and easy." And it grew from there into a bunch more tests.

How was the initial setup?

Our DevOps team did all the setup, so I wasn't involved. We have faced challenges getting our functional test teams to engage with BlazeMeter. They don't have automation experience, so they're hesitant to pick it up and start using it. We've made a couple of attempts to show them how to get started with scriptless testing, but the incentive hasn't been strong enough. Generally, it's still the regression team that handles the automation with BlazeMeter, as well as whatever else we're using.

After deployment, we don't need to do much maintenance. Sometimes, we have to update test cases because they break, but BlazeMeter itself is low-maintenance.

What was our ROI?

We've seen a return. I don't know exactly how many test cases are in BlazeMeter now, but we've added quite a few functional test cases. It's also the tool our performance testing runs through right now, in conjunction with JMeter.

What's my experience with pricing, setup cost, and licensing?

I can't speak about pricing. My general evaluation isn't from that standpoint. I make the pitch to the leadership, saying, "I think we should get this," and somebody above me makes a decision about whether we can afford it.

Which other solutions did I evaluate?

We looked at other solutions for performance testing, not functional testing. 
A few points about BlazeMeter stood out. One was BlazeMeter's onboarding team. They seemed more helpful and engaged. We had a better rapport with them initially, and their toolset integrated well with JMeter, the solution we were already using. It's also a much more cost-effective solution than the other options.

What other advice do I have?

I rate BlazeMeter nine out of ten. There's still some room to grow, but it's a pretty solid product. If you're comparing this to other tools and thinking about using BlazeMeter for functional testing, take a look at the action groups, object repository, and test data generation features. Those three things make your day-to-day work a lot easier and simplify creating and maintaining your tests.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Ramandeep S - PeerSpot reviewer
Director of Quality Engineering at a manufacturing company with 501-1,000 employees
Real User
Nov 29, 2023
The shareability of resources allows multiple people to access the same scripts across different environments
Pros and Cons
  • "The extensibility that the tool offers across environments and teams is valuable."
  • "The tool fails to offer better parameterization to allow it to run the same script across different environments, making it a feature that needs a little improvement."

What is our primary use case?

My company started using BlazeMeter because we wanted parallel runs and broader adoption across teams, with better reporting. BlazeMeter doesn't do anything on its own; it runs the same scripts used in JMeter. It serves as an orchestration tool, and better-organized testing, parallel testing, better reporting, and ease of use for developers were the factors that led my company to opt for BlazeMeter.

What is most valuable?

The most valuable features are the workspace and the shareability of resources, which allow multiple people to access the same scripts and use them in different environments. The extensibility that the tool offers across environments and teams is valuable.

What needs improvement?

The tool could offer better parameterization so that the same script can run across different environments; this is a feature that needs a little improvement. The tool should offer more ease of use across environments.
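
One general workaround: plain JMeter scripts can be parameterized with properties, since ${__P(base_url,https://test.example.test)} reads a value passed on the command line with -Jbase_url=..., letting one script target different environments. The Java sketch below shows the same pattern with system properties; the property names are hypothetical, and this is a generic technique, not a BlazeMeter feature.

```java
// Sketch of the cross-environment parameterization pattern: resolve
// environment-specific values from properties with sensible defaults,
// the same idea as JMeter's ${__P(name,default)} function. The property
// names here are hypothetical.
public class EnvConfigSketch {
    public static void main(String[] args) {
        // e.g. run with: java -Dbase_url=https://staging.example.test EnvConfigSketch
        String baseUrl = System.getProperty("base_url", "https://test.example.test");
        String apiKey = System.getProperty("api_key", "local-dev-key");
        System.out.println("Targeting environment: " + baseUrl);
        System.out.println("Using API key: " + apiKey);
    }
}
```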

The solution's scalability is an area of concern where improvements are required.

For how long have I used the solution?

BlazeMeter was introduced a year ago in my new organization because of increased demand. My company is a customer of the product.

What do I think about the stability of the solution?

Stability-wise, I rate the solution an eight out of ten, since my organization is still streamlining things on our end.

What do I think about the scalability of the solution?

Scalability-wise, I rate the solution a seven or eight out of ten.

How are customer service and support?

Technical support doesn't respond the moment you submit a query, so it takes some time to get a response from the customer support team. When the team does respond, it provides enough information.

I rate the technical support an eight out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I mostly used commercial tools in my previous organization, along with JMeter.

How was the initial setup?

The product's deployment phase is fine and is not difficult.

I can't comment on the time taken to install the solution since our organization uses a shared installation with our enterprise account. My team didn't need to actually install the product, so we just created our workspace, and that was it.

What's my experience with pricing, setup cost, and licensing?

I rate the product's price a two on a scale of one to ten, where one is very cheap and ten is very expensive. The solution is not expensive.

What other advice do I have?

Maintenance-wise, the product is fine.

Based on my initial perception and initial experiences, I rate the overall tool an eight out of ten.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user