What is our primary use case?
We use this solution for testing. When it comes to 5G, there are loads of changes because we're trying to build the first 5G core network with the standalone architecture. Everything is API-based, with communication over the new HTTP/2 protocol. As we build the core network, we constantly change and tweak it.
When it comes to testing, whether it's with Postman or any other tool, normally we run the test, make sure it works, and then move on. I was pretty impressed with [Runscope] because we can keep the test running 24/7 and are able to see feedback at any time.
A proper feedback loop is enabled through their graphical user interface. We can add loads of validation criteria. As a team, if we make changes and something fails on the core service, we can actually find it.
For example, we had a security patch deployed on one of the components. [Runscope] immediately identified that the network node had failed at that API layer. The monitoring capability allows us to provide fast feedback.
We can also trigger it with Jenkins Pipelines. We can integrate it into our DevOps toolchain quite easily, and they have webhooks. The validation criteria are quite simple to set up. Most of the team love it, and the stakeholders love the feedback loop as well. They can look at it, run it, and see what's happening.
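To give a sense of that kind of pipeline trigger, here is a minimal sketch. The trigger URL and parameter name below are placeholders rather than the actual BlazeMeter endpoints; the idea is simply that a CI step POSTs to the test's trigger URL:

```python
import requests

# Placeholder trigger URL; the real one is copied from the test's settings
# and stored as a pipeline secret rather than hard-coded like this.
TRIGGER_URL = "https://api.example.com/tests/<trigger-id>/trigger"


def trigger_api_test(environment: str = "sandbox") -> None:
    """Kick off the scheduled API test run from a CI step."""
    # The 'environment' parameter name is illustrative only.
    response = requests.post(TRIGGER_URL, params={"environment": environment}, timeout=10)
    response.raise_for_status()
    print("Test run queued:", response.json())


if __name__ == "__main__":
    trigger_api_test()
```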
The final solution will span four different locations. Performance testing will run in one specific location, while [Runscope] will run across the different locations and test different development environments. At the moment, it's only on two environments: one is a sandbox where we experiment, and one is a real environment where we test the core network.
There are around 10 to 15 people using the application, but some of them only view the results. They're not always checking whether it works or not. We have multiple endpoints.
We use the solution on-premises.
How has it helped my organization?
The on-the-fly test data improved our testing productivity a lot. The new test data features changed how we test the applications because there are different things we can do. We can use mock data or real data. We can also build data based on different formats.
For example, an IMEI should be a 15-digit number. If you need various combinations of it, BlazeMeter can generate them as long as we provide a regular expression and say, "The numbers should be in this format." Mobile subscriber identities, which are pretty common in the telecom world, are just as easy. This solution has changed how we test things. Fundamentally, it helped us a lot.
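As an illustration of that kind of format-driven data, here is a standalone Python sketch (not BlazeMeter's own generator) that produces a 15-digit IMEI-like value with a Luhn check digit and validates it against a simple regular expression:

```python
import random
import re

IMEI_PATTERN = re.compile(r"^\d{15}$")  # an IMEI is a 15-digit number


def luhn_check_digit(digits: str) -> str:
    """Compute the final Luhn check digit for the first 14 IMEI digits."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:  # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)


def generate_imei() -> str:
    """Generate a random, format-valid 15-digit IMEI-like value."""
    body = "".join(random.choice("0123456789") for _ in range(14))
    return body + luhn_check_digit(body)


imei = generate_imei()
assert IMEI_PATTERN.match(imei)
print(imei)
```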
Previously, most of the test projects I delivered before I moved into automation used to take months. Now, the entire API test run completes within minutes. Because we're looking at millisecond latencies, the tests themselves don't take long at all; a run finishes in less than a minute.
The moment those tests run on schedule, I don't really need to do anything. I just concentrate on what other tests I can add and what other areas I can think of.
Recently, I have seen BlazeMeter's other products in their roadmap, and they're really cool products. They use some AI and machine learning to build new API level tests. I don't think it's available to the wider market yet, but there are some really cool features they're developing.
BlazeMeter reduced our test operating costs by quite a lot because normally to do the same level of testing, we need loads of resources, which are expensive. Contractors and specialists are expensive, and offshore is quite expensive. However we do it, we have to spend a lot of time. Running those tests manually, managing the data manually, and updating the data manually take a lot of time and effort. With this project, we definitely save a lot in costs, and we give confidence to the stakeholders.
For previous projects, even smaller ones, we used to charge 100K to 200K for testing. We're using BlazeMeter for massive programs, and the cost is a lot less.
What is most valuable?
Scheduling is the most valuable feature. You can run the tests 24/7 and integrate them with the on-premises internal APIs. How it connects to the internal APIs and how it secures the data are very important to us, and that has definitely helped us.
It enables the creation of test data that can be used for both performance and functional testing of any application. The performance module of BlazeMeter has its own set of capabilities for performance testing. We have the performance tests run on schedule, which is quite nice. It uses something called the Taurus framework. We built our own containers with the Taurus framework, but we moved to BlazeMeter because of the security vulnerabilities with Log4j.
They've been more proactive in fixing those, but it was quite hard for us. We took five days to do it, while they came back with the fixes in two days. We realized that their container solutions are much more secure. At the same time, when it comes to [Runscope], they have yet to add a data-driven approach, though they are really good. They support test data creation in their functional module, but a few improvements could be made to test data management in [Runscope].
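To give a rough sense of the Taurus framework mentioned above: its bzt command-line tool can execute an existing JMeter plan directly. This is only a sketch, and the script name is a placeholder rather than one of our actual test plans:

```python
import subprocess

# "core_api_load.jmx" is a placeholder for one of our JMeter test plans.
# bzt (the Taurus CLI) wraps JMeter and consolidates the results.
subprocess.run(["bzt", "core_api_load.jmx"], check=True)
```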
The ability to create performance and functional test data that can be used for testing any application is very important to our organization because we're looking at large numbers of customers moving onto the 5G standalone architecture. We're also looking at Narrowband IoT, machine-to-machine communications, and vehicle-to-vehicle communications.
All of these require the new low latency tests, so that if we ship a piece of telecom equipment and move the customers onto the new 5G architecture, we can be confident enough to say, "Yes, this works perfectly."
Also, running those tests continuously means we can give assurance to our stakeholders and customers that we can build the applications in a way that can support the load. There are more than 20 million customers in the UK, and there's growing traffic on private networks and on IoT. As the technology shifts, we need to give assurance to our customers.
The ease of test data creation using BlazeMeter is the best part of the solution. I worked with them on the test data creation and how they provided feedback in the early days. It was really good. They have implemented it on the performance and mock services. Originally, we managed the test data on CSVs and then ran it with JMeter scripts. It was good, but the way BlazeMeter created mocks with regular expressions and the test data is quite nice. It reduced some of the challenges that we had, and managing some of the data on cloud is really good.
The features are really cool, and it also shifts the testing to the left because even before you have the software, you can build a mock, build the test cases in [Runscope], and work on different API specifications. Then, you can actually test the application before it is deployed and even before any development. That feedback is quite useful.
BlazeMeter provides the functional module. They also provide the performance testing, and it's all based on JMeter, which is really nice. JMeter is an open-source tool. You can upload your JMeter scripts into the performance tab and run them from there. It's really brilliant and gives us the ability to run the tests from anywhere in the world.
[Runscope] provides the capability to run test cases from different locations across the world, but we use it on-premises, which is quite nice. The reporting capability is really good. When the test fails, it sends a message. When it passes again, it sends a message. We know what's happening. The integration back into Teams is interesting because you can put the dashboard on Teams, which is nice.
It's really important that BlazeMeter is a cloud-based and open-source testing platform because for some of the functionalities, we don't always need to rely on BlazeMeter reporting. Their reporting is really good. Having the ability to use open-source tools means we can also publish it to our internal logging mechanisms. We have done loads of integrations. We also worked with them on developing the HTTP/2 plugin, which is now available open-source.
The way they have collaborated and how they support open-source tools is really brilliant because that's how we came to know that the JMeter HTTP/2 plugin was provided by BlazeMeter, so we contacted them. We already tried that open-source tool and it was working at that stage. We started off with the mocks, using open API specifications. They also provide free trial versions.
With the shift-left approach, we build a mock and then start to use [Runscope] to validate those test cases. At that stage, even before the application is deployed, we know we can get something moving. When the real application becomes available within that sprint, we already have test cases that have been validated against the mocks, and we immediately point them at the real applications with real environment variables. Most of the time, they work straight away; sometimes we need to update the data to get the test cases passing. The moment we do that, we put them on a 24/7 schedule, every hour or every half hour, depending on how many changes we make on the specific nodes. That way we always know whether or not it works.
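A minimal sketch of that mock-to-real switch, assuming the base URL is supplied through an environment variable; the endpoint path and field name below are only illustrative 5G-style examples, not our actual service contract:

```python
import os

import requests

# BASE_URL points at either the mock service or the real core-network node;
# flipping one environment variable is all it takes to switch targets.
BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")


def test_nf_discovery_responds():
    """Same validation criteria, run against the mock first and then the real node."""
    response = requests.get(f"{BASE_URL}/nnrf-disc/v1/nf-instances", timeout=5)
    assert response.status_code == 200
    assert "nfInstances" in response.json()  # check the payload, not just the status
```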
This solution absolutely helps us implement shift-left testing. We really started building our core network this year. Last year, it was all about the planning phase. We almost got our APIs and everything automated with the mocks. We started to use the feedback loop and knew which ones worked. We did a lot of work around our own automation frameworks and with [Runscope].
We stopped some of the work we did on our own automation frameworks and slowly started to move them into BlazeMeter. We knew that as long as the tool supported it, we would continue with that. If we hit a problem, then we would see. At this stage, a majority of the work is done on the BlazeMeter set of tools, which is really nice because we started off with our own JMeter data framework test.
BlazeMeter competes with the tools we have built in-house, and there's no way we can match their efficiency, which is why we slowly moved to BlazeMeter. The team loves it.
We also use BlazeMeter's ability to build test data on the fly. Sometimes when we run a test, we realize that some of the information has to be changed. I just click on it and it opens in a web interface. I update the number in the relevant column because the CSV is displayed as a table. For us, it's a lot easier. We don't have to go back into Excel, open the CSV, manipulate the data, check it into Git, etc.
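For comparison, the manual round trip we avoid looks roughly like this; the file and column names are made up for illustration:

```python
import csv

# "subscribers.csv" and the "imsi" column are made-up names for illustration.
with open("subscribers.csv", newline="") as f:
    rows = list(csv.DictReader(f))

rows[0]["imsi"] = "234150000000001"  # tweak one value, like clicking a cell in the UI

with open("subscribers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```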
I like that the on-the-fly test data meets compliance standards because you get that feedback immediately, and it's not like they're holding the data somewhere else. We can also pull in data from our own systems. It's all encrypted, so it's secure.
Generating reports off BlazeMeter is also quite nice. You can just click export or you can click on executed reports.
What needs improvement?
Overall, it's helped our ability to address test data challenges. The test data features on their own are very good, but version control for test data isn't included yet. I think that's an area for improvement.
We can update the test data on the cloud, which is a good feature. There's also test data management, which is good, but [Runscope] doesn't have test data management yet; mock services and performance testing do. We can run the same test through JMeter, validating the same criteria, but the feedback from [Runscope] is much more visible: we can see the request and the response, what data comes back, and add validation criteria. We can manage the test environments and test data, but the ability to run the same API request against multiple sets of test data is missing. We had to clone the test cases multiple times to do that. They need to work on that.
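The data-driven pattern we're after looks roughly like the sketch below, written as plain Python with placeholder values rather than anything BlazeMeter-specific: one request definition re-run over many data rows instead of cloned test cases:

```python
import requests

# Placeholder endpoint and subscriber identities, purely for illustration.
ENDPOINT = "https://api.example.com/subscribers/{imsi}"
TEST_DATA = ["234150000000001", "234150000000002", "234150000000003"]

for imsi in TEST_DATA:
    response = requests.get(ENDPOINT.format(imsi=imsi), timeout=5)
    assert response.status_code == 200, f"lookup failed for {imsi}"
```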
Version control of the test cases, and the ability to compare the current version with the previous version within [Runscope], would be really nice. The history shows who made the changes, but it doesn't compare the changes.
In the future, I would like to see integrations with GitLab and external Git repositories so we could have some sort of version control outside the tool as well. There is no mechanism for that at the moment. The ability to import OpenAPI specifications directly, instead of converting them to JSON first, would also be nice. There are some features they could still work on.
For how long have I used the solution?
I have been using this solution for more than a year and a half.
I came across BlazeMeter because I was looking for something around mock services. I was also looking for a product or tool that could test HTTP/2, and eventually HTTP/3, because the 5G core network is built on HTTP/2. I couldn't find a tool other than BlazeMeter that supports it.
I tried to build mock services and tested the solution. Once I was happy, I also realized they have BlazeMeter [Runscope], so I wanted to try it.
What do I think about the stability of the solution?
It's stable. I wouldn't say any application is without bugs, but I haven't seen many. We had issues once or twice, but it was mostly with browser caching. There haven't been any major issues, but there were improvements that could be made in a couple of areas. They were always happy to listen to us. They had their product teams, product owners, and product managers listen to our feedback. They would slowly take the right feedback and try to implement some of the features we wanted. They always ask us, "What is your priority? What will make the best impact for you as a customer?" We give our honest feedback. When we say what we need, they know that many other customers will love it.
They were also really good with the Log4j vulnerabilities. They came back with a fix less than two days after the issue came out. We had to turn off the services, but it was all fine because [Runscope] wasn't immediately impacted; it was the performance container that had the vulnerabilities, because the original JMeter uses some of those Log4j packages. They had to fix Log4j in JMeter and then update their container.
What do I think about the scalability of the solution?
It's very scalable. The solution is built for scalability. I didn't know that we could even move into this sort of API world. I used to think, "We do those tests like this." JMeter provides its own sort of capability, but with BlazeMeter, there's a wow factor.
We plan to increase coverage as much as possible.
How are customer service and support?
I would rate technical support 10 out of 10.
BlazeMeter absolutely helps bridge agile and CoE teams. We invited some of the BlazeMeter team to our show-and-tell when we started. They saw our work and were quite happy. We showed them how we build our test cases. They also provided a feedback loop and told us what we could improve in different areas.
We also have a regular weekly call with them to say, "These are the things that are working or not working," and they take that feedback. We'll get a response from them within a few weeks, or sometimes within a few days or a few hours, depending on the issue. If it's a new feature, it might take two or three weeks of additional development. If it's a small bug, they get back to us within hours. If it's a problem on our side, they have somebody on their team for support. I was really surprised by that level of support because I haven't seen anything like it with other tools. When there's a problem, they respond quickly.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We switched because we started off with a BDD framework that was built in-house. We realized that the number of security vulnerabilities that came with the Docker containers was a risk to us.
We still continue with that work because we had to move toward mutual TLS (mTLS) as well. We have that working at the moment, along with BlazeMeter. We've tried Postman, but it didn't support HTTP/2 when we looked a year and a half ago.
How was the initial setup?
I did most of the initial setup. We had to go through proxies and more when we connected to it. I currently use the Docker-based deployment because they support Docker and Kubernetes. At the moment, it's deployed in two locations: one is a sandbox for experimenting, and one is an actual development site, which is really good.
The initial deployment was very easy and took a few hours. I initially missed the proxy configuration, but once I sorted that, it was all good.
We can deploy a mock to build the application against, and if we want to do it on-premises, as long as we have a Linux-based server, we can do it in 15 or 20 minutes. I was a bit shocked the moment it showed the hundreds of API combinations it would generate, but then I understood what it was doing.
I have a team of engineers who work on the solution and the different APIs that we need to support. I have two engineers who are really good with BlazeMeter. They were part of the virtualization team. There are a few engineers who started off with learning JMeter from YouTube and then did BlazeMeter University.
Most of the time, maintenance is done in the cloud. Because we are behind a proxy, we recently found that an automated upgrade they pushed failed and took down the service. We provided that feedback, so the next time they do automated upgrades, we won't have any issues. Other than that, we haven't had any issues.
What was our ROI?
Since deployment, we use this solution every day. We have seen the value from the beginning because it helped us build our automation frameworks. It helped us understand how we can build better test cases, better automation test cases, how the feedback loop is enabled, etc.
It's saved us a lot of time and reduces the overall test intervals. We can run the tests quite quickly and provide confidence to stakeholders. When you're trying to move toward DevOps and new ways of working, the feedback loops need to be fast. When we deploy a change, we want fast feedback. That's very important, and BlazeMeter allows us to do that.
We know that we can always trigger the test through [Runscope] on demand. At any point in time, it'll give us fast feedback immediately. It's quite easy to integrate with tools like Jenkins and Digital.ai, which is an overall orchestrator.
We tried to go the Jenkins route, but we realized that we don't even need to do that. The solution provides nice APIs that work with this sort of CI/CD. They have webhooks and different ways of triggering it. There are built-in Jenkins plugins for JMeter, BlazeMeter, etc. They understand how automation frameworks and tools work.
Their Taurus framework, which they built for the open-source community, is quite brilliant on its own, but BlazeMeter offers much more. Although it's built on the Taurus framework, you can still have test levels, you can group tests, etc.
What other advice do I have?
I would rate this solution 10 out of 10.
We try to avoid scripting. We use the scriptless testing functionality about 95% of the time. With JMeter, you don't need a lot of scripting. I don't need to know a lot of automation or programming at this stage to use it.
We haven't faced any challenges in getting multiple teams to adopt BlazeMeter. I created a sandbox for my own team where they can experiment. People really wanted access to it, so I added more and more people, and the designers are now part of it.
For others who are evaluating this solution, my advice is to do the BlazeMeter University course first before you start to use the product. It will give you a general understanding of what it is. It only takes half an hour to an hour.
You don't always need to finish the course or pass the exam, but doing the course itself will definitely help. They have a JMeter basic and advanced course and a Taurus framework course. They have an API monitoring course, which will help for [Runscope], and one for mocks. Most of the courses are quick videos explaining what the product does and how it works. At that stage, you can go back and build your first automation test case on JMeter or [Runscope]. It's brilliant.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.