What is our primary use case?
We write thousands of lines of code on a daily basis, and we cannot say that our code is defect-free because there are a lot of other developers contributing to the source code. That process is prone to human error, defects in the source code, and so on.
How has it helped my organization?
To automate detection, we use Coverity's static analysis, which has a low false-positive ratio. That's because Coverity's analysis engine includes 20-plus patented technologies. A lot of other static analysis tools use pattern-based analysis, but Coverity's is flow based. That's why we ended up using it. Coverity is helping us identify some of the critical defects at the early stages of the development life cycle. So overall, it is giving us a greater ROI and making our application more mature and robust.
What is most valuable?
One of the most valuable features is Contributing Events. That feature helps the developer understand the root cause of a defect: you can locate the starting point of the defect and figure out exactly how it can be exploited. Contributing Events lets you create that kind of workflow.
We also needed a tool that isn't dependent on the build environment. You point it at a folder, the tool picks it up, runs the scan, and gives you the report. That capability is available in Coverity, so you don't have to rely on build artifacts or developer artifacts. Those are the two key features we use daily, and we've gotten good results.
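As a rough illustration of that folder-based workflow, here is a minimal Python sketch that drives a filesystem (buildless) capture and analysis by shelling out to the Coverity command-line tools. The cov-capture and cov-analyze command names and flags are assumptions based on my own installation and may differ by Coverity version, so treat this as a sketch rather than the official invocation.

    # Minimal sketch of a buildless Coverity scan driven from Python.
    # Assumptions: cov-capture and cov-analyze are on PATH; flag names may vary by version.
    import subprocess
    from pathlib import Path

    def scan_folder(source_dir: str, idir: str = "coverity-idir") -> None:
        Path(idir).mkdir(exist_ok=True)
        # Capture the source tree directly, with no dependency on the project's build system.
        subprocess.run(["cov-capture", "--project-dir", source_dir, "--dir", idir], check=True)
        # Run the flow-based analysis over the captured intermediate directory.
        subprocess.run(["cov-analyze", "--dir", idir], check=True)

    if __name__ == "__main__":
        scan_folder("./my-project")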
What needs improvement?
Coverity's UI is the one thing that needs improvement. Technically speaking, it's doing an outstanding job otherwise. They could also reduce the executable size. Right now, the Coverity executable is around 1.2GB to download. If they could bring it down to approximately 600 or 700MB, that would be great, and a smaller executable would make it much easier to work in an environment like Docker.
For how long have I used the solution?
I've been using it for the past two years.
What do I think about the stability of the solution?
This product has been in the industry for more than 30 years, so it's pretty robust.
How are customer service and support?
Coverity has a decent SLA. The moment you purchase the tool, you get an SLA covering support. They offer email support, phone support, and WebEx or Zoom sessions on demand, depending on the nature of the technical issue. If it's simple, it can be resolved with a couple of email exchanges, but if it really needs attention, they're happy to get on a call. They've even delivered some custom patches.
Which solution did I use previously and why did I switch?
I used CodeSonar a few years back. Both tools have their advantages. In any static analysis tool, the first stage is the instrumentation of the source code. It'll try to capture the skeleton of your source code. So when I compare them based on the first phase alone, Coverity is far better than CodeSonar.
They both use a similar technique, but CodeSonar uses far more storage. For example, to scan a 1GB code base, CodeSonar generates more than 5GB of instrumented files, which comes to about 6GB in total. Coverity generates roughly 500MB on top of that same 1GB, so about 1.5GB all in. That's a huge difference. CodeSonar would eat up my disk space and hardware resources when I used it, whereas Coverity's footprint is minimal.
In terms of checkers, both CodeSonar and Coverity offer good breadth and depth, especially for the C and C++ programming languages. But CodeSonar focuses on only four languages, C, C++, Java, and C#, whereas Coverity supports more than 20 programming languages.
Also, the two are comparable with respect to their plugin offerings, but there are crucial differences. For example, CodeSonar focuses only on well-known integrations, like Jenkins and JIRA, but you cannot expect all customers to use the same tools. Coverity supports almost all CI/CD tools, including Jenkins and Bamboo, and it also integrates with service providers like Azure DevOps Pipelines and AWS CodePipeline, which CodeSonar hasn't added yet. The plugins are available in the marketplace, and you don't have to pay extra: you download the plugin from the marketplace, hook it into your pipeline, and it's ready to use. Those are the three major differences, I would say, when you compare CodeSonar and Coverity apples to apples.
How was the initial setup?
Setting up Coverity is pretty simple. It comes with a normal executable: you just double-click, follow the wizard, and complete the setup. It also has on-screen instructions, which makes it pretty easy. Deployment is a much broader question. It depends on how many projects you are trying to scan with Coverity and whether you are integrating the static analysis solution with your CI/CD setup, IDE, bug tracking, etc. That all factors into the total deployment time. So if we're talking about overall deployment, including bug tracking, email notification, CI/CD integration, and everything else, it took us 15 to 20 days to onboard 600 projects with 20 users, including all integrations.
We don't have a lot of maintenance. There is a major release every quarter, and we get information on new upgrades, patches, and things like that. And we do have the option to not upgrade. The maintenance is mostly covered by the vendor itself, meaning they deliver the patches and upgrades on time. So I don't see that as a hurdle right now. It's been taken care of.
What's my experience with pricing, setup cost, and licensing?
I'm not sure about the licensing. My commercial team deals with that.
What other advice do I have?
I rate Coverity nine out of 10. It's a good choice. If you plan to use Coverity, you should read through the manual to really understand its settings. You have to tune the Coverity engine to get the best results and scalability out of it. Coverity recently added some smart features that automatically compute the hardware requirements on your current machine and scale accordingly. For example, it can detect how much multi-core CPU power it needs to run an analysis and how much memory is required, so it leaves resources available for other applications running on the same machine. That intelligence is built in. So initially, I recommend going over the fundamentals and fine-tuning the tool based on your own requirements.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: My company does not have a business relationship with this vendor other than being a customer.