What is our primary use case?
The use case is that we have quite a few projects on GitHub. As we are a consulting company, some of these projects are open source and others are enterprise and private. We do security reviews of these projects. We scan the repositories with static analysis, to find code patterns that might be dangerous, and we use Software Composition Analysis as well. We get notifications when we are using an open source library that has a known vulnerability and has to be upgraded, so we can plan accordingly.
We are using the software as a service.
How has it helped my organization?
It has improved the way our organization functions, mostly because we can stay on top of the security issues in our products. That means our product managers can plan accordingly, deciding when to fix something based on its severity and scheduling fixes for specific releases. So it has improved our internal process. It has also improved the image of the company from the outside, because customers can see in the release notes of our products that we take security seriously and that we address issues in a timely way.
The solution has helped with developer security training because when we open a ticket with information coming from Veracode, it explains, for example, that a code path or pattern we have used might be dangerous. That knowledge wasn't there before. It has really helped developers improve their awareness of security.
What is most valuable?
The feature that we use the most is the static analysis, by uploading the artifacts. We have two types of applications: Java server applications using Spring Boot, and JavaScript frontend applications. We scan both using static analysis. Before, we ran the software composition analysis and the static analysis separately. For about a year now, we have had a proper security architect who is in charge of organizing the way that we scan for security. He suggested that we only run the static analysis, because the software composition analysis has been integrated into it. So in the reports, we can also see which versions of the libraries have vulnerabilities and need to be upgraded.
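To give a concrete idea of how that upload step fits into a build, here is a minimal sketch, in Python, that shells out to the Veracode Java API wrapper from a CI job. The jar location, flag names, application profile name, and environment variable names are assumptions and should be checked against the wrapper version and account actually in use.

```python
#!/usr/bin/env python3
"""Minimal sketch: trigger an upload-and-scan from CI via the Veracode Java API
wrapper. Jar path, flag names, profile name, and env vars are assumptions."""
import os
import subprocess
import sys

WRAPPER_JAR = "VeracodeJavaAPI.jar"            # assumed location of the API wrapper jar
ARTIFACT = sys.argv[1] if len(sys.argv) > 1 else "target/app.jar"

cmd = [
    "java", "-jar", WRAPPER_JAR,
    "-action", "UploadAndScan",                # upload the artifact and start a static scan
    "-vid", os.environ["VERACODE_API_ID"],     # API credentials injected by the CI secret store
    "-vkey", os.environ["VERACODE_API_KEY"],
    "-appname", "my-spring-boot-service",      # hypothetical application profile name
    "-createprofile", "false",
    "-filepath", ARTIFACT,
    "-version", os.environ.get("TRAVIS_BUILD_NUMBER", "local"),
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
```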
It is good in terms of the efficiency of creating secure software.
My team only does cloud-native applications. Ultimately, the testing functionality that we are interested in works fine.
There are some false positives, as with any product that we have tried in this area, but slightly fewer. I would trust Veracode more than the others. For example, we had quite a few issues with Snyk, which was much worse in terms of false positives when we tested it on open source.
Also, the solution's ability to prevent vulnerable code from going into production is perfectly fine. It delivers, at least based on the reports that we have been checking for Java and JavaScript. It has reported things that were helpful.
What needs improvement?
What could improve a lot is the user interface because it's quite dated. And in general, as we are heavy users of GitHub, the integration with the user interface of GitHub could be improved as well.
There is also room for improvement in the reporting in conjunction with releases. Every time we release software to the outside world, we also need to provide an inventory of the libraries that we are using, with the current state of vulnerabilities, so that it is clear. And if we can't upgrade a library, we need to document a workaround and explain that we are not really affected by the vulnerability. For all of this reporting, the product could offer a little bit more in that direction. As it is, we just pull the information out and put these reports together manually.
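One way to stop assembling those release-notes inventories by hand would be a small script along these lines. It is only a sketch, and the input layout (component name, version, CVEs, status) is an assumption about a generic SCA export rather than Veracode's actual report format.

```python
"""Minimal sketch: turn an exported list of components into the markdown library
inventory attached to release notes. The input field names are assumptions."""
import json
import sys


def build_inventory(export_path: str) -> str:
    with open(export_path) as fh:
        components = json.load(fh)   # assumed: a JSON list of component records
    lines = [
        "| Library | Version | Known CVEs | Status |",
        "|---------|---------|------------|--------|",
    ]
    for c in components:
        cves = ", ".join(c.get("cves", [])) or "none"
        lines.append(f"| {c['name']} | {c['version']} | {cves} | {c.get('status', 'open')} |")
    return "\n".join(lines)


if __name__ == "__main__":
    # usage: python inventory.py sca-export.json > release-inventory.md
    print(build_inventory(sys.argv[1]))
```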
Another problem we have is that, while it is integrated with single sign-on (we are using Okta), the user interface is not great. That's especially true for a permanent link to a report page. If you access it, it goes to the normal login page, which has nothing that says "Log in with single sign-on," unlike other software as a service that we use. It's quite bothersome because it means that we have to go to the Okta dashboard, find the Veracode link, and log in through it. Only at that point can we go to the permanent link of the page we wanted to access.
Veracode has plenty of data. The problem is how that information is presented on Veracode's dashboards; the user interface is not great and not immediately usable. Most of the time, the best way to use it is to just create issues and put them in JIRA. It provides visibility into SAST, DAST, and SCA, but honestly, all the information then travels outside of the system and goes into JIRA.
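As an illustration of that Veracode-to-JIRA flow, here is a minimal sketch assuming the Veracode Findings REST API and the Jira Cloud REST API. The endpoint paths, response field names, HMAC-auth helper package, and project key are assumptions to verify against the official documentation.

```python
"""Minimal sketch: pull policy-violating findings from the Veracode Findings API
and open Jira issues for them. Endpoint paths, response field names, and the
HMAC-auth helper are assumptions; verify them against the official docs."""
import os

import requests
# Assumed helper package for Veracode HMAC authentication (pip: veracode-api-signing)
from veracode_api_signing.plugin_requests import RequestsAuthPluginVeracodeHMAC

VERACODE_FINDINGS = "https://api.veracode.com/appsec/v2/applications/{app_guid}/findings"
JIRA_ISSUES = "https://yourcompany.atlassian.net/rest/api/2/issue"   # hypothetical Jira instance


def fetch_findings(app_guid: str) -> list:
    # Credentials are picked up from ~/.veracode/credentials or env vars by the auth plugin
    resp = requests.get(
        VERACODE_FINDINGS.format(app_guid=app_guid),
        auth=RequestsAuthPluginVeracodeHMAC(),
        params={"violates_policy": "true"},
    )
    resp.raise_for_status()
    # "_embedded.findings" reflects a typical paged response; treat the layout as an assumption
    return resp.json().get("_embedded", {}).get("findings", [])


def open_jira_issue(finding: dict) -> None:
    details = finding.get("finding_details", {})
    payload = {
        "fields": {
            "project": {"key": "SEC"},                    # hypothetical Jira project key
            "issuetype": {"name": "Bug"},
            "summary": f"Veracode finding: {details.get('cwe', {}).get('name', 'unknown')}",
            "description": (
                f"Severity {details.get('severity')} in "
                f"{details.get('file_name', 'n/a')}:{details.get('file_line_number', '?')}"
            ),
        }
    }
    resp = requests.post(
        JIRA_ISSUES,
        json=payload,
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
    )
    resp.raise_for_status()


if __name__ == "__main__":
    for finding in fetch_findings(os.environ["VERACODE_APP_GUID"]):
        open_jira_issue(finding)
```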
In the end, we are an enterprise software company and we have some products that are not as modern as others. So we are used to user interfaces that are not great. But if I were a startup, and only had products with a good user interface, I wouldn't use Veracode because the UI is very dated.
Also, we're not using the pipeline scan. We upload using the Java API agent and do a standard scan. We don't use the pipeline scan because its output only goes to the console of the CI job and gets lost. When we do it as part of our CI process, all the results are only available in the log of the CI. In our case we are using Travis, and it requires someone to go there and check things in the build logs. That's an area where the product could improve, because if this information were surfaced in, say, the checks of the code we test on GitHub, as happens with other static analysis tools that we run on our code to check for syntax errors and mapping, it would be much more usable. As it is, it is not enough.
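To show what surfacing the result in the GitHub checks could look like, here is a minimal sketch that posts the scan outcome as a commit status from the Travis job. The repository slug fallback, token variable, and status context name are assumptions.

```python
"""Minimal sketch: post the scan outcome as a GitHub commit status so it shows up
in the repository's checks instead of only in the Travis build log. The repo slug
fallback, token variable, and context name are assumptions."""
import os
import sys

import requests


def post_status(state: str, description: str) -> None:
    repo = os.environ.get("TRAVIS_REPO_SLUG", "our-org/our-repo")   # "owner/repo" set by Travis
    sha = os.environ["TRAVIS_COMMIT"]                               # commit being built
    url = f"https://api.github.com/repos/{repo}/statuses/{sha}"
    resp = requests.post(
        url,
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={
            "state": state,                    # "success", "failure", "error", or "pending"
            "context": "security/veracode",    # hypothetical name shown next to the check
            "description": description[:140],  # keep it short; GitHub caps the length
            "target_url": os.environ.get("SCAN_REPORT_URL", ""),
        },
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # e.g. python post_status.py failure "3 high-severity findings"
    post_status(sys.argv[1], sys.argv[2])
```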
The management of the false positives is better than in other tools, but it could still improve in terms of usability, especially when working with multiple branches. Some of the issues that we had already marked as "to be ignored," because they were either false positives or just not applicable in our context, show up again, and that comes down, once more, to the problem of the user interface. It should have been better thought out, to make it easy for someone who is reviewing the list of findings to mark the false positives. For example, there were some vulnerabilities that mentioned parts of libraries we weren't actually using, even though we included those libraries for other reasons, and in that case we just ignore those items.
We have reported all of these things to product management because we have direct contact with Veracode, and hopefully they are going to be fixed. Obviously, these are things that will improve the usability of the product and are really needed. I'm totally happy to help them and support them in going in the right direction, meaning the right direction from my perspective.
For how long have I used the solution?
I have used Veracode for quite a long time now, about two years. I have been working here for three years. In my first year, the company was using different products for security, as every department had its own, and then it standardized and consolidated on Veracode.
What do I think about the stability of the solution?
The stability is good. From the stats I have seen, there is downtime of the service a little too often, but it's not the kind of service where you really need that level of availability, so I'm not really bothered by that.
What do I think about the scalability of the solution?
We don't have to do anything to scale, because it's SaaS.
We started with a smaller number of users and then we extended to full single sign-on.
How are customer service and technical support?
The staff of Veracode is very good. They're very supportive. When the product doesn't report something that we need, or doesn't deliver straight away, they always help us try to find a solution, including writing custom code to call the APIs.
From that point of view, Veracode is great. The product, much less so, but I believe that they have good people. They are promising and they listen so I hope they can improve.
Which solution did I use previously and why did I switch?
We started with WhiteSource, but it didn't have some features, like static analysis, so it was an incomplete solution. And we were already using Veracode for static analysis, so when Veracode bought SourceClear, we decided to switch.
How was the initial setup?
The initial setup is easy and quite well documented. I was really impressed by the quality of the technical support. When I had problems, or the product wasn't good enough for my needs, they were always there to help and give suggestions.
Being a service, there wasn't really much of an implementation. It's not complex to use.
What was our ROI?
My job is mostly technical. I don't own a budget and I don't track numbers. But as the customers are really keen on having us check security issues, I would definitely say that we have seen a return on investment.
Most of our customers tend, especially for software composition analysis, to apply their own in-house tools to the artifacts that we share with them. Whenever we release a new version of software and Docker images, they upload it to their systems. Some of them have the internal equivalent of Veracode and they come back to us to say, "Hey, you haven't taken care of this vulnerability." So it is very important for us to be proactive in each set of release notes. We need to show the current status of the product: that we have fixed these vulnerabilities, and that we still have some well-known vulnerabilities but there are workarounds that we document. In addition, they can check the Veracode reports that we attach, which show that the severity is not high, meaning the issues don't create a big risk.
It delivers because we haven't been thinking, "Okay, let's consider another product." We might see some savings so I think the pricing is right.
Which other solutions did I evaluate?
For open source projects we mostly tested Snyk, which works quite well with JavaScript but much less so with other technologies. But it has a bigger problem: Snyk considers each file inside a GitHub repository a separate project, so it was creating a lot of false positives. That made it basically unmanageable, so we gave up on using it.
We have also been using an open source project called OWASP Dependency-Check, which did a decent job of software composition analysis but required a lot of effort to check false positives. To be honest, it would have been a good solution only if we didn't have a budget for Veracode, but luckily we had the budget, so there was no point in using it.
Another one that we tried, mostly because it was a small company and we had the opportunity to speak directly with them to ask for some small changes, was a company called Meterian. It doesn't do static analysis, but otherwise the software composition analysis and the library report were the best of the bunch. From my perspective, if we didn't need static analysis, I would have chosen Meterian, mostly because the user interface is much more usable than Veracode's. Also, the findings were much better. We still use it on the open source projects because they offer a free version for open source, which is another good thing about some of these products: the findings are available to anyone. For a company like ours, which has both open source and enterprise products, this is quite good. Unfortunately, with Veracode, if we scan an open source project, we cannot link to the Veracode pages with the findings because they are private. That's a problem. In the end, for the open source projects, we are still using Meterian because the quality is good.
My main issues with Veracode, in general, are mostly to do with the user interface of the web application and, sometimes, that some pages are inconsistent with each other. But the functionality underneath is there, which is the reason we stay with Veracode.
What other advice do I have?
Usually, we now open tickets using the JIRA/GitHub integration and then we plan them. We decide when we want to fix them and we assign them to developers, mostly because some projects are a little bit more on the legacy side, where changing the version of a library is not as easy, in terms of testing, as it is in the newer projects. So we do some planning. But in general, we open tickets and we plan them.
We also have it integrated in the pipelines, but that's really just for reporting. It's a little bit annoying that the pipeline might break because of security issues. It's good to know, but the fact that it interrupts development is not great. When we tried to put it in as part of the local build, it was too much. It was really getting in the way, because developers worried that they had to fix the security issues before releasing. Instead, we just started creating the issues and doing proper planning. It is good to have visibility, but executing it all the time is just wrong, in our experience. You have to do it at the right time, not all the time.
The solution integrates with developer tools, if you consider JIRA and GitHub as developer tools. We tried to use the IntelliJ plugin but it wasn't working straightaway and we gave up.
We haven't been using the container scanning of Veracode, mostly because we are using a different product at the moment to store our Docker images, one that already has some security scanning. So we haven't standardized; we still have to explore the features of Veracode in that area. At the moment we are using Quay from Red Hat (IBM), which is also software as a service. When you upload a Docker image there, after some time you also get a security scan, and that's where our customers get our images from. It's a private registry.
Overall, I would rate Veracode as a five out of 10, because the functionality is there, but to me, the usability of the user interface is very important and it's still not there.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.