Senior AIML Engineer at a tech vendor with 1,001-5,000 employees
Real User
Top 5
Dec 10, 2025
There are areas for improvement. The solution does find vulnerabilities and gives us the visibility to act on them quickly. However, they could build a validation mechanism on top of that as they identify issues: what is the real risk of an identified issue? Sometimes something is detected as open because of traffic and flagged as vulnerable, but upon testing it turns out not to be a real issue; it could be a false positive caused by a honeypot that we built. If they added that validation step before exposing the risk on a specific asset, that would help. Additionally, building risk scores and prioritization into their reporting would also aid us. I would suggest adding dashboards and custom reporting with rich filters, especially for leadership, who will not look at each technical area but rather at the overall risk score and which assets or exposure areas are critical. Customizable reporting based on requirements would be valuable. I chose 9 out of 10 because reporting and dashboards are the first thing I would improve, and the second is the validation part; addressing those could probably bring it to 10 out of 10. Beyond that, I cannot think of much more. Perhaps stronger automation through their API, integrated with the CI/CD pipelines or DevOps tools we are running, so findings are automatically tested.
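The risk-scoring and prioritization idea the reviewer asks for can be sketched simply. This is a hypothetical illustration only: the field names, weights, and the down-weighting of unvalidated findings are assumptions, not BitSight's schema or algorithm.

```python
# Hypothetical prioritization of findings by severity and asset criticality.
# Field names and weights are illustrative assumptions, not BitSight's API.

def priority(finding: dict) -> float:
    """Combine a CVSS-style severity with asset criticality into one score."""
    severity = finding.get("severity", 0.0)            # 0.0 - 10.0
    criticality = finding.get("asset_criticality", 1)  # 1 (low) - 5 (crown jewel)
    validated = finding.get("validated", False)        # confirmed, not a false positive
    score = severity * criticality
    # Unvalidated findings are down-weighted until a validation step confirms them.
    return score if validated else score * 0.5

findings = [
    {"asset": "web-01", "severity": 9.8, "asset_criticality": 5, "validated": True},
    {"asset": "test-02", "severity": 9.8, "asset_criticality": 1, "validated": False},
    {"asset": "mail-01", "severity": 5.3, "asset_criticality": 4, "validated": True},
]

ranked = sorted(findings, key=priority, reverse=True)
print([f["asset"] for f in ranked])  # validated, critical assets float to the top
```

The point of the sketch is the interaction the reviewer describes: a validation step (here, the `validated` flag) should influence priority before the risk is exposed against an asset.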
Senior Manager and Global Capability Lead - Offensive Security at a tech vendor with 10,001+ employees
Real User
Top 20
Nov 3, 2025
Bitsight's scan could be more rigorous and therefore more accurate; it would be good if it captured everything about a company more precisely. Their scan scheduling could be improved, and they could take more inputs from the companies they work with; if they sped up that process, the score would obviously improve. We found that some of the findings are clear false positives, but they still report them, and the rating goes down until we rectify them. That is something they need to work on: reducing the number of false positives and producing more accurate results so companies can earn a higher rating.
BitSight could improve the classes and lower-level detection of anomalies that make up the information used to compute the rating. They could evolve into a more powerful scanner of cyber hygiene for a company's exposed attack surface, allowing them to compete with companies like Qualys and CyCognito. It's important to ensure a correlation between the score and the detailed information to avoid confusion.
We face difficulties in acquiring designs and findings. There may be room for improvement in the methodology for identifying findings, as occasional errors occur on the technical side of BitSight.
The solution’s benchmarking should be improved. The weakness was that they could only benchmark five companies simultaneously. I'm unsure whether this was due to the trial or another reason.
The score could adapt faster. At the moment, when the rating drops, it stays down for quite a while, even if the underlying issues are resolved within 24 hours; it decreases quickly but increases very slowly. This particular area needs improvement.
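The asymmetry described here (drop fast, recover slowly) can be modeled as an immediate penalty on detection that decays gradually after remediation. The base score, penalty, and half-life below are illustrative assumptions, not BitSight's actual formula; the point is only how such a model produces the behavior the reviewer observes.

```python
# Illustrative model of an asymmetric rating: a finding penalizes the score
# immediately, but after remediation the penalty only fades gradually.
# All numbers are assumptions for illustration, not BitSight's real algorithm.

BASE_SCORE = 800
PENALTY = 100
HALF_LIFE_DAYS = 30  # the penalty halves every 30 days after the fix

def rating(days_since_fix=None):
    """days_since_fix=None means the issue is still open."""
    if days_since_fix is None:
        return BASE_SCORE - PENALTY          # full penalty while the issue is open
    decay = 0.5 ** (days_since_fix / HALF_LIFE_DAYS)
    return BASE_SCORE - PENALTY * decay      # penalty fades slowly after the fix

print(rating(None))  # 700 -- drops immediately on detection
print(rating(1))     # ~702.3 -- barely recovers one day after a 24-hour fix
print(rating(30))    # 750.0 -- half the penalty still lingers a month later
```

Under this kind of model, even a 24-hour fix leaves the score depressed for weeks, which matches the complaint: the drop is instant but the recovery follows the decay curve.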
There has been quite a bit of data discrepancy in BitSight. When we observe a particular event or alert and check back three to four days later, the alert seems to be gone, but the vulnerability still exists. In addition, certain assets appear repeatedly for the same vulnerability. We have reported a couple of these instances to BitSight but have not received any updates yet, so we are unsure whether the issue is on our end or on BitSight's end when it fails to detect a particular asset. We would like to see better data enrichment that gives more information about each asset. For example, if BitSight scans a specific website, it tells you that the site is using TLS version 1.1 or that the web server is accessible via a given server. It would be good if it could provide evidence of what it observed, such as a screenshot of the detected version, so we could validate that the finding is aligned. The alert system could also be fixed, but data enrichment is the major issue: we only see the limited information the data provides, and if we check the remediation tips for certain vulnerabilities, they are only generic.
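The TLS example above suggests a simple local validation step: given the protocol string a scanner reports, confirm whether it is actually a deprecated version before accepting it as a finding. A minimal sketch, using the versions deprecated by RFC 8996 (TLS 1.0 and 1.1) plus the older SSL protocols; the label spellings accepted here are an assumption about the scanner's output format.

```python
# Minimal validation helper: treat a reported TLS protocol as a finding only
# if it is a version deprecated by RFC 8996 (TLS 1.0/1.1) or an older SSL.
# The accepted label spellings are assumptions about the scanner's output.
DEPRECATED = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.0", "TLSv1.1"}

def is_tls_finding(reported_protocol: str) -> bool:
    """Return True if the scanner-reported protocol warrants a finding."""
    return reported_protocol.strip() in DEPRECATED

print(is_tls_finding("TLSv1.1"))  # True  -- matches the review's example
print(is_tls_finding("TLSv1.3"))  # False -- modern version, no finding
```

A check like this is the kind of cross-validation the reviewer asks for: when the platform reports "TLS 1.1", the team can independently confirm the negotiated version before the score is affected.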
The solution's factor analysis feature could be better.