Senior Solution Architect at Hitachi Systems India Private Ltd
Real User
Top 5
Dec 12, 2025
Cohere could improve in areas where the Command model is not as creative as some larger LLMs on the market, which is expected but noticeable in open-ended generative tasks. Reporting and analytics in the dashboard could be more detailed and granular, which would enhance the experience. Fine-tuning could be simplified so that broader teams without deep ML expertise can use it. In short, making the model more creative and improving the dashboard reporting and analytics would help teams without machine learning expertise reach their end goals faster.
Senior Data Scientist at a tech vendor with 10,001+ employees
Real User
Top 10
Dec 4, 2025
I am not certain how Cohere could be improved overall, but documentation and support stand out: there is limited documentation available on the web.
I believe Cohere can be improved technically by providing more feedback, logs, and metrics for embedding requests, as it currently appears to be a black box with no insight into quality. Quality can only be judged after using it with customer requests, and no measurable metrics are visible during the embedding process. There are no particularly unique features distinguishing Cohere from other solutions.
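For illustration, here is a minimal sketch of the kind of request-level metrics this reviewer wants surfaced, assuming the Cohere Python SDK's embed endpoint; the API key, model name, and logging destination are placeholders rather than anything the platform provides out of the box.

```python
# Hypothetical wrapper that records request-level metrics the platform
# does not surface: latency, input size, and basic embedding statistics.
import time
import numpy as np
import cohere  # assumes the Cohere Python SDK is installed

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def embed_with_metrics(texts, model="embed-english-v3.0"):
    start = time.perf_counter()
    resp = co.embed(texts=texts, model=model, input_type="search_document")
    latency_ms = (time.perf_counter() - start) * 1000
    vectors = np.asarray(resp.embeddings)
    metrics = {
        "latency_ms": round(latency_ms, 1),
        "num_texts": len(texts),
        "dimensions": int(vectors.shape[1]),
        "mean_norm": float(np.linalg.norm(vectors, axis=1).mean()),
    }
    print(metrics)  # or ship these to your own logging/monitoring stack
    return vectors, metrics
```

A wrapper like this only approximates what a built-in dashboard could show; it cannot measure embedding quality directly, which is the reviewer's underlying complaint.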
It would be better to have a dashboard that shows users how reranking improves quality. When end users choose the service, they want to see the actual output. Evaluation is challenging for recent large language model applications but remains very important. If Cohere could provide a dashboard where we can employ an LLM as a judge to check quality before and after reranking, that would be helpful. We could either have another large language model evaluate this, or allow UAT users to check manually, with a human in the loop. As an enterprise provider, we want such features because, when talking with clients, we can demonstrate that Cohere's reranking model significantly improves results compared to not using it.

Documentation is not a major blocking issue for us, as we are experienced software engineers. Integration and the API provided for the reranking models are not complicated, so we can handle that easily; the documentation is good. The main point is to prove the value through evaluation. We need a solution that visibly demonstrates to our clients and engineering team that using this model creates improvements.
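A rough sketch of the before/after comparison such a dashboard would need, assuming the Cohere rerank endpoint; the query, documents, and the downstream "judge" step are illustrative placeholders for an LLM-as-judge call or a human UAT reviewer.

```python
# Hypothetical before/after comparison for a rerank evaluation workflow.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def compare_rerank(query, documents, top_n=3):
    baseline = documents[:top_n]  # e.g., the original retrieval order
    reranked = co.rerank(
        model="rerank-english-v3.0",
        query=query,
        documents=documents,
        top_n=top_n,
    )
    improved = [documents[r.index] for r in reranked.results]
    return {"query": query, "before": baseline, "after": improved}

# Each {before, after} pair could then be scored by an LLM judge or shown
# to UAT users, and the win rate of "after" over "before" reported per
# query set to demonstrate the value of reranking.
```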
Sr. Test Engineer at a tech vendor with 10,001+ employees
Real User
Top 10
Oct 8, 2025
When performing similarity matching between text descriptions and catalog descriptions created using Cohere, the matching could be improved. Because Cohere does not have an extensive understanding of Oracle ERP functionality, it sometimes gives wrong results or a lower confidence score than desired. Improving that understanding would provide better matches. When working with Cohere on large data sets, there was some hallucination, though it mostly works fine without many issues.
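For context, a minimal sketch of the similarity-matching pattern described here, assuming Cohere embeddings with cosine similarity as the confidence score; the catalog texts and query are illustrative only.

```python
# Hypothetical catalog matching: embed catalog and query, score by cosine similarity.
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

catalog = ["Oracle ERP purchase order line item", "Office supplies - A4 printer paper"]
query = "PO line for A4 printer paper"

cat_vecs = np.asarray(
    co.embed(texts=catalog, model="embed-english-v3.0",
             input_type="search_document").embeddings
)
q_vec = np.asarray(
    co.embed(texts=[query], model="embed-english-v3.0",
             input_type="search_query").embeddings[0]
)

# Cosine similarity serves as the confidence score for each candidate match.
scores = cat_vecs @ q_vec / (
    np.linalg.norm(cat_vecs, axis=1) * np.linalg.norm(q_vec)
)
best = int(np.argmax(scores))
print(catalog[best], float(scores[best]))
```

In this setup the low confidence scores the reviewer mentions would show up directly in the cosine scores, since the model has no domain-specific knowledge of Oracle ERP terminology.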
Cohere has text generation, but I think it is mainly focused on AI search. If there were a way to combine the searches with images, it would be nice to include that.
Cohere provides a robust language AI platform designed for efficient implementation in various domains, offering advanced features for automation and data analysis.
Cohere delivers a scalable AI language model that facilitates automation in data-driven environments. Highly adaptable to industry-specific requirements, it supports tasks such as text generation, summarization, and anomaly detection. This flexibility, along with its integration capabilities, makes it valuable for tech-savvy users...