Programmer at an insurance company with 1,001-5,000 employees
Real User
Top 5
Aug 1, 2025
There are areas where ChatGPT can still improve. If you ask it the same question twice, it gives you a slightly different answer, which even a human being does unless they have memorized something. As information changes, ChatGPT is obviously going to answer slightly differently each time. The simpler your question is, the more likely you are to get multiple different answers. For instance, if you tell it to draw a green box with a white circle, it might make it bigger or smaller, or use one shade of green versus another. The more specific your query is, the better the result you're going to get. ChatGPT is already getting better, so just continuous improvements are needed.
Senior Investment Analyst at a financial services firm with 1-10 employees
Real User
Top 20
Jun 24, 2025
Some areas that could be improved with ChatGPT include accuracy. Sometimes when we are trying to trace sources or where data came from, it can be difficult to navigate.
Some areas that could be improved with ChatGPT include the information it provides, as it sometimes hallucinates things that aren't real. It also comments positively without being asked, and I have told ChatGPT that I don't want it to agree with me but rather to challenge me. If I provide a slide and ask it to compare it to McKinsey or other big companies and score it from zero to 10, it might say it's an eight and suggest improvements to reach nine or ten. However, these evaluations aren't always realistic, because if you're not a specialist on the subject, you might create something that's not accurate. This becomes problematic because we are working with real clients, not experiments. When working on real cases, I have to be the one to validate that the information is accurate. Sometimes it provides information that isn't real, so you have to be the gate that validates the information is correct before proceeding. For simpler tasks, such as adjusting communication style or making text more formal, it performs perfectly. However, for complex cases, I don't use ChatGPT as the source of truth because it's not always accurate. If you delegate it to make decisions in your place, it could create significant problems.
The information base of ChatGPT has to grow because, in some cases, it does not include information about my city specifically.
I think it's pretty good. No recommendations.