
Large language models have an awkward history with telling the truth, especially when they cannot provide a real answer. Hallucinations have been a hazard for AI chatbots since the technology debuted a few years ago. But GPT-5 seems to be taking a new, more humble approach to not knowing an answer: admitting it.
Although most AI chatbot responses are accurate, it is impossible to interact with one for long before it offers a partial or complete fabrication as an answer. The AI presents wrong answers with the same confidence as right ones. AI hallucinations have plagued users and even led to embarrassing moments for developers during demonstrations.
OpenAI had hinted that the new version of ChatGPT would be willing to admit ignorance rather than make up an answer, and a viral X post by Kol Tregaskes has drawn attention to the groundbreaking concept of ChatGPT saying, "I don't know - and I can't reliably find out."
"GPT-5 says 'I don't know'. Love this, thank you." (Kol Tregaskes on X, August 18, 2025)
Technically, hallucinations are baked into how these models work. They're not retrieving facts from a database, even if it looks that way; they're predicting the next most likely word based on patterns in language. When you ask about something obscure or complicated, the AI is guessing the right words to answer it, not doing a classic search engine hunt. Hence, the appearance of entirely made-up sources, statistics, or quotes.
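The prediction-versus-retrieval distinction above can be sketched in a few lines of Python. This is a toy illustration with made-up token scores and a made-up abstention threshold; real models score tens of thousands of tokens at each step, and OpenAI has not published how GPT-5 actually decides to say "I don't know."

```python
import math

# Hypothetical scores a model might assign to candidate next tokens
# after a prompt like "The capital of France is". Illustrative only.
logits = {"Paris": 9.2, "Lyon": 4.1, "London": 2.7, "banana": -1.0}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(scores, abstain_below=0.5):
    """Pick the most likely token, or abstain when confidence is low."""
    probs = softmax(scores)
    token, p = max(probs.items(), key=lambda kv: kv[1])
    return token if p >= abstain_below else "I don't know"

print(next_token(logits))  # a dominant candidate: prints "Paris"
# Near-flat scores mean no continuation dominates, so the sketch abstains:
print(next_token({"Smith": 1.1, "Jones": 1.0, "Lee": 0.9}))
```

The point of the sketch is that the model never "looks up" Paris; it just ranks continuations. When no continuation clearly dominates, the older behavior was to emit the top guess anyway, which is where fabricated sources and quotes come from.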
But GPT-5's ability to stop and say, "I don't know," reflects an evolution in how AI models handle their limitations, at least in their responses. A candid admission of ignorance replaces fictional filler. It may seem anticlimactic, but it matters: it makes the AI more trustworthy.
Clarity over hallucinations
Trust is crucial for AI chatbots. Why would you use them if you don't trust the answers? ChatGPT and other AI chatbots have built-in warnings about not relying too much on their answers due to hallucinations, but there are always stories of people ignoring that warning and getting into trouble. If the AI just says it can't answer a question, people might be more inclined to trust the answers it does provide.
Of course, there's still a risk that users will interpret the model's self-doubt as failure. The phrase "I don't know" might come off as a bug, not a feature, if you don't realize the alternative is a hallucination, not the correct answer. Admitting uncertainty isn't how the all-knowing AI some imagine ChatGPT would behave.
But it's arguably the most human thing ChatGPT could do in this instance. OpenAI's proclaimed goal is artificial general intelligence, AI that can perform any intellectual task a human can. One of the ironies of AGI is that mimicking human thinking means mimicking human uncertainty as well as human capability.
Sometimes, the smartest thing you can do is say you don't know something. You can't learn if you refuse to admit there are things you don't know. And, at least, it avoids the spectacle of an AI telling you to eat rocks for your health.
You might also like
- GPT-5 Pro is brilliant, but it's still nowhere near real AGI, says one of the professors who coined the term
- OpenAI's CEO says he is scared of GPT-5
- AI that seems conscious is coming - and that's a big problem, says Microsoft AI's CEO