The discourse on “AI” systems, chat bots, “assistants” and “research helpers” is defined by a lot of promises about the future. Those systems are dysfunctional, or at least not working great, right now, but there’s the promise of things getting better in the future. Which is how we often perceive tech to work: Early versions might be […]
I think not many people are aware of that. No matter how well you build systems with this type of AI, they don’t actually know anything. Now, maybe they’re useful, maybe not, but this awareness that everything is actually just made up, by statistics and such, is lacking from people’s minds.
This is something I’ve been saying for a while now, because it really needs to be understood.
LLMs do not “sometimes hallucinate.” Everything they produce is a hallucination. They are machines for creating hallucinations. The goal is that the hallucination will - through some careful application of statistics - align with reality.
But there’s literally no feasible way that anyone has yet found to guarantee that.
LLMs were designed to effectively impersonate human interaction. They’re actually pretty good at that. They fake intelligence so well that it becomes really easy to convince people that they are in fact intelligent. As a model for passing the Turing test they’re brilliant, but what they’ve taught us is that the Turing test is a terrible model for gauging the advancement of machine intelligence. Turns out, effectively reproducing the results a stupid human can achieve isn’t all that useful for the most part.