• zlatko@programming.dev · 17 points · 2 days ago

    I think not many people are aware of that. No matter how well you build systems with this type of AI, they still don’t actually know anything. Maybe they’re useful, maybe not, but the awareness that everything is actually just made up, by statistics and such, is missing from people’s minds.

    • Voroxpete@sh.itjust.works · 12 points · 1 day ago

      This is something I’ve been saying for a while now, because it really needs to be understood.

      LLMs do not “sometimes hallucinate.” Everything they produce is a hallucination. They are machines for creating hallucinations. The goal is that the hallucination will, through some careful application of statistics, align with reality.

      But there’s literally no feasible way that anyone has yet found to guarantee that.
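      The “just statistics” point can be made concrete with a toy sketch. The tokens and logit values below are invented for illustration (real models score tens of thousands of tokens), but the mechanism is the same: generation is sampling from a probability distribution, with no step anywhere that checks the sample against reality.

```python
import math
import random

# Hypothetical toy example: next-token logits a model might assign to
# continuations of "The capital of France is". Numbers are made up.
logits = {"Paris": 5.0, "Lyon": 2.0, "London": 1.5, "purple": 0.1}

# Softmax: convert raw logits into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The model never verifies facts; it just samples from this distribution.
# "Paris" is merely the most probable token, and every wrong token keeps
# a nonzero probability, so a confidently wrong answer is always possible.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token)
```

      Whether the sampled token happens to be true is a property of the training data shaping the distribution, not of any fact-checking step, which is why no one can guarantee the output aligns with reality.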

      LLMs were designed to effectively impersonate human interaction, and they’re actually pretty good at that. They fake intelligence so well that it becomes really easy to convince people that they are in fact intelligent. As a model for passing the Turing test they’re brilliant, but what they’ve taught us is that the Turing test is a terrible model for gauging the advancement of machine intelligence. Turns out, effectively reproducing the results a stupid human can achieve isn’t all that useful for the most part.