• slazer2au@lemmy.world · 11 points · 13 days ago

    immensely high GPU memory requirements for local LLMs will also cripple their games.

    Not really; you can tune an LLM to do what you want.

    Why have an LLM know about 17th-century European politics or modern science when you are sticking it into a fantasy video game?

  • parpol@programming.dev · 12 points · 13 days ago

      How small can you make an LLM before it starts having issues with grammar and coherence? I would argue that the bare minimum would still be rather large, and in video games we’re already using VRAM for other resources. In a 3D game especially, I imagine very little VRAM is left to utilize.
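      A rough back-of-envelope estimate makes the VRAM question concrete: a model’s weight footprint is approximately parameter count × bytes per parameter, plus some overhead for the KV cache and activations. The 20% overhead figure below is an assumption for illustration, not a measured value.

      ```python
      def model_vram_gb(params_billion: float, bits_per_param: float,
                        overhead: float = 1.2) -> float:
          """Rough memory estimate for an LLM's weights.

          params_billion: model size in billions of parameters
          bits_per_param: precision (16 for fp16, 4 for 4-bit quantized)
          overhead: multiplier for KV cache/activations (assumed ~20%)
          """
          weight_bytes = params_billion * 1e9 * (bits_per_param / 8)
          return weight_bytes * overhead / 1e9  # decimal GB for simplicity

      # A 1B-parameter model at fp16 needs roughly 2.4 GB,
      # while the same model quantized to 4 bits fits in about 0.6 GB --
      # a meaningful difference when the renderer already owns most of the VRAM.
      ```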

    • Rikudou_Sage@lemmings.world · 7 points · 13 days ago

        You’d be surprised how small you can go. That’s IMO pretty much the future of AI - a shit ton of small specialized models. While the heavyweights have their use, they’re way too expensive and overkill for specialized tasks.

        Some small models can comfortably run on the CPU as well, and games can easily detect whether you have VRAM to spare and pick GPU or CPU accordingly.
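        The fallback logic described above could be sketched like this. The free-VRAM number would come from a platform query (a graphics-API or driver call, not shown here); the function names and the safety margin are illustrative assumptions.

        ```python
        def pick_backend(free_vram_mb: int, model_footprint_mb: int,
                         margin_mb: int = 512) -> str:
            """Choose where to run inference: GPU only if the model fits
            alongside the game's other resources with a safety margin,
            otherwise fall back to CPU."""
            if free_vram_mb >= model_footprint_mb + margin_mb:
                return "gpu"
            return "cpu"

        # e.g. 600 MB model with 4 GB free -> GPU; with 800 MB free -> CPU
        ```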

        It’s not there yet, but what some of the small models can do is impressive. And if you train them extensively on fantasy scripts, I can see them generating NPC lines on the fly.

    • slazer2au@lemmy.world · 1 point · 13 days ago

        Not sure, but what I am sure of is that companies will pay “AI engineers” (or whatever they’re called) to trim models down to a usable point instead of hiring a better writing team.

      • thanks_shakey_snake@lemmy.ca · 3 points · 13 days ago

          That’s immensely expensive, though, and not guaranteed to work, because much of that is still at the research stage. You’re right that paring down the models to make them leaner and more specialized is the primary direction current research is pursuing, but it’s far from certain at this point how to do it, how well it will work, and how small you can get them before they start to fall apart. Not something game studios are likely to gamble their budgets on, at least not yet.

          We’re nowhere near the “just hire a guy to trim it down instead of hiring writers” stage, and it’s unclear yet whether or not that’s where we’ll end up. We could pull off “just hire a guy to fine-tune an existing foundation model,” but that doesn’t make them smaller.