Generative AI could also provide more opportunities for players to go off-script and create their own stories if designers can craft environments that feel more alive and can react to players’ choices.
LLM-powered NPCs will quickly fall out of fashion as people realize they’re literally just talking to ChatGPT.
Either the forced always-online requirement and privacy-violating telemetry of server-side LLMs, or the immense GPU memory requirements of local LLMs, will also cripple these games.
Not really, you can tune an LLM to do what you want.
Why does an LLM need to know about 17th-century European politics or modern science when you’re sticking it into a fantasy video game?
How small can you make an LLM before it starts having issues with grammar and coherence? I would argue that the bare minimum would still be rather large, and in video games we’re already using VRAM for other resources. In a 3D game especially, I imagine very little VRAM is left to utilize.
You’d be surprised how small you can go. That’s IMO pretty much the future of AI - a shit ton of small specialized models. While the heavyweights have their use, they’re way too expensive and overkill for specialized tasks.
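Back-of-envelope, the weights dominate the footprint, so parameter count times bytes per parameter gives a rough floor (this ignores the KV cache and runtime overhead, so real numbers run higher):

```python
# Back-of-envelope weight footprint: parameters x bytes per parameter.
# Ignores KV cache, activations, and runtime overhead, so treat it as a floor.
def weight_footprint_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

for n_params in (0.5e9, 1e9, 3e9, 7e9):
    for label, bpp in (("fp16", 2.0), ("4-bit", 0.5)):
        print(f"{n_params / 1e9:.1f}B @ {label}: ~{weight_footprint_gb(n_params, bpp):.2f} GB")
```

A quantized 1–3B model lands around 0.5–1.5 GB for weights alone, which is a budget some games could plausibly carve out and many couldn’t.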
Some small models can comfortably run on the CPU as well, and games can easily detect whether you have VRAM to spare and use the GPU or CPU based on that.
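A rough sketch of that kind of check, assuming a PyTorch-based runtime and a made-up 4 GB threshold (a shipping game would more likely query its graphics API and budget against its own rendering load):

```python
import torch

# Pick a device for the dialogue model based on how much VRAM is actually free.
# The 4 GB threshold is a placeholder; a real game would budget this against
# its own rendering load rather than a fixed number.
def pick_llm_device(min_free_vram_gb: float = 4.0) -> str:
    if torch.cuda.is_available():
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        if free_bytes / 1024**3 >= min_free_vram_gb:
            return "cuda"
    return "cpu"

print(pick_llm_device())
```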
It’s not there yet, but what some of the small models can do is impressive. And if you train them extensively on fantasy scripts, I can see them generating NPC lines on the fly.
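“On the fly” wouldn’t take much at runtime either - a sketch using the Hugging Face transformers pipeline, where the model name is a placeholder for whatever small fantasy-tuned model you’d actually ship:

```python
from transformers import pipeline

# "your-studio/tiny-fantasy-npc" is a placeholder, not a real model; the point
# is that a small fine-tuned model gets driven by game state at runtime.
npc = pipeline("text-generation", model="your-studio/tiny-fantasy-npc")

prompt = (
    "You are Maerel, a grumpy dwarven blacksmith. The player just returned "
    "your stolen hammer. Reply with one short line, in character.\n"
    "Maerel:"
)
result = npc(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```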
Not sure, but what I am sure of is companies paying “AI engineers” (or whatever they’re called) to trim them down to a usable point instead of hiring a better writing team.
That’s immensely expensive though, and not guaranteed to work, because much of that is still at the research stage. You’re right that paring models down to make them leaner and more specialized is the primary direction current research is pursuing, but it’s far from certain at this point how to do it, how well it will work, and how small you can get them before they start to fall apart. Not something game studios are likely to gamble their budgets on, at least not yet.
We’re nowhere near the “just hire a guy to trim it down instead of hiring writers” stage, and it’s not yet clear whether that’s where we’ll end up. We could pull off “just hire a guy to fine-tune an existing foundation model,” but that doesn’t make the models any smaller.
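For what it’s worth, the “fine-tune an existing foundation model” route usually looks something like this LoRA sketch (peft + transformers, with a placeholder base model name), which also shows why it doesn’t shrink anything: the adapter is a thin layer trained on top of the full-size, frozen base weights.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# "some-small-base-model" is a placeholder; target_modules assumes LLaMA-style
# attention projection names and would need adjusting for other architectures.
base = AutoModelForCausalLM.from_pretrained("some-small-base-model")

# LoRA trains a few million adapter parameters on top of the frozen base, which
# makes fine-tuning cheap -- but the full-size base weights still ship unchanged.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()
```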