Intel’s Q1 2025 earnings press release talked up its new AI-enabled chips. But these are not selling. [Intel] On the earnings call, CFO Dave Zinsner mentioned they had “capacity constraints in In…
Gen AI should be private, secure, local, and easier for its users to train to fit their own needs. The closest thing to this at the moment seems to be Kobold.
Nah, we’re up to running Qwen3 and DeepSeek R1 locally on accessible hardware at this point, so we have access to what you describe. Ollama is the app.
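(If anyone wants to try it, here’s a minimal sketch using the `ollama` Python client; the model tag and prompt are placeholders, and it assumes the Ollama server is running and the model has already been pulled.)

```python
# Minimal sketch: chat with a locally pulled model via the ollama
# Python client (pip install ollama). Assumes the Ollama server is
# running and you've already done e.g. `ollama pull qwen3`.
import ollama

response = ollama.chat(
    model="qwen3",  # placeholder tag; use whatever `ollama list` shows
    messages=[{"role": "user", "content": "Why run models locally?"}],
)
print(response["message"]["content"])
```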
The problem continues to be that LLMs are not suitable for many applications, and where they are useful, they are sloppy and inconsistent.
My laptop is one of the ones they are talking about in the article. It has an AMD NPU; it’s a 780M APU that also runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, though, so that’s not how I end up utilizing the hardware. 😑
Does Ollama accept custom parameters now?
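(It does, for what it’s worth: you can pass per-request options or bake them into a Modelfile with PARAMETER lines and `ollama create`. A hedged sketch with the same Python client as above; the parameter values are arbitrary examples, not recommendations.)

```python
# Sketch: passing custom sampling parameters per request through the
# ollama Python client. The values below are arbitrary examples.
import ollama

response = ollama.chat(
    model="qwen3",  # placeholder tag
    messages=[{"role": "user", "content": "Hello"}],
    options={
        "temperature": 0.2,  # lower = more deterministic output
        "num_ctx": 8192,     # context window size in tokens
        "top_p": 0.9,        # nucleus sampling cutoff
    },
)
print(response["message"]["content"])
```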
I wasn’t talking about their effectiveness, though. Yeah, they’re sloppy as hell, but I’d rather trust a sloppy tool I set up at home and use myself than let someone I don’t trust into my home to use their sloppy tools, tinker with my property without permission when I’m not looking, and change their terms and prices every day.
But granted, your point is a really good one. These AI-ready laptops don’t give the bang for your buck you’d expect. We’re all better off taking good care of our older hardware and waiting longer for components that are a true improvement to replace them.
It sounds like I’m ragging on the hardware, and I’m not; the chip is really cool for gaming and other fun stuff.
You’re right about that older hardware too. Americans oughta look at their scrap market right now, because it’s good as hell and artificially cheap thanks to this nonsense LLM boom.
Removed by mod
no thx
What’s wrong with running your own AI on your own PC? That’s a very luddite reaction for someone who moderates a “tech takes” forum.
it’s weird your local AI told you fuck all about the labor movement whose name you’re using as an insult, but oddly enough I don’t feel like wasting my time explaining this quote unquote “tech takes” forum to the type of asshole we exist to sneer at
remarkably, your post is one of those “tech takes” that we hang around the watercooler for
although I’m not sure you’ll get why
lol and then only after posting do I read @self’s post. good going, me!