Significantly how? Both LLMs and AlphaFold are transformer-based neural networks. The LLM chatbot is trained on sequences of words; AlphaFold is trained on sequences of amino acids. Certainly AlphaFold’s task was ‘easier’ to pin down: we know how amino acid chains are formed, so its output space is constrained, and there’s effectively a checklist for deciding whether a prediction is plausible or impossible. That makes it easier to get reliable output and makes the model very good at one specific task, but under the hood the two work the same way. Word-predicting LLMs can’t have that kind of verifiable output because we use words for so many different things. It would be like asking a person to only ever communicate in poetry and no other way.
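To make that concrete, here is a toy sketch in PyTorch (entirely hypothetical code, not either system’s real architecture): the same next-token transformer can be instantiated over any vocabulary, whether the tokens are word pieces or the 20 standard amino acids. Only the vocabulary and the training data differ.

```python
import torch
import torch.nn as nn

class TinySequenceModel(nn.Module):
    """A minimal transformer that predicts the next token in a sequence."""
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)  # logits over the vocabulary

    def forward(self, tokens):  # tokens: (batch, sequence_length) integer IDs
        return self.head(self.encoder(self.embed(tokens)))

# Same architecture, two different 'languages':
word_model = TinySequenceModel(vocab_size=50_000)  # a word-piece vocabulary
protein_model = TinySequenceModel(vocab_size=20)   # the 20 standard amino acids

logits = word_model(torch.randint(0, 50_000, (1, 12)))  # shape (1, 12, 50_000)
```

(AlphaFold’s real pipeline of course adds structure-specific machinery and geometric losses on top, but the attention mechanism underneath belongs to the same family.)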
“one clear scientific purpose”
Computer scientists in academia are using DeepSeek to solve new problems in new ways too. They especially like DeepSeek and other Chinese models because they’re open-weights and don’t obfuscate any of their inner workings (such as the reasoning chain), so researchers can fine-tune them to their specific needs.
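As a concrete illustration (a minimal sketch; the model ID below is one of DeepSeek’s published distilled checkpoints, and any open-weights model would do), “open weights” means you can pull the whole model down and work on it locally with standard tooling such as Hugging Face’s transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID; swap in whichever open-weights checkpoint fits your hardware.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Nothing is hidden behind an API: you can inspect every parameter,
# dump intermediate activations, or keep training on your own data.
print(sum(p.numel() for p in model.parameters()))  # total parameter count
```

That is the practical difference from closed models: with an API-only model you see text in and text out, and nothing in between.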
I have to assume the purpose of amino acids wasn’t so clear when we first discovered them either, before we set out to investigate and, through extensive research and testing, found out how they work and what they actually do. It’s on us to discover the laws of the universe; they don’t come to us beamed from heaven straight into our brains.

The calculator was also a ‘nuclear device’ compared to what it replaced. So was the car, and yet today nobody would tell you that you can’t afford not to own a horse (except maybe Homer Simpson).
Things change, and if we want to criticize that change from a Marxist perspective, we have to offer something better than “I don’t like change”. It’s not all greener pastures with neural networks, but we need to be clear about what exactly we’re criticizing, and for that we need to understand things deeply.
But what is the ‘human experience’? I push for people to define their words when it comes to talking about neural networks, because more often than not doing so reveals more similarities with what already exists than a break from it. The more you use, understand and work with LLMs, the more you realize they’re really not so dissimilar from what we already live with daily. You’re worried about a WarGames situation, i.e. the artificial intelligence reaching the logical conclusion that to win a nuclear standoff you should dump your warheads on the enemy first. But this has always been the plan: as soon as this technology became available, people were going to rush for exactly that. It just happened in 2022 instead of 2065 or 2093, and so we have to reckon with the reality of it now, not later.

Complaining that this is now possible won’t change the fact that it exists and is being used, so instead I made the choice to find my own uses for LLMs that could be useful to communists (and incidentally, I think we could probably organize for socialism much more efficiently around “the army wants to offload targets to an AI” than around “AI bad, destroy it all”). I’m not saying this to be dismissive, but rather that, again, we need to offer a studied, Marxist perspective on the matter.
But speaking of human experience and cognition: there are plenty of neurodivergent people who don’t fare well with typical peer-to-peer communication (spoken or written) and who appreciate having LLMs to organize and make sense of their thoughts and feelings. Disabled people have found answers through LLMs. Human cognition is not universal, and we see that LLMs already offer assistance there. When the Walkman first came out, there was a huge panic around what it actually meant for society: that the youth saw it as a form of escapism, that it was an identity thing. It went so far that novels were written about kids turning into mind-zombies after getting a Walkman, and some people even went on TV to say that using one was a gateway to committing crime. We’re talking about a portable cassette player; the iPod of its day.
I’m not even convinced by these studies that supposedly find all sorts of ills with LLM usage, because I bet that in just a few years plenty of errors will be found in them. They are lab studies, not real-world ones, and I remember studies saying the same thing about search engines when those came out. I talked in another comment about how search engines act as a memory bank for us: instead of remembering everything, we offload it to the search engine. I don’t necessarily remember what each property of CSS’s box-shadow does, but I know how to look that up on a search engine and find the information. Likewise, we stopped remembering phone numbers the moment we got mobile phones (although we should probably still remember one or two emergency numbers).