If I hear one more person say something along the lines of, “AI is the future” I’m going to strangle them. Of all the people that say that shit, none of them can explain how it works.
well lemme ask chatgpt first then i can explain it to you
Uh…recombinant DNA experiments were never paused, and while human cloning is illegal in most countries, Sam Altman has a company in San Francisco called Preventive that aims to genetically modify embryos.
I’ve been hesitant to play around with AI just because of how sneaky business is done lately and I don’t trust “business”. I can’t in good conscience reconcile my use of AI with the horrendous resources required to keep it up and running. I’d rather go “green” and figure out shit on my own, using old school research methodologies. My only caveat to this is if I really, really wanted a funny image. Maybe a Spongebob and Magilla Gorilla mashup. That, I’d sell out for. /s
Image generation can easily be done locally with open weight models. You can avoid businesses.
Nah they developed all that tech in multiple underground bunkers somewhere…
You can easily make a “blinding laser weapon” in your house with a soldering iron and parts readily available online, no secret bunker necessary. Instant, permanent total blindness in a handheld device. It’s honestly shocking to me that it isn’t already in wide use among dissidents/terrorists/etc.
Progress cannot be stopped, it will continue until the apocalypse comes because of it and no one can stop it. It’s a pattern.
None are paused tho. They might say they are
And the beauty of this stance is that it’s literally impossible to disprove, so you never have to be wrong.
Of course the problem is that you as the claimant have burden of proof.
I’ll do you one better by just using logic. There is no more work needed for blinding lasers, you can pick up a battery powered IR setup for a few hundred dollars and strap it to a rifle, done. Recomb DNA actually is still being studied, allow me to gesture very broadly to ALL the shit we do with yeast and I dated a girl working with M. Maydis for treating breast cancer.
Translation from bafflegab to English: “I have no evidence and thus cannot cite it.”
I think you’re confused, because not only am I agreeing with you, but none of the things I said should be confusing to anyone with a > 10th grade education.
ZDL is confused. Human cloning? Easy, done. Not used, but well understood. You explained the lasers. Recombinant DNA? That is the basis of all current biotech outside of mRNA and CRISPR (which is also recombinant DNA, just very focused).
I dunno what you’re trying to argue. They accused me of using needlessly confusing language, that was what I was referring to.
I think you’re confused, SomethingSnappy is agreeing with you.
They just used normal words tho? This comment seems to be telling on yourself more than anything.
If AI can only be done in such secrecy that it’s impossible to disprove then I’d call that a win.
Yeah literally just let us have some peace and quiet before we’re suddenly turned into paper clips

Best incremental game btw
Of course the problem is that you as the claimant have burden of proof.
People who say this kind of thing about claims regarding government or industry-level activities have no clue about security classifications.
How are you supposed to provide proof for something that is being deliberately withheld from the public?
You’re not. The problem isn’t just that they’re not supplying proof. It’s that they’re making assertions without supplying proof.
It is the pairing that is toxic.
The person to whom I responded said this flatly:
None are paused tho.
That is a bold, positive claim made with a very certain voice. And precisely because there is no way to verify this, it is impossible to prove or disprove. Which places it fully in the realm of unsupported speculation.
Had there been some form of tempering, clearly identifying it as opinion or speculation, then I’d have no problem with it.
LOL, they probably haven’t paused any of it, though. I mean like they’d tell us if they were!
There’s an example. See the difference?
I’ve got a blinding laser in my CD burner.
Allegedly, a doctor in China was already creating designer babies, and recombinant DNA products exist (and therefore, the research to create those products is being done). Hell, I’ve done my own recombinant DNA experiments in my bio labs during college.
I’m not a particular fan of AI but I’m not naive enough to believe that research would stop just because everyone claimed it had.
The kind of “AI” involved here (LLMbeciles and other such degenerative AI forms) are difficult to do in secret given, you know, the massive server racks with an extreme thirst for power and water they involve…
The world is full of data centres; who’s to say what their purpose is? How are you going to verify other nations’ compliance?
There is no scenario in which this genie is getting put back in the bottle
There is no answering the conspiracy mindset.
By which I mean the American one.
Wasn’t the original claim that they were paused? Why is it always the claim that you disagree with that has the burden of proof, not the original claim?
The files basically confessed
Bingo!
The difference between AI and the other 3: AI has the potential to save all the rich people trillions through the firing of the proletariat whereas the 3 numbered items were merely a small group of people trying to make money for themselves.
Wut. Rich people will shoot themselves in the foot by firing the proletariat. AI is trash.
The only thing that would save them is a bail out when everything crashes.
So much of white collar work is frankly a bit performative in general, and it’s sometimes impossible to tell whether it’s being done well, done badly, or not being done at all.
Thanks to mismanagement, people are brought in “in case they might be useful”, and a bunch of material is produced that is beyond the ken of the management, who just smile and nod because they have no idea.
Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” as executive after executive sagely nodded. New executive came into the fold and got the same pitch and said “ok, fine, but what analytics, with what data sources, what do you expect to get out of it?” In a rare moment of competence an executive actually dared to figure out something instead of just smiling over the buzzwords. That same executive was gone within 3 months, because broadly speaking this was a problem for his peers that mostly operated by buzzword alignment.
There’s a mountain of internal project document material that must be created but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code”, or that they can fire their coders and replace them with new coders as long as they have some ‘non-code’ document to reference.
GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.
Makes sense! So your theory is leadership will fire themselves and replace themselves with genai, keeping the rank and file workers?
Nah, that rank and file workers will go and the leadership will happily let genai keep doing performative bullshit that doesn’t matter and claim it’s like super important
“An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu
“AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.
In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with just iterative changes to the LLM-based model, things are improving fast enough that it might soon be viable to shrink the workforce for technical tasks.
But I’m sure I’m not the only one who thinks the LLM-focused approach itself is just a local minimum the industry is stuck trying to optimize. A better approach might not be the big-data “throw everything we can at it and hope it spits out useful results” strategy, but something more methodical: one that encodes knowledge from experts to give the system a head start, plus robust reasoning strategies and logic to let it improve on that starting point as it seeks out and adds relevant data, similar to how we do science and engineering.
I believe that it’s a race between an AI that truly can outcompete us and societal collapse, because the real reason AI is more difficult to stop than those other three is how easy it is to hide development. The massive data centers are required for the current approach being scaled up for the world to use it. AI research and development can be done on home PCs, especially if you’re more interested in results than speed (in which case you aren’t limited by cores or memory but just by storage and time).
Eh, it’s the illusion of speed. Scaling brought enormous returns from GPT-3 → GPT-4, but it’s been far less significant for every major release since. To compensate, every research lab is coming up with new ways to extract value out of models: CoT, RL, agent harnesses, etc.
However, these are all hacks to make LLMs more efficient or to (try to) make them more reliable. They still have significant drawbacks, and it will take years (probably decades), if ever, to get them to the point where they can reliably replace knowledge workers. China knows this and is taking a far different approach to LLM development (not a tankie fyi). Scaling is a horrible idea which will burn billions of dollars with an astronomically low chance of return.
Yeah, while I have some doubts, I believe that LLMs have fundamental issues that will always hold them back. The doubts come because with Claude Code they seem to have built a system that’s effective at giving the model good context, and it has relatively quickly solved some annoying obscure issues with my environment that I was unable to make any progress on myself and that other LLMs were also useless for.
I still think it’s a series of patches/bandaids to cover up those flaws, but my doubt comes in the form of “what if those patches can get it to average human level or even skilled”. I don’t think LLMs can get to the true innovator level like Einstein and Tesla, but doing competent work is well below that level and at this point I think LLMs might be able to get there.
And I think other approaches could do even better. Not that I know what they are, but just based on the assumption that we haven’t found the ideal approach in the still infancy of what AI could be.
Edit: Funnily enough, the current/recent advancements seem to be aimed at eliminating the job of “prompt expert” first.
1 and 3 could easily make a boatload of money, and could allow rich people to “live forever” and edit themselves in the process.
I just want to say the hair in your “blank” profile picture got me
kudos
Firing and rehiring at a lower wage. That is, if they’re motivated to continue producing functional products. It’s clear that at this point many aren’t. So maybe this point is moot.
None of those were paused LOLOLOLOL
Came here to say this, but without the LOLs.
Came here to say this, but with one lol.
Idk if you know this but lol counts as punctuation too you don’t even need a period lol see
this is the way
And then there’s antichiral bacteria, where the entire scientific community will shoot you if you even breathe wrong adjacent to the idea.
As someone who has family that died from mad cow (prion disease), fuck everything about that. The fact that there are prion-tainted spaces out in the wild is terrifying enough.
ooo, that’s a fun concept to think of. yeah, grab your go bag.
What’s that and what do you mean by breathing wrong at the idea? Is someone trying to breed some sort of supervillain bacteria?
Others have already answered, but yeah, it’s a bit of a Pandora’s box. We almost certainly wouldn’t be able to contain it, and there’s no way of knowing what it would do to the world or even universe. It’s some supremely scary shit.
Almost every organic molecule has a mirrored counterpart, like a normal screw and a left-handed screw.
Almost none of them occur in nature.
So we now have the technology to synthesize them, and to synthesize a bacterium out of them.
But if you do that, and the bacterium escapes, all your existing medicine will be useless, so you’d need to re-synthesize all your antibiotics in the left-handed configuration.
That typically doesn’t happen with regular bacteria experiments, because most of what you can synthesize in the lab will be a descendant of some other well-known bacterium, which already has an appropriate medicine to treat it, and in most cases that medicine will be effective against your new strain.
Though wouldn’t that incompatibility go both ways? Current drugs and antibodies wouldn’t work with them but wouldn’t they use the mirrored proteins for energy and functioning, thus our bodies would be of no use to them?
I’ve been wondering if bio-compatibility would mean one doesn’t have a chance against the other, or if it’s more like separate worlds that can only interact at a high level (like via the senses) but not at a lower level (sharing infections, food, and other biological processes).
In theory yes. In practice no one wants to try it.
Maybe?
Worth risking life as we know it just to find out, for shiggles?
The truth is, there will be somewhere that they outcompete native fauna for resources but can’t be stopped by what controls the natives, and whoops, there goes the ecosystem.
I think it would be important to know in the context of space exploration. Assuming we can solve the other very hard problems standing in the way of a Star Trek future (though I’m not holding my breath lol), we’d need to know whether we should stay the fuck away from any planets we find with life, or whether we can make contact without potentially dooming both our planet and theirs to returning to the single-celled stage.
But yeah, it is likely a real-world Pandora’s box.
I find AI very frustrating. I had a script I wanted to turn into a systemd service which I’ve never done. I searched the web, didn’t find quite what I wanted so I asked AI. It gave a great answer to exactly my question and explained what every field was doing. It got me there faster than searching and browsing forums would have.
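For reference, the kind of minimal unit file the AI produced for this use case looks something like the following. The script name and path here are made up for illustration:

```ini
# /etc/systemd/system/myscript.service  (hypothetical name and path)
[Unit]
Description=Run my script at boot
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload` and `sudo systemctl enable --now myscript.service` to register and start it.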
So, great. I also wanted to set up a watchdog on the Pi to reboot it if it hangs. The AI tells me to get the watchdog package from apt, then edit a systemd conf file. An hour later with nothing working right, I gave up and found a tutorial in about 30 seconds of web browsing that made it clear the AI was mixing up instructions from 2 different methods.
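If I had to guess, the two methods being conflated are the classic `watchdog` daemon (installed from apt, configured in `/etc/watchdog.conf`) and systemd’s built-in hardware watchdog, which needs no extra package. For the systemd route, the setting lives in systemd’s own config, not in a unit file:

```ini
# /etc/systemd/system.conf — systemd's built-in hardware watchdog
# systemd opens /dev/watchdog itself and pets it; if PID 1 hangs,
# the SoC watchdog reboots the machine after this timeout.
RuntimeWatchdogSec=15
```

On some Raspberry Pi setups you may also need `dtparam=watchdog=on` in the boot config to enable the hardware watchdog device in the first place. Mixing the daemon’s instructions with systemd’s would plausibly produce exactly the broken result described above.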
So it saved me 5 minutes on one thing, cost me an hour on another. I feel like the internet and search engines of 10 years ago were much better than what we have now.
It was better ten years ago.
That is my exact experience. I was basically just incoherently whining about an issue I had that involved accessing the DB for old legacy windows photo albums and preserving them, and it spit out a fully working program that did all that.
Then again, it often latches onto a way to do something that messes things up and leads nowhere, and I have to be the one to say: “STOP. The goal is to install a scanner on a very common OS, one that is praised for being particularly compatible to this. Now you want me to add 50 lines of custom configuration to a background service and switch it to an unsupported version. We are clearly on the wrong path here.”
Hence I do experiment with it at home to see its limits, but my customers get 100% human-generated solutions.
That touches on the heart of it; search engines have been so enshittified that AI is by default better, because it occasionally gets information from its training data that isn’t easily found through normal searching.
(Some) AI has its place; GAN-style AI, for instance, is amazing at finding subtle indicators of patterns that can be extrapolated to new data. But god, it’s just so bad at 99% of the applications it has ever been used for, including the entire concept of LLMs, which are such an inherently flawed technology that they’ll never be passably useful for anyone who isn’t a greedy shortsighted CEO wanting to replace workers as soon as possible.
Here’s how I do it:
Word the query as best I can. If the AI gives a specific and likely answer, double-check the documentation, Stack Overflow, or its listed sources.
It sucks that a lot of the stuff I’m searching for comes from the same three fucking AI-generated things from 2024 onwards.
I’d be pretty certain that if any of these has “paused”, it’s just because research reached its limits and is waiting for the next big development that enables it to continue.
Also “recombinant DNA experiments”??? What in the world is meant by that?
I think this is what they mean https://en.wikipedia.org/wiki/Recombinant_DNA#Controversy
I read the article in French, and it seems they used it for insulin and EPO? I also think this argument is slippery; these technologies are more in a “pause mode”, no?
TIL, thanks! I agree that this is a fitting example then.
I read through it, and it seems that mostly they were unsure about the safety, so they placed a moratorium on it until they could figure out whether it was dangerous, and it was eventually deemed not to be.
CFCs.
Only after years of kicking and screaming, and the companies that made CFCs switched to more expensive HFCs and later HFOs, so it’s not like they went out of business. (The alternative is hydrocarbons, which are much cheaper, but flammability was used as a reason to restrict their use.) CFCs are also still used as chemical intermediates, and as late as during COVID there was an operational illegal R-12 factory somewhere in northern China.
Only because there was a viable alternative that does the exact same job. AI is both the technology and the goal, it’s a billionaire money printing circlejerk
That one was about commercialization, not research (which is the issue around dangerous AI)
That’s not how this meme format works
And yet it was made, posted, saved, and shared. Because posting MORE content is better than posting GOOD content
I’m not worried about AI ruining the internet…we’ve already done it ourselves.
We do recombinant DNA experiments all the time. We just don’t do it on humans, and even then it depends on the specifics when done on test animals.
yeah, didn’t they make mammoth meat recently but they’re all too coward to eat it (call me scientists i want to eat that mammoth)
The grifters want to become “too big to fail”, so that a “pause” would cause finance market drama. So, first, block the sale of the shares – that’s needed to avoid the grifters offloading to bigger fools.
A lot of people think that these companies want to become too big to fail but I suspect something else is going on as well.
Step 1. Make/pioneer new tech or buy it.
Step 2. Get investors onboard with it so that you can get a lot of seed money.
Step 3. Invest in the infrastructure to support widespread use of the technology.
Step 4. Develop the technology even though it has known flaws.
Step 5. Let those known flaws stir the pot and cause uncertainty in the market.
Step 6. If the technology fails and the bubble pops, get government bailout.
Step 7. Use money from the bailout to buy up the infrastructure and components at a ridiculous rock-bottom rate, using these companies’ own money.
Step 8. Use a combination of bailout money and insurance to make the investors whole (or as whole as their contract stipulates).
Now at step 9 you are left with a bunch of commodities that tech companies need (data centers, supply lines for components, better power infrastructure, a knowledgeable tech workforce, etc.). And you don’t have to pay astronomical prices because the bottom fell out of the market.