Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Interesting (in a depressing way) thread by author Alex de Campi about the fuckery by Unbound/Boundless (crowdfunding for publishing, which segued into financial incompetence and stealing royalties), whose latest incarnation might be trying to AI their way out of the hole they’ve dug for themselves.
From the liquidator’s proposals:
We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and potentially creating new content based on our intellectual property **utilizing inexpensive artificial intelligence platforms**.
(emphasis mine)
They don’t appear to actually own any intellectual property anymore (due to defaulting on contracts) so I can’t see this ending well.
Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w
Loose Mission Impossible Spoilers
The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI’s main powers are lies, fake news, and manipulation; it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do a lot of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be taken as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.
Adding to the post-LLM-hype predictions: I think after the LLM bubble pops, “Terminator”-style rogue AI movie plots don’t go away, but take on a different spin. Rogue AIs’ strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses it will be less “failed to comprehend love” or “cleverly constructed logic bomb breaks its reasoning” and more “forgets what it was doing after getting drawn into too long a conversation”. For human actions it will be less “its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement” and more “its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster”.
New Blood In The Machine: The “AI jobs apocalypse” is for the bosses
OT: just got a job interview and wanted to pass the good vibes on!
Noice!
Girls think the “eu” in “eugenics” means EW. Don’t get the ick, girls! It literally means good.
So if you’re not into eugenics, that means you must be into dysgenics. Dissing your own genes! OMG girl what
… how is this man still able to post from inside the locker he should be stuffed in 24/7
Seeing Yarvin mansplain eugenics really does make one wonder how he doesn’t just get suckerpunched whenever he says anything at someone in public.
Not beating the sexism allegations.
sounds like he’s posting from inside a dilapidated white panel van parked strategically just outside a legally-mandated exclusion radius surrounding a middle school
So, he’s essentially Drake if he got into AI doom
The eigenrobot thread he’s responding to is characteristically bizarre and gross. You’d think eigenrobot being anti-eugenics is a good thing but he still finds a way to make it suspect. (He believes being unable to make babies is worse than death?)
I think he means “mass sterilisation of a population” vs. “mass murder of the same population”, which is genocide either way, and that he would opt for the faster method.
Or something. Feels extra creepy discussing which genocide is better with the ongoing genocide in Gaza.
Re: extra creepy: and also with their people in power.
I mean I guess you can argue that straight up murder has a certain honesty to it? At the same time that is mainly good because it makes it harder to justify what’s happening compared to anti-miscegenation laws or restricting people to an open-air prison for a few generations. And we can see how that’s working out in the current political climate.
A new “LLM plays Pokemon” has started, with o3 this time. It plays moderately faster, and the twitch display UI is a little bit cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up/seeing things that aren’t on the screen (items in Viridian Forest); confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner); repeating mistakes to itself (both the items and the navigation issues I mentioned); confusing details from other generations of Pokemon (Nidoran learns Double Kick at level 12 in Fire Red and Leaf Green, but not in the original Blue/Yellow); and it shows signs of being prone to completely batshit tangents (it briefly got derailed into sneaking through the trees in Viridian Forest… i.e. moving through completely impassable tiles).
I don’t know how anyone can watch any of the attempts at LLMs playing Pokemon and think (viable) LLM agents are just around the corner… well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-plays-Pokemon thing is turning into less of a test of LLMs and more entertainment and advertising for the models, and the scaffolds are extensive enough and different enough from each other that they really aren’t showing the models’ raw capabilities (which are even worse than I complained about) or comparing them meaningfully.
I like how all of the currently running attempts have been equipped with automatic navigation assistance, i.e. a pathfinding algorithm from the 60s. And that’s the only part of the whole thing that actually works.
I wouldn’t say even that part works so well, given how Mt. Moon is such a major challenge even with all the features like that.
The actual pathfinding algorithm (which is surely just A* search or similar) works just fine; the problem is the LLM which uses it.
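For scale, the whole “algorithm from the 60s” is about this much code. A minimal A* sketch on a made-up tile grid — purely illustrative, not the actual scaffold code, and the grid/start/goal are invented for the example:

```python
# Minimal A* on a tile grid: '#' tiles are impassable, moves cost 1.
import heapq

def astar(grid, start, goal):
    """Return a shortest path from start to goal as a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]            # min-heap of (f-score, position)
    came_from = {start: None}
    g = {start: 0}                     # best known cost from start
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    # Manhattan distance: admissible on a 4-connected grid
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(frontier, (ng + h, nxt))
    return None                        # genuinely impassable, no hallucinating

grid = ["....#",
        ".##.#",
        "....."]
print(astar(grid, (0, 0), (2, 4)))     # prints one shortest path
```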
I’m sure this is fine: https://infosec.exchange/@paco/114509218709929701
"Paco Hope #resist @[email protected]
OMG. #Microsoft #Copilot bypasses #Sharepoint #security so you don’t have to!
“CoPilot gets privileged access to SharePoint so it can index documents, but unlike the regular search feature, it doesn’t know about or respect any of the access controls you might have set up. You can get CoPilot to just dump out the contents of sensitive documents that it can see, with the bonus feature* that your access won’t show up in audit logs.”
The S in CoPilot stands for Security! https://pivotnine.com/the-crux/archive/remembering-f00fs-of-old/"
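For anyone who wants the failure mode spelled out: the problem is architectural. The indexer reads everything with privileged access, and the assistant’s retrieval path never re-checks permissions or writes an audit entry. A hypothetical sketch of that pattern — every name here is invented for illustration, none of it is a real SharePoint or Copilot API:

```python
# Hypothetical sketch of "privileged indexer + no query-time ACL check".
# All class/function names are made up; nothing here is a real API.

class Index:
    """Search index built by a crawler running with privileged access."""
    def __init__(self):
        self.docs = {}                        # doc name -> contents

    def add(self, name, text):
        self.docs[name] = text                # the indexer can read everything

    def lookup(self, query):
        return [text for name, text in self.docs.items() if query in name]

def regular_search(user, query, index, grants, audit_log):
    """Normal path: filter hits by ACL and record the access."""
    audit_log.append((user, query))
    return [text for name, text in index.docs.items()
            if query in name and (user, name) in grants]

def assistant_answer(user, query, index, audit_log):
    """Assistant path: same privileged index, no ACL check, no audit entry."""
    return index.lookup(query)                # dumps whatever the indexer saw

index = Index()
index.add("salaries.xlsx", "everyone's pay, in detail")
grants = set()                                # this user may read nothing
audit_log = []
print(regular_search("mallory", "salaries", index, grants, audit_log))  # []
print(assistant_answer("mallory", "salaries", index, audit_log))        # leaks
```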
Veering semi-OT: the guy behind the godawful Windows 11 GUI has revealed himself:
Looking at his Twitter profile, it’s clear he’s a general dumpster fire of a human being - most of his feed’s just him retweeting AI garbage or fash garbage.
It’s not healthy for me to have my biases confirmed like this.
But it lets you adjust your priors so pleasantly!
It also means you can update your priors about your own ~~biases~~ predictive instincts being good, allowing you to be more confident in literally everything you’ve ever believed or thought about for half a second. Superpredictors unite!
this one is a joke, i think. he is definitely on the fashy bullshit though
@BlueMonday1984 lol @ “I try not to let [performance] considerations get in the way”
Also why do you even put a React Dev on that task 🤡
You could show me this without any context whatsoever and my first thought would’ve been “did a React dev say that”
:(
Not advocating violence, but Achewood did demonstrate one possible set of reactions to discovering a Microsoft designer at large in public.
I was trying out free github copilot to see what the buzz is all about:
It doesn’t even know its own settings. This one little useful thing that isn’t plagiarism, providing a natural-language interface to its own bloody settings, it couldn’t do.
New piece from Iris Meredith: Keeping up appearances, about the cultural forces that gave us LLMs and how best to defeat them
Reminds me of something F.D. Signifier said on a music podcast.
Progressives are losing the cultural war in a lot of ways, but they’ll always need us because we’re the ones pushing the boundaries on art, and it turns out, no matter how ghoulish people want to act, everyone has genuine love of fucking awesome art. The true loss condition is being captured by the tools of the master.
Rekindled a desire to maybe try my own blog ^^.
I think beyond “Keeping up appearances” it’s also the stereotype of fascists—and by extension LLM lovers—having trouble (or pretending to have trouble) distinguishing the signifier from the signified.
In a world that chases status, be prestigious
I’ll keep that in mind…
this is ridiculously good
Hey look, it’s this meme for the n-th time
I don’t get it, how is every one of the most touted people in the AI space among the least credible people in the industry?
Like literally every time it’s a person whose name I recognize from something else they’ve done, that something else is something I hate.
In the collection of links to what Ive has done in recent years, there’s one to an article about a turntable redesign he worked on, and from that article:
The Sondek LP12 has always been entirely retrofittable and Linn has released 50 modular hardware upgrades to the machine, something that Ive said he appreciates. “I love the idea that after years of ownership you can enjoy a product that’s actually better than the one you bought years before,” said Ive.
I don’t know, should I laugh, or should I scream, that it’s Ive, of all people, saying that.
ED ZITRON
FROM THE TOP ROPE
Some quality sneers in Extropic’s latest presentation about their thermodynamics hardware. My favorite part was the Founder’s mission slide “e/acc maximizes the watts per civilization while Extropic maximizes intelligence per watt”.
I’m not going to watch more than a few seconds but I enjoyed how awkward Beff Jezos is coming across.
is Extropic now claiming to have actually done anything?
Apparently they are going to ship their development kits sometime later this year. He still sounds confusing AF to me and my BS indicator is going off all the time. He also makes incorrect statements (around 9 minutes in) such as
Neural nets came from energy-based models
which makes 0 sense historically. According to Wikipedia, EBMs were first introduced in 2003.
OT: Welp. Think the interview went well. Just waiting for them to check references (oh god) and I should know what’s what by Monday.
Good luck! I’m rooting for you.
Another critihype article from the BBC, with far too much credulity toward the idea of AI consciousness, at the cost of covering the harms of AI as things stand (e.g. the privacy, environmental, and dataset-bias problems):
Tried to read it, ended up glazing over after the first or second paragraph, so I’ll fire off a hot take and call it a day:
Artificial intelligence is a pseudoscience, and it should be treated as such.
Every AI winter, the label AI becomes unwanted and people go with other terms (expert systems, machine learning, etc.)… and I’ve come around to thinking this is a good thing, as it forces people to specify what it is they actually mean, instead of using a nebulous label with many science fiction connotations that lumps together decent approaches and paradigms with complete garbage and everything in between.
I’m gonna be polite, but your position is deeply sneerworthy; I don’t really respect folks who don’t read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:
There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism.” … “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”
At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:
In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. “It will mean that we trust these things more, share more data with them and be more open to persuasion.” But the greater risk from the illusion of consciousness is a “moral corrosion”, he says. “It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.
A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it’s a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I’ll try to salvage your position:
Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it’s definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.
you seem to have invented a definition of pseudoscience on the fly
it’s not pseudoscience unless it’s from the “literally studying ghosts” region of crankery, otherwise it’s just sparkling… actually I don’t know what your point is with all this
I am not sure that having “an illusory object of study” is a standard that helps define pseudoscience in this context. Consider UFOlogy, for example. It arguably “studies” things that do exist — weather balloons, the planet Venus, etc. Pseudoarchaeology “studies” actual inscriptions and actual big piles of rocks. Wheat gluten and seed oils do have physical reality. It’s the explanations put forth which are unscientific, while attempting to appeal to the status of science. The “research” now sold under the Artificial Intelligence banner has become like Intelligent Design “research”: Computers exist, just like bacterial flagella exist, but the claims about them are untethered.
Scientists and philosophers have spilled a tanker truck of ink about the question of how to demarcate science from non-science or define pseudoscience rigorously. But we can bypass all that, because the basic issue is in fact very simple. One of the most fundamental parts of living a scientific life is admitting that you don’t know what you don’t know. Without that, it’s well-nigh impossible to do the work. Meanwhile, the generative AI industry is built on doing exactly the opposite. By its very nature, it generates slop that sounds confident. It is, intrinsically and fundamentally, anti-science.
Now, on top of that, while being anti-science the AI industry also mimics the form of science. Look at all the shiny PDFs! They’ve got numbers in them and everything. Tables and plots and benchmarks! I think that any anti-science activity that steals the outward habits of science for its own purposes will qualify as pseudoscience, by any sensible definition of pseudoscience. In other words, wherever we draw the line or paint the gray area, modern “AI” will be on the bad side of it.
No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both sides sort of thing that gives lay readers the (false) impression that both sides are equal. This sort of journalism may appear balanced, but it ultimately has contributed to all kinds of controversies (from Global Warming to Intelligent Design to medical pseudoscience) where the viewpoints of cranks and uninformed busybodies and autodidacts of questionable ability and deliberate fraudsters get presented equally with actually educated and researched viewpoints.
Having now read the thing myself, I agree that the BBC is serving up criti-hype and false balance.
…fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific.
When a good man gazes into the palantir and sees L Ron Hubbard looking back
Incomplete sneer, ten-yard penalty. First down, plus coach has to go read Chasing the Rainbow: The Non-conscious Nature of Being (Oakley & Halligan, 2017) to see what psychology thinks of itself once the evidence is rounded up in one place.
not sure Frontiers apologetics is it chief
To be fair I also believe psychology is by and large pseudoscience, but the answer to it is sociology, not the MRI gang.
There are parts of the field with major problems, like the sorts of studies that get done on 20 student volunteers and then turned into a pop-psychology factoid that gets tossed around and over-generalized while the original study fails to replicate. But there are parts that are actually good science.
Touting neuroscience as especially informed and scientific about minds is very brave.