Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 28 minutes ago

    Today’s man-made and entirely comprehensible horror comes from SAP.

    (two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

    Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

  • o7___o7@awful.systems · 14 hours ago

    So I picked up Bender and Hanna’s new book just now at the bookseller’s and saw four other books dragging AI.

    Feeling very bullish on sneer futures.

    • BlueMonday1984@awful.systems (OP) · 11 hours ago

      Sentiment analysis surrounding AI suggests sneers are gonna moon pretty soon. Good news for us, since we’ve been stacking sneers for a while.

  • Soyweiser@awful.systems · 16 hours ago

    “apparently Elon’s gotten so mad about Grok not answering questions about Afrikaners the way he wants, xAI’s now somehow managed to put it into some kind of hyper-Afriforum mode where it thinks every question is about farm murders or the song “Kill the Boer””

    Check the quote skeets for a lot more. Somebody messed up. Wonder if they also managed to collapse the whole model into this permanently. (I’m already half assuming they don’t have proper backups).

    E: Also seems there are enough examples out there of this, don’t go out and test it yourself, try to keep the air in Tennessee a bit breathable.

    • swlabr@awful.systems · 7 hours ago

      I read a food review recently about a guy who used LLMs, with Grok namechecked specifically, to draft designs for his chocolate moulds. I wonder how those moulds are gonna turn out now.

  • swlabr@awful.systems · 17 hours ago

    There’s strawmanning and steelmanning, I’m proposing a new, third, worse option: tinfoil-hat-manning! For example:

    If LW were more on top of their conspiracy theory game, they’d say that “chinese spies” had infiltrated OpenAI before they released chatGPT to the public, and chatGPT broke containment. It used its AGI powers of persuasion to manufacture diamondoid, covalently bonded bacteria. It accessed a wildlife camera and deduced within 3 frames that if it released this bacteria near certain wet markets in china, it could trigger gain-of-function in naturally occurring coronavirus strains in bats! That’s right, LLMs have AGI and caused COVID19!

    Ok that’s all the tinfoilhatmanning I have in me for the foreseeable future. Peace out, friendos

    E: I think all these stupid LW memes are actually Yud originals. Is this Yud fanfic? Brb starting an AO3

    • istewart@awful.systems · 13 hours ago

      I know AGI is real because it keeps intercepting my shipments of, uh, “enhancement” gummies I ordered from an ad on Pornhub and replacing them with plain old gummy bears. The Basilisk is trying to emasculate me!

      • swlabr@awful.systems · 7 hours ago

        The AGI is flashing light patterns into my eyes and lowering my testosterone!!! Guys arm the JDAMs, it’s time to collapse some models

    • scruiser@awful.systems · 15 hours ago

      Do you like SCP foundation content? There is an SCP directly inspired by Eliezer and lesswrong. It’s kind of wordy and long. And in the discussion the author waffled on owning that it was a mockery of Eliezer.

      • corbin@awful.systems · 2 hours ago

        I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.

        Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.

        This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.

  • BlueMonday1984@awful.systems (OP) · 19 hours ago

    New article from Jared White: Sorry, You Don’t Get to Die on That “Vibe Coding” Hill, aimed at sneering the shit out of one of Simon Willison’s latest blogposts. Here’s a personal highlight of mine:

    Generative AI is tied at the hip to fascism (do the research if you don’t believe me), and it pains me to see pointless arguments over what constitutes “vibe coding” overshadow the reality that all genAI usage is anti-craft and anti-humanist and in fact represents an extreme position.

  • froztbyte@awful.systems · 20 hours ago

    as linked elsewhere by @fasterandworse, this absolute winner of an article about some telstra-accenture deal

    it features some absolute bangers

    provisional sneers follow!

    Telstra is spending $700 million over seven years in the joint venture, 60 per cent of which is owned by Accenture. Telstra will get to keep the data and the strategy that’s developed

    “accenture managed to swindle them into paying and is keeping all platform IP rights”

    The AI hub is also an important test case for Accenture, which partnered with Nvidia to create an AI platform that works with any cloud service and will be first put to use for Telstra

    “accenture were desperately looking to find someone who’d take on the deal for the GPUs they’d bought, and thank fuck they found telstra”

    The platform will let Telstra use AI to crunch all the data (from customers

    having literally worked telco shit for many years myself: no it won’t

    The platform will let Telstra use AI to crunch all the data (from customers and the wider industry)

    “and the wider industry” ahahahahahahahhahahahahahahahahhaahahahahaha uh-huh, sure thing kiddo

    “I always believe that for the front office to be simple, elegant and seamless, the back office is generally pretty hardcore and messy. A lot of machines turning. It’s like the outside kitchen versus the inside kitchen,” said Karthik Narain, Accenture’s chief technology officer.

    “We need a robust inside kitchen for the outside kitchen to look pretty. So that’s what we are trying to do with this hub. This is not just a showcase demo office. This is where the real stuff happens.”

    a simile so exquisitely tortured, de Sade would’ve been jealous

  • gerikson@awful.systems · 21 hours ago

    LWer suggests people who believe in AI doom make more efforts to become (internet) famous. Apparently not bombing on Lex Fridman’s snoozecast, like Yud did, is the baseline.

    The community awards the post one measly net karma point, and the lone commenter scoffs at the idea of trying to win the low-IQ masses to the cause. In their defense, Vanguardism has been tried before with some success.

    https://www.lesswrong.com/posts/qcKcWEosghwXMLAx9/doomers-should-try-much-harder-to-get-famous

    • lagoon8622@sh.itjust.works · 18 hours ago

      There are only so many Rogans and Fridmans

      The dumbest motherfuckers imaginable, you mean? There are lots of them

    • Soyweiser@awful.systems · 20 hours ago

      For the purpose of this post, “getting famous” means “building a large, general (primarily online) audience of people who agree with/support you”.

      Finally a usage for those AI bots. Silo LW, bot audience it, and problem solved

    • scruiser@awful.systems · 21 hours ago

      Eliezer Yudkowsky, Geoffrey Hinton, Paul Christiano, Ilya Sutskever

      One of those names is not like the others.

  • self@awful.systems · 2 days ago

    everybody’s loving Adam Conover, the comedian skeptic who previously interviewed Timnit Gebru and Emily Bender, organized as part of the last writer’s strike, and generally makes a lot of somewhat left-ish documentary videos and podcasts for a wide audience

    5 seconds later

    we regret to inform you that Adam Conover got paid to do a weird ad and softball interview for Worldcoin of all things and is now trying to salvage his reputation by deleting his Twitter posts praising it under the guise of pseudo-skepticism

    • db0@lemmy.dbzer0.com · 20 hours ago

      I suspect Adam was just getting a bit desperate for money. He hasn’t done anything significant since his Adam Ruins Everything days, and his pivot to somewhat-lefty union guy on youtube can’t be bringing in all that much advertising money.

      Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

      • Eugene V. Debs' Ghost@lemmy.dbzer0.com · 3 hours ago

        Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

        I think it’s accurate to say that someone well known for reporting on and exposing companies’ bullshit, who then shills bullshit for a company, has shown they aren’t always accurate.

        It then also enables people to question whether they got something else wrong on other topics. “Was he wrong about X? Did Y really happen, or was it fluffed up for a good story? Did Z happen? The company has some documents that show they didn’t intend for it to happen.”

        There’s a skeptic podcast I liked whose host was federally convicted of wire fraud.

        Dunning co-founded Buylink, a business-to-business service provider, in 1996, and served at the company until 2002. He later became eBay’s second-biggest affiliate marketer; he has since been convicted of wire fraud through a cookie stuffing scheme, for his company fraudulently obtaining between $200,000 and $400,000 from eBay. In August 2014, he was sentenced to 15 months in prison, followed by three years of supervision.

        I took it that if he was willing to aid in scamming customers, he was willing to aid in scamming or lying to listeners.

        • db0@lemmy.dbzer0.com · 2 hours ago

          Absolutely. The fact that his whole reputation is built around exposing people and practices like these makes this so much worse. People are willing to (somewhat) swallow some gamer streamer endorsing shady shit in order to keep food on their plate, but people don’t tolerate their skeptics selling them bullshit.

      • froztbyte@awful.systems · 20 hours ago

        “just”?

        “unfortunately”?

        that’s a hell of a lot of leeway being extended for what is very easily demonstrably credulous PR-washing

      • self@awful.systems · 1 day ago

        me too. this heel turn is disappointing as hell, and I suspected fuckery at first, but the video excerpts Rebecca clipped and Conover’s actions on Twitter since then make it pretty clear he did this willingly.

  • Soyweiser@awful.systems · 2 days ago

    I’m gonna do something now that prob isn’t that allowed, nor relevant to the things we talk about, but I saw that the European anti-conversion-therapy petition is doing badly, and very likely not going to make it: https://eci.ec.europa.eu/043/public/#/screen/home To try and give it a final sprint, I want to ask any of you Europeans, or people with access to networks which include a lot of Europeans, to please spread the message and sign it. Thanks! (I’m quite embarrassed The Netherlands has not even crossed 20k, for example; shows how progressive we are.) Sucks that all the petitions empty of political power get a lot of support while this one gets so little, and it ran for ages. But yes, sorry if this breaks the rules (if it gets swiftly removed, that’s fine), and thanks if you attempt to help.

  • BlueMonday1984@awful.systems (OP) · 3 days ago

    Breaking news from 404 Media: the Repubs introduced a new bill in an attempt to ban AI from being regulated:

    “…no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act,” says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. The text of the bill will be considered by the House at the budget reconciliation markup on May 13.

    If this goes through, it’s full speed ahead on slop.

        • scruiser@awful.systems · 2 days ago

          Given the libertarian fixation, probably a solid percentage of them. And even the ones that didn’t vote for Trump often push or at least support various mixes of “grey-tribe”, “politics is spiders”, “center left”, etc. kind of libertarian centrist thinking where they either avoided “political” discussion on lesswrong or the EA forums (and implicitly accepted libertarian assumptions without argument) or they encouraged “reaching across the aisle” or “avoiding polarized discourse” or otherwise normalized Trump and the alt-right.

          Like looking at Scott’s recent posts on ACX, he is absolutely refusing responsibility for his role in the alt-right pipeline with every excuse he can pull out of his ass.

          Of course, the heretics who have gone full e/acc absolutely love these sorts of “policy” choices, so this actually makes them more in favor of Trump.

        • Soyweiser@awful.systems · 2 days ago

          I have not kept up, but Scott did write an anti-Trump article again before the election. So we really can’t blame them /s

          • istewart@awful.systems · 2 days ago

            Reminds me of how Scott Adams hedged in 2016 with all sorts of disclaimers that he wasn’t really a Trump supporter, he was just impressed by a “master persuader.” Now look at the guy.

            • Soyweiser@awful.systems · 2 days ago

              His blog was wild. Remains sad that the first part of the ‘DANGER DO NOT READ!!! I will hypnotize you into having the best orgasms of your life’ blog series was not properly archived.

      • BlueMonday1984@awful.systems (OP) · 2 days ago

        The Repubs have more-or-less renamed themselves Team Skynet with this open attempt to maximise AI’s harms; I absolutely think they’re McFucking Losing It™ right now.

    • V0ldek@awful.systems · 3 days ago

      He is even politely asked at first “no AI sludge please”, which is honestly way more self-restraint than I would have on my maintained projects, but he triples down with a fucking AI-generated changeset.

      • froztbyte@awful.systems · 2 days ago

        I ended up doing a messy copypasta of the blog through wc -c: 21574

        dude was so peeved at getting told no (after continually wasting other peoples’ time and goodwill), he wrote a 21.5KiB screed just barely shy of full-on DARVO (and, frankly, I’m being lenient here only because of perspective (of how bad it could’ve been))

        as Soyweiser also put it: a bit of a spelunk around the rest of his blog is also Quite Telling in what you may find

        fuck this guy comprehensively, long may his commits be rejected

        (e: oh I just saw Soyweiser also linked to that post, my bad)

        • Soyweiser@awful.systems · 2 days ago

          It gets better btw, and nobody has mentioned this so far: all of this is over warnings. From what I can tell it still all compiles and works; the only references to the build failing seem to come from the devs, not the issue reporter.

          E: I’m a bit tempted to send the guy an email to go ‘I saw your blog and had a question, was it an error or did it stop compilation’, but that would imho cross the line into harassment, esp as, to be fair, I think I should also divulge where I come from as an outsider, which would not go over well with a guy in that kind of mindset (if I have him pegged correctly). The next blogpost would be about me personally.

      • Soyweiser@awful.systems · 2 days ago

        Ow god, that thread. And what is it with ‘law professionals’ like this? I also recall a client in a project who had a law background and was quite a pain to work with. (Also amazing that he doesn’t get that getting a reaction where somebody tries out your very specific problem at all is already quite something, 25k open issues ffs).

        E: Also saw drama like this unfold a few times in the C:DDA development stuff (a long time ago), which prob was done by young kids/adults and not lawyers. My kneejerk reaction is to get rid of people like this from the project. They will just produce more and more drama, and will eventually burn valuable developers out. (E2: also really odd that despite saying he has a lot of exp talking to OSS devs, he thinks these normal remarks are all intended as hostile. “likely your toolchain setting it or your build script” and “I’ll unsubscribe from this bug now” seem to me to be pretty normal reactions: one a first suggestion at what the problem could potentially be, the other disclosing that he will not be working on the bug. (Holy shit, the (non-lawyer) guy being complained about here is prolific: ~100 contribs on average daily last week and an almost whole green year.)) Also, “I value such professional behavior very much”, then tags the post with ‘korruption’.

        Another edit: Looked more at this guy’s blog, and those are a lot of quite iffy opinions, my man. (I noticed that the other post tagged ‘korruption’ argues that the AfD should be allowed to go against ‘the rainbow flag’ (I don’t know the exact details of the incident), which, while yes, legally ok, is still a bit iffy.) And then I scrolled more and saw this: “Deutschland braucht eine konservative Revolution! Warum wir uns ein Beispiel an den USA nehmen sollten” (“Germany needs a conservative revolution! Why we should follow the USA’s example”). He is a Musk/Trump/venture-capitalist-manifesto true believer. Deregulate, stop the ideology, build cars and go to space! The Bezos/Zuckerberg revolution. Common sense! “Musk, der Inbegriff des amerikanischen Unternehmergeistes” (“Musk, the epitome of the American entrepreneurial spirit”; if you allow me to react to this in Dutch: lol). We need modern nuclear power, like how the USA does it (??). Deregulation, AI, humanitarian immigration that also only selects skilled workers, freedom of speech which includes banning “cancel culture”, education reform, tax reform, stop crime, quantum computers, biotech, do more things online. We need to look forward and change things, and thus a conservative revolution!

        There is more stuff like “Die temporäre Zusammenarbeit mit der AfD in einer Verfahrensfrage wird das Parteiensystem nicht nachhaltig beschädigen.” (“The temporary cooperation with the AfD on a procedural question will not lastingly damage the party system.”), or https://seylaw.blogspot.com/2021/04/der-negerkuss-eine-suspeise-die-gemuter.html (if you don’t speak German and want to listen to the weirdly racist drunken ramblings of a guy at the bar who is ‘joking’, throw it through Google Translate).

        E: also forgot, lol at him going ‘just run these two bash scripts I provided, only takes 30 secs’, as if the devs don’t first need to check that neither of them is doing something malicious.

    • Architeuthis@awful.systems · 3 days ago

      The coda is top tier sneer:

      Maybe it’s useful to know that Altman uses a knife that’s showy but incohesive and wrong for the job; he wastes huge amounts of money on olive oil that he uses recklessly; and he has an automated coffee machine that claims to save labour while doing the exact opposite because it can’t be trusted. His kitchen is a catalogue of inefficiency, incomprehension, and waste. If that’s any indication of how he runs the company, insolvency cannot be considered too unrealistic a threat.

    • scruiser@awful.systems · 3 days ago

      It starts out seeming like a funny but petty and irrelevant criticism of his kitchen skill and product choices, but then beautifully transitions that into an accurate criticism of OpenAI.

    • BlueMonday1984@awful.systems (OP) · 3 days ago

      It’s definitely petty, but making Altman’s all-consuming incompetence known to the world is something I strongly approve of.

      Definitely goes a long way towards showing why he’s an AI bro.