Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • self@awful.systems · 18 points · 3 months ago

    one of OpenAI’s cofounders wrote some thoroughly unhinged shit about the company’s recent departures

    Thank you, guys, for being my team and my co-workers. With each of you, I have collected cool memories — with Barret, when we had a fierce conflict about compute for what later became o1; with Bob, when he reprimanded me for doing a jacuzzi with a coworker; and with Mira, who witnessed my engagement.

    • FredFig@awful.systems · 16 points · 3 months ago

      I am in awe of the sheer number of GPUs… whose lives ChatGPT has changed.

      If it was just this one line, this would be in the top 10 funniest things ever written around genAI. Too bad the rest of the rambling insanity ruins it.

    • Sailor Sega Saturn@awful.systems · 14 points · 3 months ago (edited)

      I can’t be the only one reading that super passive aggressively right? “Thank you Barret, whom I hated. Bob, for ruining my hot Jacuzzi date. And Mira, for existing.”

    • jax@awful.systems · 8 points · 3 months ago (edited)

      lmao this is weird as fuck, reminds me of the bullshit Lex Fridman comes up with, I can totally imagine him saying things like this

  • Mii@awful.systems · 16 points · 3 months ago

    I am sure this is totally not sketchy in the slightest and the people behind it have no nefarious agenda whatsoever.

    • froztbyte@awful.systems · 10 points · 3 months ago

      fuck that’s gross

      where’d you find/run across that? can’t tell if it’s a normal ad or some gig-site thing or what

      • Mii@awful.systems · 11 points · 3 months ago

        Saw it posted on Reddit. It’s apparently from clickworker.com which is a weird-ass website by itself at first glance, with great pitches like this:

        People are happier if they are more financially independent. We can help you achieve this.

    • Soyweiser@awful.systems · 9 points · 3 months ago

      Ah yes, the advertisement which causes T&S people (who have seen some things) to go on long rants about why you should never put pictures of your kids online publicly.

  • gerikson@awful.systems · 15 points · 3 months ago

    A lobsters user states the following in regard to LLMs being used in medical diagnoses:

    If you have very unusual symptoms, for example, there’s a higher chance that the LLM will determine that they are outside of the probability space allowed and replace them with something more common.

    Another one opines:

    Don’t humans and in particular doctors do precisely that? This may be anecdotal, but I know countless stories of people being misdiagnosed because doctors just assumed the cause to be the most common thing they diagnose. It is not obvious to me that LLMs exhibit this particular misjudgement more than humans. In fact, it is likely that LLMs know rare diseases and symptoms much better than human doctors. LLMs also have way more time to listen and think.

    <Babbage about UK parliamentarians.gif>

    • self@awful.systems · 14 points · 3 months ago

      nothing hits worse than an able-bodied techbro imagining what medical care must be like for someone who needs it. here, let me save you from the possibility of misdiagnosis by building and mandating the use of the misdiagnosis machine

    • YourNetworkIsHaunted@awful.systems · 13 points · 3 months ago

      Also please fill in the obligatory rant about how LLMs don’t actually know any diseases or symptoms. Like, if your training data was collected before 2020 you wouldn’t have a single COVID case, but if you started collecting in 2020 you’d have a system that spat out COVID to a disproportionately large fraction of respiratory symptoms (and probably several tummy aches and broken arms too, just for good measure).

    • Soyweiser@awful.systems · 9 points · 3 months ago

      The question (which doesn’t matter) now is, does he really understand crypto? Or did he arrive at the right conclusion because he thinks that everybody else, like him, is just scamming all the time?

      • bitofhope@awful.systems · 7 points · 3 months ago

        Given the kinds of crowds he hangs out with (i.e. mostly other rich people and the political elite), is that not an understandable conclusion?

    • froztbyte@awful.systems · 8 points · 3 months ago

      anything that makes yglesias have a bad day is generally a good thing

      but it sounds like the orange man understands the crypto market perfectly: the numbers are all made up and everyone’s lying

    • YourNetworkIsHaunted@awful.systems · 6 points · 3 months ago

      I mean there are definitely some brain rotted crypto bros who would buy shares at face value because it’s totally gonna go to the moon guys

    • Amoeba_Girl@awful.systems · 8 points · 3 months ago

      Do you think they still say all that bullshit even when they’re not screenshotting it for twitter? Probably, right

    • YourNetworkIsHaunted@awful.systems · 8 points · 3 months ago

      This was the woman who took over during Sam Altman’s temporary removal as CEO, which we’re pretty sure happened because the AI doom cultists weren’t satisfied that Altman was enough of an AI doom cultist.

      Yudkowsky was solidly in favor of her ascension. I take no joy in saying this as someone who wants this AI nonsense to stop soon, but OpenAI is probably better off financially with fewer AI doom cultists in high positions.

    • Sailor Sega Saturn@awful.systems · 11 points · 3 months ago

      yes. that’s all true, but academics and artists and leftists are actually calling for Butlerian jihad all the time. when push comes to shove they will ally with fascists on AI

      This guy severely underestimates my capacity for being against multiple things at the same time.

      • Soyweiser@awful.systems · 8 points · 3 months ago

        The type of guy who was totally convinced by the ‘but what if the AI needs to generate slurs to stop the nukes?’ argument.

    • swlabr@awful.systems · 9 points · 3 months ago

      Guy invented a new way to misinterpret the matrix, nice. Was getting tired of all the pilltalk

  • David Gerard@awful.systems (mod) · 11 points · 3 months ago

    ya know that inane SITUATIONAL AWARENESS paper that the ex-OpenAI guy posted, which is basically a window into what the most fully committed OpenAI doom cultists actually believe?

    yeah, Ivanka Trump just tweeted it

    But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.

    oh boy

    • Soyweiser@awful.systems · 10 points · 3 months ago

      From that blog post:

      You can see the future first in San Francisco.

      “And that, I think, was the handle - that sense of inevitable victory over the forces of old and evil. Not in any mean or military sense; we didn’t need that. Our energy would simply prevail. We had all the momentum; we were riding the crest of a high and beautiful wave. So now, less than five years later, you can go up on a steep hill in Las Vegas and look west, and with the right kind of eyes you can almost see the high-water mark - that place where the wave finally broke and rolled back.”

    • swlabr@awful.systems · 8 points · 3 months ago (edited)

      barron: shows ivanka the minecraft speedrun

      ivanka: I HAVE SEEN THE LIGHT, BROTHER

      edit: accidentally read ivanka as melania, now corrected

  • BlueMonday1984@awful.systems · 10 points · 3 months ago

    I vaguely remember mentioning this AI doomer before, but I ended up seeing him openly stating his support for SB 1047 whilst quote-tweeting a guy talking about OpenAI’s current shitshow:

    pro-1047 doomer

    I’ve had this take multiple times before, but now I feel pretty convinced the “AI doom/AI safety” criti-hype is going to end up being a major double-edged sword for the AI industry.

    The industry’s publicly and repeatedly hyped up this idea that they’re developing something so advanced/so intelligent that it could potentially cause humanity to get turned into paperclips if something went wrong. Whilst they’ve succeeded in getting a lot of people to buy this idea, they’re now facing the problem that people don’t trust them to use their supposedly world-ending tech responsibly.

    • swlabr@awful.systems · 7 points · 3 months ago

      it’s easy to imagine a world where the people working on AI that are also convinced about AI safety decide to shun OpenAI for actions like this. It’s also easy to imagine that OpenAI finds some way to convince their feeble, gullible minds to stay and in fact work twice as hard. My pitch: just tell them GPT X is showing signs of basilisk nature and it’s too late to leave the data mines

    • imadabouzu@awful.systems · 4 points · 3 months ago

      Isn’t the primary reason people are so powerfully persuaded by this technology that they’re constantly told that if they don’t use its answers, their life’s work and dignity will be taken from them? Like, how many are in the control group where they persuade people with a gun to their head?

  • gerikson@awful.systems · 10 points · 3 months ago (edited)

    People are “blatantly stealing my work,” AI artist complains

    When Jason Allen submitted his bombastically named Théâtre D’opéra Spatial to the US Copyright Office, they weren’t so easily fooled as the judges back in Colorado. It was decided that the image could not be copyrighted in its entirety because, as an AI-generated image, it lacked the essential element of “human authorship.” The office decided that, at best, Allen could copyright the specific parts of the piece that he worked on himself in Photoshop.

    “The Copyright Office’s refusal to register Theatre D’Opera Spatial has put me in a terrible position, with no recourse against others who are blatantly and repeatedly stealing my work without compensation or credit.” If something about that argument rings strangely familiar, it might be due to the various groups of artists suing the developers of AI image generators for using their work as training data without permission.

    via @[email protected]