Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

  • skillissuer@discuss.tchncs.de · 5 months ago

    a version readable for people blissfully unaffected by having a twitter account:

    “Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

    yeah ez just lemme build dc worth 1% of global gdp and run exclusively wisdom woodchipper on this

    “Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.”

    power grid equipment manufacturing has always had long lead times, and right now there’s a country in eastern europe with something like 9GW of generating capacity knocked out, you big dumb removed, maybe that has some relation to all the packaged substations disappearing

    They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

    i see that besides 50s aesthetics they like mccarthyism

    “As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.”

    how cute, they think that their startup gets nationalized before it dies from terminal hype starvation

    “I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”

    “We don’t need to automate everything—just AI research”

    “Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”

    just needs tiny increase of six orders of magnitude, pinky swear, and it’ll all work out
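    what “straight lines on a graph” cashes out to is log-linear extrapolation. a toy sketch, where the starting point and growth rate are purely illustrative assumptions and not figures from the quoted essay:

```python
# "Straight line on a graph": log-linear extrapolation of compute.
# base and oom_per_year below are illustrative assumptions, nothing more.

def extrapolate(base: float, years: float, oom_per_year: float = 0.5) -> float:
    """Multiply base by 10**(oom_per_year * years)."""
    return base * 10 ** (oom_per_year * years)

# six orders of magnitude at half an OOM per year takes 12 years;
# nothing about the graph guarantees the line stays straight.
print(f"{extrapolate(1e25, 12):.1e}")
```

    the whole argument rests on the exponent staying constant, which is exactly the part the sneer disputes.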

    it weakly reminds me of how Edward Teller got the idea for a primitive thermonuclear weapon, and then some of his subordinates ran the numbers and decided it would never work. his solution? Just Make It Bigger, it has to start working at some point (it was deemed unfeasible and tossed into the trashcan of history where it belongs. nobody needs gigaton-range nukes, even if his scheme had worked). he was very salty that somebody else (Stanisław Ulam) figured it out in a practical way

    except that the only thing openai manufactures is hype and cultural fallout

    “We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers.” “…given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day.”

    what’s “model collapse”
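    for anyone who hasn’t met the term: “model collapse” is what happens when models train on their own output. a toy resampling sketch of the mechanism (my own illustration with an assumed 10-token vocabulary, not anything from the essay):

```python
import random

# Toy model collapse: each generation's "training data" is sampled
# from the previous generation's output distribution. Rare tokens
# drop out and never return, so diversity can only shrink.
random.seed(0)
vocab_size = 10
population = list(range(vocab_size))  # generation 0: all 10 tokens present

for generation in range(30):
    population = [random.choice(population) for _ in range(len(population))]

surviving = len(set(population))
print(surviving)  # fewer distinct tokens than the original 10
```

    an internet’s worth of self-generated tokens per day just runs this loop faster.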

    “What does it feel like to stand here?”

    beyond parody

    • zogwarg@awful.systems · 5 months ago

      “Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”

      Also this doesn’t give enough credit to gradeschoolers. I certainly don’t think I am much smarter (if at all) than when I was a kid. Don’t these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? Maybe I’m the weird one, but to me growing up is not about becoming smarter; it’s about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.

      • Mii@awful.systems · 5 months ago

        Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?

        Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?

    • V0ldek@awful.systems · 5 months ago

      To engage with the content:

      That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.

      I see this is becoming their version of “to the moon”, and it’s even dumber.

      To engage with the form:

      wisdom woodchipper

      Amazing, 10/10 no notes.

      • skillissuer@discuss.tchncs.de · 5 months ago

        I see this is becoming their version of “to the moon”, and it’s even dumber.

        it only makes sense after crypto scammers, familiar and unfamiliar, pivoted to the new shiny thing at speeds breaking the sound barrier, starting with big boss sam altman

      • skillissuer@discuss.tchncs.de · 5 months ago

        wisdom woodchipper

        i think i first used that around the time a sneer came out about some lazy removedes who tried and failed to use chatgpt output as meaningful filler in a peer-reviewed article. of course it worked, and not only at MDPI, because i doubt anyone seriously cares about the prestige of the International Journal of SEO-bait Hypecentrics, impact factor 0.62, least of all its reviewers

    • Soyweiser@awful.systems · 5 months ago

      They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.

      Literally a plot point from a Warren Ellis comic book series. Of course, in that series they succeed in summoning various gods, and it does not end well (unless you are really into fungus).

    • skillissuer@discuss.tchncs.de · 5 months ago

      the source of that image is also bad: hxxps://waitbutwhy[.]com/2015/01/artificial-intelligence-revolution-1.html i think i’ve seen it listed on lessonline? can’t remember

      not only do they seem like true believers, they have been for a decade at this point

      In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

      Median optimistic year (10% likelihood): 2022

      Median realistic year (50% likelihood): 2040

      Median pessimistic year (90% likelihood): 2075

      just like fusion, it’s gonna happen in the next decade guys, trust me

      • 200fifty@awful.systems · 5 months ago

        I believe waitbutwhy came up before on old sneerclub, though in that case we were making fun of them for bad political philosophy rather than bad ai takes

        • skillissuer@discuss.tchncs.de · 5 months ago

          there’s a lot of bad everything, it looks like a failed attempt at a rat-scented xkcd. and yeah, they were invited to lessonline but didn’t arrive

    • o7___o7@awful.systems · 5 months ago

      “Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”

      They are going to summon a god. And we can’t do anything to stop it.

      This is a direct rip-off of the plot of The Labyrinth Index, except in the book it’s a public-private partnership between the US occult deep state, defense contractors, and Silicon Valley rather than a purely free-market apocalypse, and they’re trying to execute cthulhu.exe rather than implement the Acausal Robot God.