Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • V0ldek@awful.systems · 3 days ago

    Okay but like is it materially different from whatever the current Claude thing is, or did they just pump the size of the matrix?

        • wizardbeard@lemmy.dbzer0.com · 1 day ago

          I still laugh every time I see that this is what qualifies as proper “tuning” and “security controls” for these things.

          I had hoped that with the whole “agent” push we would start seeing more sane usage, like having AI be a fuzzy logic step in a chain of formal logic and existing deterministic tools, but the cult still has people treating them like reliable second brains. They’re used as the baseline fucking orchestrator rather than anywhere they might make a bit of sense.
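
A minimal sketch of what “AI as a fuzzy logic step in a chain of deterministic tools” could look like, assuming Python; `classify_with_llm` is a hypothetical stub standing in for a real model call, and the category set and confidence threshold are invented for illustration:

```python
import json

# Hypothetical stand-in for a real model API call; stubbed so the sketch runs.
def classify_with_llm(text: str) -> str:
    return json.dumps({"category": "refund", "confidence": 0.72})

ALLOWED_CATEGORIES = {"refund", "shipping", "other"}

def route_ticket(text: str) -> str:
    raw = classify_with_llm(text)  # the single fuzzy step in the chain
    # Everything past this point is deterministic: parse, validate, fall back.
    try:
        parsed = json.loads(raw)
        category = parsed["category"]
        confidence = float(parsed["confidence"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return "other"  # malformed model output never reaches downstream tools
    if category not in ALLOWED_CATEGORIES or confidence < 0.5:
        return "other"  # low-confidence or unknown labels get the safe default
    return category

print(route_ticket("My package never arrived and I want my money back."))
# -> refund
```

The model only ever emits a suggestion; deterministic code decides whether it is usable, so a hallucinated label cannot steer the rest of the pipeline.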

          • scruiser@awful.systems · edited · 1 day ago

            I had hoped that with the whole “agent” push we would start seeing more sane usage, like having AI be a fuzzy logic step in a chain of formal logic and existing deterministic tools

            I think this is the best you can expect out of LLMs, and the relatively more successful “agentic” AI efforts are probably doing exactly this, but their relative success is serving as hype fuel for the more impossible promises of LLMs. Also, if you have formal logic and deterministic tools wrapping and sanity-checking the LLM bits, the value-add of evaporating rivers and firing up jet turbines to train and serve “cutting edge” models that only screw up 1% of the time isn’t there, because you can run an open-weight model 1/100th the size that screws up 10% of the time instead. (Note one important detail: training costs go up quadratically with model size, so a 100x-size model means 10,000x the training compute; a back-of-the-envelope check follows this comment.)

            I think the frontier LLM companies should have pivoted to prioritizing smaller size, greater efficiency, and actually sustainable business practices 4 years ago. At the very latest, 2 years ago, with the release of 4o, OpenAI should have realized that pushing up model size was the wrong direction (just as they should have realized that training Chain-of-Thought was not going to be the magic bullet).

            And to be clear, I still think this is really generous to the use case of smaller LMs.
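
A back-of-the-envelope check of the quadratic-scaling claim above, assuming Chinchilla-style compute-optimal training (FLOPs ≈ 6·N·D, with token count D scaling linearly in parameter count N; the 20 tokens-per-parameter ratio is an assumed constant and cancels out of the comparison):

```python
# Chinchilla-style estimate: training compute C ≈ 6 * N * D,
# with compute-optimal token count D proportional to N, so C grows ~ N^2.
def train_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
    return 6.0 * n_params * (tokens_per_param * n_params)

small = train_flops(1e9)   # a 1B-parameter model
big = train_flops(1e11)    # 100x the parameters
print(f"{big / small:,.0f}x the training compute")  # -> 10,000x
```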