• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: August 29th, 2023


  • I feel like some of the doomers are already setting things up to pivot when their most recent major prophecy (AI 2027) fails:

    From here:

    (My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

    It starts with some rationalist jargon, but the gist is that the author agrees with AI 2027, just one year later…

    AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control the end of 2027, but the median narrative is probably around 2030 or 2031.

    Further walking back the timeline, and adding qualifiers and exceptions that the authors of AI 2027 somehow never mentioned before. Also, the reason AI 2027 made no mention of Trump blowing up the timeline by doing insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

    I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate…

    No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs have gotten better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers…

    …but not by much.

    Nope, they still have insane expectations.

    Most of my disagreements are quibbles

    Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it’s December 2027 and none of AI 2027’s predictions have come true. They’ll exaggerate their “quibbles” into successful predictions of problems in the AI 2027 timeline, while downplaying the extent to which they agreed.

    I’ll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason Scott didn’t include any callout like that in AI 2027.


  • This isn’t debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don’t like that you can leave (or post a few more times for us to laugh at before you’re banned).

    As to the particular paper that got linked, we’ve seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs, for example) many many times already, so most of us weren’t going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.


  • As to cryonics… for both LLM doomers and accelerationists, they have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.

    As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

    • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose on LSD

    • no low hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit) so they’re left with eugenics and GeneSmith’s insanity

    • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

    • no exocortex, just overpriced Google Glass clones and a hallucinating LLM “assistant”

    • no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

    The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.


  • Is this water running over the land or water running over the barricade?

    To engage with his metaphor: this water is dripping slowly through a purpose-dug canal, by people who claim they are trying to show the danger of the dikes collapsing but who actually serve as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.

    Looking at the details of these “safety evaluations”, it always comes down to directly prompting the LLM and baby-stepping it through the desired outcome, with lots of generous interpretation to find even the faintest rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever other property they want to anthropomorphize into the LLM to make it seem more threatening.