• 6 Posts
  • 337 Comments
Joined 2 years ago
Cake day: August 29th, 2023


  • “How AI Impacts Skill Formation” has two authors, so even on the bare factual matters you are wrong. The disempowerment paper has four authors, but judging from their bios all of them look like computer scientists, so the general thrust of fiat_lux’s comment still holds for that paper.

    I don’t mind academics reaching outside their fields of expertise, but they really should bring in collaborators with the appropriate background, and the fact that Anthropic hasn’t hired any humanities researchers to support this sort of research is a bad sign.




  • Sovereign citizens think their made-up procedures or words will actually let them bypass the law. Whereas I think Eliezer would fold to actual pressure from the government (despite all his talk about game theory and ignoring threat-like incentives, he would in fact want to avoid going to jail). At least, that’s the vibe I’ve gotten from his absolute refusal to suggest non-governmental direct action to stop the AI doom he is so certain is coming.



  • Edit: Isn’t Dath Ilan the setting of the Project Wonderful glowfic? The setting where people with good genes get more breeding licenses than people with bad genes?

    Yep, Project Lawful. dath ilan is Eliezer’s “utopian” world the isekai’d protagonist is from. As described in dath ilan, if you have “bad” genes you lose your UBI if you have kids anyway (technically it’s a Georgist-style citizen’s dividend, but it’s basically UBI), and if you have “good” genes you get extra payments for having more kids.

    Eliezer is basically saying that unless the government meets the “standards” of his made-up fantasy “utopia,” he won’t cooperate with it, even in prosecuting literal child-raping pedophiles or carrying out social repercussions against said child rapists.


  • Multiple hackernews insist that SpaceX must have discovered new physics that solves orbital heat management, because otherwise Musk and the stockholders are dumb.

    The leaps in logic are so idiotic: “he managed to land a rocket upright, so maybe he can pull it off!” (as if Elon personally made that happen, or as if an engineering challenge and fundamental thermodynamic limits are equally solvable). This is despite multiple comments replying with back-of-the-envelope calcs on the energy generation and heat dissipation of the ISS and comparing them to what you would need for even a moderately sized data center (a rough version of that calc is sketched below). Or even the comments that are like “maybe there is a chance”, as if it is wiser to express uncertainty…
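
    For anyone curious what those back-of-the-envelope comparisons look like, here is a minimal sketch (my own illustration, not from the thread), using rough public figures for the ISS and an assumed 30 MW data center:

    ```python
    # Back-of-the-envelope sketch (my illustration, not from the thread) comparing
    # rough public figures for the ISS against an assumed moderately sized data
    # center. None of these numbers come from SpaceX.

    ISS_SOLAR_POWER_KW = 120.0       # ISS solar arrays: roughly 120 kW (rough public figure)
    ISS_HEAT_REJECTION_KW = 70.0     # ISS external radiators: roughly 70 kW rejected (rough public figure)
    DATA_CENTER_POWER_KW = 30_000.0  # assumption: a "moderate" 30 MW data center

    # In vacuum there is no convection, so essentially every watt the hardware
    # draws has to be radiated away; radiator capacity must scale with power draw.
    power_ratio = DATA_CENTER_POWER_KW / ISS_SOLAR_POWER_KW
    heat_ratio = DATA_CENTER_POWER_KW / ISS_HEAT_REJECTION_KW

    print(f"Generation needed: ~{power_ratio:.0f}x the ISS solar arrays")
    print(f"Heat rejection needed: ~{heat_ratio:.0f}x the ISS radiator capacity")
    ```

    Even with these generous round numbers, you end up needing hundreds of times the ISS’s power generation and heat rejection, which is the point the back-of-the-envelope replies were making.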








  • Has anyone done the math on whether Elon can keep these plates spinning until he dies of old age, or if it will all implode sooner than that? I wouldn’t think he can keep this up another decade, but I also wouldn’t have predicted Tesla limping along as long as it has even as Elon squeezes more money out of it, so idk. It would be really satisfying to watch Elon’s empire implode, but he probably holds onto millions even if he loses billions, because consequences aren’t for the ultra-rich in America.






  • To add to your sneers… lots of lesswrong content fits your description of #9, with someone trying to reinvent something that probably already exists in philosophy, from (rationalist, i.e. the sequences) first principles, and doing a bad job of it.

    I actually don’t mind content like #25, where someone writes an explainer on a topic. If lesswrong were less pretentious about it, more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), and dropped all the other junk in favor of stuff like that, it would be better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, where they rediscover basic concepts because they don’t know how to search the existing literature/research and cite it effectively.

    #45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored “AI safety”. Rationalists spun off Anthropic, which also abandoned the safety focus pretty much once it had gotten all the funding it could with that line. Do they really think a third company would be any better?