Came across this fuckin disaster on Ye Olde LinkedIn by ‘Caroline Jeanmaire at AI Governance at The Future Society’
"I’ve just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it’s a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
- One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
- The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
- It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: “It would be a grave mistake to dismiss this as mere hype.”
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let’s at least take a look inside for some of their deep quantitative reasoning…

…hmmmm…

O_O

The answer may surprise you!
I am thrilled to return to this old post to announce
- This paper now has two documentary adaptations available on YouTube: one made by an AI sensationalist channel (Documenting AGI), the other made by 80,000 Hours, an EA org (AI in context)
- The authors have since extended their timelines to 2031 for full automation of coding and 2034 for the superintelligence itself. So the paper is now outdated!
How much money would be saved by just funneling the students of these endless ‘AI x’ programs back to the humanities, where they can learn to write (actually good) science fiction to their heart’s content? Hey, finally a way AI actually leads to some savings!
Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?
Committing to a hard timeline at least means making fun of them and explaining how stupid they are to laymen will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced grifters/forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.

Looks like that is indeed the post. I have a number of complaints, but the most significant one is actually in the early part of the narrative, where they just assume “companies start to integrate AI” with little detail on how this is done, what kind of value it creates over their competitors, whether it’s profitable for anyone, etc. I’m admittedly trusting David Gerard’s and Ed Zitron’s overall financial analysis here, but at present it seems like the trajectory is moving in the opposite direction, with the AI industry as a whole looking likely to flame out as it burns through its ability to raise capital without ever actually finding a net return on that investment. At which point all the rest of it is sci-fi nonsense.

Like, if you want to tell me a story about how we get from here to The Culture (or I Have No Mouth and I Must Scream), those are the details that need to be filled in. How do the intermediate steps actually work? Otherwise it’s the same story we’ve been reading since the 70s.
Even without the sci-fi nonsense, the political elements of the story also feel absurd: the current administration staying on top of the situation, making reasoned (if not correct) responses, and keeping things secret feels implausible given current events. It kind of shows the political biases of the authors that they can manage to imagine the Trump administration acting so normally or competently. Oh, and the hyper-competent Chinese spies (and the Chinese having no chance at catching up without them) feel like another one of the authors’ biases coming through.
This is Scott “Richard Lynn was right actually” Alexander we’re talking about here; the Chinese not being able to catch up without resorting to spies is absolutely a part of his agenda.
Every competent apocalyptic cult leader knows that committing to hard dates is wrong because if the grift survives that long, you’ll need to come up with a new story.
Luckily, these folks have spicy autocomplete to do their thinking!
I was going to make a comparison to Elron, but… oh, too late.
I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term “0-2 paradigm shifts,” so he can claim prediction success for stuff LLMs do, and “paradigm shift” is vague enough that he could still claim success if it’s been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).
late reply but yes Eliezer has avoided hard dates because “predictions are hard”
The closest he’s gotten is his standing bet with Bryan Caplan that it’ll happen before 2030 (when I looked into this bet, Eliezer himself said he made it so he could “exploit Bryan’s amazing bet-winning ability and my amazing bet-losing ability” to ensure AGI doesn’t wipe everyone out before 2030). He said in a 2024 interview that if you put a gun to his head and forced him to give probabilities, “it would look closer to 5 years than 50” (unhelpfully vague, since that puts the ballpark at anywhere from 2 to 27 years), but did say in a more recent interview that he thinks 20 years is starting to push it (possible, but he doesn’t think so).
So basically, no hard dates but “sooner rather than later” vagueness
It really is pathetic, given the entire rationalist claim of making accurate predictions about reality and of comparing predictions as the ultimate way to judge theories and models.
for someone who believes prediction markets should make all our laws he’s certainly dodging a lot of predictions
Huh, 2 paradigm shifts is about what it takes to get my old Beetle up to freeway speed, maybe big Yud is onto something
After minutes of meticulous research and quantitative analysis, I’ve come up with my own predictions about the future of AI.

The report received feedback from ~100 AI experts (myself included)
“It’s Shake and Bake — and I helped!”
The obvious move is to mark each temporal milestone, then post snarkily as each is missed.
We’re already behind schedule; we’re supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts you can see success for)!
“USG gets captured by AGI”.
Promise?
A markov chain is smarter than the current POTUS.
Smarterbot for president!
It is with great regret that I must inform you that all this comes with a three-hour podcast featuring Scoot in the flesh: 2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
I’m fascinated by the way they’re hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP’s twitter link. It’s like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they’re trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I’ve ever seen Scott on video, and he definitely gives off a weird vibe.)
Kokotajlo is a new name to me. What’s his background? Prolific LW poster?
He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.
His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far
My own scoring:
The first prompt programming libraries start to develop, along with the first bureaucracies.
I don’t think any sane programmer or scientist would credit the current “prompt engineering” “skill set” as comparable to programming libraries, and AI agents still aren’t what he was predicting for 2022.
Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.
There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.
Revenue is high enough to recoup training costs within a year or so.
Hahahaha, no… they are still losing money per customer, much less recouping training costs.
Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice
The safety researchers have made this one “true” by teeing up prompts specifically to get the AI to do stuff that sounds scary to people who don’t read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.
The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.
They also try to contrive scenarios
Emphasis on the word “contrive”
The age of the AI assistant has finally dawned.
So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful, if narrow, use-case apps by 2022-2024, so we are already off target for this prediction.
I can see how they are trying to anoint him as a prophet, but I don’t think anyone not already drinking the kool aid will buy it.
The first prompt programming libraries start to develop, along with the first bureaucracies.
I went three layers deep in his references and his references’ references to find out what the hell prompt programming is supposed to be, and ended up in a gwern footnote:
It's the ideologized version of You're Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable unless you luck into very particular ways of asking for very specific things is a sign that they're doing well?
gwern wrote:
I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
OpenBrain “responsibly” elects not to release its model publicly to avoid it being called “underwhelming” and, to use a technical term, “gobshite”.
Oh lord one of my less online friends posted this in a group chat. Love that group, but I am NOT happy about having to read so much of Scott’s writing again to explain the various ways it’s loony.
“First, he started his blog with the deliberate goal of giving a veneer of respectability to racist pseudoscience. Second, everything else…”