Predictions for when most cognitive labor will be fully automated. Icons are medians, with approximate confidence intervals.
Follow-up post to: AI 2027, One Year Later
Many orgs track AGI timelines, like AI Futures, Metaculus, and Epoch AI. Recently, though, I noticed that many great researchers have now published two or more precise forecasts, all using similar definitions of AGI, and all providing confidence intervals. So I was able to visualize how their forecasts changed over time.
(Most conventionally famous people in AI don't make specific predictions, and when they do, they generally don't update them, so we can't see their views changing over time. I actually prefer to learn from the people shown here, though, as I think they have the best track records.)
The overlapping AGI definition I use here is "Most purely cognitive labor is automatable at better quality, speed, and cost than humans". For some of these researchers, saying they use this definition is a bit of a stretch, but I included everyone whom I judged as close enough to be informative.
So now I could ask: are the best AI forecasters updating the same way I wrote about last week, when Daniel Kokotajlo and Eli Lifland pushed their AGI timelines out during 2025, then pulled them back in early 2026 given the rapid progress from Anthropic?
If you look at the graphic, you'll see that from 2023 to 2025, most people brought their AGI timelines in to be sooner, though with some exceptions like Tamay Besiroglu. From 2025 to 2026, joining Daniel and Eli in pushing their timelines out are the Metaculus community, Dario Amodei, and elite forecaster Peter Wildeford. In fact, across 2025, only Benjamin Todd brought in his timelines to say AGI would happen sooner.
(The graphic also rounds the dates the forecasts were made to four points in time, to make the patterns easier to see, so apologies to forecasters who are shown as making, for example, a "late 2025" update that was actually made in January 2026.)
And every single person who updated their timelines from January 2026 to April 2026 has moved their timeline to say AGI is coming sooner, myself included.
So I think the data supports the impression I got from Daniel, Eli, and the AI Futures team. One way I could characterize it is: in the ChatGPT era, people updated towards AI coming sooner. Then in the xAI, Meta, and Gemini era, people updated towards it coming later. Then in the Anthropic era, people updated towards AI coming sooner. Take from that what you will.
Good Bayesians shouldn't be able to predict which direction they will update. When I have intuitions about how I or others may update soon, it's useful to interpret that as evidence that I should update now. I kind of feel like I know what direction I'm going to update next, so...
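That "shouldn't be able to predict your own update" point is just conservation of expected evidence: before you see the evidence, your probability-weighted average posterior equals your prior. A quick sketch with made-up numbers (the probabilities here are purely illustrative, not anyone's actual forecast):

```python
# Conservation of expected evidence: a coherent Bayesian's expected
# posterior equals their prior, so they can't foresee which way
# they'll update. All numbers below are hypothetical.

prior = 0.4              # P(H): AGI on short timelines
p_e_given_h = 0.9        # P(see rapid progress | H)
p_e_given_not_h = 0.3    # P(see rapid progress | not H)

# Total probability of observing the evidence
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# Posterior in each branch, via Bayes' rule
post_if_e = prior * p_e_given_h / p_e
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)

# Average posterior, weighted by how likely each branch is
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)  # equals the prior, 0.4
```

You update up in one branch and down in the other, but the branches exactly cancel in expectation. If you already expect to move in a particular direction, that expectation is itself evidence you should have moved already.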