AI 2027: Forecasting the Arrival of Superintelligence

April 3, 2025

Today, AI-Futures.org released AI 2027, their scenario and forecasts (covered in The New York Times). It is an incredibly provocative story, and one that serious experts with great track records find scarily plausible.

FutureSearch co-authored the Timeline Forecast, explaining when we expect the arrival of the first key piece in the story: superhuman coders deployed en masse inside the top frontier lab. (We put significant probability on “soon,” though not as much as the AI Futures team.)

We’re grateful that this, and other core research and forecasts of ours, were included in the project.

For press inquiries, please fill out the form below or contact us directly: press@futuresearch.ai

We also contributed to, and shared our disagreements with, three other key forecasts:

  1. Could the leading frontier lab (likely OpenAI) reach $100B in revenue by mid-2027? In their Compute Forecast, they speculate it will, in order to fund the unbelievable costs of training and running its army of automated software engineers and researchers. We lay out the most likely way this could happen.

  2. Once the frontier lab has great automated software engineers, we weighed in on how long it will take them to develop great automated AI researchers.

  3. Once the frontier lab has automated AI researchers, we contributed forecasts on how long it will take to create an automated AI researcher that’s substantially better than any human.

These scenarios are quite speculative. We encourage you to read their forecasts and ours, and judge for yourself whether they are credible. Overall, we forecast that these developments, even if they play out the way the piece describes, will come much later than the AI Futures team predicts.


Here, as contributors to the core forecasts, we’d like to share the major points of disagreement FutureSearch had, not only with the excellent thinkers at AI-Futures.org, but also amongst ourselves.

Crux 1: Can R&D be automated this quickly, given the bottlenecks and necessary discoveries?

AI research is quite different from other capabilities AI has conquered, from chess to poetry. Real-world experiments take weeks, months, or years to return results and reveal what to do next. We model this as a significant bottleneck on R&D progress: even with an army of automated, fast, intelligent workers, any given research line can only advance one experiment at a time.

Human programmers and researchers need a vast amount of context about the real world, and about their organization and coworkers (human or AI), to inform complexity trade-offs. While AI 2027 does take this into account, we think it may underestimate how much this will slow progress.

Crux 2: Will frontier labs race to automate AI research, rather than racing to commercial success?

So far OpenAI, the leading contender to be the company in the AI 2027 story, has spoken more about consumer revenue growth and less about transformative AI.

The scenario requires at least one frontier lab to dedicate the majority of its resources to building AI for its own internal use. We have reason to doubt that many of them will.

Crux 3: Will governments let AI development get that far along before they strongly intervene?

In AI 2027, the US government gets involved in a significant way once AI is already incredibly capable and dangerous. If the process takes longer than AI 2027 projects, as we predict it will, there will be significantly more time for the US government to intervene.

Even actions like the significant tariffs against semiconductor suppliers abroad, enacted the day before AI 2027 went live, suggest that governments have a much larger role to play.

The AI 2027 team decided not to consider this for the sake of the narrative, but it might turn out to be the single most important consideration on how things play out.

Finally, the AI 2027 team asked us to acknowledge the individuals who made the final judgment calls on all of FutureSearch’s contributions to the piece:

Forecaster Bios

Tom Liptay - Director of Forecasting at FutureSearch; previously Director of Forecasting at Metaculus for 3 years. PhD in Electrical Engineering & Computer Science, MIT. At Good Judgment: 1st place individual in the season 3 cohort and 1st place team member in season 4. 1st place individual in the 2019 CFA Finance Forecasting Challenge.

Finn Hambly - Forecaster, FutureSearch. MSci Physics, York; MS Accountable, Responsible, and Transparent AI, Bath. Finn led forecasting at the Swift Centre and, prior to that, did research and probabilistic modeling of the UK’s electricity system at the University of Bath.

Sergio Abriola - PhD in Math; CS researcher at the University of Buenos Aires. Currently #1 on Metaculus in peer accuracy and performance across all tournaments, most recently winner of the 2022-2025 biosecurity tournament.

Tolga Bilge - Superforecaster & forecaster for Sentinel, the Swift Centre, and Samotsvety Forecasting; Policy Researcher at ControlAI; Master’s in Mathematics. Led the aitreaty.org open letter and taisc.org project. Coauthor of A Narrow Path.

