The future of fundamental science led by generative closed-loop artificial intelligence
Review article, 2026

Artificial intelligence is approaching the point at which it can complete the scientific cycle, from hypothesis generation to experimental design and validation, within a closed loop that requires little human intervention. Yet the loop is not fully autonomous: humans still curate data, set hyperparameters, adjudicate interpretability, and decide what counts as a satisfactory explanation. As models scale, they begin to explore regions of hypothesis and solution space that are inaccessible to human reasoning because they are too intricate or alien to our intuitions. Scientists may soon rely on AI strategies they do not fully understand, trusting goals and empirical payoffs rather than derivations. This prospect forces a choice about how much control to relinquish in order to accelerate discovery while keeping outputs human-relevant. The answer cannot be a blanket policy of deploying LLMs, or any single paradigm, everywhere. It demands principled matching of methods to domains, hybrid causal and neurosymbolic scaffolds around generative models, and governance that preserves plurality and counters recursive bias. Otherwise, recursive training and uncritical reuse risk model collapse in AI and an epistemic collapse in science, as statistical inertia amplifies flaws and narrows the scope of investigation. We argue for graded autonomy in AI-conducted science: systems that can close the loop at machine speed while remaining anchored to human priorities, verifiable mechanisms, and domain-appropriate forms of understanding.

Keywords: epistemic singularity, closed-loop discovery, graded autonomy, AI-conducted science, cognitive collapse, human-machine collaboration, domain-method alignment, AI4Science

Authors

Hector Zenil

King's College London

The Francis Crick Institute

University of Cambridge

Alan Turing Institute

University of Oxford

Jesper Tegnér

King Abdullah University of Science and Technology (KAUST)

Karolinska Institutet

Felipe S. Abrahão

Laboratório Nacional de Computação Científica (LNCC)

State University of Campinas

King's College London

University of Oxford

Alexander Lavin

Pasteur Labs

Vipin Kumar

University of Minnesota

Jeremy G. Frey

University of Southampton

Adrian Weller

University of Cambridge

Alan Turing Institute

Larisa N. Soldatova

Goldsmiths, University of London

A. Bundy

University of Edinburgh

Nicholas R. Jennings

Loughborough University

Koichi Takahashi

RIKEN

Keio University

Lawrence Hunter

Colorado School of Public Health

Sašo Džeroski

Jožef Stefan Institute

Andrew Briggs

University of Oxford

Frederick D. Gregory

DEVCOM U.S. Army Combat Capabilities Development Command

Carla P. Gomes

Cornell University

Jon Rowe

University of Birmingham

Alan Turing Institute

James A. Evans

University of Chicago

Hiroaki Kitano

Okinawa Institute of Science and Technology Graduate University

Alan Turing Institute

Ross King

Alan Turing Institute

Chalmers University of Technology, Computer Science and Engineering, Data Science and AI

University of Cambridge

Frontiers in Artificial Intelligence

eISSN: 2624-8212

Vol. 9, Article 1678539

Subject Categories (SSIF 2025): Philosophy

DOI: 10.3389/frai.2026.1678539

PubMed: 41755911
