By Jim Shimabukuro (assisted by Copilot)
Editor
The Nature editorial on “AI scientists” (25 March 2026) frames its central claim as a new inflection point: once AI systems can autonomously generate hypotheses, design experiments and interpret results, institutions, funders and publishers must rethink how research is organized, credited and governed. Yet almost every substantive concern it raises—automation of discovery, blurred authorship, accountability for errors, inequities in access to powerful models, and the lag of governance behind technical capability—has already been articulated in detail over the past two years in other venues. The piece reads less like a conceptual breakthrough and more like a compact synthesis of an emerging consensus about “agentic” AI in science and the institutional reforms it demands, a consensus that has been forming since at least 2023–2024.1-3
On the basic worry that generative and agentic AI will collide with already‑fragile research integrity, The Lancet’s 2024 editorial “Rethinking research and generative artificial intelligence” had already warned that paper mills, biased outputs, hallucinated analyses and opaque training data could erode trust in the scientific record unless journals enforced disclosure, transparency and human oversight.4 It explicitly asked what happens when rapid advances in generative AI meet a publishing system already strained by mass retractions and manipulated peer review, and it called for governance, education and accountability mechanisms that closely mirror the Nature editorial’s call for institutions and publishers to “respond” before automation outpaces norms and rules.1,4 In that sense, Nature’s 2026 piece reiterates rather than originates the idea that AI‑driven automation of research amplifies existing integrity crises and must be met with stronger oversight and clearer responsibility.
Concerns about ceding scientific judgment to machines and the role of AI in peer review were also explored in depth by Carl Bergstrom and Joe Bak‑Coleman in their 2025 Nature careers column “AI, peer review and the human activity of science.”2 They argued that when researchers offload evaluation and even writing to AI systems, something essential about human judgment, responsibility and creativity is lost, and they urged journals and reviewers to treat AI as a tool rather than an arbiter of scientific merit.2 The Nature editorial’s worries about how to attribute credit, assign responsibility for errors, and preserve human agency in AI‑mediated discovery thus echo a line of argument that had already been made explicitly in the context of peer review and scientific authorship.
The more technical notion of “AI scientists” as semi‑autonomous research agents is likewise not original to the 2026 editorial. A 2025 Comment in Nature Machine Intelligence, “Towards agentic science for advancing scientific discovery,” had already defined “agentic science” as the use of AI agents capable of reasoning, planning and interacting with digital and physical environments to carry out substantial portions of the scientific workflow.3 That piece outlined both the promise—faster hypothesis generation, automated experimentation, integration across data modalities—and the risks, including reliability, reproducibility, and the need for responsible integration into existing scientific practice.3 The Nature editorial’s description of AI systems that can autonomously explore hypothesis spaces and run experiments, and its call for institutions to adapt evaluation and governance structures to such agents, closely tracks this earlier conceptualization rather than extending it in a fundamentally new direction.
Outside the Nature family, several writers have already treated AI agents as “co‑scientists” and examined their implications for research practice. A 2025 article in Science on the Net, “AI Agents as assistants in scientific research,” surveyed systems such as Agent Laboratory, AgentRxiv, AI Scientist‑v2 and Co‑Scientist, emphasizing how they automate literature review, experimental coding and data analysis while raising questions about division of labor, expertise and responsibility in research teams.5 Similarly, a 2025 blog essay from the Kempner Institute, “From Models to Scientists: Building AI Agents for Scientific Discovery,” described frameworks like ToolUniverse that allow language‑model agents to orchestrate hundreds of scientific tools, explicitly casting them as “computational co‑scientists” and asking how such agents should be evaluated, trusted and integrated into human workflows.6 Both pieces prefigure the Nature editorial’s core claim that AI systems are moving from tools to partners in discovery and that institutions must adjust incentives, evaluation and infrastructure accordingly.
On the funding and institutional side, the editorial’s call for funders to rethink grant mechanisms, evaluation criteria and infrastructure for AI‑intensive science also has clear precedents. A 2025 Frontiers in Artificial Intelligence article, “Breaking the gatekeepers: how AI will revolutionize scientific funding,” argued that AI could both expose and mitigate structural biases in grant review, potentially shifting power away from gerontocratic, risk‑averse panels toward more meritocratic, data‑driven evaluation—while also warning that algorithmic systems could entrench new forms of opacity and bias if not carefully governed.7 In parallel, the European Commission’s 2025 report “Framework conditions and funding for AI in Science” catalogued barriers to AI adoption in research—lack of compute, skills gaps, fragmented infrastructure, unclear incentives—and recommended coordinated investments, governance frameworks and capacity‑building to enable responsible AI‑enabled science.8 The Nature editorial’s insistence that funders must adapt to AI‑driven automation of discovery thus reiterates a policy conversation already underway in both scholarly and governmental documents.
Major public‑sector science organizations have also been explicitly grappling with the same issues the editorial presents as newly urgent. NASA’s 2024 “Artificial Intelligence Workshop Report” highlighted foundation models and large language models as transformative tools for scientific disciplines, identified key challenges such as verification, validation and reproducibility, and called for new collaborative strategies, resources and governance mechanisms to integrate these models into scientific workflows.9 The U.S. National Academies’ 2025 report “Foundation Models for Scientific Discovery and Innovation” similarly examined how foundation models could complement traditional computational methods, while stressing the need for trustworthy models, robust validation, and careful attention to uncertainty and reproducibility in scientific applications.10 Both documents anticipate the Nature editorial’s concern that the ability to automate parts of the discovery process raises unresolved questions about how research should be conducted, evaluated and governed.
Even at the level of day‑to‑day workflows and peer review, writers have been mapping out the same terrain. A 2025 Cognaptus Insights essay, “Peer Review Meets Power Tools: How AI Is Quietly Rewriting Scientific Workflows,” described how AI systems are becoming “workflow partners” rather than mere assistants, reshaping literature triage, hypothesis generation, experiment planning and manuscript preparation, and it asked what happens when science is conducted at “machine speed” while institutions and peer‑review processes remain calibrated for slower, human‑paced work.11 That question is essentially the same as the Nature editorial’s worry that the automation of discovery outstrips existing norms and institutional structures, suggesting that the editorial is participating in, rather than initiating, a broader reflection on AI‑accelerated science.
Taken together, these examples show that human writers—working without the assistance of AI “scientists”—have already been circling the same cluster of problems and implications that the 2026 Nature editorial presents as the cutting edge: the shift from tools to agents, the tension between speed and governance, the reconfiguration of authorship and responsibility, the need for new funding and infrastructure models, and the risk that existing inequities and integrity problems will be amplified rather than solved. The Nature piece is valuable as a high‑profile synthesis and signal to mainstream scientific institutions, but conceptually it largely re‑assembles ideas that have been articulated across editorials, policy reports, technical comments and essays since at least 2024. That, in itself, illustrates the point that even thoughtful human authors, writing in prestigious venues, can easily “rehash” existing ideas—sometimes productively, sometimes redundantly—because in fast‑moving domains like AI and science policy, the real novelty often lies less in the ideas themselves than in who repeats them, where, and with what institutional authority.
References
1. “AI scientists are changing research — institutions, funders and publishers must respond.” Nature editorial, 25 March 2026. https://www.nature.com/articles/d41586-026-00934-w
2. Bergstrom, C. T., & Bak‑Coleman, J. “AI, peer review and the human activity of science.” Nature careers column, 25 June 2025. https://www.nature.com/articles/d41586-025-01839-w
3. Xin, H., Kitchin, J. R., & Kulik, H. J. “Towards agentic science for advancing scientific discovery.” Nature Machine Intelligence, 10 September 2025. https://www.nature.com/articles/s42256-025-00940-3
4. “Rethinking research and generative artificial intelligence.” The Lancet editorial, 6 July 2024. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(24)01469-2/fulltext
5. “AI Agents as assistants in scientific research.” Science on the Net, 11 April 2025. https://www.scienceonthenet.eu/articles/ai-agents-assistants-scientific-research
6. Gao, S., Zhu, R., & Zitnik, M. “From Models to Scientists: Building AI Agents for Scientific Discovery.” Kempner Institute blog, 9 October 2025. https://www.kempnerinstitute.harvard.edu/news/from-models-to-scientists
7. Mangalam, M. “Breaking the gatekeepers: how AI will revolutionize scientific funding.” Frontiers in Artificial Intelligence, 28 August 2025. https://www.frontiersin.org/articles/10.3389/frai.2025.1667752/full
8. European Commission. “Framework conditions and funding for AI in Science. Mutual Learning Exercise on National Policies for AI in Science – First thematic report.” 2025. https://research-and-innovation.ec.europa.eu/system/files/2025-04/framework-conditions-and-funding-for-ai-in-science.pdf
9. NASA Science Mission Directorate. “Artificial Intelligence Workshop Report.” March 2024. https://science.nasa.gov/wp-content/uploads/2024/03/ai-workshop-report.pdf
10. National Academies of Sciences, Engineering, and Medicine. “Foundation Models for Scientific Discovery and Innovation: Opportunities Across the Department of Energy and the Scientific Enterprise.” 2025. https://nap.nationalacademies.org/catalog/27785/foundation-models-for-scientific-discovery-and-innovation
11. “Peer Review Meets Power Tools: How AI Is Quietly Rewriting Scientific Workflows.” Cognaptus Insights, 14 November 2025. https://www.cognaptus.com/insights/peer-review-meets-power-tools-ai-scientific-workflows
###