The Nature editorial on “AI scientists” (25 March 2026) frames its central claim as a new inflection point: once AI systems can autonomously generate hypotheses, design experiments and interpret results, institutions, funders and publishers must rethink how research is organized, credited and governed. Yet almost every substantive concern it raises—automation of discovery, blurred authorship, accountability for errors, inequities in access to powerful models, and the lag of governance behind technical capability—has already been articulated in detail over the past two years in other venues. The piece reads less like a conceptual breakthrough and more like a compact synthesis of an emerging consensus that has been forming since at least 2023–2024 about “agentic” AI in science and the institutional reforms it demands.1-3
Overview: Work at the intersection of artificial intelligence and DNA now spans fundamental genomics, genome editing, clinical translation, and ethics, and a small set of authors recur across the most influential, recent contributions. Anshul Kundaje and collaborators such as Katherine S. Pollard and Jian Ma are central voices on using deep learning to decode regulatory DNA and molecular biology more broadly, articulating both technical advances and conceptual roadmaps for AI in molecular biology.5 Chong Wu and Peng Wei have emerged as leading figures in DNA foundation language models, benchmarking and comparing architectures that treat DNA as a “language” and setting standards for how such models should be evaluated and selected for real genomic tasks.6,10
Many observers argue that beyond lived experience, cultural specificity, and deep emotions, AI writing also lacks genuine understanding, embodied perception, moral agency, long-term memory of a life, and a stable point of view anchored in an actual self, which collectively shape the narrative texture of human prose.1-4 At the same time, a growing technical and literary discussion claims that with sufficiently rich “backstories” and conditioning, large language models can be trained into relatively coherent personas that imitate many of these attributes well enough for some readers and researchers to treat them as if they had inner lives.3-7
[Note: An earlier version of this article was accidentally published before we had a chance to review and edit it. We apologize for any inconvenience this might have caused. -js]
James A. Michener: The Architect of Collaborative Epic Fiction
Few novelists of the twentieth century achieved the commercial and cultural reach of James Albert Michener (1907–1997). Born a foundling in Doylestown, Pennsylvania, and raised in Quaker poverty, he went on to sell an estimated 75 million copies of his books worldwide, winning a Pulitzer Prize for his debut story collection Tales of the South Pacific (1947) and producing a string of decade-defining blockbusters—Hawaii, The Source, Centennial, Chesapeake, The Covenant, Poland, and Texas, among many others—each an immersive survey of a region’s geology, history, culture, and people across sweeping time frames.1,2 His novels were usually massive in scope, several running more than a thousand pages, and each was grounded in exhaustive research that could take years to complete.1
1. Stephen Marche: The Literary Curator and the Hip-Hop Producer
Stephen Marche is a Canadian novelist and essayist whose byline has appeared in The New Yorker, The New York Times, The Atlantic, and Esquire, among others. His books include The Next Civil War, a nonfiction work that required him to travel across the United States conducting hundreds of interviews, and On Writing and Failure, a candid essay-length meditation on the peculiar perseverance demanded by the literary life. Writing is not a side project for Marche but the whole of his professional existence — his livelihood, his method of inquiry, and his primary mode of contributing to public life. He has described himself as constitutionally incapable of coherence: a writer whose projects are so radically different from one another that no single image of him holds still for long.8
Research on AI systems that can act as cross-lingual chatbots—able to converse in one language while seamlessly drawing on sources in many others—has accelerated sharply since 2023, especially under the banner of “multilingual” or “cross-lingual” large language models (LLMs). Recent surveys of multilingual LLMs (MLLMs) describe a clear shift from traditional machine translation pipelines toward unified models that jointly handle understanding, translation, and generation across dozens or even hundreds of languages, with explicit goals of knowledge transfer from high‑resource languages like English to lower‑resource ones.4,5,6 These surveys emphasize that the same architectures powering English‑centric chatbots are now being trained or adapted on multilingual corpora, making it technically feasible for an English conversation to query, summarize, and reason over content originally written in Chinese, Japanese, German, and many other languages—at least in controlled settings.4,5,6
Introduction: In the last couple of months, I’ve noticed what appears to be a startling improvement in the quality of prose generated by chatbots in their free tiers. To determine if I’m hallucinating, I asked Perplexity to look into what appears to be an exponential refinement in style. -js
AI-generated prose in free-tier chatbots has become markedly more fluent, versatile, and “human-sounding” since late 2022, but the evidence points to rapid, stepwise improvement rather than clean exponential growth, with important ceilings and distortions that become visible as soon as you look past surface polish.1,4,6,7,17,20 Your sense that something has changed in just the last few months is consistent with the pattern researchers are now documenting: frequent model upgrades, better alignment and instruction-tuning, and widespread human-in-the-loop workflows have collectively raised average output quality and blurred the line between AI-assisted and purely human prose in everyday settings, even though true originality, voice, and long-form coherence remain recognizably human strengths.4,5,13,14,17,20
To understand how agentic AI and the emerging prospect of AGI will reshape developmental models of human intelligence, one must first grasp what distinguishes these systems from the generative AI that has already become familiar. Generative AI — the kind that produces text, images, and code in response to prompts — is fundamentally reactive. It generates outputs but does not pursue goals across time, manage multi-step reasoning autonomously, or adapt its behavior based on consequences. Agentic AI, by contrast, refers to systems that can pursue specific goals with limited supervision, demonstrating autonomy, goal-driven behavior, and adaptability. It builds on generative AI capabilities but extends beyond content creation to solve complex, multi-step problems through reasoning, planning, and tool use. AGI — Artificial General Intelligence — extends this concept further still, referring to a hypothetical but increasingly plausible system capable of matching or exceeding human cognitive performance across the full range of intellectual domains without task-specific training.
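The reactive-versus-agentic distinction can be made concrete with a minimal observe-reason-act loop. This is an illustrative sketch only: the goal, the two-step plan, and the stand-in tool below are invented for the example, and a real agent would delegate the planning step to a language model rather than hard-coded rules.

```python
# Minimal sketch of an agentic loop: the system pursues a goal across
# multiple steps, choosing actions and folding results back into memory,
# rather than producing one response to one prompt. All names and data
# here are hypothetical.

def search_tool(query):
    # Stand-in for a real tool call (web search, code execution, etc.).
    knowledge = {"capital of France": "Paris", "Paris population": "~2.1M"}
    return knowledge.get(query, "no result")

def plan(goal, memory):
    # A real agent would ask an LLM to choose the next action given the
    # goal and memory; a fixed two-step plan keeps this sketch runnable.
    if "capital of France" not in memory:
        return ("search", "capital of France")
    if "Paris population" not in memory:
        return ("search", "Paris population")
    return ("finish", None)

def run_agent(goal, max_steps=10):
    memory = {}                         # state persists across steps
    for _ in range(max_steps):          # bounded autonomy, not open-ended
        action, arg = plan(goal, memory)
        if action == "finish":
            return memory               # goal satisfied: report findings
        memory[arg] = search_tool(arg)  # act, then observe the result
    return memory

print(run_agent("population of the capital of France"))
```

The key contrast with a purely generative call is the loop itself: the system decides what to do next based on what it has already learned, and stops when its own plan says the goal is met.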
Multiple countries are actively developing what can reasonably be described as “robot tanks,” more formally called unmanned ground combat vehicles (UGCVs) or heavily armed unmanned ground vehicles (UGVs). What is striking in 2024–2026 is not just experimentation, but early operational deployment, especially in the Russia–Ukraine war, which has become the first large-scale laboratory for robotic ground warfare.
DNI Tulsi Gabbard’s opening remarks present a single overarching thesis: the United States faces a rapidly evolving, multi‑domain threat environment in which homeland security, transnational crime, terrorism, state adversaries, cyber operations, and emerging technologies are converging in ways that demand vigilance, coordination, and sustained national resolve. She frames the intelligence community’s assessment as non‑political and rooted in statutory duty, emphasizing that the briefing reflects analytic judgments rather than personal opinion.
On March 18, 2026, Director of National Intelligence Tulsi Gabbard delivered opening remarks at a Senate Select Committee on Intelligence (SSCI) hearing for the Annual Threat Assessment of the U.S. Intelligence Community. The opening statement1 as delivered is below.
Jensen Huang’s GTC2026 keynote framed “physical AI” and robotics not as a side bet but as the next multi‑trillion‑dollar wave of the AI economy, continuous with today’s datacenters rather than a separate field.1,4 In both NVIDIA’s own recap and detailed press coverage, he cast robots, autonomous vehicles, and industrial automation as the natural endpoint of an “AI factory” stack where gigawatt‑scale infrastructure produces models that flow into embodied systems, arguing that the next gold rush after digital agents will be robots and other “physical AI” burning even more data and compute.3,4,6 This is less about a new technical thesis than a macro‑industrial one: embodied AI is presented as an infrastructure market similar in scale and inevitability to cloud and GPUs, with NVIDIA positioning itself as the full‑stack vendor from energy to humanoid controllers. In that sense, Huang’s message differs from classic robotics talks by making physical AI primarily an inference and datacenter story, with robots as endpoints of a vertically integrated pipeline rather than standalone machines.3,4
AI inference chips sit at the center of a major shift in how artificial intelligence is actually used—and that shift explains why they dominated Jensen Huang’s keynote at NVIDIA’s GTC2026 and why they now anchor the company’s strategy.
“The Emerging AI‑First University Paradigm” (ETC Journal, 16 March 2026) makes a compelling case that Unity Environmental University, Ohio State University, the University of Washington, CUNY, and SUNY collectively sketch a new “AI-first” template for higher education — one in which AI is treated as a design principle rather than a peripheral tool, structures are reconfigured around AI’s capabilities, and ethics and equity are foregrounded as conditions of scale.¹ The five institutions do represent a meaningful advance beyond the typical university’s reactive, policy-memo approach to generative AI. Yet, when measured against what Thomas Kuhn understood as a genuine paradigm shift — a revolutionary displacement of the organizing assumptions, methods, and purposes of an entire field — these examples fall well short. They represent, rather, an intensification of one pole within the existing paradigm: the adoption-and-adaptation pole. The deeper anomaly AI poses to higher education — the radical destabilization of what universities are for, and of the three founding pillars on which they rest — remains largely unaddressed.
Unity Environmental University, Ohio State University, University of Washington, City University of New York, and the State University of New York, taken together, sketch the salient features of an emerging AI‑first university paradigm. First, AI is treated as a design principle and strategic core, not a peripheral technology: Unity codifies AI‑First Design Principles,¹ Ohio State builds an AI‑first educational environment,³ UW adopts an AI‑first institutional strategy,⁸ CUNY envisions human‑AI powered education,⁹ and SUNY embeds AI into system‑wide policy and infrastructure.¹¹ Second, AI‑first universities reconfigure structures—degrees, faculty hiring, governance, and system‑level coordination—around AI’s capabilities and risks, rather than trying to fit AI into legacy forms.
OpenClaw is a relatively new example of what researchers and developers call agentic AI—software that does not simply respond to prompts but can observe, reason, and act autonomously on a user’s behalf. The project began in late 2025 as an open-source experiment by Austrian developer Peter Steinberger and quickly grew into one of the most visible autonomous-agent frameworks in 2026.¹ OpenClaw is distributed under an MIT open-source license and is designed to run locally on a user’s computer while connecting to external large language models such as GPT, Claude, or open-source models.¹
Conference passes have sold out, but you can still participate in person with an Exhibits Only pass (use code GTC26-20 for 20% off) or virtually (free).
NVIDIA GTC is the premier global AI conference, where developers, researchers, and business leaders come together to explore the next wave of AI innovation. From physical AI and AI factories to agentic AI and inference, GTC 2026 will showcase the breakthroughs shaping every industry. Conference venues are spread throughout downtown San Jose, hosting the inspiring sessions that make up the unique GTC experience.
The ETC Journal article “AI-Native Operating Systems: From Procedural to Intent-Based to Ambient” (13 March 2026) opens with a brisk diagnosis of where personal computing has been stuck: for three decades, users have had to navigate windows, files, and menus, actively directing machines step by step. The article argues that a growing number of technologists now believe the operating system itself may be on the verge of a fundamental transformation — one in which AI agents interpret human intentions and orchestrate digital actions automatically, rather than passively organizing applications and hardware as they do today. What the article calls the third and most radical pathway — ambient computing — is the destination where this trajectory ultimately leads: a world in which the operating system dissolves into a distributed intelligence layer spanning multiple devices and cloud services, and a person’s AI assistant manages communications, schedules events, and retrieves information regardless of which device is currently being used.¹ The following four articles expand on the idea of ambient computing.
For more than three decades, the personal-computer operating system has been dominated by a familiar paradigm: the graphical desktop. Systems such as Microsoft Windows and macOS organize computing around icons, windows, files, and applications. The user launches programs, manipulates menus, and manually coordinates tasks between software tools. Beneath this interface, the operating system manages memory, hardware resources, and processes, but the overall architecture remains rooted in a conceptual model that dates to the late twentieth century. That model assumes that humans must actively direct computers step by step, selecting applications and instructing them how to perform tasks.
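The shift from procedural to intent-based interaction can be sketched as a thin "intent layer" that accepts a stated goal and orchestrates the steps itself, instead of the user opening applications and clicking through each one. Everything below is invented for illustration — the intent patterns, the handler functions, and their behavior are assumptions, not any real operating system's API.

```python
# Toy contrast between procedural and intent-based computing: the user
# states a goal; the system matches it to an intent and performs the
# steps. All patterns and handlers here are hypothetical.

import re

def schedule_meeting(person, day):
    # Stand-in for calendar orchestration the OS would perform.
    return f"meeting with {person} booked for {day}"

def send_file(name, person):
    # Stand-in for locating a file and dispatching it.
    return f"{name} sent to {person}"

# Intent layer: each entry maps a goal pattern to an action.
INTENTS = [
    (re.compile(r"meet (\w+) on (\w+)"),
     lambda m: schedule_meeting(m.group(1), m.group(2))),
    (re.compile(r"send (\S+) to (\w+)"),
     lambda m: send_file(m.group(1), m.group(2))),
]

def handle(request):
    for pattern, action in INTENTS:
        m = pattern.search(request)
        if m:
            return action(m)
    return "intent not recognized"

print(handle("meet Alice on Tuesday"))  # meeting with Alice booked for Tuesday
```

In the articles' terms, a production system would replace the regular expressions with an AI model that interprets open-ended intentions, but the architectural inversion is the same: the user supplies the "what," and the system owns the "how."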
The idea of an AI-driven tax system that eliminates the need for individuals to file returns is not merely speculative; it is actively being explored by governments, researchers, and private companies. However, as of 2026, most efforts are focused on partial automation—automating compliance, enforcement, and preparation—rather than replacing the entire filing structure. Tax administrations around the world have been integrating machine learning and advanced analytics into their operations, primarily to detect fraud, streamline workflows, and improve taxpayer services. An OECD survey found that 29 of 38 member countries already deploy AI in their tax administrations, using it to identify patterns of tax evasion, automate routine case processing, and differentiate simple filings that can be handled automatically from complex cases requiring human judgment.¹ These deployments represent the early infrastructure of a future system in which tax authorities already possess most of the necessary data and can pre-compute liabilities without taxpayers filling out forms themselves.
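The triage the OECD survey describes — pre-computing a liability from data the authority already holds and routing only complex cases to a human — can be sketched in a few lines. The tax rate, the complexity rules, and the record fields below are all invented for illustration; no real tax administration's logic is being reproduced.

```python
# Hypothetical sketch of automated filing triage: assess simple cases
# from employer-reported data, escalate complex ones to human review.
# All rates, thresholds, and field names are assumptions.

def precompute_liability(record):
    # Flat illustrative 20% rate on employer-reported wages.
    return round(record["reported_wages"] * 0.20, 2)

def triage(record):
    # "Complex" here means anything beyond a single employer's
    # reported wages; real systems use far richer criteria.
    is_complex = (
        record["num_employers"] > 1
        or record["self_employment_income"] > 0
        or record["foreign_income"] > 0
    )
    if is_complex:
        return ("human_review", None)
    return ("auto_assess", precompute_liability(record))

case = {"reported_wages": 50_000, "num_employers": 1,
        "self_employment_income": 0, "foreign_income": 0}
print(triage(case))  # ('auto_assess', 10000.0)
```

The point of the sketch is the split itself: once the authority already holds the wage data, the simple majority of filings can be assessed without the taxpayer submitting anything, while judgment-heavy cases still flow to people.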
Autonomous driving in early 2026 sits in a strange middle ground—no longer a sci‑fi promise, but still far from ubiquitous. In multiple U.S. and Chinese cities, you can already hail a driverless robotaxi or see a Class 8 truck moving freight with no one in the cab, yet these services remain tightly geofenced, heavily supervised, and politically fragile. Waymo now delivers on the order of 250,000 paid robotaxi rides per week across several U.S. cities, making it the clear U.S. leader in commercial Level 4 robotaxis, while global weekly robotaxi rides have climbed into the hundreds of thousands according to industry surveys tracking more than 700,000 fully autonomous rides per week worldwide.1,2,3 In parallel, China’s Baidu Apollo Go has matched or exceeded Waymo’s scale, also reaching roughly 250,000 weekly rides and more than 140 million driverless miles, underscoring how quickly Chinese robotaxi operators have moved under more centralized regulatory regimes.4,5
The history of technology is not written primarily by the powerful. It is written by the restless. Steve Jobs and Steve Wozniak assembled the Apple I in a California garage. Bill Gates and Paul Allen wrote their BASIC interpreter for a computer they had never laid hands on. The disruptors of every technological era tend to arrive from the margins — not because the center is incompetent, but because the center is invested in the status quo. Incumbents cannot afford to imagine the world differently. The outsiders can.
The Russia‑Ukraine war has underlined that national resilience and societal will can matter as much as raw military power, and that lesson should sit at the center of any thinking about a potential US/Israel‑Iran war. Ukraine’s ability to mobilize its population, maintain governance under fire, disperse critical infrastructure, and keep basic services functioning has repeatedly blunted Russian objectives and bought time for diplomacy and external support.1 In a US/Israel‑Iran context, that translates into prioritizing civilian preparedness, continuity of government, and rapid repair capabilities not only in Israel but across the wider region, including partners in the Gulf and beyond, so that societies can absorb shocks without collapsing into chaos. This matters for the “greater good” because wars that shatter basic social systems tend to radicalize populations, prolong grievances, and make any eventual peace far more fragile.
This is no longer a hypothetical scenario. On February 28, 2026, the United States and Israel launched joint airstrikes on Iran, killing Supreme Leader Ali Khamenei. The stated goals are to destroy Iran’s missile and military capabilities, prevent the state from obtaining a nuclear weapon, and ultimately to achieve regime change by bringing the Iranian opposition to power.2 In response, Iranian forces launched missiles and armed drones against Israel and US military facilities in all six Gulf Cooperation Council countries.6 The opening of this war, which the US calls “Operation Epic Fury,” has been swift and devastating — but the far more dangerous question is what comes next. The conditions for a prolonged, grinding standoff comparable to the Russia-Ukraine war are alarmingly present.
Bresnick, Probasco, and McFaul’s core thesis in “China’s AI Arsenal: The PLA’s Tech Strategy Is Working” (2 March 2026) is that the People’s Liberation Army has moved beyond aspirational rhetoric about “intelligentized warfare” and is now systematically translating AI ambitions into concrete capabilities across command-and-control, sensing, targeting, and unmanned systems, in ways that are beginning to work at scale and that the United States has not yet fully internalized in its own strategy.1 This argument builds directly on their recent empirical mapping of more than 9,000 AI-related PLA requests for proposals and nearly 3,000 AI-related defense contract awards between 2023 and 2024, which reveal a broad, coherent, and rapidly growing demand signal for AI in every warfighting domain.2,3