Jensen Huang’s GTC 2026 keynote framed “physical AI” and robotics not as a side bet but as the next multi-trillion-dollar wave of the AI economy, continuous with today’s datacenters rather than a separate field.1,4 In both NVIDIA’s own recap and detailed press coverage, he cast robots, autonomous vehicles, and industrial automation as the natural endpoint of an “AI factory” stack, in which gigawatt-scale infrastructure produces models that flow into embodied systems; the next gold rush after digital agents, he argued, will be robots and other “physical AI” consuming even more data and compute.3,4,6 The thesis is less technical than macro-industrial: embodied AI is presented as an infrastructure market similar in scale and inevitability to cloud and GPUs, with NVIDIA positioning itself as the full-stack vendor from energy to humanoid controllers. In that sense, Huang’s message differs from classic robotics talks by making physical AI primarily an inference and datacenter story, with robots as endpoints of a vertically integrated pipeline rather than standalone machines.3,4
AI inference chips sit at the center of a major shift in how artificial intelligence is actually used: the industry’s center of gravity is moving from training models to serving them at scale. That shift explains why inference dominated Jensen Huang’s keynote at NVIDIA’s GTC 2026 and why it now anchors the company’s strategy.
“The Emerging AI‑First University Paradigm” (ETC Journal, 16 March 2026) makes a compelling case that Unity Environmental University, Ohio State University, the University of Washington, CUNY, and SUNY collectively sketch a new “AI-first” template for higher education — one in which AI is treated as a design principle rather than a peripheral tool, structures are reconfigured around AI’s capabilities, and ethics and equity are foregrounded as conditions of scale.¹ The five institutions do represent a meaningful advance beyond the typical university’s reactive, policy-memo approach to generative AI. Yet, when measured against what Thomas Kuhn understood as a genuine paradigm shift — a revolutionary displacement of the organizing assumptions, methods, and purposes of an entire field — these examples fall well short. They represent, rather, an intensification of one pole within the existing paradigm: the adoption-and-adaptation pole. The deeper anomaly AI poses to higher education — the radical destabilization of what universities are for, and of the three founding pillars on which they rest — remains largely unaddressed.
Unity Environmental University, Ohio State University, University of Washington, City University of New York, and the State University of New York, taken together, sketch the salient features of an emerging AI‑first university paradigm. First, AI is treated as a design principle and strategic core, not a peripheral technology: Unity codifies AI‑First Design Principles,¹ Ohio State builds an AI‑first educational environment,³ UW adopts an AI‑first institutional strategy,⁸ CUNY envisions human‑AI powered education,⁹ and SUNY embeds AI into system‑wide policy and infrastructure.¹¹ Second, AI‑first universities reconfigure structures—degrees, faculty hiring, governance, and system‑level coordination—around AI’s capabilities and risks, rather than trying to fit AI into legacy forms.
OpenClaw is a relatively new example of what researchers and developers call agentic AI—software that does not simply respond to prompts but can observe, reason, and act autonomously on a user’s behalf. The project began in late 2025 as an open-source experiment by Austrian developer Peter Steinberger and quickly grew into one of the most visible autonomous-agent frameworks in 2026.¹ OpenClaw is distributed under an MIT open-source license and is designed to run locally on a user’s computer while connecting to external large language models such as GPT, Claude, or open-source models.¹
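To make the “observe, reason, act” pattern concrete, here is a minimal sketch of the kind of loop an agentic framework runs. It is illustrative only: `call_llm` and the tool table are hypothetical stand-ins, not OpenClaw’s actual API.

```python
# Minimal sketch of an agentic observe-reason-act loop.
# call_llm and TOOLS are hypothetical stand-ins, not OpenClaw's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to an external model (GPT, Claude, etc.)."""
    return "done (no model wired up)"   # a real model client would go here

TOOLS = {
    "read_file": lambda path: open(path).read(),
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Reason: ask the model for the next action, given everything so far.
        decision = call_llm("\n".join(history) +
                            "\nNext action ('tool arg' or 'done result'):")
        tool_name, _, arg = decision.partition(" ")
        if tool_name == "done":
            return arg                    # the agent decided it is finished
        # Act, then observe: run the chosen tool and record its output.
        observation = TOOLS[tool_name](arg)
        history.append(f"ACTION: {decision}\nOBSERVATION: {observation}")
    return "step limit reached"

print(run_agent("summarize notes.txt"))
```

The loop is the whole trick: the model supplies judgment at each step, while the framework supplies tools, a memory of prior steps, and a stopping rule.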
Conference passes have sold out, but you can still participate in person with an Exhibits Only pass (use code GTC26-20 for 20% off) or virtually (free).
NVIDIA GTC is the premier global AI conference, where developers, researchers, and business leaders come together to explore the next wave of AI innovation. From physical AI and AI factories to agentic AI and inference, GTC 2026 will showcase the breakthroughs shaping every industry. Conference venues are spread throughout downtown San Jose; join us for inspiring sessions and be part of the unique GTC experience.
The ETC Journal article “AI-Native Operating Systems: From Procedural to Intent-Based to Ambient” (13 March 2026) opens with a brisk diagnosis of where personal computing has been stuck: for three decades, users have had to navigate windows, files, and menus, actively directing machines step by step. The article argues that a growing number of technologists now believe the operating system itself may be on the verge of a fundamental transformation — one in which AI agents interpret human intentions and orchestrate digital actions automatically, rather than passively organizing applications and hardware as they do today. What the article calls the third and most radical pathway — ambient computing — is the destination where this trajectory ultimately leads: a world in which the operating system dissolves into a distributed intelligence layer spanning multiple devices and cloud services, and a person’s AI assistant manages communications, schedules events, and retrieves information regardless of which device is currently being used.¹ The following four articles expand on the idea of ambient computing.
For more than three decades, the personal-computer operating system has been dominated by a familiar paradigm: the graphical desktop. Systems such as Microsoft Windows and macOS organize computing around icons, windows, files, and applications. The user launches programs, manipulates menus, and manually coordinates tasks between software tools. Beneath this interface, the operating system manages memory, hardware resources, and processes, but the overall architecture remains rooted in a conceptual model that dates to the late twentieth century. That model assumes that humans must actively direct computers step by step, selecting applications and instructing them how to perform tasks.
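The contrast between the two models can be made concrete. Below is an illustrative sketch of the same task expressed procedurally, with the user driving each step, versus as a single intent handed to an agent; the `CalendarApp` and `Assistant` classes are hypothetical stand-ins, not any real operating-system API.

```python
# Illustrative contrast between procedural and intent-based interaction.
# Both classes are hypothetical stand-ins, not a real OS API.

class CalendarApp:
    def create_event(self, title: str, when: str) -> None:
        print(f"event created: {title} at {when}")

class Assistant:
    """Hypothetical intent-based layer that plans the steps itself."""
    def fulfill(self, intent: str) -> None:
        # A real agent would decompose the intent, choose tools,
        # and execute each step; here we just echo the plan.
        print(f"planning and executing: {intent!r}")

# Procedural paradigm: the user drives every step explicitly.
calendar = CalendarApp()
calendar.create_event("Review quarterly report", "tomorrow 9:00")

# Intent-based paradigm: the user states the goal once.
Assistant().fulfill("Schedule a morning review of the quarterly report")
```

The procedural version is today’s desktop in miniature; the intent-based version is what the article argues the operating system is evolving toward.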
The idea of an AI-driven tax system that eliminates the need for individuals to file returns is not merely speculative; it is actively being explored by governments, researchers, and private companies. However, as of 2026, most efforts are focused on partial automation—automating compliance, enforcement, and preparation—rather than replacing the entire filing structure. Tax administrations around the world have been integrating machine learning and advanced analytics into their operations, primarily to detect fraud, streamline workflows, and improve taxpayer services. An OECD survey found that 29 of 38 member countries already deploy AI in their tax administrations, using it to identify patterns of tax evasion, automate routine case processing, and differentiate simple filings that can be handled automatically from complex cases requiring human judgment.¹ These deployments represent the early infrastructure of a future system in which tax authorities already possess most of the necessary data and can pre-compute liabilities without taxpayers filling out forms themselves.
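In a return-free system, the computation itself is simple once the authority holds the data. The sketch below is purely illustrative: the brackets, rates, and withholding figures are invented for the example, not drawn from any real tax code.

```python
# Illustrative pre-computation of a tax liability from data the
# authority already holds. Brackets and figures are invented,
# not any real tax code.

BRACKETS = [(0, 10_000, 0.00), (10_000, 50_000, 0.15), (50_000, None, 0.30)]

def liability(income: float) -> float:
    """Progressive tax over illustrative brackets."""
    tax = 0.0
    for low, high, rate in BRACKETS:
        top = income if high is None else min(income, high)
        if top > low:
            tax += (top - low) * rate
    return tax

# Employer-reported wages and withholding already on file:
reported_income, withheld = 62_000.0, 10_500.0
due = liability(reported_income)
print(f"liability {due:,.2f}; refund {withheld - due:,.2f}")
# liability 9,600.00; refund 900.00
```

The hard parts of such a system are data coverage and edge cases, not the arithmetic.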
Autonomous driving in early 2026 sits in a strange middle ground: no longer a sci-fi promise, but still far from ubiquitous. In multiple U.S. and Chinese cities, you can already hail a driverless robotaxi or see a Class 8 truck moving freight with no one in the cab, yet these services remain tightly geofenced, heavily supervised, and politically fragile. Waymo now delivers on the order of 250,000 paid robotaxi rides per week across several U.S. cities, making it the clear U.S. leader in commercial Level 4 robotaxis, while industry surveys now count more than 700,000 fully autonomous rides per week worldwide.1,2,3 In parallel, China’s Baidu Apollo Go has matched or exceeded Waymo’s scale, also reaching roughly 250,000 weekly rides and more than 140 million driverless miles, underscoring how quickly Chinese robotaxi operators have moved under more centralized regulatory regimes.4,5
The history of technology is not written primarily by the powerful. It is written by the restless. Steve Jobs and Steve Wozniak assembled the Apple I in a California garage. Bill Gates and Paul Allen wrote their BASIC interpreter at Harvard for the Altair 8800, a machine they had never touched. The disruptors of every technological era tend to arrive from the margins, not because the center is incompetent, but because the center is invested in the status quo. The insiders cannot afford to imagine the world differently. The outsiders can.
The Russia‑Ukraine war has underlined that national resilience and societal will can matter as much as raw military power, and that lesson should sit at the center of any thinking about a potential US/Israel‑Iran war. Ukraine’s ability to mobilize its population, maintain governance under fire, disperse critical infrastructure, and keep basic services functioning has repeatedly blunted Russian objectives and bought time for diplomacy and external support.1 In a US/Israel‑Iran context, that translates into prioritizing civilian preparedness, continuity of government, and rapid repair capabilities not only in Israel but across the wider region, including partners in the Gulf and beyond, so that societies can absorb shocks without collapsing into chaos. This matters for the “greater good” because wars that shatter basic social systems tend to radicalize populations, prolong grievances, and make any eventual peace far more fragile.
This is no longer a hypothetical scenario. On February 28, 2026, the United States and Israel launched joint airstrikes on Iran, killing Supreme Leader Ali Khamenei. The stated goals are to destroy Iran’s missile and military capabilities, prevent the state from obtaining a nuclear weapon, and ultimately to achieve regime change by bringing the Iranian opposition to power.2 In response, Iranian forces launched missiles and armed drones against Israel and US military facilities in all six Gulf Cooperation Council countries.6 The opening of this war, which the US calls “Operation Epic Fury,” has been swift and devastating — but the far more dangerous question is what comes next. The conditions for a prolonged, grinding standoff comparable to the Russia-Ukraine war are alarmingly present.
Bresnick, Probasco, and McFaul’s core thesis in “China’s AI Arsenal: The PLA’s Tech Strategy Is Working” (2 March 2026) is that the People’s Liberation Army has moved beyond aspirational rhetoric about “intelligentized warfare” and is now systematically translating AI ambitions into concrete capabilities across command-and-control, sensing, targeting, and unmanned systems, in ways that are beginning to work at scale and that the United States has not yet fully internalized in its own strategy.1 This argument builds directly on their recent empirical mapping of more than 9,000 AI-related PLA requests for proposals and nearly 3,000 AI-related defense contract awards between 2023 and 2024, which reveal a broad, coherent, and rapidly growing demand signal for AI in every warfighting domain.2,3
The transformation of university pedagogy that agentic AI demands is perhaps the most visible and immediate of the three domains, and it begins with a fundamental rethinking of what learning is supposed to produce. Commentators inside higher education have described the emerging shift as the move “from generative assistant to autonomous agent,” emphasizing that generative models will increasingly sit behind agentic layers that decide when and how to use them.1 This means that course designs built around the submission of finished products — essays, problem sets, take-home exams — are structurally vulnerable in ways that syllabus policies cannot patch.
Agentic AI in higher education is in a visible but early, uneven phase: it is talked about as “the next evolution” beyond prompt‑driven generative tools, yet most campuses still treat it as a set of pilots and thought experiments rather than core infrastructure. A widely used working definition frames agentic AI as systems that can pursue complex, often long‑horizon goals with minimal human intervention, planning multi‑step actions, using tools, maintaining memory, and adapting to changing contexts—what some researchers call a “qualitative leap” from static chatbots and rule engines.1 In practice, this means moving from “AI that answers” to “AI that acts”: agents that can orchestrate tasks across learning platforms, student information systems, and communication channels, rather than simply generating text on demand. Commentators inside higher ed have started to describe this shift as the move “from generative assistant to autonomous agent,” emphasizing that generative models will increasingly sit behind agentic layers that decide when and how to use them.6
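A hedged sketch of what “AI that acts” might look like behind a campus system, with persistent memory and tool use, appears below. Every interface in it (the LMS query, the messaging hook) is an invented stand-in, not any real campus platform’s API.

```python
# Hypothetical sketch of an agent that monitors a learning platform
# and acts on what it finds. All interfaces are invented stand-ins.

from dataclasses import dataclass, field

@dataclass
class CampusAgent:
    goal: str
    memory: list[str] = field(default_factory=list)  # persists across runs

    def check_lms(self) -> list[str]:
        # Stand-in for a real LMS query (e.g., overdue assignments).
        return ["student_42: assignment 3 overdue"]

    def send_message(self, to: str, text: str) -> None:
        # Stand-in for a campus messaging channel.
        print(f"to {to}: {text}")

    def step(self) -> None:
        for finding in self.check_lms():
            if finding in self.memory:       # memory: don't nag twice
                continue
            self.memory.append(finding)
            student, _, issue = finding.partition(": ")
            self.send_message(student, f"Reminder: {issue}. Need help?")

agent = CampusAgent(goal="keep students on track in CS101")
agent.step()   # acts only on new findings each time it runs
```

The point of the sketch is the shape, persistent memory plus tool calls in a loop, rather than any particular platform.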
1. The Attack Violated the UN Charter and International Law
The most foundational and broadly shared criticism of Operation Epic Fury is that it constitutes an illegal use of force under international law. Article 2(4) of the United Nations Charter prohibits the threat or use of force against the territorial integrity or political independence of any state. Two exceptions exist: Security Council authorization under Chapter VII, and individual or collective self-defense in response to an armed attack under Article 51. Neither applies here. The Security Council did not authorize the use of force against Iran. The United States did not request such authorization. Iran was not attacking the United States or Israel at the time of the strikes.
On February 28, 2026, the United States and Israel began conducting extensive strikes against a wide range of targets in Iran. The strikes were dubbed Operation Epic Fury by the United States and Operation Roaring Lion by Israel. President Trump announced the operation in an eight-minute video posted to Truth Social around 2:30 a.m. ET, saying the United States had begun “major combat operations in Iran.” There was no address to the media or public briefing to Congress beyond a notification to the Gang of Eight shortly before strikes commenced. The video concluded with a direct message from Trump to Iranians, stating “the hour of your freedom is at hand.” (White House | CSIS | The Hilltop)
The publicly available record since 28 February 2026 shows that drones played a central, visible role in the opening waves of the joint U.S.–Israeli strikes on Iran, while the role of artificial intelligence is more indirect and mostly inferred from the types of systems and operations described. The Institute for the Study of War’s running assessment of the campaign notes that U.S. and Israeli forces launched a broad strike effort aimed at Iranian leadership, air defenses, missile and drone infrastructure, and command-and-control nodes, implying a heavy reliance on networked surveillance, targeting, and battle management systems that almost certainly incorporate AI-enabled data fusion and decision-support, even if officials do not label them as such in public statements.1 Chatham House’s early expert commentary similarly frames the operation as a technologically sophisticated attempt to decapitate Iran’s leadership and degrade its strike capabilities, but it does not provide granular detail on specific AI tools, underscoring how the most sensitive aspects of targeting and command systems remain classified.2
The three best uses of AI in education in 2026, judged by convergence across recent systematic reviews, major policy guidance (UNESCO, OECD) and large‑scale survey/analytic work, are: first, AI‑driven personalized tutoring and adaptive learning; second, AI‑supported assessment and feedback for learning; and third, AI as an assistant that offloads teachers’ routine workload so they can focus on higher‑value human work.1-8 These uses recur at the top of expert syntheses as the clearest cases where AI capabilities align with robust evidence of learning gains, better formative information, and improved teaching conditions, while remaining compatible with ethical, human‑centered principles.1,2,5,6
To discuss this matter, I must consider the past, present, and future of artificial intelligence and the growth of technology. I was around when the phrase “artificial intelligence” first appeared in 1955, but I did not encounter it until the early 1960s. Its earliest incarnation was the learning machine. I was so fascinated by the topic that I purchased Nils Nilsson’s “Learning Machines” and studied it closely. I still have the book and my notes. Much more recently, I took a MOOC from Caltech, my alma mater, on artificial intelligence to gain greater insight into the subject.
Introduction: Taken together, these four documents form a complementary quartet. RAND tells you what is already happening in U.S. schools at scale. UNESCO situates the pedagogical and governance challenges in global theoretical and comparative context. UNICEF grounds the analysis in children’s rights and provides actionable design standards for governments and the private sector. And Brookings supplies the developmental science framework and the precautionary logic that justifies urgent policy intervention. No single document suffices on its own; each fills gaps the others leave open.
The proposition of relocating AI data centers to orbit faces significant skepticism from leading figures in aerospace engineering, climate science, and cloud infrastructure, many of whom argue that the physical and economic barriers remain insurmountable within Musk’s projected timeline. Dr. Josef Aschbacher, Director General of the European Space Agency (ESA), has expressed profound caution regarding the environmental and operational logistics of such a shift. Aschbacher has noted that while space-based infrastructure is evolving, the “unprecedented thermal management challenges” posed by high-density AI chips in a vacuum—where heat can only be dissipated via radiation rather than convection—make the immediate scalability of orbital compute centers highly questionable.1
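The physics behind that caution is easy to quantify. In vacuum, a radiator sheds heat only by the Stefan-Boltzmann law, P = εσAT⁴, so the panel area required for a given load follows directly; the sketch below uses illustrative numbers (1 MW of chip heat, radiators at 300 K, emissivity 0.9), not figures from any actual design.

```python
# Radiator area needed to reject heat in vacuum (Stefan-Boltzmann law).
# Illustrative numbers only, not from any actual satellite design.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float) -> float:
    """Area (m^2) needed to radiate power_w at temp_k into deep space."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# 1 MW of chip heat, radiators held at 300 K, emissivity 0.9:
area = radiator_area(1e6, 300.0, 0.9)
print(f"{area:,.0f} m^2")   # roughly 2,400 m^2 of radiator per megawatt
```

Even this flatters the design: real radiators also absorb sunlight and Earth-shine, pushing the required area higher still.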
Elon Musk envisions solar-powered orbital AI data centers as a constellation of up to one million satellites positioned in low Earth orbit, around 310 to 1,000 kilometers above the surface, where they would harness continuous solar energy through advanced panels to power AI computing without the interruptions of night, weather, or atmospheric filtering that reduce efficiency on Earth.1 These satellites would form a networked cluster connected via laser links for seamless data transfer, allowing them to process massive AI workloads like those for xAI’s Grok chatbot, while radiating excess heat directly into the vacuum of space for natural cooling, eliminating the need for water or energy-intensive terrestrial systems.2
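The power side of that vision can be sized with the same kind of back-of-envelope arithmetic: above the atmosphere the solar constant is about 1,361 W/m², so the panel area per megawatt of compute follows from an assumed cell efficiency. The 25% figure below is an assumption for illustration, not a quoted specification.

```python
# Solar panel area per megawatt of compute in continuous sunlight.
# The 25% cell efficiency is an assumption, not a quoted spec.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere

def panel_area(power_w: float, efficiency: float) -> float:
    """Panel area (m^2) to supply power_w in continuous sunlight."""
    return power_w / (SOLAR_CONSTANT * efficiency)

print(f"{panel_area(1e6, 0.25):,.0f} m^2 per MW")  # about 2,900 m^2
```

Like the radiator sizing above, this works out to thousands of square meters per megawatt, which is where the launch-mass economics enter.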
Foundation models are large machine-learning systems trained on extremely broad, often multimodal datasets (text, images, code, scientific data and more), usually with self‑supervision at scale, and then adapted to many different downstream tasks such as question answering, prediction, and content generation.1,2 This makes them “foundations”: incomplete but general models that can be further specialized for particular domains, from science and medicine to law, education, the arts, and public policy.2
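A common way to “further specialize” such a model is to load its pretrained weights and fine-tune a small task head on top. The sketch below shows that pattern using the Hugging Face transformers library, with sentiment classification as a stand-in downstream task; the model choice and hyperparameters are illustrative, not a recommendation.

```python
# Adapting a pretrained foundation model to a downstream task by
# fine-tuning. Model choice and hyperparameters are illustrative.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # new 2-class head on top

texts = ["a delightful read", "a tedious slog"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss   # cross-entropy from the head
loss.backward()
optimizer.step()   # one illustrative step; a real run loops over data
```

The same pattern, swapping the head while keeping the pretrained body, is what lets one foundation model serve many different domains.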