AI in Jan. 2026: Three Critical Global Decisions — ‘global AI operating system’

By Jim Shimabukuro (assisted by Perplexity)
Editor

(Related reports: Dec 2025, Nov 2025, Oct 2025, Sep 2025)

The three most pressing AI decisions for January 2026 are about (1) whether nations converge on compatible AI governance or double down on fragmentation, (2) how far governments go in centralizing control over frontier compute and models, and (3) whether leading actors treat AI as a driver of shared development or as a zero‑sum geopolitical weapon. Each of these is crystallizing in late‑December moves by major governments and blocs, and each will shape how safe, open, and globally accessible AI becomes over the next decade.weforum+5

Image created by Copilot

1. Will January 2026 lock in fragmented AI governance or push toward a shared global framework?

In January 2026, policymakers will be trying to decide, in practice, whether to keep treating AI governance as a patchwork of national rules or to lean into emerging efforts to build a shared, interoperable global framework. This is not an abstract issue anymore; it is being forced by concrete moves like the G20’s evolving AI language, the European Union’s operationalization of the AI Act, India’s new AI Governance Guidelines, and the first implementation steps around national executive actions such as the December 2025 U.S. AI order.

The way governments respond in January—through formal consultations, standard‑setting, and bilateral negotiations—will either deepen regulatory fragmentation or start aligning rules so that systems, companies, and people can move across borders without constantly running into incompatible requirements.riskinfo+5

An influential November 2025 article from the World Economic Forum bluntly describes the current situation: “Artificial intelligence (AI) is developing globally but governance of this technology still happens locally,” warning that “governance fragmentation, technical incompatibility and a fundamental deficit of trust” are already taxing global growth. The same piece, written by a team at the Continuum Institute and published as “How the world can build a global AI governance framework” (World Economic Forum, 9 November 2025), argues that “a global governance framework could ensure AI development is less fragmented, enabling everyone to share in its growth.”

That encapsulates the decision in front of governments: keep letting national rules diverge until cross‑border AI trade, safety assurance, and trust become almost impossible, or move—quickly—toward some baseline of shared principles, verification methods, and institutional venues.weforum

In December 2025, the G20 signaled that at least some major economies want to frame AI as a tool for “inclusive and sustainable development,” not just a weapon of competition, in a declaration that an Atlantic Council analysis says “reflects an important shift in how states are positioning themselves on AI governance.” The Atlantic Council piece by Konstantinos Komaitis, “The G20 is moving forward on global AI governance—and the US risks being left out” (Atlantic Council, 1 December 2025), stresses that many G20 states are starting to converge on AI as a development and equity issue.

The quote that crystallizes the stakes is: “Many of the world’s major economies signaled a growing alignment around how artificial intelligence (AI) and data should be approached—not primarily as instruments of geopolitical competition, but as vehicles for inclusive and sustainable development.” If that framing hardens into actual regulatory cooperation in January—through working groups, technical standards, and shared audit tools—it could define the global norm.atlanticcouncil

At the same time, the EU is not waiting around. A December 2025 legal briefing from King & Spalding notes that the AI Act is entering “a new operational phase, with the rules on general-purpose AI (‘GPAI’) models taking effect,” and that the Commission is building a “Transparency Code of Practice on transparent generative AI systems” as a voluntary but powerful compliance tool. The article, “EU & UK AI Round-up – December 2025” (King & Spalding, 10 December 2025), explains that providers who adhere to the code may be able to demonstrate compliance more easily and will be monitored largely through that lens.

The key line describing this emerging soft‑law architecture is: “If the EC later approves this Code as adequate, providers and deployers will be able to use it to demonstrate compliance; enforcement for these actors will focus on monitoring adherence to the code.” That is the EU’s invitation to other democracies and firms: align with our transparency and labelling practices, and you gain smoother access to our market and a clearer compliance path.kslaw

Beyond Europe, India has stepped forward with its own national framework. A December 2025 overview from RiskInfo.ai reports that on 5 November 2025, India’s Ministry of Electronics and IT released its long‑awaited AI Governance Guidelines, built around seven guiding principles, including fairness, transparency, and accountability, and six structural “pillars,” among them infrastructure, regulation, and institutions.

The piece, “AI Insights: Key Global Developments in December 2025” (RiskInfo.ai, 16 December 2025), captures India’s intent in a single phrase: the framework aims to promote “safe and trusted AI innovation.” India’s choices in January—how aggressively it moves from high‑level principles to enforcement and standards, and whether it aligns with the EU, with the US, or with the more development‑framed G20 narrative—will shape the emerging “Global Majority” template for AI governance.riskinfo

Major actors here include:

  • Countries: EU member states, India, the G20 economies (especially Brazil, South Africa, Indonesia, and Saudi Arabia), the United States, China, and the UK.atlanticcouncil+2
  • Organizations: European Commission and its AI Office, the G20 and associated working groups, India’s Ministry of Electronics and IT, multilateral bodies exploring AI norms.kslaw+3
  • Companies: Frontier AI labs and cloud providers that must navigate overlapping EU, Indian, and U.S. obligations—firms like OpenAI, Anthropic, Google DeepMind, Meta, Microsoft, Amazon, and leading Chinese tech players.weforum+2

If January tilts toward cooperation—say, with G20‑inspired dialogues connecting to EU transparency codes and Indian principles—then over the next few years AI could run on a kind of “global operating system of trust,” to borrow the World Economic Forum’s phrase. The WEF article envisions “a global operating system of trust that enables interoperability and verification across borders,” coupled with “a standing council for cooperative intelligence that aligns national strategies, private innovation and social safeguards.”

That would make it easier to certify safe models, share red‑team findings, and manage cross‑border deployment of powerful systems. But if January instead sees major states doubling down on unilateral rules, or if the United States stays aloof from the G20 and from EU‑India convergences, then fragmentation will harden. In that world, companies will have to build different products and guardrails for different blocs, safety signals will be harder to interpret across jurisdictions, and AI will become yet another site of trade conflict. The decision is critical precisely because it is still open—but not for long.weforum


2. Will governments move from talking about frontier AI controls to actually centralizing compute and model oversight?

The second January 2026 decision is whether governments, especially in the United States and its allies, will follow through on December’s hints and start treating access to frontier‑scale compute and powerful models as a centrally regulated resource rather than a loosely supervised private asset. While earlier months focused on broad “balance innovation and regulation” questions, the conversation in late 2025 has narrowed toward concrete levers: national AI executive orders, export controls on chips, and entity‑based rules for the handful of firms that can train state‑of‑the‑art models. January is when those levers start to be pulled—through rulemaking agendas, enforcement signals, and early guidance that will tell labs and cloud providers how seriously to take frontier constraints.morganlewis+4

A December 2025 client alert from Morgan Lewis, “The New Rules of AI: A Global Legal Overview” (Morgan, Lewis & Bockius LLP, 21 December 2025), highlights the shift underway in the United States. It states that “The White House’s December 2025 executive order marks a major shift toward a unified national policy framework for AI, with broad implications for technology companies, state governments, and regulated industries” and that it “aims to establish a minimally burdensome national standard for AI policy, limiting state-level regulatory divergence.”

This centralization is about more than preempting state laws. A parallel analysis from Sidley, “Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for AI” (Sidley Austin LLP, 22 December 2025), notes that the order directs the Department of Justice to challenge state AI laws the administration deems “onerous,” discourages state experimentation, and creates an AI Litigation Task Force.

The quote that shows how aggressive this might become is: “Ultimately, the EO seeks to centralize AI policy by mobilizing the DOJ to identify and challenge ‘onerous’ state AI laws, discourages state experimentation, and creates an AI Litigation Task Force to enforce federal primacy.” Taken seriously, that means Washington wants not just to harmonize rules but to hold the levers that matter most for frontier AI deployment.datamatters.sidley+1

At the global level, export controls and frontier‑entity regulation are increasingly seen as the realistic tools to manage cutting‑edge risks. A July 2025 paper from the Carnegie Endowment, “Entity-Based Regulation in Frontier AI Governance” (Carnegie Endowment for International Peace, 6 July 2025), argues that as models become more complex and as scaffolding and inference compute matter more, it will be nearly impossible to rely solely on model‑based thresholds.

The authors write: “For now, at least, inference compute and inference-based reasoning techniques are likely to contribute as much to frontier model capabilities (and associated risks) as training compute,” and they conclude that “entity-based regulation should play a significant role in frontier AI governance.” The key quote supporting the January decision is: “No one has yet devised a model-based regulatory trigger that might function as a better proxy for heightened capabilities and associated risks,” which leads them to propose focusing on the relatively small number of firms running the biggest training and inference clusters. That conceptual shift is already visible in policy drafts; January will show whether it becomes operational.carnegieendowment

Europe is on a parallel track. The King & Spalding December round‑up notes that the AI Act’s rules for general‑purpose AI models took effect in August 2025 and are now being paired with practical codes of practice. By creating a Transparency Code of Practice that can function as a de facto standard, the European Commission is signaling that it expects GPAI providers—essentially, the frontier labs—to accept detailed obligations on transparency, data provenance, and deepfake labelling if they want smooth access to the EU market.

The article explains that this will be “a voluntary tool for implementing the obligations in Article 50(2) and (4) of the AI Act on labelling and detecting AI-generated or manipulated content and deepfakes,” and that, if later approved by the Commission, adherence to this code will become a primary way to demonstrate compliance. That is an indirect but powerful form of centralized oversight: Brussels is not telling labs what models they can train, but it is setting conditions on how those models can be used and how their outputs must be marked.kslaw

India’s guidelines, again highlighted in RiskInfo.ai’s December update, show another direction for centralized control. By organizing its framework around explicit pillars—regulation, risk, accountability, and institutions—the Indian government is positioning itself to become the central hub for approving and monitoring high‑risk AI applications.

The article underscores that the guidelines are designed to support “safe and trusted AI innovation,” which implies that licensing and oversight functions will likely be built up in Delhi over the next year. For a country with a massive digital public infrastructure and ambitions to be a top AI power, this is not a minor move; January’s follow‑up (consultations, draft rules, pilot enforcement) will show whether India wants light‑touch oversight or something closer to China’s tightly managed regime.riskinfo

The main players in this decision are:

  • Governments: The U.S. federal executive (White House, DOJ, NIST), EU institutions, India, plus export‑control‑heavy states like Japan, the Netherlands, and South Korea.morganlewis+4
  • Companies: Frontier labs (OpenAI, Anthropic, Google DeepMind, Meta, xAI), semiconductor firms (NVIDIA, AMD, TSMC), and cloud providers whose compute clusters are the new chokepoints.carnegieendowment+2
  • Analysts and NGOs: Policy think tanks like the Carnegie Endowment and CNAS, which are supplying blueprints for entity‑based and compute‑based regimes.cnas+1

If January brings firm steps toward centralization—such as the U.S. setting clear criteria for which labs fall under special federal oversight, or the EU and India coordinating risk classifications—then frontier AI development will become more like civil aviation or nuclear energy: permissible, but tightly supervised at the top end. That could slow down the most dangerous experimentation while preserving space for open‑source and mid‑scale models. It might also create barriers to entry that lock in today’s leaders, with consequences for innovation and global equity.

If, instead, governments blink—choosing symbolic executive orders and voluntary codes with no teeth—then the next year will see ever more capable models trained and deployed under a regime that effectively relies on self‑regulation. Given how rapidly capabilities are advancing, that is a gamble with both catastrophic‑risk and misuse dimensions. The choice in January is less about whether to regulate and more about who, concretely, will control access to the most dangerous capabilities.datamatters.sidley+3


3. Will leading states commit to AI as a shared development tool or let competition and domestic politics turn it into a zero‑sum weapon?

The third January 2026 decision concerns narrative and allocation: do the most powerful AI states and companies actually act on the idea that AI should serve global development and digital equity, or do they retreat into a “my growth, your risk” posture shaped by industrial policy and domestic political pressure? This goes beyond U.S.–China rivalry as such, which was already treated as a critical decision in the November 2025 ETCJ article.

The fresh question emerging in December is whether the inclusive language from the G20 and from multilateral reports will be backed up by real commitments of compute, models, and governance capacity to the Global South. January will feature early budget and program decisions—on development aid, AI capacity‑building funds, and shared infrastructure—that make this fork very tangible.etcjournal+4

The Atlantic Council’s analysis of the G20 declaration emphasizes that many countries are consciously reframing AI as part of a development agenda. The article, again by Konstantinos Komaitis, notes that the declaration “offers a snapshot of an emerging global conversation that increasingly links AI to development goals and digital equity.”

The key supporting quote, the same line cited above, bears repeating in this context: “Many of the world’s major economies signaled a growing alignment around how artificial intelligence (AI) and data should be approached—not primarily as instruments of geopolitical competition, but as vehicles for inclusive and sustainable development.” That is a direct challenge to the dominant narrative of “AI arms race” and suggests that at least some leaders want AI to be governed more like climate or health: as a global public concern where rich states have obligations to poorer ones.atlanticcouncil

The World Economic Forum’s November 2025 piece on global AI governance makes this link explicit by comparing the current moment to the post‑war creation of Bretton Woods institutions. The article argues that “a global governance framework could ensure AI development is less fragmented, enabling everyone to share in its growth,” and later concludes that “the world needs a new kind of global leadership to steward it into the Intelligent Age, one that fuses governance with innovation, informed by intelligence, driven by purpose and accountable to all stakeholders, including future generations.”

The sentence that backs the January decision is: “As AI reshapes the global economy, mistrust has become its greatest tax,” which implies that a development‑oriented, equity‑aware approach is not just morally attractive but economically rational. If rich countries hoard AI capabilities or weaponize them through export controls without providing alternatives, mistrust will deepen, and global uptake of beneficial AI will suffer.weforum

The International Telecommunication Union’s “Annual AI Governance Report 2025: Steering the Future of AI” also leans heavily on the idea that governance must be “proactive, inclusive, and adaptive” and that it should “support sustainable development and reduce global inequalities” (The Annual AI Governance Report 2025, ITU, 2025). A representative line from the report highlights that “inclusive and adaptive governance is essential to ensure that AI’s benefits are broadly shared and that existing inequalities are not exacerbated.” In other words, the UN system is now on record saying that global AI governance is failing if it leaves low‑income countries behind or treats them merely as data sources and testbeds.itu

On the ground, however, December 2025 developments also show the gravitational pull of domestic competition. The RiskInfo.ai December overview highlights that the new U.S. AI executive order is explicitly designed to preserve national leadership by creating a “minimally burdensome” framework and by preempting state‑level regulations that could, in the administration’s view, slow down domestic deployment.

That kind of framing—“we must stay ahead, and strict rules are the enemy of leadership”—can easily morph into justifying tight export controls and restrictive licensing that have real downstream effects on poorer countries’ access to compute and models. At the same time, states like India are crafting “safe and trusted AI” frameworks that are meant to turn them into major AI hubs in their own right, potentially juggling their role as development champions with their desire to attract investment and talent.cnas+3

The main actors involved in this January decision include:

  • States and blocs: G20 members; the “Global South” coalition that is increasingly vocal in UN and ITU forums; the U.S. and its allies setting export and security policies; China positioning AI as a driver of the UN 2030 Agenda through its outreach to Africa and Asia.itu+2
  • Multilateral bodies: ITU, UNESCO, the UN Secretary‑General’s AI initiatives, development banks that are starting to fund AI‑related infrastructure.itu
  • Companies and foundations: Big tech firms pledging AI for Good initiatives, philanthropies funding compute grants and AI capacity‑building in the Global South, and open‑source communities pushing for accessible models.itu+1

If January moves toward a development‑first approach—say, with G20 follow‑up meetings that specify concrete commitments on AI capacity‑building, or with the launch of multilateral funds that provide subsidized access to safe frontier and mid‑scale models for low‑income countries—then the next few years could see a more balanced landscape, where AI tools help close gaps in health, education, and climate resilience rather than widening them.

In that world, export controls and security‑minded regulations would be paired with positive obligations to share safe capabilities and to include Global South voices in governance bodies. If, instead, domestic politics and great‑power competition dominate January’s decisions, AI will increasingly be framed as strategic infrastructure to be controlled and withheld.

That would likely accelerate a split into “AI‑rich” and “AI‑poor” nations, foster workarounds and gray markets for compute and models, and make it much harder to build the kind of global governance architecture that the WEF and ITU argue is needed. The decision is critical because it will influence not just technical trajectories but the moral and political story that anchors AI in the world’s imagination—and that story, once entrenched, is very hard to rewrite.atlanticcouncil+2

  1. https://www.weforum.org/stories/2025/11/trust-ai-global-governance/
  2. https://www.riskinfo.ai/post/ai-insights-key-global-developments-in-december-2025
  3. https://www.atlanticcouncil.org/blogs/new-atlanticist/the-g20-is-moving-forward-on-global-ai-governance-and-the-us-risks-being-left-out/
  4. https://www.morganlewis.com/pubs/2025/12/the-new-rules-of-ai-a-global-legal-overview
  5. https://datamatters.sidley.com/2025/12/23/unpacking-the-december-11-2025-executive-order-ensuring-a-national-policy-framework-for-artificial-intelligence/
  6. https://www.kslaw.com/news-and-insights/eu-uk-ai-round-up-december-2025
  7. https://etcjournal.com/2025/09/01/ai-in-sep-2025-three-critical-global-decisions/
  8. https://etcjournal.com/2025/10/26/ai-in-nov-2025-three-critical-global-decisions/
  9. https://carnegieendowment.org/research/2025/06/artificial-intelligence-regulation-united-states?lang=en
  10. https://www.cnas.org/publications/reports/future-proofing-frontier-ai-regulation
  11. https://www.itu.int/epublications/en/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai/en
  12. https://etcjournal.com/2025/09/26/ai-in-oct-2025-three-critical-global-decisions/
  13. https://etcjournal.wordpress.com/2009/10/23/2698/
  14. https://etcjournal.com
  15. https://etcjournal.com/2025/10/23/10-critical-articles-on-ai-in-higher-ed-oct-2025/
  16. https://etcjournal.com/2025/12/24/status-of-artificial-general-intelligence-dec-2025-ability-to-teach-itself/
  17. https://etcjournal.com/2025/11/30/ed-tech-in-higher-ed-three-issues-for-dec-2025-institutional-trust/
  18. https://etcjournal.com/2025/12/03/professors-embrace-ai-dec-2025/
  19. https://etcjournal.wordpress.com/2008/11/10/resistance-to-technology-conscious-or-unconscious/

[End]
