Trump’s Impact on AI (Sep. 2025)

By Jim Shimabukuro (assisted by Copilot)
Editor

[Also see Trump’s Impact on AI (Nov. 2025) and Trump’s Impact on AI (Oct. 2025).]

Introduction: President Trump’s engagement with AI spans both his first and current terms, and it’s marked by a blend of deregulation, national competitiveness, and ideological framing. As of 11 Sep. 2025, here are 10 of the most memorable actions and statements. Appended is a chart of 20 key figures who currently shape Trump’s AI strategy.

Image created by ChatBox.

1. 2019: First-Ever U.S. Executive Order on AI

Trump signed the American AI Initiative, the first executive order focused entirely on artificial intelligence. It prioritized R&D, workforce development, and international collaboration. This was Trump’s foundational move.

Status as of September 2025: Mixed Success, Foundational Legacy

Launched in February 2019, the American AI Initiative was President Trump’s first major foray into artificial intelligence policy. It directed federal agencies to prioritize AI R&D, open government datasets for training, and develop technical standards. While it lacked dedicated funding at the outset, the initiative catalyzed a shift in federal posture—from passive observation to active engagement.

By 2025, its legacy is visible in the architecture of Trump’s second-term AI Action Plan. Many of the Initiative’s goals—like interagency coordination, public-private partnerships, and workforce development—have been expanded and institutionalized. The National AI Research Institutes, for example, trace their origin to this early push.

However, critics argue that the Initiative’s voluntary nature and absence of enforcement mechanisms limited its impact. Agencies varied widely in their implementation, and without centralized oversight, progress was uneven. Moreover, the Initiative did not address ethical risks or regulatory guardrails, leaving a vacuum that later administrations attempted to fill.

Outlook for September 2026: Institutionalization and Ideological Reframing

The Initiative’s principles have now been reframed under Trump’s 2025 AI Action Plan, which emphasizes deregulation, ideological neutrality, and national competitiveness. While the original Initiative was technocratic and bipartisan, its successor is overtly political—embedding free speech protections, anti-bias mandates, and open-source promotion into federal procurement and R&D.

By next year, we can expect the Initiative’s legacy to be fully absorbed into a more aggressive national strategy. The Office of Science and Technology Policy (OSTP), now led by Trump appointees, is consolidating AI governance under a centralized framework. The White House AI & Crypto Czar is coordinating cross-sector efforts, and the National Institute of Standards and Technology (NIST) is pivoting from safety to “truth-seeking” standards.

In short, the American AI Initiative laid the groundwork—but its future lies in how it’s being reinterpreted. If Trump’s current trajectory continues, the Initiative will be remembered less for its original goals and more as the launchpad for a new era of AI nationalism.


2. 2020: Launch of National AI Research Institutes

His administration established multiple AI research hubs across the U.S., aiming to accelerate innovation in areas like health, agriculture, and cybersecurity. This has been a cornerstone of U.S. AI strategy since 2020.

Status as of September 2025: Strong Success, Expanding Impact

The National AI Research Institutes, launched under President Trump’s first term and expanded through bipartisan support, have become one of the most successful public investments in AI infrastructure. Funded by the National Science Foundation (NSF) and other federal partners, these institutes now span 29 centers across more than 40 states, each receiving up to $20 million over five years.

Their impact is tangible. Institutes have accelerated breakthroughs in agriculture, cybersecurity, materials science, and education. For example, the NSF AI-Materials Institute at Cornell is pioneering sustainable materials discovery, while the Institute for Student AI-Teaming at the University of Colorado Boulder is transforming collaborative STEM learning. These centers have also trained thousands of students and professionals, building a robust AI workforce pipeline.

Importantly, the institutes have fostered open-source development, cross-sector collaboration, and regional innovation. They’ve become hubs where academia, industry, and government converge to solve real-world problems. Their distributed model has helped democratize access to AI resources, especially in underserved regions.

Critics note that some institutes have struggled with uneven funding cycles and bureaucratic delays. Others argue that the ethical oversight of these centers remains inconsistent, especially as they tackle sensitive domains like mental health and surveillance. Still, the overall trajectory is one of growth and institutional maturity.

Outlook for September 2026: Consolidation and Strategic Reorientation

Under President Trump’s second term, the institutes are being strategically realigned to support the 2025 AI Action Plan. This includes a stronger emphasis on national competitiveness, open-source AI, and ideological neutrality. The White House AI & Crypto Czar is now coordinating inter-institute collaboration, and new centers are being proposed in defense, semiconductors, and decentralized infrastructure.

By next year, we can expect:

  • New funding rounds targeting AI for national security and manufacturing.
  • Expanded partnerships with private sector leaders in crypto, cloud, and edge computing.
  • Curriculum shifts toward “values-neutral” AI design and free speech protections.
  • Potential rebranding of some institutes to align with Trump’s deregulatory ethos.

The institutes are evolving from academic enclaves into strategic assets. Their success lies not just in research output, but in their ability to adapt to shifting political and technological landscapes. If they maintain their collaborative spirit while embracing new mandates, they’ll remain central to America’s AI future.


3. 2020: Doubling Federal Investment in AI R&D

Trump committed to doubling non-defense AI research funding, signaling a major push toward global AI leadership.

Status as of September 2025: Partial Fulfillment, Strategic Realignment

In 2020, President Trump’s FY21 budget committed to doubling non-defense AI R&D funding by 2022, marking a historic moment: it was the first time AI was explicitly named a top-tier federal research priority. Agencies like the National Science Foundation (NSF), Department of Energy (DOE), and National Institutes of Health (NIH) saw significant boosts. NSF’s AI budget alone surged to over $830 million, a 70% increase from FY20.

This infusion of capital catalyzed new interdisciplinary research centers, expanded AI applications in agriculture and chronic disease, and accelerated workforce development—especially at community colleges and Historically Black Colleges and Universities (HBCUs). DARPA and the Department of Defense also increased their AI budgets, though these were not part of the non-defense doubling pledge.

By 2025, however, the landscape has shifted. Trump’s second-term AI Action Plan has reoriented R&D priorities toward infrastructure, open-source models, and national security. Funding has continued to grow, but the distribution now favors politically resilient sectors like fossil fuels, cybersecurity, and manufacturing. Agencies aligned with Trump’s deregulatory ethos—such as DOE and DARPA—have gained, while those focused on climate, equity, or progressive ethics have seen stagnation or reallocation.

The ideological neutrality clause introduced in 2025 has also reshaped grant criteria. AI projects perceived as “woke” or aligned with progressive social agendas face higher scrutiny, while bias-free, open-weight models are prioritized. [“Weights” are the numerical parameters a model learns during training, and “open” means users are able to see and adjust them. -js] This has created friction between federal and state-level AI policies, with some states resisting the shift toward deregulation.
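For readers unfamiliar with the terminology, the toy sketch below illustrates the idea behind “open weights.” All names and numbers here are purely illustrative, not drawn from any real model: the point is simply that weights are learned numbers, and an open-weight release publishes those numbers so anyone can inspect or change them.

```python
# Toy illustration of "weights": the numeric parameters a model learns.
# An "open-weight" release publishes these values, so users can inspect
# and adjust them directly. Everything here is hypothetical.

weights = {"w": 2.0, "b": 0.5}  # an open release makes these numbers public


def predict(x, weights):
    """A one-parameter linear model: y = w * x + b."""
    return weights["w"] * x + weights["b"]


print(predict(3.0, weights))  # inspect the model's behavior: prints 6.5

# Because the weights are open, a user can modify them and rerun the model:
weights["w"] = 1.0
print(predict(3.0, weights))  # prints 3.5
```

Closed-weight models, by contrast, expose only the model’s outputs, not the parameters that produce them.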

Outlook for September 2026: Growth with Guardrails

Looking ahead, federal AI R&D funding is expected to continue rising, but under tighter ideological and strategic constraints. The White House AI & Crypto Czar is steering investments toward infrastructure modernization, defense applications, and decentralized systems. NSF and DOE are likely to receive new mandates to support “values-neutral” AI and open-source development.

We can anticipate:

  • Expanded funding for AI in semiconductors, logistics, and energy.
  • Reduced support for projects involving climate modeling, DEI analytics, or social justice applications.
  • Legal challenges from states and universities over federal grant conditions.
  • Greater emphasis on reproducibility, transparency, and national competitiveness.

In essence, Trump’s original pledge to double AI R&D has been partially fulfilled—but its spirit has been reinterpreted. The focus is no longer just on growth, but on shaping the ideological and infrastructural contours of American AI.


4. 2020: First-Ever AI Regulatory Guidance

The White House issued principles for AI regulation, emphasizing minimal government interference and promoting innovation over precaution. It was the first formal AI regulatory guidance issued by any national government, and its legacy has evolved considerably by September 2025.

Status as of September 2025: Fragmented Implementation, Ideological Pivot

In January 2020, the Trump administration issued the world’s first formal guidance for regulating artificial intelligence. The document, released by the Office of Management and Budget (OMB), laid out ten principles for federal agencies to follow when crafting AI regulations. These included public trust, risk assessment, fairness, and transparency—but with a strong emphasis on avoiding overregulation that could stifle innovation.

Initially, the guidance was praised for its light-touch approach and its attempt to balance innovation with public safety. It encouraged agencies to consider non-regulatory tools like voluntary standards and industry self-regulation. However, by 2025, the results are mixed. According to a recent Government Accountability Office (GAO) report, federal agencies now face a tangled web of 94 separate AI-related requirements stemming from multiple laws, executive orders, and guidance documents. This regulatory fragmentation has created confusion and slowed adoption of generative AI across government sectors.

Moreover, the guidance’s original principles have been reinterpreted under Trump’s second-term executive order, Removing Barriers to American Leadership in Artificial Intelligence (EO 14179). This order rescinded Biden-era safeguards and directed agencies to revise or eliminate any policies deemed to hinder innovation. The new framework emphasizes ideological neutrality, free speech, and national competitiveness over risk mitigation and equity.

While some agencies have embraced this deregulatory shift—cutting deals with AI vendors under the OneGov contracting strategy—others report that the lack of centralized oversight and ethical clarity is hampering deployment. The tension between innovation and accountability remains unresolved.

Outlook for September 2026: Deregulation with Strategic Guardrails

Looking ahead, Trump’s regulatory philosophy is expected to solidify. The White House AI & Crypto Czar is leading efforts to streamline federal AI governance, likely consolidating oversight under fewer agencies. The revised OMB memoranda will prioritize:

  • Minimal compliance burdens for private developers.
  • Open-source and open-weight models as default standards.
  • Protection of free speech and ideological neutrality in AI outputs.
  • National security exemptions for high-risk applications.

However, this approach may trigger pushback from states, civil society groups, and international partners concerned about ethical safeguards. Legal challenges over federal preemption and procurement bias are already surfacing.

In sum, Trump’s 2020 AI regulatory guidance was a landmark—but its legacy is now defined by a deregulatory pivot. Whether this fosters responsible innovation or unchecked deployment will depend on how agencies interpret the new mandates and whether Congress steps in to legislate a more coherent framework.


5. 2025: Revocation of Biden’s AI Executive Order

In his second term, Trump signed a new executive order that eliminated what he called “harmful” Biden-era AI policies, arguing they stifled private sector innovation. This was a pivotal moment in the ideological reorientation of U.S. AI policy under President Trump’s second term.

Status as of September 2025: Disruption and Realignment

On January 20, 2025, President Trump revoked Executive Order 14110, signed by President Biden in 2023, which had mandated safety disclosures from AI developers and established the U.S. AI Safety Institute. This revocation was part of Trump’s broader Initial Rescissions of Harmful Executive Orders and Actions, and it marked a dramatic shift in the federal government’s approach to AI oversight.

Biden’s order had required companies to submit safety testing data for high-impact AI systems before public release, especially those affecting national security, public health, or the economy. It also tasked agencies with developing internal AI governance frameworks and directed NIST to create bias mitigation standards.

Trump’s revocation halted these efforts almost immediately. The U.S. AI Safety Institute’s future is now uncertain, and federal agencies have paused or restructured their internal AI risk protocols. The new Trump executive order, Removing Barriers to American Leadership in Artificial Intelligence, emphasizes innovation over regulation and directs agencies to eliminate any policies that might “present obstacles” to AI development.

Industry reaction has been mixed. Accelerationist voices in tech welcomed the rollback, arguing that Biden’s order imposed burdensome reporting requirements and risked exposing trade secrets. Others, including civil society groups and some state governments, expressed concern over the loss of centralized safety oversight, especially as generative AI systems grow more powerful and pervasive.

Outlook for September 2026: Ideological Consolidation and Legal Tensions

By next year, Trump’s deregulatory stance is expected to solidify. The White House AI & Crypto Czar is coordinating efforts to replace Biden-era safeguards with a framework focused on:

  • Open-source model promotion
  • Free speech protections in AI outputs
  • National security exemptions for high-risk systems
  • Minimal federal oversight of private AI development

However, this trajectory may face legal and political challenges. California and other states with active AI legislation are pushing back, maintaining transparency and deepfake laws that conflict with federal deregulation. Internationally, the U.S. now stands in stark contrast to the EU’s AI Act, which enforces strict rules on facial recognition and high-risk applications.

In short, Trump’s revocation of Biden’s AI order is not just a policy shift—it’s an ideological pivot. It redefines the role of government in AI development, favoring speed and sovereignty over caution and coordination. Whether this unleashes innovation or invites risk will depend on how industry, states, and global partners respond.


6. 2025: Creation of the AI Action Plan

This centerpiece of President Trump’s second-term tech strategy focuses on three pillars: accelerating innovation, building AI infrastructure, and leading in international diplomacy and security.

Status as of September 2025: Strategic Clarity, Ideological Controversy

Unveiled in July 2025, the AI Action Plan is President Trump’s most ambitious blueprint for artificial intelligence to date. It outlines 90 federal policy positions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. The plan is accompanied by three executive orders that promote the “American AI Technology Stack,” streamline permitting for data centers, and mandate “Unbiased AI Principles” in federal procurement.

The plan has received widespread acclaim from industry leaders. Nvidia CEO Jensen Huang called it “America’s unique advantage,” while Amazon and Chevron praised its focus on infrastructure, energy, and national competitiveness. The AI Innovation Association hailed it as a “bold path to global American leadership.”

At its core, the Action Plan seeks to:

  • Remove regulatory barriers to AI deployment.
  • Promote open-source and open-weight models.
  • Protect free speech and ideological neutrality in AI outputs.
  • Build high-security data centers for defense and intelligence.
  • Export American AI standards to allies and counter Chinese influence.

However, the plan has also sparked controversy. Critics argue that its emphasis on deregulation and ideological neutrality may weaken ethical safeguards. The revocation of Biden-era safety protocols and the sidelining of the U.S. AI Safety Institute have raised concerns about unchecked deployment of frontier models. Democratic lawmakers have questioned the administration’s commitment to data protection and environmental standards.

Moreover, the plan’s preemption of state-level AI laws—such as California’s deepfake and transparency statutes—has triggered legal and political resistance. Senator Ted Cruz’s proposal to ban state AI regulations was removed from the Republican budget bill after bipartisan backlash.

Outlook for September 2026: Consolidation and Global Positioning

Over the next year, the AI Action Plan is expected to evolve into a fully institutionalized framework. The White House AI & Crypto Czar is coordinating cross-agency implementation, and new federal standards for “truth-seeking” AI outputs are being drafted. Infrastructure permitting reforms are accelerating the construction of AI data centers, while export controls are tightening to protect U.S. semiconductor and compute assets.

By September 2026, we can anticipate:

  • Expanded federal procurement of open-weight models.
  • Legal battles over federal preemption of state AI laws.
  • Increased international pressure to align with EU-style safety standards.
  • Deployment of AI in defense, manufacturing, and biosecurity at scale.

In essence, the AI Action Plan is both a strategic roadmap and a political manifesto. It positions the U.S. as a global AI leader—but on terms defined by deregulation, infrastructure dominance, and ideological reframing. Whether this leads to sustainable leadership or geopolitical friction will depend on how the plan balances innovation with accountability.


7. 2025: Appointment of a White House AI & Crypto Czar

Trump created a new leadership role overseeing both AI and crypto policy, a fusion of emerging technologies with implications for governance, innovation, and ideological alignment.

Status as of September 2025: Symbolic Power, Strategic Influence

In early 2025, President Trump created a new executive role: the White House AI & Crypto Czar. The position was filled by David Sacks, a Silicon Valley venture capitalist and co-host of the All-In podcast. Sacks, known for his libertarian views and close ties to Trump’s tech donors, was tasked with coordinating federal policy across artificial intelligence and digital assets.

The appointment was met with enthusiasm from industry leaders. Many saw it as a signal that the administration would prioritize innovation, deregulation, and public-private collaboration. Sacks’s background in venture capital and his connections to Elon Musk and the PayPal mafia gave him credibility among tech insiders. His role as chair of the Presidential Working Group on Digital Asset Markets further expanded his influence, especially in shaping crypto policy.

However, the position’s structure has raised concerns. It is part-time, lacks Senate confirmation, and allows Sacks to retain his role at Craft Ventures. Critics argue this creates potential conflicts of interest and undermines transparency. The murkiness surrounding his authority—especially in AI safety, procurement, and federal standards—has led to uneven implementation across agencies.

Despite these concerns, Sacks has played a key role in advancing Trump’s AI Action Plan. He’s pushed for open-source models, ideological neutrality in AI outputs, and the construction of high-security data centers. His influence is particularly strong in crypto, where he’s helped dismantle Biden-era restrictions and promote a national digital assets stockpile.

Outlook for September 2026: Institutionalization or Fragmentation?

Over the next year, the future of the AI & Crypto Czar role will hinge on two factors: institutional clarity and political resilience. If the administration formalizes the position—perhaps through legislation or expanded budgetary authority—it could become a permanent fixture in U.S. tech governance. This would allow for:

  • Coordinated federal AI and crypto policy across defense, finance, and infrastructure.
  • Streamlined procurement of open-weight models and decentralized platforms.
  • Export of American AI standards to allies and strategic partners.

However, if the role remains informal and politically tied to Trump’s inner circle, it risks fragmentation. Agencies may resist centralized oversight, and legal challenges over conflict of interest could intensify. The lack of ethical guardrails—especially in AI safety and surveillance—may also provoke backlash from civil society and international regulators.

In short, the AI & Crypto Czar is a bold experiment in executive coordination. Whether it becomes a lasting institution or a partisan flashpoint will depend on how it balances innovation with integrity.


8. 2025: Emphasis on Free Speech and Anti-Bias in AI

He directed agencies to ensure AI systems uphold American values like free speech and avoid “engineered social agendas.” This emphasis is one of the most ideologically charged components of his second-term tech agenda.

Status as of September 2025: Politicized Implementation, Industry Tensions

In July 2025, President Trump signed an executive order requiring that AI systems used by the federal government be “free of ideological bias” and promote “free speech and expression.” This directive is part of the broader AI Action Plan and has reshaped federal procurement standards. Developers seeking government contracts must now certify that their models do not suppress political viewpoints or promote “socially engineered agendas.”

The policy has sparked intense debate. Supporters argue that it protects against censorship and ensures that AI systems reflect a diversity of perspectives. Critics, however, contend that the order politicizes technical standards and risks undermining efforts to mitigate harmful biases—especially those related to race, gender, and misinformation.

Implementation has been uneven. The General Services Administration (GSA) is drafting procurement language that requires models to meet “truthfulness” standards, but it remains unclear how ideological bias will be defined or measured. Some agencies have paused contracts with vendors whose models were flagged for “content filtering,” while others have quietly ignored the directive.

Industry reaction is split. Open-source developers and libertarian tech leaders have embraced the policy, seeing it as a bulwark against centralized control. Meanwhile, enterprise AI firms—especially those with global clients—worry that compliance will force them to alter model behavior in ways that conflict with international norms or ethical commitments.

Outlook for September 2026: Legal Challenges and Global Divergence

Over the next year, Trump’s free speech and anti-bias mandate is likely to face legal scrutiny. Civil liberties groups are preparing lawsuits challenging the constitutionality of federal viewpoint mandates in AI systems. States like California and New York may enact counter-legislation requiring bias audits and transparency disclosures, setting up a clash between federal and state standards.

Internationally, the U.S. is diverging sharply from the EU, which is finalizing its AI Act with strict rules on bias mitigation and content moderation. This divergence could complicate cross-border AI deployment and raise questions about interoperability and ethical alignment.

By September 2026, we can expect:

  • Federal procurement rules to be finalized, potentially favoring open-weight models with minimal content filtering.
  • Legal battles over the definition and enforcement of “ideological bias.”
  • Increased polarization in AI development communities, with some firms aligning with Trump’s vision and others resisting.
  • Global fragmentation in AI standards, with the U.S. promoting free speech and the EU emphasizing safety and equity.

In essence, Trump’s anti-bias and free speech directive reframes AI as a cultural battleground. Whether it fosters pluralism or politicizes infrastructure will depend on how the courts, agencies, and developers navigate its ambiguities.


9. 2025: Push for Open-Source AI Models

Trump’s AI strategy encourages open-weight and open-source models to counter centralized control and promote transparency. This was a technically bold and philosophically charged move that’s reshaping the U.S. AI landscape.

Status as of September 2025: Rapid Adoption, Rising Risks

One of the most defining features of President Trump’s 2025 AI Action Plan is its aggressive promotion of open-source and open-weight AI models. The administration argues that these models democratize innovation, reduce centralized control, and accelerate national competitiveness. Trump’s executive order mandates that federal agencies prioritize open-weight models in procurement and encourages developers to release source code and training data wherever feasible.

This policy has catalyzed a wave of adoption. Public-private partnerships are flourishing, and open-source models are now powering federal services in agriculture, logistics, and education. The National AI Research Resource (NAIRR) has been revitalized to support open-access compute and data infrastructure. Academic institutions and startups have embraced the shift, citing lower barriers to entry and increased transparency.

However, the risks are mounting. Experts from the Council on Foreign Relations warn that open models, while fueling diffusion, also increase the likelihood of misuse. Without robust safety protocols, these systems can be repurposed for disinformation, surveillance, or autonomous weapons. The revocation of Biden-era safety mandates has left a vacuum in oversight, and the U.S. AI Safety Institute remains in limbo.

Moreover, the lack of clear implementation timelines and agency responsibilities has led to uneven enforcement. Some federal contracts now require “truth-seeking” outputs, but definitions remain vague. The ideological framing—especially the rejection of “woke AI”—has politicized technical standards and created friction with international partners.

Outlook for September 2026: Expansion with Ethical Crossroads

By next year, Trump’s open-source AI policy is expected to deepen. The White House AI & Crypto Czar is coordinating efforts to build a national repository of open-weight models and expand infrastructure for decentralized training. Export controls are tightening to prevent adversarial use, and new standards for reproducibility and transparency are in development.

We can anticipate:

  • Wider federal deployment of open-source models in defense, manufacturing, and public services.
  • Legal and ethical debates over model misuse, especially in surveillance and misinformation.
  • International divergence, with the EU enforcing stricter controls on open-source AI.
  • Grassroots innovation, as communities and nonprofits leverage open models for education and civic tech.

In essence, Trump’s push for open-source AI is a double-edged sword. It democratizes access and accelerates innovation—but without robust safeguards, it risks unleashing powerful tools into unstable contexts. The next year will test whether openness can coexist with accountability.


10. 2025: Framing AI as a Tool for National Security

He positioned AI as essential to military readiness, semiconductor manufacturing, and cyber defense, even proposing high-security data centers for defense use. This item examines Trump’s framing of AI as a national security asset, where infrastructure, geopolitics, and ideology converge.

Status as of September 2025: Strategic Momentum, Ethical Ambiguities

President Trump’s second-term AI strategy places national security at the heart of artificial intelligence policy. The 2025 AI Action Plan explicitly links AI leadership to defense readiness, semiconductor sovereignty, and cyber resilience. Three executive orders issued in July 2025 reinforce this framing: one streamlines permitting for AI data centers, another promotes open-weight models as strategic assets, and a third mandates ideological neutrality in defense-related AI systems.

This framing has galvanized federal investment. The Department of Defense (DoD) is deploying AI in logistics, threat detection, and autonomous systems. The Department of Energy is accelerating AI-enhanced grid security. The Intelligence Community is piloting AI tools for anomaly detection and multilingual analysis. These efforts are coordinated through the White House AI & Crypto Czar, who is also overseeing export controls to prevent adversarial access to U.S. frontier models.

Cybersecurity has emerged as a foundational priority—not just a compliance layer, but a strategic condition for scale. The AI Action Plan embeds cyber provisions across all three pillars, recognizing that resilience and trust are prerequisites for dominance. The administration is also investing in high-security data centers, with streamlined permitting aimed at countering Chinese infrastructure expansion.

However, ethical concerns persist. The revocation of Biden-era safety mandates has left gaps in oversight, especially for dual-use systems. Civil society groups warn that the fusion of AI and national security may accelerate surveillance, erode privacy, and sideline democratic accountability. International partners, particularly in the EU, are wary of the U.S. pivot toward deregulation and militarization.

Outlook for September 2026: Expansion, Contestation, and Global Stakes

Over the next year, Trump’s national security framing of AI is expected to intensify. The administration is preparing a classified annex to the AI Action Plan focused on defense applications, and new legislation may formalize AI’s role in strategic deterrence. Export controls will likely tighten, and federal procurement will prioritize models that meet “truth-seeking” and bias-free standards.

We can anticipate:

  • Expanded deployment of AI in defense, intelligence, and critical infrastructure.
  • Legal and ethical debates over surveillance, dual-use risks, and civil liberties.
  • Global competition over AI standards, with the U.S. promoting openness and speed, and the EU emphasizing safety and rights.
  • Increased pressure on developers to align with national security goals, potentially reshaping the innovation landscape.

In essence, Trump’s framing of AI as a national security asset is reshaping the terrain. It’s a high-conviction bet on speed, sovereignty, and infrastructure—but one that must navigate ethical fault lines and geopolitical friction. The next year will test whether this strategy can deliver resilience without sacrificing accountability.


Top 20 AI Influencers in Trump’s Circle (2025)

Here is a curated list of 20 key figures who shape President Trump’s AI strategy as of September 2025. These individuals span official government roles, private-sector influence, and ideological alignment, and together they form a constellation of advisors guiding the administration’s aggressive, deregulatory, infrastructure-heavy approach to AI. They represent a fusion of technical expertise, ideological commitment, and strategic access: Trump’s AI strategy is not shaped by technocrats alone but sculpted by loyalists, venture capitalists, and infrastructure hawks who share his vision of American dominance through deregulated innovation.

| Name | Role | Background | Ideological Leaning | Why Trump Relies on Them |
| --- | --- | --- | --- | --- |
| David Sacks | White House AI & Crypto Czar | VC, All-In Podcast, Craft Ventures | Libertarian, pro-decentralization | Coordinates AI/crypto policy, aligns with Trump’s deregulatory ethos |
| Sam Altman | CEO, OpenAI | Frontier model developer | Pro-innovation, cautious accelerationist | Publicly praised Trump’s pro-business stance; key voice on infrastructure and openness |
| Greg Brockman | President, OpenAI | Technical architect of GPT models | Pragmatic, open-source advocate | Supports Trump’s optimism and infrastructure-first approach |
| Sergey Brin | Co-founder, Google | AI pioneer, deep learning advocate | Technocratic, globalist | Endorsed Trump’s support for industry over regulation |
| Safra Catz | CEO, Oracle | Enterprise software, defense contracts | Conservative, pro-national security | Oracle’s AI stack powers federal systems; strong ties to Trump’s cabinet |
| Tim Cook | CEO, Apple | Hardware, privacy, supply chain | Moderate, pro-American manufacturing | Apple’s $600B investment aligns with Trump’s infrastructure goals |
| Paras Malik | Treasury Counselor & CAIO | Fintech, digital assets | Pro-market, crypto-forward | Oversees AI deployment in Treasury; bridges finance and tech |
| Conner Prochaska | DOE Nominee, ARPA-E | Energy innovation, quantum computing | Infrastructure-focused | Leads AI in energy resilience and grid modernization |
| Clark Minor | Acting CAIO, HHS | Health informatics, AI diagnostics | Technocratic, pro-efficiency | Guides AI in public health and biosecurity |
| Greg Hogan | CIO & CAIO, OPM | Federal HR systems, automation | Conservative, pro-efficiency | Oversees AI in personnel and hiring systems |
| Hartley Caldwell | CAIO, SBA | Small business tech enablement | Libertarian, pro-entrepreneurship | Promotes AI for small business growth and deregulation |
| Michael Kratsios | Former CTO, informal advisor | Trump’s first-term tech lead | Nationalist, deregulatory | Architect of 2019 AI Initiative; still influential behind the scenes |
| Peter Thiel | VC, Palantir co-founder | Surveillance, defense AI | Nationalist, anti-China | Shapes Trump’s views on AI as a strategic asset |
| Elon Musk | CEO, xAI & Tesla | Autonomous systems, AGI | Anti-centralization, pro-open weights | Informal advisor; aligns with Trump’s anti-regulation stance |
| Keith Rabois | VC, Founders Fund | Crypto, AI startups | Libertarian, anti-woke tech | Vocal supporter of Trump’s ideological neutrality in AI |
| Vivek Ramaswamy | Entrepreneur, political surrogate | Biotech, AI ethics | Anti-woke, nationalist | Shapes messaging around AI and free speech |
| JD Vance | Senator, informal policy conduit | Tech policy, populist economics | Nationalist, deregulatory | Bridges Trump’s AI agenda with legislative allies |
| Linda McMahon | Chair, America First Policy Institute | Small business, infrastructure | Pro-growth, pro-American tech | Promotes AI in economic development and workforce training |
| Kash Patel | Former NSC official, defense advisor | Intelligence, cyber policy | National security hawk | Advises on AI in defense and surveillance contexts |
| Brooke Rollins | CEO, America First Policy Institute | Domestic policy strategist | Deregulatory, pro-innovation | Helps translate Trump’s AI vision into policy platforms |
