Nearly a month ago, on 1 September 2025, I asked Perplexity to identify three pressing decisions that the global AI community is or should be facing. Its answer follows.
The three most pressing decisions facing the global field of AI in September 2025 center on international governance, real-world implementation of new regulations, and the urgent race to balance innovation with responsibility. Each decision is framed as an open-ended question:
(1) How should the world structure AI governance to ensure both innovation and collective safety, following the recent UN General Assembly decision to create global oversight panels?
(2) Will major companies and nations implement meaningful, enforceable AI governance to comply with the new EU AI Act and similar regulations—or will compliance remain superficial?
(3) Can the international AI community overcome short-term competitive pressures to prioritize responsible development, given the accelerating risks of rapid deployment without oversight?