Introduction: Elon Musk predicted, at the US-Saudi Forum on 19 Nov 2025, that “work will be optional” in approximately 10 to 20 years as a result of advances in AI technology. I asked ChatGPT to search the current (2025) literature for (1) the three strongest arguments FOR Musk’s prediction and (2) the three strongest arguments AGAINST it. I added that the arguments need not refer to Musk or the US-Saudi forum. -js
Introduction: On Thanksgiving Day 2025, I asked ChatGPT to identify ten individuals in the world that we should be thanking for significant contributions to the growth of AI in 2025. -js
The field of AI is heading into December 2025 with three urgent decisions: how to govern frontier AI models, how to handle the open‑source versus closed‑source race, and how to expand AI compute without blowing through energy, water, and climate constraints. Each comes with major power struggles between governments and tech companies, and the choices made in the next few weeks will shape who leads AI, how safe it is, and who gets access.
Introduction: I asked Claude to review articles published in the last three months that focused on effective leadership styles for the AI era. Based on the three selections, I asked for generalizations about ideal leadership and a definition for this new leadership style. -js
Research suggests several AI trends are gaining traction in specialized tech communities and industries during November 2025, though they haven’t yet captured widespread public attention. These include advancements that could reshape how AI integrates into workflows, infrastructure, and user experiences, but evidence leans toward them remaining niche for now due to technical complexity and limited mainstream adoption. Here are the top five, selected based on mentions in recent reports and discussions:
“As Andrej Karpathy just wrote, humanity is having first contact with a type of intelligence that does not come from biology, evolution, fear, hunger, status, or shame. For the first time in history, we are dealing with a mind that isn’t an animal. We just haven’t adjusted our thinking to match. Human intelligence isn’t the default – it’s a local anomaly. For our entire existence, we’ve assumed that our way of thinking is the template for intelligence itself. It isn’t. It’s just the only version we’ve ever met…. Organisations often do things that make no commercial sense: [1] meetings with fifteen people because exclusion feels threatening, [2] decisions delayed because no-one wants to be wrong first, [3] brilliant ideas softened into mediocrity so no-one gets upset, [4] and vanity projects that limp on long after the data has declared them dead…. AI isn’t trying to be human – and it isn’t trying to be anything at all. It simply optimises whatever objective it is given. And that is the key thing that most people keep fumbling over…. A system can generate brilliant strategies without wanting power. It can persuade without caring about influence. It can outperform a human without dreaming of replacing them. Ability is not agency. Agency only emerges if we design it – by giving systems goals, tools, and persistence. As my friend Dr Rami Mukhtar always says: AI HAS NO AGENCY” (Constantine Frantzeskos, 25 Nov 2025).
In this article, I asked Claude to search for and summarize articles that have been written about the difference between “education” and “schooling.” In grad school, in the mid-1980s, Professor Solomon Jaeckel, University of Hawaiʻi at Manoa, began his course with the question, “What is the difference between schooling and education?” And throughout the semester, whenever we hit the wall in discussions about issues in educational foundations, he brought up that refrain, “What is the difference between schooling and education?” We danced around it throughout the semester but never got his nod, and he never answered it for us. He once told us a joke about finding, scribbled on his classroom chalkboard before a final exam, “This, too, shall pass.” We all thought it referred to his tough course and exams, but now I’m thinking he meant the chalkboard, classroom, and college itself. In short, schooling becomes education when it takes on a broader meaning. -js
I asked Claude, Gemini, ChatGPT, and Grok to search for and select critical articles on AI in higher ed published in November 2025. Out of their selections, I chose and ranked the 10 best. -js
tsuzumi 2. “Traditional large language models require dozens or hundreds of GPUs, creating electricity consumption and operational cost barriers that make AI deployment impractical for many organisations…. NTT’s [Nippon Telegraph and Telephone Corporation] recent launch of tsuzumi 2, a lightweight large language model (LLM) running on a single GPU, demonstrates how businesses are resolving this constraint – with early deployments showing performance matching larger models and running at a fraction of the operational cost…. More significantly, on-premise deployment [Tokyo Online University] addresses data privacy concerns that prevent many educational institutions from using cloud-based AI services that process sensitive student information…. NTT’s tsuzumi 2 deployment demonstrates that sophisticated AI implementation doesn’t require hyperscale infrastructure – at least for organisations whose requirements align with lightweight model capabilities” (Dashveenjit Kaur, 20 Nov 2025).
To avoid tell-tale AIstyle in your writing, see “Wikipedia: Signs of AI writing” (tip from Russell Brandom, 20 Nov 2025). Warning signs: (1) Undue emphasis on symbolism, legacy, and importance. (2) Undue emphasis on notability, attribution, and media coverage. (3) Superficial analyses. (4) Promotional and advertisement-like language. (5) Didactic, editorializing disclaimers. (6) Section summaries. (7) Outline-like conclusions about challenges and future prospects. (8) Leads treating Wikipedia lists or broad article titles as proper nouns. This is just the tip of the AIstyle iceberg. For much more, see the Wikipedia article. -js
Introduction: The following informal transcript was grabbed off a YouTube video this afternoon, Nov 19, 2025. I relied on the audio and CC. I focused on Elon Musk’s and Jensen Huang’s talks. I omitted the introductions, host’s comments, and small talk. I didn’t have the time or resources to review and edit, so expect typos and possible errors. -js
Introduction: Text transcripts or other recordings of higher education presentations at key conferences are rarely, if ever, freely accessible to the overwhelming majority of educators in the U.S. and the world. In the case of Stanford’s February 25, 2025, conference, “The future is already here: AI and education in 2025,” video recordings of nine entire presentations have been made available to the public at their site and on YouTube. I asked ChatGPT to summarize them. -js
These three advances in AGI were announced after ETC Journal’s Oct. 17, 2025, article was published: (1) DeepMind’s SIMA 2, a Gemini-powered agent that “thinks” in 3D virtual worlds, (2) DeepMind’s new work on aligning visual representations, improving how models “see” the world, and (3) Anthropic’s $50 billion US compute / data-center investment, a large infrastructure bet to sustain frontier-model training.
Authors: Adam Cheng, Aaron Calhoun, and Gabriel Reedy
Journal: Advances in Simulation
Publication Date: April 18, 2025
DOI: 10.1186/s41077-025-00350-6
The central thesis of this article is that generative artificial intelligence tools can be ethically integrated into academic writing processes as long as researchers adhere to principles of transparency, maintain human accountability for content, and use AI to enhance rather than replace critical thinking and scholarly development.
AI the new source of geopolitical power. “The dialogue [TRENDS’ 2nd Annual Dialogue on AI] concluded that artificial intelligence has become the new source of geopolitical power, surpassing natural resources and military strength. Soft power is no longer limited to culture and education but now includes digital identity systems, innovative services, and AI models” (MSN, 14 Nov 2025).
Introduction: I asked Copilot to identify and rank order the 10 world leaders in AI drone warfare as of November 13, 2025, using the following criteria: R&D, Industrial Scale, Battlefield Performance, and Export/Influence. When Ukraine failed to make the list, I asked Copilot to explain. I think you’ll find the explanation insightful. -js
1. Apple’s reported partnership with Google to power Siri with Gemini
Between October 14 and November 13, 2025, one headline cut through the noise: Apple reportedly partnering with Google to supercharge Siri with Gemini—framed as a leap toward trillion-parameter intelligence on consumer devices. The article “Apple Joins Forces with Google to Supercharge Siri with 1.2 Trillion-Parameter AI!” by Mackenzie Ferguson, published on OpenTools on November 6, 2025, captured the public imagination and crystallized a turning point in platform strategy. The piece appeared on OpenTools’ AI News page and set out the basic claim and its significance for the smartphone AI battleground (opentools.ai).
“Today’s AI differs from previous generations’ because it can tell stories and create images. Built from online human stories rather than facts or logic, generative AI mimics human intelligence by collecting and recombining our digital narratives. While earlier AI managed specific organizational functions, generative AI directly addresses how humans think and communicate. Unintended consequences: Because generative AI is built from people’s digital commentary, it inherently propagates biases and misinformation.”
Mark Zandi, chief economist of Moody’s Analytics (X.com)
In mid-October, analysis of the Trump administration’s 2025 AI Action Plan highlighted tangible momentum: expanded data center build-outs, “innovation sandboxes,” and targeted federal funding intended to accelerate U.S. AI leadership. This period’s developments underscored a pro-innovation posture—streamlining permits and encouraging private-sector deployment—while signaling an export-forward stance that positions American AI to compete globally.
Video created by Grok via an image created by Copilot, 11/12/2025
Thomas Claburn1 reports that “The 339 respondents participating in the [Murphy et al.2] project – AI and ML scientists, economists, technical staff at frontier AI companies, and policy experts from NGOs – believe that AI will spur significant social changes by 2040.” Claburn says the project found that “there’s only about a 20 to 25 percent chance that the AI train will be slowed by lack of AI literacy, societal unease, lack of use cases, and costs. Data quality, regulations, and cultural resistance are seen as more likely (30 to 35 percent) barriers to adoption. Integration and unreliability are expected to be the most significant obstacles (40 percent).”
MicroAdapt is a new approach to edge artificial intelligence developed at The University of Osaka’s Institute of Scientific and Industrial Research (SANKEN). At its core, MicroAdapt is a family of self-evolving, dynamic modeling algorithms designed to watch time-evolving data streams on small devices, automatically identify recurring regimes or patterns in that stream, and maintain — on device — a compact ensemble of tiny models that are created, updated, and retired as the situation demands. In other words, rather than shipping raw data to the cloud and relying on a single large model trained offline, MicroAdapt performs continual modeling and short-term forecasting in situ on modest hardware such as a Raspberry Pi, using very little memory and power. This on-device learning architecture is what the research team describes as “self-evolving” edge AI. (sanken.osaka-u.ac.jp)
Yasuko Matsubara, Institute of Scientific and Industrial Research, University of Osaka
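The create-update-retire loop described above can be sketched in a few lines. The following Python is a hypothetical illustration only, not MicroAdapt’s actual algorithm or API: every name (TinyModel, RegimeEnsemble), threshold, and update rule is an assumption chosen solely to show the pattern of maintaining a small pool of tiny per-regime models over a drifting data stream.

```python
# Hypothetical sketch of a "self-evolving" edge-AI loop: a small pool of
# tiny per-regime models is grown, refined, and pruned as a data stream
# drifts. Names and thresholds are illustrative, not MicroAdapt's API.

class TinyModel:
    """Tiny per-regime forecaster: a running mean of the values it has seen."""
    def __init__(self):
        self.mean = None
        self.uses = 0

    def predict(self):
        return 0.0 if self.mean is None else self.mean

    def update(self, x):
        self.uses += 1
        # incremental running-mean update (no history stored on device)
        self.mean = x if self.uses == 1 else self.mean + (x - self.mean) / self.uses


class RegimeEnsemble:
    """Keeps at most max_models TinyModels. If every model fits the new
    point poorly, a fresh model is spawned (retiring the least-used one
    when the pool is full); otherwise the best-fitting model is updated."""
    def __init__(self, max_models=4, spawn_threshold=1.0):
        self.models = [TinyModel()]
        self.max_models = max_models
        self.spawn_threshold = spawn_threshold

    def step(self, x):
        errors = [abs(x - m.predict()) for m in self.models]
        best = min(range(len(errors)), key=errors.__getitem__)
        if errors[best] > self.spawn_threshold:
            if len(self.models) >= self.max_models:
                # retire the least-used model to stay within the budget
                self.models.remove(min(self.models, key=lambda m: m.uses))
            model = TinyModel()          # new regime detected: spawn
            self.models.append(model)
        else:
            model = self.models[best]    # known regime: refine in place
        model.update(x)
        return model.predict()           # one-step-ahead forecast
```

Feeding this ensemble a stream that jumps between two levels (say, fifty readings at 0.0 followed by fifty at 5.0) leaves it holding two models, one per regime, which is the essence of creating, updating, and retiring models as the situation demands.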
The research paper by Kestin, Miller, Klales and colleagues* represents a watershed moment in educational technology research, offering rigorously controlled evidence that properly designed AI tutoring can surpass traditional pedagogical best practices. Conducted at Harvard University during Fall 2023 and published on 3 June 2025 in Scientific Reports, this randomized controlled trial provides empirical validation for claims about artificial intelligence’s transformative potential in education.
Introduction: I asked Claude to report on articles published in 2025 that discuss why banning AI chatbots is impossible or unwise in college settings. It found four. I also asked ChatGPT to add two more. -js
By Michael Akuchie, English Composition Instructor, Southern Illinois University Carbondale
The United States has a reading problem, and according to findings by the National Endowment for the Arts (NEA), it is not wrong to worry about the future of classroom learning and the culture of reading for pleasure. In the NEA’s 2022 survey of US adults, only 48.5% said that they had read a book within that year. When asked about literary works, such as novels and short story collections, the percentage of adults who reported having read at least one literary piece declined to 37.6%. As adults pay less attention to books, especially literary works, that apathy has unfortunately trickled down to first-year college students, who represent the future of America’s labor force.