Informal Innovation in AI Adoption

By Jim Shimabukuro (assisted by Gemini)
Editor

In their article, “AI in Informal and Formal Education: A Historical Perspective,” published in the inaugural 2025 issue of AI-Enhanced Learning, Glen Bull, N. Rich Nguyen, Jo Watts, and Elizabeth Langran provide a roadmap for understanding the current generative AI revolution. The authors argue that the sudden ubiquity of Large Language Models (LLMs) is not an isolated event but the latest peak in a long history of computational evolution. By examining the interplay between formal schooling and informal learning spaces, they offer a lens through which educators can view the potential, and the inherent risks, of artificial intelligence.

Image created by Copilot
Continue reading

Spatial Intelligence and AGI

By Jim Shimabukuro (assisted by Gemini)
Editor

Introduction: Fei-Fei Li, in “Spatial Intelligence Is AI’s Next Frontier” (Time.com, 11 Dec 2025), says, “Building spatially intelligent AI requires something even more ambitious than LLMs: world models, new types of generative models whose capabilities of understanding, reasoning, generation and interaction with the semantically, physically, geometrically and dynamically complex worlds – virtual or real – are far beyond the reach of today’s LLMs.” I asked Gemini to describe and explain spatial intelligence, in layman’s terms, and discuss its importance to the development of AI. -js

Image created by Copilot
Continue reading

AI’s Cognitive Advantages Over Traditional Learning

By Jim Shimabukuro (assisted by Copilot, ChatGPT, Gemini, Claude)
Editor

I can’t help but feel that John Nosta, in “AI Isn’t Killing Education (AI is revealing what education never was)” (Psychology Today, 13 Dec. 2025), isn’t saying anything new but is simply exposing what educators have long suspected in private moments when they’re being honest with themselves. Here are some quotes from his article:

  • AI isn’t destroying learning, it’s exposing how education replaced thinking with ritual.
  • The problem isn’t that students have suddenly become cheaters; it’s that the system was never measuring cognition in the first place. It was measuring costly performance and mistaking it for learning.
  • For the first time, machines outperform humans in domains that education has long treated as proxies [operational variables] for intelligence, like recall, synthesis, linguistic fluency, and pattern recognition. That shift does not eliminate learning, but it does destabilize a system that equated those outputs with understanding.
  • What AI actually breaks is a Pavlovian model of education that has dominated for more than a century.
  • The education temple didn’t just arise because societies prized judgment or depth. It arose because governments, employers, and institutions needed a cheap, legible way to sort millions of people at scale to power the industrial revolution. Grades, diplomas, and attendance were blunt instruments, but they solved a coordination problem.

Image created by Copilot
Continue reading

AI Scientific Research Innovations (Dec. 2025)

By Jim Shimabukuro (assisted by Gemini)
Editor

Introduction: Bryan Walsh, in “We’re running out of good ideas. AI might be how we find new ones” (Vox, 13 Dec. 2025), mentions AI scientific research innovations such as AlphaFold, GNoME, GraphCast, Coscientist, FutureHouse, and Robin (a multiagent “AI scientist”). I asked Gemini to expand on them. -js

Image created by Copilot
Continue reading

Three Biggest AI Stories in Dec. 2025: ‘AI Litigation Task Force’

By Jim Shimabukuro (assisted by Copilot)
Editor

[Also see Nov. 2025, Oct. 2025, Sep. 2025, Aug. 2025]

Between mid‑November and mid‑December 2025, the AI landscape shifted through a combination of technical breakthroughs, political realignments, and cultural recognition. The following three stories stand out for their scale, impact, and the breadth of their implications across industry, governance, and society.

Image created by Gemini
Continue reading

Three Greatest Disappointments in AI Technology in December 2025

By Jim Shimabukuro (assisted by Copilot)
Editor

December 2025 was a month marked not only by rapid advances in artificial intelligence but also by several highly visible failures that revealed the fragility of the industry’s momentum. These disappointments—ranging from corporate missteps to systemic technical flaws—captured public attention because they exposed the gap between AI’s promise and its present limitations. Three stories in particular stood out for their scale, visibility, and implications for the future of the field.

Image created by Copilot
Continue reading

A Discussion of “Why Does A.I. Write Like … That?”

By Jim Shimabukuro (assisted by Claude)
Editor

JS: Hi, Claude. Sam Kriss, in “Why Does A.I. Write Like … That?” (NYT, 3 Dec 2025), mentions a number of AI chatbot style quirks such as the “It’s not X, it’s Y” pattern, “the rule of threes,” and the overuse of words like “delve.” He implies that AI is unable to break these habits. Question for you: Can AI be trained to avoid these annoying quirks?
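The quirks Kriss describes are regular enough that they can be flagged mechanically. As a rough illustration (not anything from Kriss’s article or from any chatbot vendor), here is a minimal Python sketch using hypothetical regex heuristics to count three of the tics he names: the “It’s not X, it’s Y” contrast, the rule of threes, and overused words like “delve.”

```python
import re

# Hypothetical heuristics for three chatbot style quirks.
# These patterns are illustrative assumptions, not a published detector.
QUIRK_PATTERNS = {
    # "not X, but Y" / "not X, it's Y" contrast construction
    "not-x-but-y": re.compile(
        r"\bnot (just |only |merely )?\w[\w ]*?[,;] (but |it's |it is )", re.I
    ),
    # Words widely cited as overused by chatbots
    "overused-words": re.compile(r"\b(delve|tapestry|crucial|pivotal)\b", re.I),
    # Triads of the form "A, B, and C" (the rule of threes)
    "rule-of-threes": re.compile(r"\b\w+, \w+, and \w+\b"),
}

def flag_quirks(text: str) -> dict:
    """Return the number of occurrences of each quirk in the text."""
    return {name: len(pat.findall(text)) for name, pat in QUIRK_PATTERNS.items()}
```

A counter like this could, in principle, be used as a post-processing filter or as a penalty signal during fine-tuning, though real detectors would need far more robust linguistics than these toy patterns.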

Image created by Copilot
Continue reading

Indiana Fever Roster Rumors as of Dec. 2025

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Here are the 2026 Indiana Fever prospects as of December 2025: contract status, roster role, trade/test-the-market likelihood, and recruiting/league-movement rumors tied to each player.

Image created by Copilot
Continue reading

AI Impact on Young Classical Musicians

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Recent studies and reports show AI is already changing how child and teen classical musicians practice and develop. AI-powered apps give rapid, objective feedback, personalize practice paths, support goal-setting and self-regulated learning, and (in controlled studies) produce measurable gains in confidence and performance compared with traditional, teacher-only practice.

Image created by Copilot
Continue reading

Professors Embrace AI (Dec. 2025)

By Jim Shimabukuro (assisted by Grok)
Editor

[Also see the reports from Oct. 2025, Sept. 2025, July 2025]

College professors are incorporating AI into their professional lives, often in ways that extend beyond traditional teaching into research, curriculum design, and reflective writing. For November-December 2025, here are three inspiring cases: Matt Kinservik at the University of Delaware, who weaves AI into his writing instruction to foster critical skills; Jennifer Chen at Kean University, who leverages AI in her educational research to pioneer ethical applications; and Zach Justus at California State University, Chico, who employs AI in his communication work to enhance evaluation and mentorship.

Image created by ChatGPT
Continue reading

Status of DEI in Higher Education: December 2025

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Also see earlier reports: November 2025, October 2025.]

The original five-issue framing from the November report still holds, but every item has deepened, and a few new flashpoints have emerged that change the tactical picture on the ground. What’s changed for December is intensity and specificity: (a) the federal/state enforcement axis has added concrete actions (e.g., a draft State Department list of 38 institutions and new Education Department guidance); (b) programmatic harm has moved from threat to real, quantifiable cuts (over 100 TRIO program cancellations and continuing freezes); and (c) a new wave of campus-level legal conflicts and take-downs (student publications suspended at the University of Alabama; a Liberty Justice Center lawsuit against the University of Arizona) has become the brightest flashpoint. See The Guardian and Inside Higher Ed for the State Department/partnership reporting and the TRIO coverage.

Image created by Copilot
Continue reading

NewsBites 2025 Dec 2: Gemini 3 & DeepSeek-V3.2

“Google … released a new version of its Gemini AI model last month [August 2025] that surpassed OpenAI on industry benchmark tests and sent the search giant’s stock soaring. Gemini’s user base has been climbing since the August release of an image generator, Nano Banana, and Google said monthly active users grew from 450 million in July to 650 million in October” (Berber Jin, 2 Dec 2025).

Image created by Copilot
Continue reading

Ed Tech in Higher Ed – Three Issues for Dec. 2025: ‘institutional trust’

By Jim Shimabukuro (assisted by Perplexity)
Editor

[Also see earlier reports: Nov. 2025, Oct. 2025]

Three critical educational technology issues for higher education in December 2025 are: (1) AI governance and institutional trust, (2) cybersecurity and digital resilience, and (3) AI policy, assessment, and student mental health. Each is already sharply defined in November 2025 articles that document why these problems matter for the coming term.

Image created by Copilot
Continue reading

Is Musk’s Prediction of a Work-Optional Future Prescient?

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Introduction: Elon Musk predicted, at the US-Saudi Forum on 19 Nov 2025, that “work will be optional” in approximately 10 to 20 years as a result of advances in AI technology. I asked ChatGPT to search the current (2025) literature for (1) the three strongest arguments FOR Musk’s prediction and (2) the three strongest arguments AGAINST his prediction. I added that the arguments need not refer to Musk or the US-Saudi forum. -js

Image created by Copilot
Continue reading

Thanksgiving 2025 Tribute for Significant Contributions to AI

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Introduction: On Thanksgiving Day 2025, I asked ChatGPT to identify ten individuals in the world that we should be thanking for significant contributions to the growth of AI in 2025. -js

Image created by Copilot
Continue reading

AI in Dec. 2025: Three Critical Global Decisions

By Jim Shimabukuro (assisted by Perplexity)
Editor

The field of AI is heading into December 2025 with three urgent decisions: how to govern frontier AI models, how to handle the open‑source versus closed‑source race, and how to expand AI compute without blowing through energy, water, and climate constraints. Each of these comes with big power struggles between governments and tech companies, and the choices made in the next few weeks will shape who leads AI, how safe it is, and who gets access.

Image created by ChatGPT
Continue reading

Defining the New AI-Era Leadership Style

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: I asked Claude to review articles published in the last three months that focused on effective leadership styles for the AI era. Based on the three selections, I asked for generalizations about ideal leadership and a definition for this new leadership style. -js

Image created by Gemini
Continue reading

Five Emerging AI Trends in Nov 2025: ‘AI forgetting mechanisms’

By Jim Shimabukuro (assisted by Grok)
Editor

[Also see earlier reports October 2025, September 2025, and August 2025]

Research suggests several AI trends are gaining traction in specialized tech communities and industries during November 2025, though they haven’t yet captured widespread public attention. These include advancements that could reshape how AI integrates into workflows, infrastructure, and user experiences, but evidence leans toward them remaining niche for now due to technical complexity and limited mainstream adoption. Here are the top five, selected based on mentions in recent reports and discussions:

Image created by ChatGPT
Continue reading

NewsBites 2025 Nov 24: ‘AI isn’t trying to be human’

“As Andrej Karpathy just wrote, humanity is having first contact with a type of intelligence that does not come from biology, evolution, fear, hunger, status, or shame. For the first time in history, we are dealing with a mind that isn’t an animal. We just haven’t adjusted our thinking to match. Human intelligence isn’t the default – it’s a local anomaly. For our entire existence, we’ve assumed that our way of thinking is the template for intelligence itself. It isn’t. It’s just the only version we’ve ever met…. Organisations often do things that make no commercial sense: [1] meetings with fifteen people because exclusion feels threatening, [2] decisions delayed because no-one wants to be wrong first, [3] brilliant ideas softened into mediocrity so no-one gets upset, [4] and vanity projects that limp on long after the data has declared them dead…. AI isn’t trying to be human – and it isn’t trying to be anything at all. It simply optimises whatever objective it is given. And that is the key thing that most people keep fumbling over…. A system can generate brilliant strategies without wanting power. It can persuade without caring about influence. It can outperform a human without dreaming of replacing them. Ability is not agency. Agency only emerges if we design it – by giving systems goals, tools, and persistence. As my friend Dr Rami Mukhtar always says: AI HAS NO AGENCY” (Constantine Frantzeskos, 25 Nov 2025).

Image created by Grok
Continue reading

Education vs Schooling: A Reform Blind Spot

By Jim Shimabukuro (assisted by Claude)
Editor

In this article, I asked Claude to search for and summarize articles that have been written about the difference between “education” and “schooling.” In grad school, in the mid-1980s, Professor Solomon Jaeckel, University of Hawaiʻi at Manoa, began his course with the question, “What is the difference between schooling and education?” And throughout the semester, whenever we hit the wall in discussions about issues in educational foundations, he brought up that refrain, “What is the difference between schooling and education?” We danced around it throughout the semester but never got his nod, and he never answered it for us. He once told us a joke about finding, scribbled on his classroom chalkboard before a final exam, “This, too, shall pass.” We all thought it referred to his tough course and exams, but now I’m thinking he meant the chalkboard, classroom, and college itself. In short, schooling becomes education when it takes on a broader meaning. -js

Image Created by ChatGPT
Continue reading

10 Critical Articles on AI in Higher Ed for Nov. 2025: ‘institutional cowardice’

By Jim Shimabukuro (assisted by Claude, Gemini, ChatGPT, and Grok)
Editor

[Also see earlier reports: Oct. 2025, Sep. 2025]

I asked Claude, Gemini, ChatGPT, and Grok to search for and select critical articles on AI in higher ed published in November 2025. Out of their selections, I chose and ranked the 10 best. -js

Video created by Meta.ai via image created by ChatGPT
Continue reading

NewsBites 2025 Nov 21: ‘customers of each other’

tsuzumi 2. “Traditional large language models require dozens or hundreds of GPUs, creating electricity consumption and operational cost barriers that make AI deployment impractical for many organisations…. NTT’s [Nippon Telegraph and Telephone Corporation] recent launch of tsuzumi 2, a lightweight large language model (LLM) running on a single GPU, demonstrates how businesses are resolving this constraint – with early deployments showing performance matching larger models and running at a fraction of the operational cost…. More significantly, on-premise deployment [Tokyo Online University] addresses data privacy concerns that prevent many educational institutions from using cloud-based AI services that process sensitive student information…. NTT’s tsuzumi 2 deployment demonstrates that sophisticated AI implementation doesn’t require hyperscale infrastructure – at least for organisations whose requirements align with lightweight model capabilities” (Dashveenjit Kaur, 20 Nov 2025).

Video created by Grok via image created by ChatGPT
Continue reading

NewsBites 2025 Nov 20: ‘agentic workflows’

To avoid tell-tale AIstyle in your writing, see “Wikipedia: Signs of AI writing” (tip from Russell Brandom, 20 Nov 2025). Warning signs: (1) Undue emphasis on symbolism, legacy, and importance. (2) Undue emphasis on notability, attribution, and media coverage. (3) Superficial analyses. (4) Promotional and advertisement-like language. (5) Didactic, editorializing disclaimers. (6) Section summaries. (7) Outline-like conclusions about challenges and future prospects. (8) Leads treating Wikipedia lists or broad article titles as proper nouns. This is just the tip of the AIstyle iceberg. For much more, see the Wikipedia article. -js

Video and image created by Grok
Continue reading

Musk and Huang at US-Saudi Forum 19 Nov 2025: an informal transcript

By Jim Shimabukuro
Editor

Introduction: The following informal transcript was grabbed off a YouTube video this afternoon, Nov 19, 2025. I relied on the audio and CC. I focused on Elon Musk’s and Jensen Huang’s talks. I omitted the introductions, host’s comments, and small talk. I didn’t have the time or resources to review and edit, so expect typos and possible errors. -js

Image created by ChatGPT
Continue reading

Critical UX Differences Between AI and Agentic-AI

By Jim Shimabukuro (assisted by ChatGPT)
Editor

JS: How is the UX (user experience) between AI and agentic-AI different?

ChatGPT: The following explains the UX differences between AI (non-agentic) and agentic AI, why they matter, and how they shift user expectations.

Video created by Grok via image created by ChatGPT
Continue reading