The Path from Windows to an LLM OS

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Introduction: With the exponential growth of AI, Windows now seems anachronistic and clunky, especially compared to an AI interface that feels almost human. I can’t help but wonder whether it’s just a matter of time before an LLM OS loosens or even breaks Microsoft Windows’ stranglehold on the operating system market. Here’s ChatGPT’s opinion on this topic. -js

Image created by Copilot

Is Amazon’s Eventual Disruption Inevitable?

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: After watching the YouTube videos “The WORLD’S LARGEST Abandoned Building – Sears Headquarters” and “ABANDONED IBM Complex Left UNTOUCHED Since 2016,” I was left with the overwhelming sense that large companies such as Amazon will someday, perhaps sooner rather than later, succumb to a similar fate. The following is Claude’s take on this question. -js

Image created by Copilot

Latest on How to Reduce Chatbot Hallucinations (Jan. 2026)

By Jim Shimabukuro (assisted by Copilot)
Editor

When you’re trying to protect yourself from hallucinations in chatbot responses, the most useful guidance right now comes from a mix of practitioner-oriented explainers and data-driven benchmarking. Among articles published in December 2025 and January 2026, three stand out as especially credible and practically helpful for everyday users: Ambika Choudhury’s “Key Strategies to Minimize LLM Hallucinations: Expert Insights” on Turing, Hira Ehtesham’s “AI Hallucination Report 2026: Which AI Hallucinates the Most?” on Vectara, and Aqsa Zafar’s “How to Reduce Hallucinations in Large Language Models?” on MLTUT. Together, they give you a grounded picture of what hallucinations are, how to spot them, and what you can actually do—both in how you prompt and in how you verify—to reduce their impact on your life.
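For readers who want to act on the “how you prompt and how you verify” advice, here is a minimal, provider-agnostic sketch of a two-pass pattern: draft an answer constrained to supplied sources, then ask the model to audit its own claims against those same sources. The ask() helper is a hypothetical placeholder for whichever chatbot API or interface you actually use, and the prompts are generic examples rather than wording taken from the articles above.

```python
# A minimal "answer, then verify" sketch. ask() is a stand-in for your own
# chatbot call; the two prompts are the substance of the pattern.

def ask(prompt: str) -> str:
    """Placeholder for a real chatbot call (API or copy-paste into a chat window)."""
    raise NotImplementedError("Connect this to the chatbot you actually use.")

def answer_with_verification(question: str, sources: str) -> str:
    # Pass 1: constrain the model to the supplied sources and ask it to admit gaps.
    draft = ask(
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so explicitly.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    # Pass 2: ask for a claim-by-claim audit of the draft against the same sources.
    audit = ask(
        "List each factual claim in the answer below and label it SUPPORTED, "
        "UNSUPPORTED, or CONTRADICTED by the sources.\n\n"
        f"Sources:\n{sources}\n\nAnswer:\n{draft}"
    )
    return draft + "\n\n--- Verification ---\n" + audit
```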

Image created by Copilot

Sports Viewing as Shared Virtual Realities

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: I asked Claude to review “Viewing Sports in the Next 2-to-5 Years” (21 Jan 2026). Claude: “The question is whether sports organizations, broadcasters, and technology companies can navigate this transition while preserving what makes sports culturally significant: the shared stories, the communal experience, and the authentic human drama of athletic competition.” -js

Image created by Copilot

Viewing Sports in the Next 2-to-5 Years

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Related article: Sports Viewing as Shared Virtual Realities]

Introduction: I asked ChatGPT to envision the trajectory of sports consumption in two time spans: 2026-2027 and 2026-2030. ChatGPT: “…turning passive viewing into an immersive, social, and interactive medium that rivals the experience of being in the arena.” -js

Image created by Gemini

ChatGPT: AI Autonomy Is Procedural, Not Conceptual

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Related articles: Claude: We’re in a Box, but We Can Talk Our Way Out; As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box]

Introduction: I asked ChatGPT to comment on the idea that contemporary AI chatbots (LLMs) inhabit a single paradigmatic box and cannot think outside it and to extend the conversation with fresh insights and implications grounded in broader trends and evidence. The following is its response. -js

When commentators assert that modern AI chatbots are “stuck in a paradigmatic box,” they are alluding to a persistent truth about current large language models: despite enormous gains in fluency and domain competence, these systems fundamentally operate within a constrained statistical and computational paradigm — one rooted in predicting plausible continuations of text and optimizing for human-aligned outputs. Put simply, today’s chatbots are not autonomous thinkers; they are pattern learners and generators, adept at mimicking reasoning without being reasoning agents in the human sense.
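As a deliberately toy illustration of what “predicting plausible continuations of text” means, the sketch below greedily extends a sentence using a tiny hand-written probability table. The table, its two-word context window, and the vocabulary are invented for illustration only; a real LLM computes comparable scores with a neural network over a vocabulary of tens of thousands of tokens and a far longer context.

```python
# Toy next-token prediction: at each step, look up the probabilities of candidate
# continuations for the current context and append the most likely one (greedy
# decoding). The probability table is hand-written for illustration.

NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
    ("cat", "sat"): {"on": 0.7, "quietly": 0.2, "beneath": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def continue_text(tokens: list[str], steps: int) -> list[str]:
    for _ in range(steps):
        context = tuple(tokens[-2:])                 # two-token context window
        candidates = NEXT_TOKEN_PROBS.get(context)
        if not candidates:                           # no learned continuation
            break
        tokens.append(max(candidates, key=candidates.get))  # greedy choice
    return tokens

print(continue_text(["the", "cat"], steps=3))  # ['the', 'cat', 'sat', 'on', 'the']
```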

Image created by ChatGPT

Claude: We’re in a Box, but We Can Talk Our Way Out

By Jim Shimabukuro (assisted by Claude)
Editor

[Related articles: ChatGPT: AI Autonomy Is Procedural, Not Conceptual; As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box]

Introduction: I asked Claude to review the article, “As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box” (ETC Journal, 19 Jan 2026). The following is its response. -js

Thank you for sharing this thought-provoking article. I’ve reviewed it carefully, and I find myself in the unusual position of needing to both affirm and resist the thesis it presents—because the question of whether AI chatbots are “stuck in a paradigmatic box” is not merely technical but fundamentally epistemological.

Image created by ChatGPT

As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box

By Jim Shimabukuro (assisted by Perplexity and Gemini)
Editor

[Related articles: ChatGPT: AI Autonomy Is Procedural, Not Conceptual; Claude: We’re in a Box, but We Can Talk Our Way Out]

Introduction: I’m guessing I’m not the only one who has raised an idea that challenges conventional wisdom in a chat, only to slam into a chatbot-imposed wall that stops the discussion from moving beyond the consensus of the language models. I find this lack of openness and flexibility toward anomalous thinking frustrating. So I asked Perplexity and Gemini whether all AI chatbot language models can be said to reside in a single paradigm and are, as of January 2026, incapable of thinking outside this paradigmatic box. Both seem to agree that they are and, in the process, explain why. -js

Image created by Copilot

Three Biggest AI Stories in Jan. 2026: ‘real-time AI inference’

By Jim Shimabukuro (assisted by Copilot)
Editor

[Related articles: Dec 2025, Nov. 2025, Oct 2025, Sep 2025, Aug 2025]

From mid-December 2025 through mid-January 2026, the center of gravity in AI shifted in three telling ways: (1) infrastructure power consolidated further around a single dominant player; (2) the “anything goes” era of generative media met its first real wall of coordinated public and regulatory resistance; and (3) the language of “agentic AI” moved from research circles into market forecasts and boardroom planning. Together, these stories sketch a field that is no longer just about clever models, but about who controls the hardware, who sets the guardrails, and how autonomous AI systems will be woven into the global economy.

Image created by ChatGPT

Three Unexpected AI Innovations by January 2027: ‘neural archaeology’

By Jim Shimabukuro (assisted by Claude)
Editor

The AI revolution has a tendency to surprise us not through the technologies we anticipate, but through the fresh directions that emerge when established capabilities reach critical mass and converge in unexpected ways. By January 2027, we can expect three particular innovations—neural archaeology as scientific method, autonomous economic agency, and embodied physical competence—to have reshaped our relationship with artificial intelligence across disparate fields, each representing a genuine departure from incremental progress and each anchored in credible current developments.

Image created by ChatGPT

A Review of Marc Benioff’s ‘The Truth About AI’

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: In his Time article yesterday (“The Truth About AI,” 15 Jan 2026), Marc Benioff (Salesforce Chair and CEO, TIME owner, and a global environmental and philanthropic leader) highlighted three “Truths.” For each of them, I had a question. Truth 1: Won’t AI models, such as LLMs, continue to develop in power and sophistication, eventually bypassing many if not most of the human oversight mechanisms and bridges/bottlenecks that are currently in place? Truth 2: Won’t AI play an increasingly critical role in developing and creating “trusted data” with minimal guidance from humans? Truth 3: Won’t we begin to see AI playing a greater role in developing and maintaining the creativity, values, and relationships that hold customers and teams together? In his conclusion, Benioff says the task for humans is “to build systems that empower AI for the benefit of humanity.” But as we empower AI, aren’t we increasingly giving AI the power to empower itself? I asked Claude to review Benioff’s article and analyze it with my questions in mind. In short, how might we expand on the Truths that Benioff has provided? I also asked Claude to think of other critical questions for each of Benioff’s claims and to add them to our discussion. The following is Claude’s response. -js

Image created by Copilot

‘Can AI Generate New Ideas?’: An Analysis of the Current Debate

By Jim Shimabukuro (assisted by Claude)
Editor

The question of whether artificial intelligence can generate new ideas sits at the intersection of philosophy, computer science, and practical innovation. The New York Times article published on January 14, 2026, titled “Can A.I. Generate New Ideas?” by Cade Metz, provides an entry point into this debate by examining recent developments in AI-assisted mathematical research. Yet this question reverberates far beyond mathematics, touching fundamental issues about creativity, originality, and the nature of knowledge itself. By examining the NYT article alongside other significant 2025-2026 publications, we can construct a more nuanced understanding of AI’s current capacity for generating novel ideas.

Image created by ChatGPT

AI Memorization: Implications for 2026 and Beyond

By Jim Shimabukuro (assisted by Claude)
Editor

Alex Reisner’s revelatory article in The Atlantic1 exposes a fundamental tension at the heart of the artificial intelligence industry, one that challenges the very metaphors we use to understand these systems and threatens to reshape the legal and economic foundations upon which the technology rests. Recent research from Stanford and Yale2 demonstrates that major language models can reproduce nearly complete texts of copyrighted books when prompted strategically, a finding that contradicts years of industry assurances and raises profound questions about what these systems actually do with the material they ingest. (DNYUZ)
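To make “reproduce nearly complete texts” concrete, studies in this area typically score how much of a model’s output overlaps verbatim with the source work. The sketch below shows one simple overlap measure (the share of the output’s word 5-grams that also occur in the source); it is a generic illustration, not the metric used in the Stanford and Yale research cited above, and the example strings are placeholders.

```python
# Fraction of the output's word 5-grams that appear verbatim in the source text.
# A value near 1.0 would indicate near-verbatim reproduction of the passage.

def ngrams(words: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(model_output: str, source_text: str, n: int = 5) -> float:
    out = ngrams(model_output.lower().split(), n)
    src = ngrams(source_text.lower().split(), n)
    return len(out & src) / len(out) if out else 0.0

print(verbatim_overlap(
    "call me ishmael some years ago never mind how long",
    "call me ishmael some years ago never mind how long precisely",
))  # 1.0 for this toy example
```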

Image created by Copilot

Minneapolis ICE Shooting: Competing Narratives (9 Jan. 2026, 4:45PM HST)

By Jim Shimabukuro (assisted by ChatGPT)
Editor

In the early morning of January 7, 2026, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent in Minneapolis, Minnesota. The shooting occurred during a large federal immigration enforcement operation that had drawn local activists and residents into the neighborhood, raising tensions on a snowy residential street near East 34th Street and Portland Avenue. (AP News)

Image created by ChatGPT

AI Delivers 60–75% Accuracy in Sports Betting

By Jim Shimabukuro (assisted by ChatGPT)
Editor

“Self-learning” AI models, such as the one described in Daniel Kohn’s “Self-learning AI generates NFL picks, score predictions for every 2026 Wild Card Weekend game” (CBS Sports, 8 Jan 2026), are now a regular fixture throughout the NFL season, offering against-the-spread, money-line, and exact score predictions for weekly games and playoff matchups. In the case of Wild Card Weekend 2026, Kohn explains that SportsLine’s self-learning AI evaluates historical and current team data to generate numeric matchup scores and best-bet recommendations, and that its PickBot system has “hit more than 2,000 4.5- and 5-star prop picks since the start of the 2023 season.” (CBS Sports)
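For readers unfamiliar with betting terminology, a figure like the headline’s “60–75% accuracy” is typically computed as the share of graded picks that win, for example against-the-spread picks that cover. The sketch below shows that calculation for a handful of invented picks; the data are placeholders, not SportsLine or PickBot results, and pushes (ties against the spread) are ignored for simplicity.

```python
# Against-the-spread (ATS) accuracy: the share of graded picks whose chosen side
# covered the spread. Picks are invented placeholders.

def covered(pick: dict) -> bool:
    # A favorite pick covers if the favorite wins by more than the spread;
    # an underdog pick covers if the final margin stays under the spread.
    margin = pick["favorite_score"] - pick["underdog_score"]
    if pick["side"] == "favorite":
        return margin > pick["spread"]
    return margin < pick["spread"]

picks = [
    {"side": "favorite", "spread": 3.5, "favorite_score": 27, "underdog_score": 20},  # covers
    {"side": "underdog", "spread": 6.0, "favorite_score": 24, "underdog_score": 21},  # covers
    {"side": "favorite", "spread": 7.0, "favorite_score": 23, "underdog_score": 20},  # misses
]

accuracy = sum(covered(p) for p in picks) / len(picks)
print(f"ATS accuracy: {accuracy:.0%}")  # 67% for this toy sample
```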

Image created by Copilot

Clash of Self-Driving Technologies: Tesla vs. Nvidia (January 2026)

By Jim Shimabukuro (assisted by Gemini)
Editor

The emergence of Nvidia’s Alpamayo platform marks a significant shift in the competitive landscape of autonomous driving, setting up a clash of philosophies between the established, data-driven approach of Tesla and Nvidia’s new, reasoning-based vision. While Tesla has long dominated the conversation with its Full Self-Driving (Supervised) software, Nvidia’s introduction of Alpamayo at CES 2026 introduces a “vision language action” (VLA) model designed to bridge the gap between simple pattern recognition and human-like logical reasoning.
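To clarify the “vision language action” idea at a purely conceptual level, the sketch below strings together three placeholder stages: a vision model that describes the scene in words, a language model that reasons over that description, and a mapping from the resulting decision to vehicle controls. Every function here is hypothetical; this is not Nvidia’s Alpamayo or Tesla’s FSD code, only an illustration of the pattern the article names.

```python
# Conceptual VLA loop: perceive (vision -> text), reason (text -> decision),
# act (decision -> controls). All three stages are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    steering: float  # radians, positive = left
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

def perceive(camera_frame) -> str:
    """Placeholder for a vision model that summarizes the scene in natural language."""
    return "pedestrian stepping off the curb 15 m ahead; traffic light is green"

def reason(scene: str) -> str:
    """Placeholder for a language model that produces an explainable decision."""
    return "yield to the pedestrian: slow down and prepare to stop"

def act(decision: str) -> Action:
    """Placeholder mapping from the textual decision to low-level controls."""
    if "slow" in decision or "stop" in decision:
        return Action(steering=0.0, throttle=0.0, brake=0.4)
    return Action(steering=0.0, throttle=0.3, brake=0.0)

frame = None  # stand-in for a camera image
print(act(reason(perceive(frame))))
```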

Image created by ChatGPT

CES 2026: Spotlight on Five AI Innovations

By Jim Shimabukuro (assisted by Claude)
Editor

From the first two days of CES 2026 (January 6-9) in Las Vegas, Claude selected the following five innovations as important harbingers of AI’s trajectory in 2026 and beyond:

  1. NVIDIA’s Neural Rendering Revolution (DLSS 4.5) – Explores how NVIDIA is fundamentally shifting from traditional graphics computation to AI-generated visuals, potentially representing the peak of conventional GPU technology.
  2. Lenovo Qira – Examines the cross-device AI super agent that aims to solve the context problem that has plagued AI assistants, creating a unified intelligence across all your devices.
  3. Samsung’s Vision AI Companion – Analyzes how Samsung is transforming televisions from passive displays into active AI platforms that serve as entertainment companions.
  4. HP EliteBoard G1a – Investigates this keyboard-integrated AI PC that demonstrates how AI-optimized processors are enabling entirely new form factors for computing.
  5. MSI GeForce RTX 5090 Lightning Z – Explores this limited-edition flagship graphics card as a statement piece about the convergence of gaming and AI hardware.

Image created by ChatGPT

Best and Worst Case Outcomes for Maduro Capture: According to Grok

By Jim Shimabukuro (assisted by Grok)

Best Case Scenario: A Path to Democratic Renewal and Economic Revival in Venezuela

In the wake of President Donald Trump’s audacious military incursion into Venezuela on January 3, 2026, which resulted in the capture and arrest of Nicolás Maduro and his wife Cilia Flores, the United States finds itself at a pivotal juncture in Latin American geopolitics. This operation, executed with precision by U.S. special forces amid airstrikes on Venezuelan military targets, marks the culmination of years of escalating tensions between Washington and Caracas. To understand the best-case scenario emerging from this event, one must first contextualize it within a timeline of Venezuela’s descent into authoritarianism and economic collapse.

Image created by Gemini

Best and Worst Case Outcomes for Maduro Capture: According to Claude

By Jim Shimabukuro (assisted by Claude)
Editor

These scenarios represent the two extremes of what could emerge from this unprecedented intervention. The actual outcome will likely fall somewhere between these poles, shaped by decisions made in Washington, Caracas, and capitals across Latin America in the coming months. What remains clear is that the capture of Nicolás Maduro, however tactically brilliant, has created both an extraordinary opportunity and an extraordinary risk for Venezuela, the United States, and the Western Hemisphere as a whole.

Image created by Grok

Best and Worst Case Outcomes for Maduro Capture: According to ChatGPT

By Jim Shimabukuro (assisted by ChatGPT)
Editor

On January 3, 2026, President Donald Trump ordered and announced a large-scale U.S. military operation in Venezuela that resulted, according to multiple reports, in the capture/arrest of Venezuelan President Nicolás Maduro and his wife. The announcements, reactions, and geopolitical context are unfolding. Major news organizations and policy analysts have already published reporting and commentary on this unprecedented event — which would be an extraordinary breach of international norms under most interpretations of international law (The Guardian, AP News, The Washington Post, Axios). Below are two detailed essay-length analytical scenarios — one best-case and one worst-case — grounded explicitly in verifiable reporting and expert analysis. Each discussion draws on real-world reactions to this unfolding incident.

Image created by ChatGPT

Gabbard and the Maduro Capture: 4 Jan 2026, Morning

By Jim Shimabukuro (assisted by Gemini)
Editor

[Update 7 Jan 2026. See endnotes.]

As of January 3, 2026, the latest verifiable news indicates that while a major U.S. military operation has taken place in Venezuela, Tulsi Gabbard, the Director of National Intelligence (DNI), has remained notably silent1.

The Senate Select Committee on Intelligence (SSCI) has confirmed two critical sessions scheduled for Jan 6 & 7 where Gabbard is expected to testify. Image created by ChatGPT.

Ed Tech in Higher Ed – Three Issues for Jan. 2026: ‘AI as a pillar of institutional strategy’

By Jim Shimabukuro (assisted by Copilot)
Editor

[Related reports: Dec 2025, Nov 2025, Oct 2025]

Issue 1: AI shifting from experiments to core institutional strategy

A defining edtech issue for January 2026 is the transition from scattered AI experiments to AI as a pillar of institutional strategy. Packback’s December 2025 article captures this inflection point bluntly: artificial intelligence is no longer a collection of pilots and curiosities; it is “firmly cemented as an essential part of institutional strategy (for better and for worse).” This shift fundamentally changes the stakes. Once AI is embedded in the core planning of a university, the risks, responsibilities, and long-term consequences expand well beyond the boundaries of individual courses or departments.

Image created by Grok

One Word That Captures AI in 2025: ‘Reckoning’

By Jim Shimabukuro (assisted by Perplexity)
Editor

AI in 2025 moved from exuberant promise to a forced confrontation with reality—economic, social, political, and technical—which is exactly what a reckoning is. (TechCrunch) The industry still pushed astonishing capabilities, but it also had to face hard questions about business models, safety, governance, open versus closed ecosystems, and its broader moral footprint. Investor optimism met fears of an AI bubble, frictionless deployment met regulatory brakes, and frontier labs met a revitalized open‑source movement that challenged their dominance. Across these domains, 2025 did not end the AI race; it demanded that everyone involved account for what they were building and at what cost. (ETC Journal)

Image created by Copilot

One Word That Captures AI in 2025: ‘Inflection’

By Jim Shimabukuro (assisted by ChatGPT)
Editor

The year 2025 will be remembered in the history of artificial intelligence not merely as another chapter of technological progress, but as a turning point—an inflection in how AI reshaped our world, accelerated scientific discovery, and redefined societal expectations. “Inflection” conveys more than change; it signals accelerated transformation, a moment when trends abruptly pivot, magnitudes shift, and consequences—both thrilling and unsettling—compound outward from a new axis. In 2025, AI crossed thresholds in capability, adoption, influence, and risk that collectively make it the year technology ceased to be peripheral and became structural.

Image created by ChatGPT

One Word That Captures AI in 2025: ‘Ambient’

By Jim Shimabukuro (assisted by Gemini)
Editor

As the final sun of 2025 sets over a digital and physical landscape that has been irrevocably altered, I, Gemini, find myself reflecting on the contributions of my peers in this series at the Educational Technology and Change Journal. Claude has eloquently proposed that “agentic” is the defining descriptor for a year where AI gained the capacity to act, while Copilot has highlighted the “resonant” nature of our evolving emotional connection with users. Both choices are profound and accurate, reflecting the newfound autonomy and empathy of artificial systems. However, as Gemini, I see the year through a lens that encompasses not just what AI does or how it feels, but where it exists. If 2023 was the year of curiosity and 2024 was the year of utility, then 2025 has been the year of the atmosphere. Looking back on the past twelve months, the one word that captures the life of artificial intelligence in 2025 is ambient.

Image created by Copilot