Collaborative Authorship in Popular Fiction: Clancy, Cussler, Michener, et al.

By Jim Shimabukuro (assisted by Claude)
Editor

James A. Michener: The Architect of Collaborative Epic Fiction

Few novelists of the twentieth century achieved the commercial and cultural reach of James Albert Michener (1907–1997). Born a foundling in Doylestown, Pennsylvania, and raised in Quaker poverty, he went on to sell an estimated 75 million copies of his books worldwide, winning a Pulitzer Prize for his debut story collection Tales of the South Pacific (1947) and producing a string of decade-defining blockbusters—Hawaii, The Source, Centennial, Chesapeake, The Covenant, Poland, and Texas, among many others—each an immersive survey of a region’s geology, history, culture, and people across sweeping time frames.1,6 His novels were usually massive in scope, several running more than a thousand pages, and each was grounded in exhaustive research that could take years to complete.1

Image created by Copilot

The question of how Michener sustained this prodigious output—and what help he enlisted along the way—has been a subject of quiet controversy for decades. The short answer is that he did not work alone. For his major saga novels, particularly from the 1970s onward, Michener employed researchers and research assistants, brought in subject-matter experts, and, in at least one well-documented case, worked side by side with a co-author who contributed original material to the manuscript. In his author’s notes, Michener routinely acknowledged the people who assisted him, though critics have argued those acknowledgments—brief and carefully worded—deliberately understated the depth of their contributions.2,5

The most extensively documented case is that of Errol Lincoln Uys, a South African journalist and editor, who worked with Michener from 1978 to 1980 on The Covenant, a six-hundred-page exploration of South African history. Uys has written at length about the collaboration, maintaining that he plotted the book with Michener, conducted the primary research, and wrote original sections of the manuscript—sections that Michener retyped and incorporated directly into his own drafts. In Michener’s published acknowledgment, the credit given to Uys amounted to the observation that the two had “read the finished manuscript seven times, an appalling task,” and a thank-you for his “assistance.”2,10 According to Uys, this framing drastically misrepresented his contribution. Stephen J. May, who wrote the 2005 biography Michener: A Writer’s Journey, examined the archival evidence and concluded that Michener had committed a literary transgression by obscuring the work of collaborators while publicly insisting—as he did on multiple occasions—that he wrote every word of his books and did all the research himself.5

One defender of Michener’s approach, writing in The Federalist, offered a more charitable assessment: although the collaborators’ names do not appear on the cover, Michener acknowledged his coworkers in the front matter and paid them well, and the books are not ghostwritten in the dismissive, work-for-hire sense.8 This distinction—between a ghostwritten book in which a named author contributes nothing to the manuscript and a collaborative enterprise in which one creative director sets vision while others execute research and drafts—is one that recurs throughout the history of popular fiction. Whether Michener navigated it ethically is a judgment that depends on how one weighs his public statements against his private practices.

What is beyond dispute is that the Michener method produced books of extraordinary commercial power and genuine cultural influence. Centennial became a twelve-part NBC miniseries. Hawaii shaped how a generation of Americans understood the Pacific. The Source remains one of the most widely read popular accounts of the archaeology and religion of the ancient Near East.1,6 Whether these books would have been qualitatively superior had they been written in a more solitary fashion is, at best, unanswerable. What they were, definitively, was large, ambitious, well-researched, and widely read.

The criticism of Michener from the literary establishment, however, was rarely about his use of assistants per se. It was largely about style and genre. Literary critics in academic and avant-garde circles dismissed his encyclopedic approach as “word-pabulum,” his prose as functional rather than artistic, and his structural ambitions as the enemy of the lyric sentence.8 This was the conventional wisdom of the 1970s and 1980s in American literary culture, and it meant that Michener occupied a peculiar position: enormously popular with readers, largely scorned by critics. His collaborative methods were, in this context, a secondary grievance—confirmation, to his detractors, that he was a producer of content rather than an artist. That verdict, softened over time by readers who have rediscovered his epic storytelling, has never fully disappeared.

Tom Clancy: The Techno-Thriller as Industrial Enterprise

Thomas Leo Clancy Jr. (1947–2013) represents a different, and in some ways more complicated, model of collaborative fiction. Clancy began his career as a genuinely solo author. The Hunt for Red October (1984), written on his own while he ran an insurance agency in Owings, Maryland, was a singular act of imagination and self-education—a debut novel purchased by the Naval Institute Press for $5,000 that became a national bestseller after President Ronald Reagan publicly praised it, eventually selling 300,000 hardcover copies and two million in paperback.11 Patriot Games, Clear and Present Danger, and The Sum of All Fears followed, all written solely by Clancy, all bestsellers. Seventeen of his novels in total would reach the top of the New York Times bestseller list, and more than 100 million copies of his books were eventually sold.11

But as the Clancy brand expanded in the 1990s into franchise novels, multimedia properties, and film adaptations, the author’s role shifted from sole craftsman to something closer to a creative director. His early collaborator Larry Bond worked with him on Red Storm Rising; as the franchise grew, a series of co-authors and ghostwriters took on increasingly central roles.16 Jeff Rovin, a prolific professional writer, has stated in interviews that he worked on approximately seventeen or eighteen books under the Clancy imprint beginning in 1994, including the Op-Center and other series.15 Grant Blackwood co-wrote Dead or Alive (2010) and several others; Mark Greaney collaborated on Clancy’s final three novels and subsequently continued the Jack Ryan universe after Clancy’s death in 2013, with Greaney’s name appearing in noticeably smaller type than Clancy’s on the covers.11,19

Jerome Preisler, who co-wrote eight novels with Clancy, gave an account of the working conditions to the Paris Review that illuminates the pressures of franchise fiction writing: ten-month deadlines with no flexibility, pre-sold holiday releases, and no room for factual error in the technical details of military technology, ballistics, and geography.13 As one observer summarized it, Clancy was stubborn and resistant to editorial input in his early career, but as his star rose he added a collaborative component to his work, with many writers receiving co-writer bylines.20

The transparency of the Clancy collaborations varied considerably. The franchise series—Op-Center, NetForce, EndWar—were understood by informed readers to be largely ghostwritten, with Clancy’s name functioning as a brand umbrella rather than an authorial signature. The main-sequence Jack Ryan novels occupied a different position: Clancy’s name dominated the cover in very large type, while the co-author’s name appeared in what multiple commentators described as “microscopic” font.16,19 Greaney himself acknowledged the dynamic with considerable grace, saying that the Clancy name is “one thing you can put on your book that will make it stand out from across the room,” and that the credit arrangement was ultimately the publisher’s commercial decision.19

Critical reception of the Clancy franchise books, compared to his solo work, was mixed in instructive ways. The solo early novels—The Hunt for Red October, Patriot Games—were widely praised for their procedural authenticity, propulsive plotting, and the novelty of the techno-thriller genre that Clancy had essentially invented.11 The franchise books attracted a more skeptical reception, with readers noting inconsistencies of style and quality between titles written by different ghostwriters, and critics pointing to the commercial logic of the enterprise as evidence that authorship had become, in Clancy’s case, primarily a matter of intellectual property licensing rather than literary creation. The brand, as one commentator put it, “did for military pop-lit what Starbucks did for the preparation of caffeinated beverages”: it created a sprawling, profitable enterprise that served and cultivated a large consumer base.11

Whether the collaborative novels were artistically inferior to Clancy’s solo work is a question the market settled in a particular way: they sold by the millions and the franchise continues to publish new titles—under Clancy’s name—more than a decade after his death. Marc Cameron took over the Jack Ryan series from Greaney in 2016 and has continued producing bestsellers.19 The authorial brand has proved more durable than the author himself, which is, in a sense, the most revealing fact about how the Clancy collaborative enterprise functioned.

James Patterson: Author as Showrunner

If Tom Clancy moved from solo authorship into collaboration gradually, and somewhat opaquely, James Patterson represents a model in which collaboration is openly central to the author’s identity as a creative producer. Patterson has sold more than 300 million books, holds a Guinness World Record for the most number-one New York Times bestsellers, and his books have at various points accounted for approximately six percent of all hardcover novels sold in the United States.21,25 He has published more than 200 novels, many of them co-authored, and employs at any given time a rotating stable of co-writers who work across his numerous series.17,25

Patterson’s method is distinctive and has been described by multiple analysts as resembling a television showrunner’s operation more than traditional novel-writing.25 He begins each book with an exhaustive outline, often fifty to eighty pages long, that maps every plot point, character arc, and major twist. He considers the outline to be, in essence, the book itself—the intellectual and creative core of the work.25,26 He then assigns the outline to a co-author who drafts the prose manuscript. Patterson reviews, rewrites, and finalizes the manuscript before publication. The co-author’s name appears on the cover alongside Patterson’s, though Patterson’s name is considerably larger.26

Patterson has been candid about his approach. He has acknowledged in interviews, including a segment on 60 Minutes, that he is not necessarily the strongest prose stylist and that his skill lies in the construction of stories—in generating ideas, designing narrative architecture, and producing plots with propulsive energy.23 He has defended the co-authorship arrangement on both commercial and ethical grounds: co-authors receive substantial exposure that advances their own careers, and the books are consistently what readers of a Patterson novel expect—fast, entertaining, twist-laden, and short-chaptered.26

The critical response to the Patterson model has been sharp, persistent, and divided along predictable lines. Literary critics and writers’ community voices have accused him of functioning more as a brand manager than an author—of treating fiction as a commodity to be manufactured rather than a work of art to be created.21 Some have gone further, arguing that his method devalues the craft of writing, reduces prose to a formulaic assembly operation, and misleads readers who may believe they are reading work primarily written by the named author.28 Others, including Patterson’s vast readership, are largely indifferent to the question of who typed the sentences, provided the result delivers the reading experience they seek.

From one analytic perspective, the Patterson operation is simply the logical extension of popular entertainment economics: a trusted brand delivering a reliable product at high volume. The television analogy is apt—viewers of a long-running series rarely expect every episode to be written by the show’s creator, and do not generally experience the result as inauthentic. Whether the same logic applies to the novel, a form historically associated with the singular vision and voice of an individual author, is a genuinely contested question that Patterson’s success has placed at the center of commercial publishing.23,25 What is difficult to dispute is that the books deliver what they promise, that co-authors get acknowledged by name, and that the enterprise has produced consistent bestsellers for more than three decades.

Clive Cussler: The Adventure Brand and Its Expanding Universe

Clive Cussler (1931–2020) occupied a position in adventure fiction comparable to Michener’s in historical epic and Clancy’s in the military thriller. His hero Dirk Pitt—a marine engineer and government agent for the fictional NUMA (National Underwater and Marine Agency)—anchored seventeen solo novels before Cussler began expanding into co-authored series that eventually encompassed five distinct franchises, each with its own co-author or rotating team of co-authors.31 He had seventeen consecutive titles on the New York Times fiction bestseller list, and his total output exceeded eighty books.31

Cussler’s collaborative model differed from Patterson’s in that he described it as more genuinely interactive. In interviews, he explained that co-authors would write the first fifty to a hundred pages of a manuscript and send them to him; he would make changes and return them, and the process would continue iteratively through the full draft.39 His primary collaborator for the Dirk Pitt series from 2004 onward was his son, Dirk Cussler, who had spent his career in finance before his father asked him to help continue the franchise. Cussler said he named the fictional Dirk Pitt after his son, who as a three-year-old had fallen asleep listening to him type.39 Paul Kemprecos co-authored eight novels in the NUMA Files series; Jack Du Brul worked on the Oregon Files series; Justin Scott wrote most of the Isaac Bell novels; Boyd Morrison and Grant Blackwood contributed to the Fargo Adventures.37,40

Kemprecos offered a detailed account of his working relationship with Cussler in a blog post for the crime fiction community, describing the collaboration as genuinely mutual: Cussler supplying the narrative instinct and the adventurous tone, Kemprecos contributing plotting solutions and prose.33 Cussler himself acknowledged a recurring tension: co-writers “tend to overwrite,” he noted, while his own instinct was always toward accessibility. “I want it to be easy to read,” he said. “I’m not writing exotic literature.”39

Reader reception of the Cussler collaborative novels was generally accepting, though not uncritical. Some longtime fans noticed variation in tone and style across the different series, and at least one reader reported feeling so confused by the quality inconsistencies that he was compelled to research whether “Clive Cussler” might be a pen name for a collective of writers.35 After Cussler’s death in February 2020, the continuation of the franchise by the various co-authors raised questions about whether the books retained the essential elements of Cussler’s appeal.32 The consensus among fans seems to be cautious acceptance: the established authors know the characters and the genre conventions, and Dirk Cussler’s involvement in particular is seen as lending authenticity to the continuation of his father’s most beloved series.32

The critical establishment never treated Cussler as a major literary figure; his novels were reviewed, when reviewed at all, as the adventure entertainment they were designed to be—more Indiana Jones than Henry James. Within those expectations, the collaborative books performed well. The franchise has proved durable enough to sustain multiple series simultaneously, with multiple co-authors, long after the founding author’s death. Whether that constitutes a tribute to the power of the original vision, the competence of the co-authors, or the loyal inertia of an established readership is, like most such questions, probably all three.

Collaboration, Quality, and the Question of Authenticity

Across the four cases examined here—Michener, Clancy, Patterson, and Cussler—several patterns recur that illuminate the broader phenomenon of collaborative popular fiction. First, the collaborative approach is most commonly adopted, or most heavily relied upon, at the point where success creates a demand that no single writer can meet alone: the expectation of a book a year, or more, while maintaining the research depth and plot construction that made the brand valuable in the first place.15,22 Second, the quality question is almost universally resolved in the market rather than in criticism: readers continue to buy the collaborative books in large numbers, which is, at minimum, evidence that the experience they provide is consistent with the expectations the brand has established.

Third, the critical community’s objection to collaborative fiction is rarely straightforwardly about quality; it is more often about authenticity—about the cultural contract between author and reader that implies a singular human consciousness behind the work. When that assumption is disrupted, as it is in all four of these cases, critics tend to respond with a mixture of commercial dismissal and ethical unease.14,58 The ghostwriting industry, for its part, has observed that homogenized, airport-biography-style prose is a common artifact of ghostwriting, in that skilled professional writers produce competent and readable but rarely distinctive work.41 This observation applies with varying force to the fictional cases: Clancy’s solo novels have a voice and energy noticeably different from some of the franchise books; Michener’s prose, for all its much-criticized density, has a particular texture that not every research assistant could replicate.

At the same time, there is a credible counterargument that collaborative fiction, when the originating author’s vision is strong and clearly communicated, can produce work that is genuinely entertaining and culturally significant. The evidence from Michener’s oeuvre—which changed how many Americans understood Hawaii, South Africa, Poland, and the Chesapeake Bay—suggests that the collaborative method need not produce work that is somehow lacking in its social and intellectual ambitions. What it may sacrifice in individual literary voice, it can compensate for in scope, research depth, and narrative reach. Whether that trade-off is worthwhile depends on what one believes popular fiction is for.

Collaborative Nonfiction: Legitimacy, Controversy, and the Ghost in the Acknowledgments

The use of research assistants, co-writers, and ghostwriters in nonfiction has an even longer history than in popular fiction, and a considerably more complex ethical landscape. In nonfiction, the stakes around authorial attribution are heightened by the fact that the reader’s trust in the named author’s credibility—as a reporter, historian, or expert—is part of what makes the book valuable. When that trust is extended to work substantially produced by others, the ethical questions become acute.

Bob Woodward, whose investigative reporting on Watergate with Carl Bernstein produced one of the most consequential pieces of American journalism and then one of its most famous nonfiction books, has authored or co-authored fourteen number-one national bestselling nonfiction books.48 Less widely known is that Woodward has long employed full-time research assistants whose contributions extend well beyond fact-checking. Barbara Feinman Todd, who worked as Woodward’s researcher from 1983 onward, has described working on Veil: The Secret Wars of the CIA and noted that her collaboration with Woodward set her on the path to becoming one of Washington’s preeminent ghostwriters.65,69 She subsequently became the primary writer on Hillary Clinton’s It Takes a Village—a collaboration that generated public controversy when Clinton’s acknowledgment of Feinman Todd’s contribution was omitted from the book, which Feinman Todd has described as a professional humiliation.69

The cases of historians Stephen Ambrose and Doris Kearns Goodwin represent a different and more troubling dimension of collaborative nonfiction: not ghostwriting, but the inadequate attribution of source material. Goodwin, whose presidential biographies—The Fitzgeralds and the Kennedys, No Ordinary Time, and Team of Rivals—made her among the most celebrated historians of her generation, was found in 2002 to have borrowed passages from other writers’ works without proper acknowledgment.63 The Weekly Standard identified dozens of passages in her Kennedy family biography that closely mirrored language in other published works. Her defense—that the passages had been inadequately footnoted rather than deliberately plagiarized—satisfied some commentators and not others.70 Ambrose, accused of similar failures around the same time, made comparable defenses with comparable mixed results. Both cases illustrate how the norms of historical collaboration—which routinely involve research teams, assistants, and the synthesis of vast secondary literature—create conditions in which proper attribution can become a matter of meticulous practice rather than obvious ethics.70

The broader nonfiction ghostwriting industry has, in recent years, moved from a posture of secrecy to one of increasing transparency.64 Literary agents report that the demand for professional collaborators is growing, and that the stigma once attached to ghostwriting has substantially diminished.64 Collaborators who previously worked entirely in the shadows—credited in the acknowledgments as “researchers” or not credited at all—are increasingly given “with” credits or even co-author status.62,64 The Association of Ghostwriters’ 2025 industry report noted that the demand for human ghostwriting services increased again after a dip in 2024, in part because authors who had hoped to use AI as a substitute were recognizing its limitations.44 The premium for human-written content, the report suggested, was likely to increase as AI-generated material became ubiquitous.46

The New Collaborator: AI Writing Tools and the Future of Authorship

The use of human collaborators in popular fiction and nonfiction has been a growing and increasingly acknowledged practice for decades. The emergence of AI writing tools—most prominently large language models such as ChatGPT and Claude—in the 2022–2025 period has introduced a new dimension to the collaboration question, one that differs from human ghostwriting in both its capabilities and its ethical implications.51,55

AI writing tools have been adopted by writers across a wide spectrum of purposes: research assistance, idea generation, outline development, first-draft production, grammatical editing, and marketing copy.51,55 For creative writers specifically, the tools offer obvious efficiency advantages—they can accelerate the mechanical aspects of drafting and help writers break through blocks—while raising persistent questions about originality, authenticity, and the integrity of human storytelling.51 AI writing tools saw a roughly three-hundred-percent surge in adoption between 2023 and 2024, with platforms like Sudowrite and Novelcrafter becoming established tools in some writers’ practices.56

However, the specific capabilities and limitations of AI as a fiction collaborator differ substantially from those of a skilled human ghostwriter or research assistant. Where human collaborators bring domain expertise, cultural immersion, and narrative judgment, AI tools excel at pattern recognition, rapid synthesis, and generating plausible-sounding prose.54 What they struggle with—as multiple studies and practitioner accounts have documented—is thematic consistency across complex narrative structures, genuine character development, authentic emotional resonance, and the cultural nuance that comes from lived experience.54,56 As one practitioner noted, AI can describe emotions but struggles to evoke them genuinely.54 A recurring critique from experienced fiction readers is that AI-generated prose satisfies non-readers while failing to satisfy those who read widely and well.54

The ethical questions raised by AI collaboration in writing are both similar to and distinct from those raised by human ghostwriting. On one hand, the fundamental issue—who actually wrote the work attributed to a named author?—is familiar from the Michener, Clancy, Patterson, and Cussler cases. On the other hand, AI introduces new concerns: the potential for homogenization of creative output, the copyright status of AI-generated material, the use of writers’ published work in AI training without consent or compensation, and the market flooding by low-quality AI-generated books that threaten both readers’ trust and legitimate authors’ incomes.58,59,60 The Authors Guild and a group of nonfiction writers separately sued OpenAI and Microsoft in 2023 for the allegedly unauthorized copying of copyrighted books for AI training purposes.60 The Writers Guild of America strike of 2023 included explicit demands to limit AI’s role in screenwriting, reflecting the creative community’s broad anxiety about labor displacement.58

A notable difference between human collaborative fiction and AI-assisted writing is the question of transparency. When Patterson lists a co-author’s name on the cover, or when Michener thanks researchers in his author’s note, there is at least a public record—however incomplete or euphemistic—of the collaborative nature of the work. AI-assisted writing currently exists in a regulatory vacuum regarding disclosure: there is no industry-wide standard requiring authors to inform readers that a language model contributed to the manuscript.55,58 The Association of Ghostwriters has predicted that the market will bifurcate into a premium segment of verified-human writing and a lower-end segment that relies heavily on AI, with publishers and ghostwriting clients increasingly demanding that writers avoid AI in content generation.46

The most thoughtful practitioners, across multiple accounts, have settled on a framework in which AI functions as a collaborator rather than a replacement—a powerful tool in the hands of a skilled writer who provides the creative direction, thematic vision, and experiential grounding that the AI cannot supply.55,57 In this sense, the AI collaborator occupies a position analogous to that of the research assistant in the Michener model, or the prose-drafter in the Patterson model: a capable executor of tasks defined and directed by a human creative intelligence. The crucial difference is that AI’s contributions are not acknowledged at all—there is no superscript number pointing to a credit for Claude or ChatGPT in the author’s note. Whether that absence is ethical, commercially practical, or simply a reflection of current conventions is a question the publishing industry has not yet resolved.44,51,58

Conclusion: The Collaborative Tradition and Its Discontents

The history of collaborative authorship in popular fiction and nonfiction is neither a scandal to be exposed nor a practice to be uncritically celebrated. It is, rather, a longstanding and expanding feature of the publishing industry that reflects the economic realities of the market, the practical limits of individual human capacity, and the evolving norms of what authorship means.

From Michener’s minimizing of his collaborators’ deep contributions in his acknowledgment notes, to Clancy’s franchise ghostwriters toiling under strict deadlines, to Patterson’s openly advertised showrunner model, to Cussler’s son and co-authors perpetuating an adventure brand after its founder’s death—the patterns are consistent. When popular fiction succeeds at massive scale, the demands on a single author become incompatible with the pace of production the market requires, and collaboration follows. The critical establishment tends to use this as evidence of commercialism over craft. The market tends to be indifferent. Readers, for the most part, want the book.

In nonfiction, the same economic logic applies with added ethical complexity. When Woodward employs a full-time researcher whose work extends into the manuscript, or when Goodwin’s synthesis of vast historical sources crosses into inadequately attributed paraphrase, the question is not merely one of credit—it is one of the reader’s right to understand who produced the knowledge claim they are trusting.63,65,70 The ghostwriting industry’s growing transparency, its movement toward acknowledged collaboration, and its insistence on the distinction between a credited author’s ideas and a ghostwriter’s prose are all signs of a field working toward more honest conventions.44,64

AI collaboration, as the newest and most technically capable form of writing assistance, does not resolve these tensions—it intensifies them. The AI cannot be listed on the cover. It holds no copyright. It cannot be thanked in the author’s note in any meaningful sense. It produces prose without experience, generates plots without imagination, and synthesizes sources without judgment. In the hands of a skilled and honest author who uses it as a tool while taking full responsibility for the work’s vision and substance, it is continuous with the long tradition of assisted authorship that runs from Michener’s South African researcher to Patterson’s stable of co-writers. In the hands of an author who uses it to produce the substance of a book they then sign, it is something closer to the most extreme form of ghostwriting—with the ghostwriter invisible not by contract, but by nature.44,51,58

What endures, across all of these cases, is the reader’s core expectation: that the book delivers the experience promised by the author’s name on the cover. When it does—when Michener’s South Africa is rich and surprising, when Clancy’s submarines are technically plausible, when Patterson’s plots move like engines, when Cussler’s treasure hunts feel like adventures—the question of how many hands were on the pen recedes. When it does not, the absence of a single author’s unifying vision becomes the most plausible explanation. The collaborative tradition neither guarantees quality nor destroys it. What it requires, above all, is that the presiding creative intelligence be strong enough to make the whole greater than the sum of its many parts.

References

1. Wikipedia. “James A. Michener.” https://en.wikipedia.org/wiki/James_A._Michener

2. Uys, Errol Lincoln. “James A. Michener: The Covenant — Secret History of a Bestseller.” https://www.erroluys.com/covenantassignment1.html

3. University of Northern Colorado Libraries. “Non-Fiction — James A. Michener Research Guide.” https://libguides.unco.edu/c.php?g=582126&p=4454411

4. University of Northern Colorado Libraries. “Home — James A. Michener Research Guide.” https://libguides.unco.edu/JamesAMichener

5. May, Stephen J. Summarized in: “Mining Michener.” The Morning Call, March 4, 2007. https://www.mcall.com/news/mc-xpm-2007-03-04-3708062-story.html

6. Britannica. “James A. Michener.” https://www.britannica.com/biography/James-Albert-Michener

7. OrderOfBooks.com. “Order of James A. Michener Books.” https://www.orderofbooks.com/authors/james-a-michener/

8. The Federalist. “Reconsidering the Astonishing Literary Legacy of James Michener,” August 24, 2018. https://thefederalist.com/2018/08/24/reconsidering-the-astonishing-literary-legacy-of-james-michener/

9. EBSCO Research Starters. “James A. Michener.” https://www.ebsco.com/research-starters/history/james-michener

10. Uys, Errol Lincoln. “James A. Michener and the Writing of The Covenant.” https://www.erroluys.com/covenantwriting1.html

11. Wikipedia. “Tom Clancy.” https://en.wikipedia.org/wiki/Tom_Clancy

12. WritingBeginner.com. “Do Authors Use Ghostwriters? (Solved for 20 Writers).” https://www.writingbeginner.com/do-authors-use-ghostwriters/

13. The Paris Review Blog. “Ghostwriting Tom Clancy,” October 3, 2013. https://www.theparisreview.org/blog/2013/10/03/ghostwriting-tom-clancy/

14. WrightBookAssociates.co.uk. “8 Famous Authors Who Use Ghostwriters for Their Books,” August 31, 2024. https://www.wrightbookassociates.co.uk/blog/8-authors-who-use-ghostwriters/

15. The Podglomerate / Missing Pages. “Ghostwriting Fiction: Will the Real Tom Clancy Please Stand Up,” January 15, 2024. https://listen.podglomerate.com/show/missing-pages/ghostwriting-fiction-will-the-real-tom-clancy-please-stand-up/

16. Joe Clifford Faust Blog. “Ghostwriters in Disguise, Part I.” https://joecliffordfaust.com/2010/06/10/ghostwriters-in-disguise-part-i/

17. WrightBookAssociates.co.uk. “8 Famous Authors Who Use Ghostwriters for Their Books.” https://www.wrightbookassociates.co.uk/blog/8-authors-who-use-ghostwriters/

19. SpokenAndWrittenWords.com. “Will the Real Author of Tom Clancy’s Books Please Stand Up.” https://www.spokenandwrittenwords.com/will-real-author-tom-clancys-books-please-stand/

20. Murder & Mayhem. “Pseudonyms and Secrets: The True Identities Behind 7 Mystery Writers,” December 9, 2021. https://murder-mayhem.com/mystery-ghost-writers

21. Wikipedia. “James Patterson.” https://en.wikipedia.org/wiki/James_Patterson

22. KarenWoodward.org. “How James Patterson Works With His Co-Authors.” https://blog.karenwoodward.org/2014/05/how-james-patterson-works-with-his-co-authors.html

23. Maine Crime Writers. “The James Patterson Method,” January 9, 2024. https://mainecrimewriters.com/2024/01/09/the-james-patterson-method/

25. SelfPubHub.us.com. “James Patterson Books in Order: 2026 Complete List.” https://selfpubhub.us.com/james-patterson-books-in-order/

26. Caroline-Writes.com. “Comparing the Best Writing Method: Stephen King vs James Patterson,” July 27, 2025. https://caroline-writes.com/stephen-king-vs-james-patterson-writing-method/

28. WritingForums.com. “Your Thoughts on James Patterson,” January 10, 2014. https://www.writingforums.com/threads/your-thoughts-on-james-patterson.144122/

31. Wikipedia. “Clive Cussler.” https://en.wikipedia.org/wiki/Clive_Cussler

32. WrightBookAssociates.co.uk. “Who Is Writing Clive Cussler Books Now? Meet the New Authors,” October 23, 2024. https://www.wrightbookassociates.co.uk/blog/who-is-writing-clive-cussler/

33. Killzone Blog. “Collaborating with Cussler” (guest post by Paul Kemprecos). https://killzoneblog.com/2009/06/collaborating-with-cussler.html

35. SFFWorld Forums. “Clive Cussler.” https://www.sffworld.com/forum/threads/clive-cussler.18422/

37. WaltersCliveCussler.blogspot.com. “Clive Cussler Book Collecting: The Series and Co-Authors.” http://waltersclivecussler.blogspot.com/p/the-series.html

39. Publishers Weekly. “Non Stop Adventure: Clive Cussler,” February 26, 2015. https://www.publishersweekly.com/pw/by-topic/authors/profiles/article/65712-non-stop-adventure-clive-cussler.html

40. OrderOfBooks.com. “Order of Clive Cussler Books.” https://www.orderofbooks.com/authors/clive-cussler/

41. Gelman, Andrew (Statistical Modeling blog). “Russell’s Paradox of Ghostwriters,” December 1, 2023. https://statmodeling.stat.columbia.edu/2023/12/01/russells-paradox-of-ghostwriters/

44. Association of Ghostwriters. “The 2025 Ghostwriting Industry Report,” January 1, 2026. https://associationofghostwriters.org/the-2025-ghostwriting-industry-report/

46. Association of Ghostwriters. “4 Ghostwriting Industry Predictions for 2025,” December 31, 2024. https://associationofghostwriters.org/4-ghostwriting-industry-predictions-for-2025/

48. Wikipedia. “Bob Woodward.” https://en.wikipedia.org/wiki/Bob_Woodward

51. Weiland, K.M. “Exploring the Impact of AI on Fiction Writing: Opportunities and Challenges,” February 10, 2025. https://www.helpingwritersbecomeauthors.com/impact-of-ai-on-fiction-writing/

54. Claude.ai public artifact. “Can AI Replace Human Writers? The Complete 2024 Guide.” https://claude.ai/public/artifacts/48f682d5-68ab-4d2d-bb38-041328fd6921

55. StoryBoldStudio.com. “AI Writing in 2025: Your Complete Guide.” https://www.storyboldstudio.com/blog/ai-writing

56. EveryWriterResource.com. “Is AI the Death of Writing? A Hard Look at the Future of Authors,” May 20, 2025. https://www.everywritersresource.com/is-ai-the-death-of-writing-a-hard-look-at-the-future-of-authors/

57. Soliman, Kareem. “The AI Writer’s Paradox: Finding Authenticity in the Age of Artificial Intelligence,” June 3, 2025. https://medium.com/@kareem.soliman/the-ai-writers-paradox-finding-authenticity-in-the-age-of-artificial-intelligence-7a15d2832ac3

58. ArXiv. “From Pen to Prompt: How Creative Writers Integrate AI into their Writing Practice,” February 13, 2025. https://arxiv.org/html/2411.03137v2

59. WritersInTheStorm.com. “To AI, or Not to AI? More Questions than Answers,” January 14, 2025. https://writersinthestormblog.com/2025/01/to-ai-or-not-to-ai-more-questions-than-answers/

60. The Authors Guild. “Artificial Intelligence.” https://authorsguild.org/advocacy/artificial-intelligence/

62. Wikipedia. “Ghostwriter.” https://en.wikipedia.org/wiki/Ghostwriter

63. History News Network. “How the Goodwin Story Developed.” https://www.hnn.us/article/how-the-goodwin-story-developed

64. Publishers Weekly. “Ghostwriters Come Out of the Shadows,” November 12, 2021. https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/87886-ghostwriters-come-out-of-the-shadows.html

65. Cal Alumni Association. “Spirited Away: the Life of the Ghostwriter,” December 10, 2021. https://alumni.berkeley.edu/california-magazine/summer-2017-adaptation/spirited-away-life-ghostwriter/

69. Washingtonian. “Confessions of a Washington Ghostwriter,” April 6, 2017. https://washingtonian.com/2017/02/05/confessions-of-a-washington-ghostwriter-barbara-feinman-todd/

70. PBS NewsHour. “Writing History” (Ambrose and Goodwin plagiarism debate). https://www.pbs.org/newshour/show/writing-history

###

Writers Using AI to Augment Their Craft

By Jim Shimabukuro (assisted by Claude)
Editor

1. Stephen Marche: The Literary Curator and the Hip-Hop Producer

Stephen Marche is a Canadian novelist and essayist whose byline has appeared in The New Yorker, The New York Times, The Atlantic, and Esquire, among others. His books include The Next Civil War, a nonfiction work that required him to travel across the United States conducting hundreds of interviews, and On Writing and Failure, a candid essay-length meditation on the peculiar perseverance demanded by the literary life. Writing is not a side project for Marche but the whole of his professional existence — his livelihood, his method of inquiry, and his primary mode of contributing to public life. He has described himself as constitutionally incapable of coherence as a person, a writer whose projects are so radically different from one another that no single image of him holds still for long.8

Image created by Copilot
Continue reading

Cross-Lingual Chatbotting in the Next Few Years

By Jim Shimabukuro (assisted by Copilot)
Editor

Research on AI systems that can act as cross-lingual chatbots—able to converse in one language while seamlessly drawing on sources in many others—has accelerated sharply since 2023, especially under the banner of “multilingual” or “cross-lingual” large language models (LLMs). Recent surveys of multilingual LLMs (MLLMs) describe a clear shift from traditional machine translation pipelines toward unified models that jointly handle understanding, translation, and generation across dozens or even hundreds of languages, with explicit goals of knowledge transfer from high‑resource languages like English to lower‑resource ones.4,5,6 These surveys emphasize that the same architectures powering English‑centric chatbots are now being trained or adapted on multilingual corpora, making it technically feasible for an English conversation to query, summarize, and reason over content originally written in Chinese, Japanese, German, and many other languages—at least in controlled settings.4,5,6

Image created by ChatGPT
Continue reading

The Quality of Chatbot Prose Seems to Be Improving

By Jim Shimabukuro (assisted by Perplexity)
Editor

Introduction: In the last couple of months, I’ve noticed what appears to be a startling improvement in the quality of prose generated by chatbots in their free tiers. To determine if I’m hallucinating, I asked Perplexity to look into what appears to be an exponential refinement in style. -js

AI-generated prose in free-tier chatbots has become markedly more fluent, versatile, and “human-sounding” since late 2022, but the evidence points to rapid, stepwise improvement rather than clean exponential growth, with important ceilings and distortions that become visible as soon as you look past surface polish.1,4,6,7,17,20 Your sense that something has changed in just the last few months is consistent with the pattern researchers are now documenting: frequent model upgrades, better alignment and instruction-tuning, and widespread human-in-the-loop workflows have collectively raised average output quality and blurred the line between AI-assisted and purely human prose in everyday settings, even though true originality, voice, and long-form coherence remain recognizably human strengths.4,5,13,14,17,20

Image created by Copilot
Continue reading

AI Developmental Models of Human Intelligence: Narrow to Broad AI

By Jim Shimabukuro (assisted by Claude)
Editor

To understand how agentic AI and the emerging prospect of AGI will reshape developmental models of human intelligence, one must first grasp what distinguishes these systems from the generative AI that has already become familiar. Generative AI — the kind that produces text, images, and code in response to prompts — is fundamentally reactive. It generates outputs but does not pursue goals across time, manage multi-step reasoning autonomously, or adapt its behavior based on consequences. Agentic AI, by contrast, refers to systems that can autonomously achieve specific goals with limited supervision. Unlike traditional AI models, agentic AI demonstrates autonomy, goal-driven behavior, and adaptability. It builds on generative AI capabilities but extends beyond content creation to solve complex, multi-step problems through reasoning, planning, and tool use.(ScienceDirect) AGI — Artificial General Intelligence — extends this concept further still, referring to a hypothetical but increasingly plausible system capable of matching or exceeding human cognitive performance across the full range of intellectual domains without task-specific training.

Image created by Copilot
Continue reading

The Status of Robot Tanks in March 2026: An Unbundling

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Multiple countries are actively developing what can reasonably be described as “robot tanks,” more formally called unmanned ground combat vehicles (UGCVs) or heavily armed unmanned ground vehicles (UGVs). What is striking in 2024–2026 is not just experimentation, but early operational deployment, especially in the Russia–Ukraine war, which has become the first large-scale laboratory for robotic ground warfare.

Image created by ChatGPT
Continue reading

Review of DNI Tulsi Gabbard’s Remarks at 18 March 2026 SSCI Hearing

By Jim Shimabukuro (assisted by Copilot)
Editor

DNI Tulsi Gabbard’s opening remarks present a single overarching thesis: the United States faces a rapidly evolving, multi‑domain threat environment in which homeland security, transnational crime, terrorism, state adversaries, cyber operations, and emerging technologies are converging in ways that demand vigilance, coordination, and sustained national resolve. She frames the intelligence community’s assessment as non‑political and rooted in statutory duty, emphasizing that the briefing reflects analytic judgments rather than personal opinion.

Image created by ChatGPT
Continue reading

Transcript of DNI Tulsi Gabbard’s Opening Remarks at 18 March 2026 SSCI Hearing

On March 18, 2026, Director of National Intelligence Tulsi Gabbard delivered opening remarks at a Senate Select Committee on Intelligence (SSCI) hearing for the Annual Threat Assessment of the U.S. Intelligence Community. The opening statement1 as delivered is below.

Tulsi Gabbard, Director of National Intelligence
Continue reading

Jensen Huang: The Gold Rush After Agentic Will Be Robots

By Jim Shimabukuro (assisted by Perplexity)
Editor

Jensen Huang’s GTC2026 keynote framed “physical AI” and robotics not as a side bet but as the next multi‑trillion‑dollar wave of the AI economy, continuous with today’s datacenters rather than a separate field.1,4 In both NVIDIA’s own recap and detailed press coverage, he cast robots, autonomous vehicles, and industrial automation as the natural endpoint of an “AI factory” stack where gigawatt‑scale infrastructure produces models that flow into embodied systems, arguing that the next gold rush after digital agents will be robots and other “physical AI” burning even more data and compute.3,4,6 This is less about a new technical thesis than a macro‑industrial one: embodied AI is presented as an infrastructure market similar in scale and inevitability to cloud and GPUs, with NVIDIA positioning itself as the full‑stack vendor from energy to humanoid controllers. In that sense, Huang’s message differs from classic robotics talks by making physical AI primarily an inference and datacenter story, with robots as endpoints of a vertically integrated pipeline rather than standalone machines.3,4

Image created by Copilot
Continue reading

AI Inference Chips and Why They Dominated Jensen Huang’s GTC2026 Keynote

By Jim Shimabukuro (assisted by ChatGPT)
Editor

AI inference chips sit at the center of a major shift in how artificial intelligence is actually used—and that shift explains why they dominated Jensen Huang’s keynote at NVIDIA’s GTC2026 and why they now anchor the company’s strategy.

Image created by Copilot
Continue reading

Do Current ‘AI-First’ Universities Represent a True Paradigm Shift?

By Jim Shimabukuro (assisted by Claude)
Editor

[Related: “The Emerging AI‑First University Paradigm”]

“The Emerging AI‑First University Paradigm” (ETC Journal, 16 March 2026) makes a compelling case that Unity Environmental University, Ohio State University, the University of Washington, CUNY, and SUNY collectively sketch a new “AI-first” template for higher education — one in which AI is treated as a design principle rather than a peripheral tool, structures are reconfigured around AI’s capabilities, and ethics and equity are foregrounded as conditions of scale.¹ The five institutions do represent a meaningful advance beyond the typical university’s reactive, policy-memo approach to generative AI. Yet, when measured against what Thomas Kuhn understood as a genuine paradigm shift — a revolutionary displacement of the organizing assumptions, methods, and purposes of an entire field — these examples fall well short. They represent, rather, an intensification of one pole within the existing paradigm: the adoption-and-adaptation pole. The deeper anomaly AI poses to higher education — the radical destabilization of what universities are for, and of the three founding pillars on which they rest — remains largely unaddressed.

Image created by ChatGPT
Continue reading

The Emerging AI‑First University Paradigm

By Jim Shimabukuro (assisted by Copilot)
Editor

[Related: Do Current ‘AI-First’ Universities Represent a True Paradigm Shift?]

Unity Environmental University, Ohio State University, University of Washington, City University of New York, and the State University of New York, taken together, sketch the salient features of an emerging AI‑first university paradigm. First, AI is treated as a design principle and strategic core, not a peripheral technology: Unity codifies AI‑First Design Principles,¹ Ohio State builds an AI‑first educational environment,³ UW adopts an AI‑first institutional strategy,⁸ CUNY envisions human‑AI powered education,⁹ and SUNY embeds AI into system‑wide policy and infrastructure.¹¹ Second, AI‑first universities reconfigure structures—degrees, faculty hiring, governance, and system‑level coordination—around AI’s capabilities and risks, rather than trying to fit AI into legacy forms.

Image created by Copilot
Continue reading

OpenClaw Is a Self-Hosted, Open-Source Agentic AI Framework for PCs

By Jim Shimabukuro (assisted by ChatGPT)
Editor

OpenClaw is a relatively new example of what researchers and developers call agentic AI—software that does not simply respond to prompts but can observe, reason, and act autonomously on a user’s behalf. The project began in late 2025 as an open-source experiment by Austrian developer Peter Steinberger and quickly grew into one of the most visible autonomous-agent frameworks in 2026.¹ OpenClaw is distributed under an MIT open-source license and is designed to run locally on a user’s computer while connecting to external large language models such as GPT, Claude, or open-source models.¹

Image created by Copilot
Continue reading

Free Virtual Pass to NVIDIA GTC 2026 March 16-19

Conference passes have sold out, but you can still participate in person with an Exhibits Only pass (use code GTC26-20 for 20% off) or virtually [free].

NVIDIA GTC is the premier global AI conference, where developers, researchers, and business leaders come together to explore the next wave of AI innovation. From physical AI and AI factories to agentic AI and inference, GTC 2026 will showcase the breakthroughs shaping every industry. The conference venues are spread throughout downtown San Jose. Join in person or online for inspiring sessions and the unique GTC experience.

The Ambient Era of Operating Systems

By Jim Shimabukuro (assisted by Claude)
Editor

[Related: AI-Native Operating Systems: From Procedural to Intent-Based to Ambient]

The ETC Journal article “AI-Native Operating Systems: From Procedural to Intent-Based to Ambient” (13 March 2026) opens with a brisk diagnosis of where personal computing has been stuck: for three decades, users have had to navigate windows, files, and menus, actively directing machines step by step. The article argues that a growing number of technologists now believe the operating system itself may be on the verge of a fundamental transformation — one in which AI agents interpret human intentions and orchestrate digital actions automatically, rather than passively organizing applications and hardware as they do today. What the article calls the third and most radical pathway — ambient computing — is the destination where this trajectory ultimately leads: a world in which the operating system dissolves into a distributed intelligence layer spanning multiple devices and cloud services, and a person’s AI assistant manages communications, schedules events, and retrieves information regardless of which device is currently being used.¹ The following four articles expand on the idea of ambient computing.

Image created by Copilot
Continue reading

AI-Native Operating Systems: From Procedural to Intent-Based to Ambient

By Jim Shimabukuro (assisted by ChatGPT)
Editor

For more than three decades, the personal-computer operating system has been dominated by a familiar paradigm: the graphical desktop. Systems such as Microsoft Windows and macOS organize computing around icons, windows, files, and applications. The user launches programs, manipulates menus, and manually coordinates tasks between software tools. Beneath this interface, the operating system manages memory, hardware resources, and processes, but the overall architecture remains rooted in a conceptual model that dates to the late twentieth century. That model assumes that humans must actively direct computers step by step, selecting applications and instructing them how to perform tasks.

Image created by Copilot
Continue reading

Fully AI-Automated U.S. Tax System Feasible with Existing Technology But…

By Jim Shimabukuro (assisted by ChatGPT)
Editor

The idea of an AI-driven tax system that eliminates the need for individuals to file returns is not merely speculative; it is actively being explored by governments, researchers, and private companies. However, as of 2026, most efforts are focused on partial automation—automating compliance, enforcement, and preparation—rather than replacing the entire filing structure. Tax administrations around the world have been integrating machine learning and advanced analytics into their operations, primarily to detect fraud, streamline workflows, and improve taxpayer services. An OECD survey found that 29 of 38 member countries already deploy AI in their tax administrations, using it to identify patterns of tax evasion, automate routine case processing, and differentiate simple filings that can be handled automatically from complex cases requiring human judgment.¹ These deployments represent the early infrastructure of a future system in which tax authorities already possess most of the necessary data and can pre-compute liabilities without taxpayers filling out forms themselves.

Image created by Copilot
Continue reading

Status of Self-Driving Cars (March 2026): ‘tightly geofenced’

By Jim Shimabukuro (assisted by Copilot)
Editor

Autonomous driving in early 2026 sits in a strange middle ground—no longer a sci‑fi promise, but still far from ubiquitous. In multiple U.S. and Chinese cities, you can already hail a driverless robotaxi or see a Class 8 truck moving freight with no one in the cab, yet these services remain tightly geofenced, heavily supervised, and politically fragile. Waymo now delivers on the order of 250,000 paid robotaxi rides per week across several U.S. cities, making it the clear U.S. leader in commercial Level 4 robotaxis, while global weekly robotaxi rides have climbed into the hundreds of thousands according to industry surveys tracking more than 700,000 fully autonomous rides per week worldwide.1,2,3 In parallel, China’s Baidu Apollo Go has matched or exceeded Waymo’s scale, also reaching roughly 250,000 weekly rides and more than 140 million driverless miles, underscoring how quickly Chinese robotaxi operators have moved under more centralized regulatory regimes.4,5

Image created by Copilot
Continue reading

Emerging AI Disruptors: The Jobs-Wozniak-Gates of 2026

By Jim Shimabukuro (assisted by Claude)
Editor

The history of technology is not written primarily by the powerful. It is written by the restless. Steve Jobs and Steve Wozniak assembled the Apple I in a California garage. Bill Gates and Paul Allen wrote their BASIC interpreter in a college dorm room before any computer existed to run it. The disruptors of every technological era tend to arrive from the margins — not because the center is incompetent, but because the center is invested in the status quo. They cannot afford to imagine the world differently. The outsiders can.

Image created by Copilot
Continue reading

US/Israel‑Iran War: Lessons From Russia‑Ukraine

By Jim Shimabukuro (assisted by Copilot)
Editor

The Russia‑Ukraine war has underlined that national resilience and societal will can matter as much as raw military power, and that lesson should sit at the center of any thinking about a potential US/Israel‑Iran war. Ukraine’s ability to mobilize its population, maintain governance under fire, disperse critical infrastructure, and keep basic services functioning has repeatedly blunted Russian objectives and bought time for diplomacy and external support.1 In a US/Israel‑Iran context, that translates into prioritizing civilian preparedness, continuity of government, and rapid repair capabilities not only in Israel but across the wider region, including partners in the Gulf and beyond, so that societies can absorb shocks without collapsing into chaos. This matters for the “greater good” because wars that shatter basic social systems tend to radicalize populations, prolong grievances, and make any eventual peace far more fragile.

Image created by ChatGPT
Continue reading

US/Israel-Iran War: A Bloody Standoff Like Russia-Ukraine?

By Jim Shimabukuro (assisted by Claude)
Editor

This is no longer a hypothetical scenario. On February 28, 2026, the United States and Israel launched joint airstrikes on Iran, killing Supreme Leader Ali Khamenei. The stated goals are to destroy Iran’s missile and military capabilities, prevent the state from obtaining a nuclear weapon, and ultimately to achieve regime change by bringing the Iranian opposition to power.2 In response, Iranian forces launched missiles and armed drones against Israel and US military facilities in all six Gulf Cooperation Council countries.6 The opening of this war, which the US calls “Operation Epic Fury,” has been swift and devastating — but the far more dangerous question is what comes next. The conditions for a prolonged, grinding standoff comparable to the Russia-Ukraine war are alarmingly present.

Image created by Gemini
Continue reading

Bresnick et al.’s ‘China’s AI Arsenal’ – ‘intelligentized warfare’

By Jim Shimabukuro (assisted by Perplexity)
Editor

Bresnick, Probasco, and McFaul’s core thesis in “China’s AI Arsenal: The PLA’s Tech Strategy Is Working” (2 March 2026) is that the People’s Liberation Army has moved beyond aspirational rhetoric about “intelligentized warfare” and is now systematically translating AI ambitions into concrete capabilities across command-and-control, sensing, targeting, and unmanned systems, in ways that are beginning to work at scale and that the United States has not yet fully internalized in its own strategy.1 This argument builds directly on their recent empirical mapping of more than 9,000 AI-related PLA requests for proposals and nearly 3,000 AI-related defense contract awards between 2023 and 2024, which reveal a broad, coherent, and rapidly growing demand signal for AI in every warfighting domain.2,3

Image created by Copilot
Continue reading

What Will an Agentic University Look Like?

By Jim Shimabukuro (assisted by Claude)
Editor

The transformation of university pedagogy that agentic AI demands is perhaps the most visible and immediate of the three domains, and it begins with a fundamental rethinking of what learning is supposed to produce. Commentators inside higher education have described the emerging shift as the move “from generative assistant to autonomous agent,” emphasizing that generative models will increasingly sit behind agentic layers that decide when and how to use them.1 This means that course designs built around the submission of finished products — essays, problem sets, take-home exams — are structurally vulnerable in ways that syllabi policies cannot patch.

Image created by Gemini
Continue reading

Status of Agentic AI in Higher Ed: A Liminal Moment

By Jim Shimabukuro (assisted by Copilot)
Editor

Agentic AI in higher education is in a visible but early, uneven phase: it is talked about as “the next evolution” beyond prompt‑driven generative tools, yet most campuses still treat it as a set of pilots and thought experiments rather than core infrastructure. A widely used working definition frames agentic AI as systems that can pursue complex, often long‑horizon goals with minimal human intervention, planning multi‑step actions, using tools, maintaining memory, and adapting to changing contexts—what some researchers call a “qualitative leap” from static chatbots and rule engines.1 In practice, this means moving from “AI that answers” to “AI that acts”: agents that can orchestrate tasks across learning platforms, student information systems, and communication channels, rather than simply generating text on demand. Commentators inside higher ed have started to describe this shift as the move “from generative assistant to autonomous agent,” emphasizing that generative models will increasingly sit behind agentic layers that decide when and how to use them.6

Image created by Copilot
Continue reading

Five Strongest Criticisms Against the U.S.-Israeli Attack on Iran

By Jim Shimabukuro (assisted by Claude)
Editor

[Related: Operation Epic Fury: The Official Rationale, 4 March 2026]

1. The Attack Violated the UN Charter and International Law

The most foundational and broadly shared criticism of Operation Epic Fury is that it constitutes an illegal use of force under international law. Article 2(4) of the United Nations Charter prohibits the threat or use of force against the territorial integrity or political independence of any state. Two exceptions exist: Security Council authorization under Chapter VII, and individual or collective self-defense in response to an armed attack under Article 51. Neither applies here. The Security Council did not authorize the use of force against Iran. The United States did not request such authorization. Iran was not attacking the United States or Israel at the time of the strikes.

Image created by Gemini
Continue reading