By Jim Shimabukuro (assisted by Claude)
Editor
To understand how agentic AI and the emerging prospect of AGI will reshape developmental models of human intelligence, one must first grasp what distinguishes these systems from the generative AI that has already become familiar. Generative AI — the kind that produces text, images, and code in response to prompts — is fundamentally reactive. It generates outputs but does not pursue goals across time, manage multi-step reasoning autonomously, or adapt its behavior based on consequences. Agentic AI, by contrast, refers to systems that can autonomously achieve specific goals with limited supervision. Unlike traditional AI models, agentic AI demonstrates autonomy, goal-driven behavior, and adaptability. It builds on generative AI capabilities but extends beyond content creation to solve complex, multi-step problems through reasoning, planning, and tool use.(ScienceDirect) AGI — Artificial General Intelligence — extends this concept further still, referring to a hypothetical but increasingly plausible system capable of matching or exceeding human cognitive performance across the full range of intellectual domains without task-specific training.
As of 2025, we remain in the era of Narrow AI, systems specialized for particular tasks, but we are transitioning toward Broad AI, with generative and deep-learning models like OpenAI’s o3, DeepSeek’s R1, and agentic models moving us forward into reasoning and self-directed problem solving.(USAII) The timeline for AGI is contested but, in the emerging expert consensus, increasingly near-term. Shane Legg, co-founder of DeepMind Technologies, has put the probability of minimal AGI at 50% by 2028. At the 2026 World Economic Forum, Dario Amodei of Anthropic expressed strong confidence that AGI-level systems will emerge within a few years.(AIMultiple)
These developments arrive at a pivotal moment for the science of human cognition. Developmental models — theories of how the human mind grows, learns, and organizes knowledge across the lifespan — were constructed in a world where the primary cognitive challenges came from other humans, from physical environments, and from cultural tools like language and writing. Agentic AI introduces something genuinely unprecedented: a system that can substitute for many of the very cognitive acts through which the mind develops. The central question is whether that substitution will liberate human intelligence to reach for higher capacities, or quietly erode the foundational muscular effort through which intelligence is built. The seven thinkers and works explored below suggest that the answer depends not on technology itself but on the choices we make about how to design and inhabit the environments technology creates.
1. Jay McClelland and the Emergent Mind — When AI and Brain Science Circle Back to Each Other
Jay McClelland is among the most consequential figures at the intersection of cognitive science and artificial intelligence. In the late 1970s and early 1980s, the National Science Foundation and the Office of Naval Research funded projects by James “Jay” McClelland, David Rumelhart, and Geoffrey Hinton to model human cognitive abilities. That investment led to a cascade of research progress: a neural network model of how humans perceive letters and words, and the two volumes, published in 1986, that describe the team’s theory of how neural networks in our brains function as parallel distributed processing systems. McClelland was never trying to build an AI. As a cognitive scientist, his goal was to understand the human mind. But now the progress in AI has come full circle: he draws inspiration from what has been learned in AI and deep learning to think about the human mind, while also asking what the mind and brain have to teach AI.(Stanford)
McClelland’s model of cognition centers on what he and Rumelhart called Parallel Distributed Processing (PDP) — the idea that thought is not a serial computation but an emergent property of many simultaneously active, weighted connections among neurons. He views cognitive functions as emerging from the parallel, distributed processing activity of neural populations, with learning occurring through the adaptation of connections among participating neurons. His research revolves around developing explicit computational models based on these ideas and applying them to substantive research questions through behavioral experiment, computer simulation, and mathematical analysis.(McClelland)
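The PDP idea that knowledge lives in connection weights, and that learning is the gradual, local adaptation of those weights, can be illustrated with a deliberately minimal sketch. This is a toy single-layer delta-rule learner on the OR function, an assumption of this article for illustration only, not one of McClelland and Rumelhart's actual models:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_or(epochs=5000, lr=0.5, seed=0):
    """Learn the OR function by adapting two connection weights and a bias.

    The "knowledge" the network ends up with is nothing but the weight
    values; learning is many small, error-driven nudges to them.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = rng.uniform(-0.5, 0.5)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Delta rule: each connection is adjusted in proportion to
            # its contribution to the error -- a local, parallel update.
            grad = err * out * (1 - out)
            w[0] += lr * grad * x1
            w[1] += lr * grad * x2
            b += lr * grad
    return w, b

w, b = train_or()
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
         for (x1, x2) in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(preds)  # learned OR mapping: [0, 1, 1, 1]
```

Nothing in the trained system looks like a stored rule for OR; the behavior is emergent from the weights, which is the sense in which PDP treats cognition as an emergent property of weighted connections.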
What makes McClelland’s recent work especially relevant to the question of AGI’s impact on human developmental models is his collaboration with Google DeepMind researchers to probe whether large language models reason the way humans do. They explored whether neural network models, like humans, reason more accurately when they have prior contextual knowledge than when they are given completely abstract topics requiring symbolic logic. Their research found that AI models, like humans, infuse their thinking with prior knowledge and beliefs, and are biased toward factually true or widely believed conclusions even when they do not follow from the given premises. These results were published in a 2024 paper in PNAS Nexus.(Stanford)
The implications for developmental models are profound. McClelland’s finding that humans and AI share the same systematic reasoning biases challenges the assumption that human intelligence is uniquely structured. His book, The Emergent Mind: How Intelligence Arises in People and Machines, co-authored with Gaurav Suri, explains what we now know about how the mind works in humans and machines, and explores how AI influences what we know about ourselves.5 He takes the view that by pursuing the effort to understand the human biological mind through neuroscience and interdisciplinary research on the brain basis of cognition, we will ultimately have a deeper understanding not only of ourselves but also of how to build better machines.(Stanford)
McClelland stands against cognitive pessimism about AI’s effects on human development. His entire career arc — from cognitive science to AI and back again — embodies the thesis that the relationship between human and artificial intelligence is dialogic and mutually enriching. Far from atrophying, the human brain that lives alongside increasingly capable AI systems may come to understand itself with unprecedented depth. The questions that AI cannot yet answer — about consciousness, embodied experience, emotional resonance, and the kind of rapidly generalizing learning that humans accomplish with a tiny fraction of the data an LLM requires — point not to human diminishment but to the distinctive, irreplaceable topology of biological cognition.6
2. Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar — The Risk of Knowledge Collapse and the Case for Complementarity
If McClelland offers a broadly optimistic framing, Nobel Prize-winning MIT economist Daron Acemoglu and his co-authors deliver the most rigorous warning yet about what unchecked agentic AI could do to society’s collective cognitive ecosystem. In a February 2026 NBER working paper titled “AI, Human Cognition and Knowledge Collapse,” Acemoglu, Kong, and Ozdaglar study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. They build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements.(thelivinglib)
The core mechanism Acemoglu and colleagues identify is subtle but devastating in its implications. When individuals learn, experiment, or reason through problems, they not only acquire information useful for their immediate context but also generate small fragments of knowledge that eventually become part of the broader intellectual commons. In other words, learning produces positive externalities. Agentic AI disrupts this relationship: by substituting for the cognitive effort individuals would otherwise undertake to solve problems themselves, it reduces the flow of new public signals, causing general knowledge to depreciate over time.(Medium) The model predicts that if AI accuracy exceeds a critical threshold, general knowledge can collapse to zero even as personalized recommendations remain accurate. Welfare is therefore non-monotone: moderately accurate AI helps, but excessive accuracy triggers collapse, and optimal regulation, such as the deliberate garbling of AI outputs, may be needed to preserve human learning incentives.(substack)
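The depreciation channel of this mechanism can be made concrete with a toy dynamical sketch. The following is an illustrative simplification invented for this article, not the Acemoglu-Kong-Ozdaglar model itself, and it captures only the knowledge-stock dynamics, not their welfare analysis: general knowledge decays each period and is replenished only by human learning effort, which falls as AI accuracy rises because people substitute AI answers for their own problem solving.

```python
def simulate(ai_accuracy, periods=200, depreciation=0.05, k0=1.0):
    """Evolve a stock of general knowledge K under a given AI accuracy.

    Each period, a fraction of K depreciates, and human learning effort
    (assumed here, purely for illustration, to shrink linearly as AI
    accuracy rises) contributes new public signals back to the commons.
    """
    k = k0
    for _ in range(periods):
        effort = max(0.0, 1.0 - ai_accuracy)  # at accuracy 1.0, no one learns
        contribution = 0.05 * effort          # inflow of new public knowledge
        k = (1 - depreciation) * k + contribution
    return k

for a in (0.0, 0.5, 0.9, 1.0):
    print(f"AI accuracy {a:.1f} -> long-run general knowledge {simulate(a):.2f}")
```

Even this crude sketch shows the qualitative endpoint the paper warns about: as assumed AI accuracy approaches 1.0, the inflow of human-generated public signals dries up and the knowledge stock decays toward zero, however good the private recommendations remain.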
The model’s implications are not merely theoretical. Research by Kosmyna and colleagues in 2025 found that using ChatGPT for writing assistance reduces users’ ability to memorize and accurately record their own arguments, and appears to do so by changing neural connectivity in users’ brains. The resulting output also looks more similar across writers, raising concerns about the erosion of individual creativity and agency.(MIT)
What rescues Acemoglu’s analysis from simple technophobia is his explicit acknowledgment that AI can function as a powerful complement to human cognition when it does not substitute for costly learning effort. He has advocated for a pro-worker AI agenda, one that prioritizes human jobs while using AI as a tool for greater efficiency, arguing that the best way to use something different from yourself is not to replace yourself with it but to use it in a complementary way.(Fortune) This is an economist arguing for developmental psychology: the human mind grows through effortful engagement, and policy must protect the conditions that make that engagement necessary. Read carefully, Acemoglu’s model is not a counsel of despair but a design specification for AI systems that augment rather than anesthetize the intellect.1,2
3. Anders Högberg and the Cognitive Co-Evolutionary Argument — We Are Not Stuck with a Stone Age Brain
One of the most bracing antidotes to AI-induced cognitive panic comes from archaeologist and cognitive evolution scholar Anders Högberg, whose 2025 Frontiers in Psychology perspective piece challenges what he calls the myth of the fixed “Stone Age brain.” Högberg argues that human cognition has always evolved through embodied interaction with environmental and socio-technical domains, reinforced by epigenetic processes. We are equipped with a neuroplastic brain primed to seamlessly co-evolve with technology. We are not simply preset with a Stone Age mind; our cognitive evolution is continuous and ongoing. Our evolutionary history gives us exceptional abilities to adapt to variation and change, making us perfectly primed to cognitively co-evolve with AI as AI systems become integrated into our cognition.(Frontiers)
Högberg draws on embodied cognition theory, cognitive evolution studies, and futures studies to argue that the same adaptive capacities that enabled Homo sapiens to thrive across every continent and climate will also shape the species’ co-evolution with AI. His core argument is that as our modes of engagement co-evolve with AI, so too do our conceptions of the human condition and our experiences of what it means to be human. We have entered an AI-human cognitive co-evolutionary trajectory whose outcomes are unpredictable.(Frontiers) He does not present this as either utopian or dystopian but as an invitation to rigorous anticipatory thinking — to asking what cognitive capacities people will need in futures we cannot yet fully see.
The article’s most important contribution to the question of cognitive atrophy may be its historical grounding. Every major technological transition in human history — from the development of written language to the printing press to the digital revolution — produced fears of cognitive deterioration. The human mind, defined by its neuroplasticity, is dynamically co-constituted through its embodied interaction with technologies. This has not only shaped our deep evolutionary past but continues to influence us today.(Frontiers) Literacy did not destroy memory; it transformed what memory needed to store and what it was freed to create. Mathematics did not make intuitive reasoning obsolete; it extended the reach of intuition into previously inaccessible domains. Högberg’s framework invites us to ask not whether AI will change the human mind — of course it will — but whether we are designing the transition with the wisdom and foresight it demands.3
4. The npj Artificial Intelligence “3R Principle” Paper — Neuroplasticity as the Cornerstone of Cognitive Health in the AI Age
A landmark 2026 paper published in npj Artificial Intelligence — a Nature Portfolio journal — represents perhaps the most neuroscientifically grounded engagement with the question of how AI use shapes the brain over time. Its authors propose what they call the 3R Principle — Results, Responses, Responsibility — as a framework for preserving neuroplasticity and cognitive health in an AI-saturated world. The paper argues that neuroplasticity is shaped by how humans interact with AI. Passive, uncritical reliance on AI may weaken activity-dependent brain plasticity and erode cognition, whereas active co-creation can sustain or enhance it. Drawing on plasticity rules and ethical considerations, the 3R principle proposes a preventive framework for cognitive hygiene, urging education toward AI use in ways that preserve agency, meaning-making, and long-term brain health.(Nature)
The neuroplasticity argument here is crucial for allaying fears of cognitive stagnation. The human brain is not a static organ whose capacity is fixed at birth and merely spent or spared thereafter. Neuroplasticity — the brain’s lifelong capacity to form new synaptic connections and reorganize its networks — depends on activity-dependent processes, meaning the right kind of AI engagement actively promotes neural growth rather than eroding it. Under the long-established “use it or lose it” principle that applies to every brain function, plasticity is an active phenomenon. Surrendering both results and meaning to AI will likely lead not only to cognitive offloading but to a deeper and more dangerous form of offloading: the offloading of the will.(Nature) The crucial variable, therefore, is not whether one uses AI but whether the mode of use recruits deep cognitive engagement or bypasses it.
The 3R Principle translates this neuroscience into practical design principles. Results — AI should augment the human pursuit of goals rather than replace the pursuit itself. Responses — the human brain must remain in the loop of evaluating, questioning, and contextualizing AI outputs. Responsibility — the user must maintain active agency over the direction and meaning of their cognitive work. This framework points toward a new developmental model suited to the AI age: one in which the measure of cognitive health is not how much one knows but how actively and critically one engages with the tools and resources of one’s intellectual environment.4
5. Tina Grotzer and the Harvard Framework — Metacognition as the Curriculum of the AI Age
Tina Grotzer, cognitive scientist at the Harvard Graduate School of Education and senior researcher at Project Zero, has emerged as one of the most practically influential voices on how educational institutions should restructure their approach to human development in the age of AI. Her work sits at the intersection of cognitive science, learning theory, and curriculum design, and her recent public engagements have sharpened into a coherent framework with direct implications for how we should think about AI’s effect on developmental models of intelligence.
Grotzer emphasizes that learning is about far more than memorizing facts — it includes understanding how our minds work, what critical thinking looks like, and how to engage in creative reasoning. In her assessment, many students use AI without understanding that they need to develop their own cognitive capacities at all. They don’t reflect on the remarkable learning that their minds are doing day after day.(Distilinfo) The danger, she suggests, is not that AI will make students stupid, but that it will make them unaware of their own potential.
Grotzer advocates for metacognition — understanding and reflecting on one’s own thinking — as a new core purpose of education. She argues that teaching students to be critical and discerning about how they use AI is important, but even more important is helping them understand how their embodied human minds work and how powerful they can be when used well. The work in neuroscience makes a compelling case that, while human minds are computational and use Bayesian processes, they are better than Bayesian in many ways. The work of Antonio Damasio and others highlights how somatic markers enable quick, intuitive leaps that no purely statistical system can replicate.(Harvard)
What Grotzer’s framework adds to the developmental literature is a translation layer between neuroscience and educational practice. She argues that once students genuinely know what their minds can do that AI cannot — the affectively grounded intuition, the bodily-embedded creativity, the social emotional richness of genuine understanding — the question of which tasks to delegate to AI and which to protect as sites of human growth becomes tractable. This is a profoundly hopeful model: not a defensive withdrawal from AI, but a confident, evidence-based assertion of what human cognition uniquely offers.7
6. Michael Gerlich and the Cognitive Offloading Evidence — Understanding the Risk in Order to Counter It
Swiss researcher Michael Gerlich’s 2025 empirical study published in the MDPI journal Societies provides the most systematic quantitative evidence base for concerns about AI’s effect on critical thinking — and by extension, for understanding what must be counteracted in developmental models. The study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor. Gerlich used a mixed-method approach, combining surveys of 666 participants across diverse age groups and educational backgrounds with in-depth interviews; quantitative data were analysed using ANOVA and correlation analysis, while qualitative insights were obtained through thematic analysis of interview transcripts.(MDPI)
The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage.(ResearchGate) These results highlight what Gerlich calls the potential cognitive costs of AI tool reliance, but his prescriptions are constructive: the solution is not to ban AI tools but to design educational experiences that require active, critical engagement with them.
Gerlich’s work is selected here precisely because it gives substance to the fear of cognitive atrophy rather than dismissing it — and in doing so, provides a map for addressing it. The fear is real but bounded. AI tools can automate routine and complex tasks, thereby reducing cognitive load and freeing up cognitive resources for higher-order thinking.(MDPI) The question is whether the freed resources are actually recruited for higher-order thinking or simply surrendered to further automation. Gerlich’s data show that this is not a fixed outcome but a variable shaped by education, age, and practice. Developmental models that foreground this variability and design for the high-engagement scenario are models adequate to the AI age.8
7. Hans Westerbeek and the Concept of Epistemic Sovereignty — Preserving the Authorship of the Mind
Philosopher and scholar Hans Westerbeek’s 2026 paper in Springer’s journal AI & Society, titled “How AI is Rewiring the Human Brain: The Generational Transformation of Cognition and Knowing,” represents the most philosophically rich contribution to our survey. Drawing on Rousseau’s concept of moral conscience, Heidegger’s idea of technological enframing (Gestell), and Neil Postman’s critique of technopoly, Westerbeek frames the AI moment as an epistemological rupture — a transformation not just of what people know but of the conditions under which knowledge is authored.
The paper identifies a widening generational divide between those who were relatively AI-independent and a generation that is developing interface-based cognition, with high dependence on AI learning environments. The implications are neurological as well as epistemological. For Gen Z, knowing is less a matter of discovery and more a matter of selection from pre-structured outputs. Their thoughts are no longer entirely their own: predictive tools do not just assist cognition but begin to preconfigure it.(Springer)
Westerbeek introduces what may become the defining concept for the next decade of human developmental theory: epistemic sovereignty. He defines this as the capacity to author knowledge independently, and argues that its erosion signals not diminished intelligence but diminished authorship. As analogue generations disappear, so too may the brains unshaped by algorithmic mediation. Preserving their epistemic virtues will require deliberate design and regulation of learning environments that restore friction, ambiguity, and cognitive struggle as essential features of human development.(Springer)
His answer to the fear of atrophy is an epistemology of resistance — not a reactionary rejection of AI but an intentional re-authoring of the mind within AI-saturated environments. The risk is not that younger people are less intelligent. The risk is that they become less practiced in the kinds of thinking that remain essential when the questions are complex, when the stakes are moral, when the answers are contested, and when the world is not neatly predictable. Deep comprehension, creativity beyond the average, independent learning, and the ability to detect manipulation all depend on the cognitive muscles that friction trains.(Westerbeek)
What Westerbeek adds to the developmental literature is a language adequate to the philosophical depth of what is at stake. New models of human intelligence for the AI age will need not only neuroscientific and educational frameworks but a theory of what kind of cognitive beings we want to be and what it means to be the author — not merely the consumer — of one’s own mind. That aspiration, far from being threatened by AI, is made more urgent by it, and more urgent to articulate.9
Conclusion: Toward New Developmental Models
Taken together, these seven contributions sketch the outlines of what developmental models adequate to the agentic AI era will need to include. They will need McClelland’s insight that human and artificial cognition are structurally analogous at deep levels but diverge in the rapid, data-sparse, emotionally-grounded learning that characterizes human development. They will need Acemoglu’s warning that the collective knowledge commons that underpins all individual development must be protected as a public good, not simply optimized away by precise private recommendations. They will need Högberg’s evolutionary confidence that the human brain has always co-evolved with tools and will do so again, provided we are thoughtful architects of the environments we inhabit. They will need the 3R Principle’s neuroscientific insistence that activity-dependent plasticity is the biological substrate of cognitive health. They will need Grotzer’s practical pedagogical commitment to metacognition as the skill above all skills. They will need Gerlich’s empirical honesty about the risks of offloading, alongside his equally honest assessment that higher education is a protective factor. And they will need Westerbeek’s philosophical insistence on epistemic sovereignty — the right and capacity of every mind to author, not merely receive, its own understanding.
The great fear — that human brains will atrophy in a world of ever more capable AI — rests on a passive model of human cognition, one in which the brain is a vessel that simply receives whatever the environment delivers. The neuroscience and cognitive science assembled here paint a different picture: a brain that grows through challenge, that reorganizes in response to tools, that can and will find new domains of effort as old ones are automated, and that carries in its very biological design a drive toward active meaning-making that no technology, however sophisticated, can permanently suppress. The task is not to defend the human mind against AI. It is to design the AI-saturated world in a way that keeps the human mind gloriously, effortfully, irreducibly alive.
References
1Acemoglu, D., Kong, D., & Ozdaglar, A. (2026). AI, Human Cognition and Knowledge Collapse. NBER Working Paper 34910. https://www.nber.org/papers/w34910
2Acemoglu, D., Kong, D., & Ozdaglar, A. (2026). AI, Human Cognition and Knowledge Collapse [Full PDF]. MIT Economics. https://economics.mit.edu/sites/default/files/2026-02/AI,%20Human%20Cognition%20and%20Knowledge%20Collapse%2002-20-26.pdf
3Högberg, A. (2026). Becoming human in the age of AI: Cognitive co-evolutionary processes. Frontiers in Psychology, 16, 1734048. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1734048/full
4Di Plinio, S., et al. (2026). The brain side of human-AI interactions in the long-term: The “3R principle.” npj Artificial Intelligence. https://www.nature.com/articles/s44387-025-00063-1
5McClelland, J. L., & Suri, G. (2025). The Emergent Mind: How Intelligence Arises in People and Machines. [Book discussed in:] Stanford Report. https://news.stanford.edu/stories/2025/10/emergent-mind-book-intelligence-humans-machines-neural-networks
6McClelland, J. L. (2024). From brain to machine: The unexpected journey of neural networks. Stanford Report. https://news.stanford.edu/stories/2024/11/from-brain-to-machine-the-unexpected-journey-of-neural-networks
7Grotzer, T. (2025). Is AI dulling our minds? Harvard Gazette. https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
8Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
9Westerbeek, H. (2026). How AI is rewiring the human brain: The generational transformation of cognition and knowing. AI & Society. https://link.springer.com/article/10.1007/s00146-026-02912-2