Four Pivotal Reports on AI and Schooling: Brookings, RAND, UNESCO, UNICEF

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: Taken together, these four documents form a complementary quartet. RAND tells you what is already happening in U.S. schools at scale. UNESCO situates the pedagogical and governance challenges in global theoretical and comparative context. UNICEF grounds the analysis in children’s rights and provides actionable design standards for governments and the private sector. And Brookings supplies the developmental science framework and the precautionary logic that justifies urgent policy intervention. No single document suffices on its own; each fills gaps the others leave open.

Image created by Copilot

1. Brookings: A New Direction for Students in an AI World: Prosper, Prepare, Protect (2026)

The Burns et al. report1 is a significant and largely well-grounded contribution to a field still struggling to produce rigorous longitudinal evidence. Its central claim — that risks presently overshadow benefits because those risks are developmental in nature rather than merely instructional — is coherent, carefully argued, and draws on a genuinely wide evidence base: interviews and consultations with 505 participants across 50 countries, a review of hundreds of studies, and a Delphi panel. This breadth is a real strength. At the same time, the claim is not without important limitations that should temper how policymakers and school designers lean on it in the next few years.

The report’s most intellectually distinctive move is arguing that risks and benefits are not commensurable by simply counting them — they differ in kind. Benefits (improved essay quality, personalized feedback, teacher time-savings, expanded access) are largely additive: they enhance what a student or teacher can already do. Risks, by contrast, are argued to be subtractive at the foundation: undermining cognitive development, degrading trust between students and teachers, compromising social-emotional learning, and creating dependency that erodes autonomous agency. The European Parliament’s Research Service echoed this asymmetry in late 2025, noting that AI tools pose particular concerns for children whose cognitive capacities are still developing, and that significant worries center on students’ research, writing, and argumentation skills as AI increasingly automates these very capacities. Harvard’s Dr. Ying Xu, whose research has directly studied how children interact with AI, concurs that the key question is not whether AI can improve task performance — it can — but whether those gains persist when the AI is removed, a question for which definitive answers remain elusive. (europarl.europa.eu; gse.harvard.edu)

The report’s risk analysis is on its firmest empirical footing when addressing cognitive offloading and writing development. Research published in 2025 and cited widely in the literature finds that when students use generative AI as a direct answer-provider rather than a scaffolding tool, they produce better immediate outputs but do not retain those gains independently. The Brookings authors’ “flywheel effect” — where AI use reduces productive struggle, which reduces skill development, which increases AI dependence — is consistent with what multiple independent researchers have identified. A 2025 Frontiers in Education systematic review found that while AI can foster critical thinking when thoughtfully integrated, it also risks “restricting the range of ideas available” and impeding the analytical processes it purports to support. Concerns about cognitive development are also raised by a recent PMC-published viewpoint on adolescent health and generative AI, which finds that over-reliance may reduce critical thinking and inhibit cognitive development during a period when students are still forming the very faculties that allow them to evaluate information. (frontiersin.org; pmc.ncbi.nlm.nih.gov)

The report’s claim that risks currently overshadow benefits — as a balance of the field rather than as a description of worst-case implementation — is more contestable than its authors acknowledge. Their methodological framework, which they call a “premortem,” explicitly foregrounds potential harms rather than attempting a balanced measurement of real-world outcomes. This is a legitimate and valuable policy tool, but it can produce a lopsided picture when treated as an empirical finding rather than a heuristic. The evidence for AI’s benefits in appropriately designed settings is also substantial and growing. Khan Academy’s Khanmigo, now serving more than 700,000 K-12 students through over 380 U.S. district partnerships, was built specifically to be Socratic rather than answer-providing, and educators using it have reported meaningful gains in student engagement and language-learner support. Khan Academy’s own pre-generative-AI efficacy data showed that roughly 30 minutes of additional weekly practice produced measurable gains on standardized assessments, and the AI-enhanced version is designed to make that same practice more productive and adaptive. Meanwhile, a Harvard HGSE researcher found in 2025 that AI can “amplify learning opportunities” when it is designed to make existing media and learning time more interactive, as evidenced by studies showing improved scientific reasoning and engagement when AI enables interactive dialogues with educational content. (edweek.org; gse.harvard.edu)

A notable issue for readers evaluating this report’s applicability to U.S. school reconstruction is that its evidence base is international, involving 50 countries with vastly different infrastructure conditions, pedagogical cultures, and levels of teacher preparation. The risks it identifies most vividly — AI platforms designed for the general public with no educational guardrails, indiscriminate use without pedagogical framing — describe a real and common pattern, but they may not describe what well-resourced U.S. districts implementing purpose-built educational AI are actually doing. The report itself acknowledges this distinction by separating “AI-enriched” from “AI-diminished” learning experiences; its risk analysis is largely a description of the latter. The practical risk, then, is that readers interpret a premortem of worst-case trajectories as a verdict on AI-in-education as such, when the report’s own recommendations demonstrate the authors believe the former trajectory is achievable. It is also worth noting that the report was partly drafted using Anthropic’s Claude, OpenAI’s ChatGPT, and Microsoft’s Copilot — a disclosure that, while transparent, adds a layer of irony to a document arguing for caution about AI’s role in cognitive work, and that may invite scrutiny of how those tools shaped the synthesis. (brookings.edu)

Despite the above caveats, Burns et al. offers considerable value for the practical work of redesigning American schools in an AI-saturated environment. Its most useful contributions are not the headline finding about risks overshadowing benefits, but rather the granular analytic framework it provides for evaluating how AI is being implemented. The distinction between AI that strengthens the instructional core — the relationships between students, teachers, and content — and AI that bypasses or weakens those relationships is a practical, actionable concept for school leaders evaluating procurement decisions. Its recommendations around “titrated” AI use, child-friendly product design, AI literacy for educators, and evidence-based procurement criteria are well-grounded and appropriately specific. The Brookings authors’ own companion article, published the same month, reinforces that the window for responsible policy action remains open: as of December 2025, 31 U.S. states have published guidance or policies for AI in K-12 education, and the EU’s AI Act provides a regulatory model focused on risk-tiering that American policymakers can adapt. The report should be read, then, not as a call to pause AI adoption, but as a rigorous framework for ensuring that adoption is structured around developmental science, teacher-centered pedagogy, and child safety — conditions under which AI’s demonstrated benefits can be realized without the developmental costs that currently attend its unguided use. (brookings.edu; edtechmagazine.com)

2. RAND Corporation: AI Use in Schools Is Quickly Increasing but Guidance Lags Behind (2025)

The RAND report is perhaps the most directly useful complement to Burns et al. for U.S. school reconstruction because it provides what Brookings’ premortem framework conspicuously lacks: hard empirical data on what is actually happening in American classrooms right now. Drawing on surveys of more than 16,000 students, parents, teachers, principals, and district leaders collected during the 2024–2025 school year, RAND found that 54 percent of students and 53 percent of English language arts, math, and science teachers reported using AI for school — increases of more than 15 percentage points over survey results from the previous one to two years. These numbers confirm that the deployment trajectory Burns et al. warns about is not hypothetical; it is already well underway. Crucially, the RAND data also validate Burns et al.’s central concern about governance gaps: over 80 percent of students reported that teachers did not explicitly teach them how to use AI for schoolwork, and only 35 percent of district leaders said they provide students with AI training at all. The report’s lead author noted that without clear policies, “there’s a lot of gray area about what a student would use AI for,” and argued that schools need guidance that explains how to use AI to complement rather than supplant learning. The RAND study’s particular value for school redesigners lies in its granular breakdown by grade level and its nationally representative sampling — it allows district leaders to benchmark their own practices against a credible national picture, something the international sweep of Burns et al. cannot do for American audiences. It also surfaces a striking perception gap worth noting: 61 percent of parents, 48 percent of middle schoolers, and 55 percent of high schoolers agreed that greater use of AI will harm students’ critical-thinking skills — far higher shares than district leaders, of whom only 22 percent agreed. That gap between community concern and administrative complacency is itself a governance problem that school reconstruction efforts will need to address head-on. (rand.org)

3. UNESCO: AI and the Future of Education: Disruptions, Dilemmas, and Directions (September 2025)

Presented at UNESCO’s Digital Learning Week in Paris in September 2025, this 160-page global report is the most intellectually wide-ranging of the three companion documents to Burns et al., bringing together multiple scholarly perspectives to examine how AI challenges foundational assumptions about teaching, learning, and the human teacher’s role. Where Burns et al. organizes its analysis around a dichotomy of “enriched” versus “diminished” learning experiences, UNESCO’s report is structured around what it calls seven areas for action — from “defining AI futures in education” to “building governance frameworks” and “tackling inequality” — and it is notably more pluralistic in how it weighs benefits and risks. While some chapters point to unprecedented opportunities, others urge caution about the structural, pedagogical, and ethical risks that accompany rapid adoption. The report’s more critical contributors worry that heavily automated or adaptive systems may narrow learning pathways or reduce student agency, and that the rise of generative AI exposes deep weaknesses in traditional assessment. At the same time, other contributors describe emerging models — sometimes called “cyber-social learning” — in which AI acts as an enriching partner rather than a replacement for human instruction. On the governance side, UNESCO has since 2024 supported 58 countries in designing or improving digital and AI competency frameworks, curricula, and quality-assured training for educators and policymakers, giving the report a concrete implementation context that the Brookings document lacks. Its most important contribution for U.S. school redesigners is its argument that AI adoption in schools and universities is not inevitable but should be guided by deliberate choices, accompanied by a set of governance principles — covering pedagogy, teacher support, equity, and evidence standards — that could inform state-level policy construction. It is also significantly more alert than Burns et al. to the risk that rhetoric about AI’s transformative potential can serve commercial and political interests as much as educational ones, and it draws on a genuinely global evidence base that helps American readers see their own context in comparative relief. (edtechinnovationhub.com; unesco.org)

4. UNICEF Innocenti: Guidance on AI and Children, Version 3.0 (December 2025)

The fourth document — and the most recent of the three companions to Burns et al. — is distinctive in that it is the only one of the four to ground its analysis explicitly in a human rights framework — specifically, the UN Convention on the Rights of the Child — rather than in a purely educational or empirical lens. Released in December 2025, UNICEF’s Version 3.0 is the third iteration of guidance the organization has developed since 2020, updated specifically in response to rapid advances in generative AI, increased child adoption of AI systems, and significant changes in the AI governance landscape. It draws on a twelve-country study with children and caregivers, making its empirical grounding genuinely representative of how children — not just educators and policymakers — are experiencing AI. The document’s most urgent and distinctive contribution is its insistence that children interact with technology in developmentally unique ways: their mental models of trust, privacy, safety, and truth differ markedly from those of adults, making them more vulnerable to manipulation, misinformation, and emotional influence. This resonates strongly with the developmental asymmetry argument at the center of Burns et al., but grounds it more firmly in rights language: AI governance, UNICEF argues, must start from the principle that children are not simply small adults, and policies cannot treat them as such. The Version 3.0 guidance organizes its recommendations around three pillars (Protection, Provision, and Participation) and ten core requirements, including mandatory child-rights impact assessments before deploying AI in schools, safety-by-design requirements throughout a product’s lifecycle, and explicit obligations to support rather than displace children’s independent cognitive and social-emotional development. Researchers still know little about the long-term developmental, psychological, and educational impacts of growing up with AI companions, algorithmic feeds, personalized learning systems, and synthetic media, UNICEF warns — but it stresses that this uncertainty must not delay protection, because the risks already visible are significant and growing. For American school reconstruction, this document is particularly valuable in two respects: its procurement checklist gives district leaders a concrete, rights-aligned tool for evaluating EdTech vendors, and its emphasis on including children themselves in AI governance offers a participatory design principle largely absent from U.S. state-level guidance to date. (unicef.org; tanyagoodin)

Closing Thoughts

The four documents together paint a picture that is simultaneously sobering and hopeful. The risks are real and the governance gaps are wide, but the research community and policy world are clearly mobilizing with unusual speed relative to past technology transitions in education. Whether that mobilization happens fast enough to shape implementation before habits and commercial interests calcify is, perhaps, the central open question for the next two or three years.

__________
1Mary Burns et al., A New Direction for Students in an AI World: Prosper, Prepare, Protect, Brookings, Jan 2026. [PDF]

[End]
