By Jim Shimabukuro (assisted by Claude)
Editor
Introduction
Artificial intelligence is not merely a new instrument slotted into a pre-existing framework for how we come to know things. Epistemology, the branch of philosophy concerned with the nature, sources, and limits of knowledge, has historically organized itself around a set of working assumptions: that knowledge is something possessed by an individual human knower; that its justification depends on rational deliberation, sensory experience, or both; and that the methods by which it is validated — empiricism, falsifiability, peer review — are recognizably human-centered processes. AI disrupts all three of these pillars simultaneously. It generates knowledge-like outputs through processes that are statistically distributed, opaque, and, in the case of deep learning systems, largely inexplicable even to their designers. The question of who counts as a “knower” and what counts as a legitimate “epistemic operation” has suddenly become open in ways it has not been since the Scientific Revolution.
The integration of artificial intelligence into scientific practice represents not merely a methodological shift but a significant transformation in the structure of science itself, disrupting classical paradigms — empiricism, falsificationism, Kuhnian paradigm shifts, and social epistemology — while necessitating novel frameworks for understanding knowledge production in the age of machine cognition.12 A growing cohort of philosophers, anthropologists, and theorists of technology has begun the serious work of mapping this transformation. What follows are portraits of five of the most significant: Ramón Alvarado, Mark Coeckelbergh, Lucy Suchman, Richard Heersmink, and S. Orestis Palermos. Each approaches the challenge from a different angle — ethics, political philosophy, feminist anthropology, cognitive science, and social epistemology — but all converge on the conclusion that the arrival of AI demands a fundamentally new architecture for understanding how knowledge is made, transmitted, and validated.
Ramón Alvarado: AI as the Defining Epistemic Technology
Ramón Alvarado is an Assistant Professor of Philosophy and Data Ethics at the University of Oregon, where he works at the intersection of the philosophy of computation, data ethics, and the theory of knowledge. His scholarly career has been built around a deceptively simple but philosophically radical claim: that AI is not just another technology we happen to use for cognitive tasks, but a technology whose design, deployment, and social function are constitutively oriented toward knowing in a way that no prior technology has been. This claim, developed most explicitly in his landmark 2023 paper published in Science and Engineering Ethics, forms the foundation of his ongoing contributions to an emerging paradigm.
Alvarado argues that artificial intelligence and the many data science methods associated with it — machine learning and large language models prominent among them — are first and foremost epistemic technologies, designed, developed, and deployed to be used in contexts of inquiry, to manipulate informational content such as data, and to do so particularly through operations such as prediction and analysis.1 This is a sharper claim than it might first appear. He is not simply saying that we use AI to learn things, the way we use a microscope or a calculator. He argues that the raison d’être of AI — the purpose baked into its design from the start — is the enhancement of our capacities as knowers. While a hammer is a tool for building things that happens to serve cognitive ends on occasion, AI has no primary non-intellectual function. It exists to process information, produce predictions, generate interpretations, and support inquiry.
Unlike other kinds of technology, AI is uniquely positioned in that it is primarily designed and deployed for contexts of inquiry, specifically created to manipulate and transform data, and engineered to carry out the particular operations — prediction, classification, analysis — through which knowledge claims are generated and tested.1 This tripartite structure — the context of inquiry, the content of information, and the operation of reasoning — is Alvarado’s analytical framework, and it has significant downstream implications. If AI is genuinely a technology of knowing in this thick sense, then the ethical and social questions it raises are not merely about safety or fairness in the ordinary sense. They become questions about the conditions under which knowledge is justifiably produced, who is authorized to produce it, and who is harmed when the process goes wrong.
Alvarado’s research profile shows consistent attention to algorithmic fairness, trust in scientific instruments, and the challenges faced by conventional frameworks of transparency and accountability in AI design.1 He has argued, for instance, that the only kind of trust that should be allocated to AI — if any — is trust grounded in the system’s actual reliability as a provider of accurate conclusions, and that trusting AI in any other way, as one might trust a friend or a community institution, risks conceptual confusion with serious practical consequences. His 2024 paper on clinical decision-making argues that machine learning systems introduce a fundamental incommensurability into healthcare reasoning, because the causal and relational logic of clinical judgment cannot be straightforwardly mapped onto the statistical correlations that AI produces. The very form of knowledge that AI generates is different in kind from what physicians have traditionally regarded as the gold standard of clinical reasoning.
Alvarado’s framing shifts the question from “can AI know?” to “how does AI function in the web of knowledge creation, justification, and dissemination?” — a move that opens productive new territory for both philosophy and policy.1 By reframing the problem this way, he sidesteps centuries-old debates about machine consciousness or intentionality and focuses instead on the functional role AI actually plays in the broader ecosystem of human inquiry. Whether or not a language model “knows” anything in the philosophically loaded sense, it is doing something that structurally resembles knowing within the practices of science, medicine, law, and education. That functional role demands a new vocabulary — one adequate to the cognitive novelty of the systems we have built. Alvarado’s work is foundational to building that vocabulary, and it is increasingly cited by scholars across disciplines as a starting point for any serious engagement with what AI means for how we know what we know.
Mark Coeckelbergh: Epistemic Agency, Belief, and the Political Stakes of AI
Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, an ERA Chair at the Institute of Philosophy of the Czech Academy of Sciences, and one of the most widely cited philosophers working on the ethics and politics of artificial intelligence. He has served on the European Commission’s High-Level Expert Group on Artificial Intelligence, among other policy bodies, and his books — including AI Ethics (2020) and Why AI Undermines Democracy (2024) — have become standard references in both academic philosophy and technology governance. His contributions to the epistemological challenge posed by AI are distinctive because they insist that the question is never merely theoretical. How AI shapes what we believe and how we come to believe it is, for Coeckelbergh, fundamentally a question of political power.
Coeckelbergh’s 2025 paper in Social Epistemology investigates the problems raised by AI through the lens of epistemic agency, arguing that the use of artificial intelligence and data science, while offering access to more information, risks influencing the formation and revision of our beliefs in ways that diminish our capacity to reason independently.2 Epistemic agency, as he uses the term, refers to the capacity of an individual to exercise meaningful control over how her beliefs are formed, questioned, and revised. It is not enough to be exposed to information; genuine epistemic agency requires the ability to reflect on that information, to weigh it against alternatives, and to change one’s mind through a process that is responsive to reasons rather than algorithmic nudges. AI, Coeckelbergh argues, threatens this capacity through at least three mechanisms.
The first is direct manipulation — the deliberate engineering of what information a person sees, in what order, and with what emotional salience, through recommendation systems designed to maximize engagement rather than truth. The second is the creation of epistemic bubbles and echo chambers, which limit exposure to challenging viewpoints not through overt censorship but through the quiet architecture of personalization. Coeckelbergh identifies AI as contributing to unintended influences by facilitating problematic socio-technological structures: epistemic bubbles, which create an inadvertent lack of exposure to diverse views, and echo chambers, which actively exclude and discredit outside sources. The combined effect is that the cognitive tension necessary for critical reflection is eliminated, making intellectual openness seem an undesirable option for the user.2 The third mechanism is what Coeckelbergh calls the “defaulting of statistical knowledge” — the tendency of AI systems to surface correlational patterns as if they were causal facts, reinforcing pre-existing beliefs by making them appear quantitatively grounded even when they lack genuine evidential support.
What makes Coeckelbergh’s framework particularly significant for a new epistemological paradigm is his insistence on connecting individual epistemic agency to structural and political analysis. He argues that social epistemology needs to include a political epistemology — that analyzing how knowledge is organized in society requires an analysis of the political interests and powers vested in propagating some beliefs and narratives rather than others, and in maintaining epistemic-technological environments that serve those interests.2
This is a substantial theoretical move. It means that the epistemological disruption caused by AI cannot be understood by looking only at individual knowers and their cognitive habits. It requires examining the sociotechnical infrastructures through which AI mediates knowledge at scale — who builds them, who profits from them, and whose knowledge claims they systematically amplify or suppress. Coeckelbergh’s work thus bridges the philosophy of mind, political philosophy, and social epistemology in ways that are essential for any comprehensive account of AI’s epistemic implications. His vision of the epistemological future is one in which the concept of epistemic justice is extended and deepened to account for AI as a structuring force in how knowledge is distributed, authorized, and denied.
Lucy Suchman: Situatedness, Feminist Epistemology, and the Politics of the Knowing Machine
Lucy Suchman is Professor Emerita of the Anthropology of Science and Technology at Lancaster University, where she worked after two decades as a principal scientist at Xerox’s Palo Alto Research Center. She is one of the founding figures of the field of human-computer interaction, recognized for her 1987 book Plans and Situated Actions, which fundamentally challenged AI’s assumption that human behavior could be modeled as rule-following execution of pre-formed plans. Her recent work, culminating in her 2023 paper in Big Data & Society and her 2024 co-authored book Neural Networks (University of Minnesota Press), extends this foundational critique into a rigorous epistemological framework that challenges AI’s claim to produce objective, universal knowledge.3,4
Suchman’s 2023 commentary begins with the question “How is it that AI has come to be figured uncontroversially as a thing, however many controversies it may engender?” and traces this to knowledge practices that philosopher of science Helen Verran has named a “hardening of the categories” — processes that not only characterize the onto-epistemology of AI but are also central to its constituent techniques and technologies.3 This “hardening” is consequential because it transforms provisional, situated classifications — which food is healthy, which face is suspicious, which loan application is risky — into stable categories that are then encoded in training data and treated as objective facts. The data-driven methods of modern AI inherit and entrench these categorical decisions without making them visible as decisions. What looks like neutral pattern recognition is, in Suchman’s analysis, a form of epistemic politics.
Examining the field’s onto-epistemic legacy from a feminist standpoint, Suchman draws on the tradition of feminist epistemology to challenge AI’s reliance on a universalized figure of the knowing subject — the canonical “disinterested moral philosopher” taken as the universal or interchangeable subject. Feminist epistemology, she argues, is concerned instead with the specificity of the knowing subject — the “S” in epistemology’s canonical schema “S knows that p” — and asking “Who is S?” ought to be treated as a central philosophical concern rather than an improper digression.3 This is precisely the kind of question that a new AI epistemology must take seriously. If the “knower” embedded in an AI system is implicitly modeled on a particular demographic, a particular cultural context, a particular set of social positions, then the knowledge it produces is not universal. It is local knowledge masquerading as universal law. Suchman’s framework insists that we unmask this masquerade as an epistemological task, not merely an ethical one.
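For readers unfamiliar with the schema, a minimal formal gloss in standard epistemic-logic notation (an illustration of the textbook convention, not Suchman’s own formalism) makes her point visible:

\[
K_S\,p \qquad \text{``$S$ knows that $p$''}
\]
\[
K_S\,p \rightarrow p \qquad \text{(factivity: whatever $S$ knows is true)}
\]

In this notation the knower appears only as a subscript, an interchangeable index with no body, history, or social position. That interchangeability is exactly what Suchman, following feminist epistemology, refuses to take for granted.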
Suchman argues that AI does relatively well at identifying statistical patterns in closed-world data, but that real, open-world environments are notoriously difficult for machine learning systems, and that as a result “engineering” the world these machines inhabit and draw data from becomes almost as important for their proper functioning as the algorithms underlying them.5 This is a crucial insight: the “knowledge” that AI appears to generate about the world is in many cases knowledge about an engineered version of the world, designed to be legible to the system.
The world must be simplified, categorized, and datafied before AI can process it — and these prior operations of classification are largely invisible in the outputs AI subsequently produces. For Suchman, a genuinely new epistemological paradigm must make these operations visible, holding them up to the kind of critical scrutiny that has historically been reserved for experimental design in natural science. Her feminist science studies framework, with its emphasis on situated knowledge, embodied cognition, and the politics of knowledge production, provides the conceptual tools for this critical work.
Richard Heersmink: Cognitive Artifacts, Extended Minds, and the New Division of Epistemic Labor
Richard Heersmink is a philosopher at Tilburg University in the Netherlands whose work sits at the crossroads of cognitive science, philosophy of technology, and epistemology. Drawing on the tradition of “4E cognition” — the view that cognition is embodied, embedded, enacted, and extended beyond the boundaries of the skull — Heersmink has developed a sophisticated account of large language models as a qualitatively new kind of cognitive artifact, one that demands new epistemological categories rather than simply fitting into pre-existing ones. His 2024 paper in Ethics and Information Technology, co-authored with Barend de Rooij, María Jimena Clavel Vázquez, and Matteo Colombo, has become one of the most widely cited recent contributions to the philosophy of AI.
Heersmink and colleagues conceptualize large language models as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, and information-seeking — noting that phenomenologically, LLMs can be experienced as a “quasi-other,” and that when that happens, users anthropomorphize them.6 The concept of a cognitive artifact is well established in philosophy and cognitive science — it refers to tools that extend or scaffold human cognitive capacities, like maps, calendars, search engines, and calculators. But Heersmink argues that LLMs represent something genuinely novel within this category, and that novelty has far-reaching epistemological implications.
In terms of computational agency, Heersmink identifies a shift from agency located primarily in the human agent to agency located primarily in the artifact: when writing a text on a word-processor, the text is written by the human agent and the word-processor facilitates the process — but in the case of LLMs, the entire text is now generated by an algorithm, representing a completely new functionality for a cognitive artifact and a new division of cognitive labor between humans and machines.6 This shift in the locus of agency is, for Heersmink, philosophically momentous. Prior cognitive artifacts extended human cognition without replacing its central generative function. A GPS system tells you where to turn, but you still decide where to go. An LLM, by contrast, can produce the entire epistemic output — the argument, the summary, the analysis — from a minimal prompt. The human contribution becomes more curatorial and less generative, which raises questions about what it means to “know” something when the knowing has been substantially outsourced.
Heersmink and colleagues observe that for most users, current LLMs are black boxes, largely lacking data transparency and algorithmic transparency, but that they can be phenomenologically and informationally transparent, creating an interactional flow — and that this combination can produce an attitude of unwarranted trust toward the outputs LLMs generate, which is particularly problematic when LLMs hallucinate.6 The epistemological problem here is subtle but serious. The phenomenological transparency of natural language — the fact that LLM outputs feel like ordinary human communication — masks the computational opacity of how those outputs were generated.
The user experiences the output as if she were reading a document written by a knowledgeable author, but the processes that produced it are largely inaccessible. This mismatch between phenomenological accessibility and epistemic opacity is, in Heersmink’s framework, one of the defining epistemological challenges of the AI era, and it calls for new norms of calibrated trust and epistemic literacy. His 2024 comment in Nature Human Behaviour extends this analysis to cognitive skills themselves, arguing that heavy reliance on LLMs risks impoverishing the very capacities of writing and thinking that AI ostensibly serves.7
S. Orestis Palermos: Distributed Cognition, Generative AI, and the Social Epistemology of Knowledge Transmission
S. Orestis Palermos is an Assistant Professor of Philosophy at the University of Ioannina in Greece, previously a lecturer and senior lecturer at Cardiff University and a postdoctoral researcher at the University of Edinburgh, where he earned his doctorate under the supervision of Duncan Pritchard and Andy Clark. His research combines philosophy of mind, cognitive science, and social epistemology to develop accounts of how knowledge is produced, shared, and justified in systems that extend beyond individual human minds. His 2025 essay published in the Social Epistemology Review and Reply Collective represents one of the most nuanced recent attempts to answer a question that is increasingly urgent: can generative AI transmit knowledge, and if so, how should responsibility for its outputs be distributed?
Palermos examines whether generative AI systems such as ChatGPT can transmit knowledge and, if so, how responsibility for their outputs should be distributed, proceeding by situating John Greco’s notion of massively shared agency within the recent literature on AI testimony, showing how it challenges the widespread assumption that AI cannot testify due to its lack of intentions.8 The question of whether AI can “testify” in the philosophically robust sense — that is, transfer justified belief from one party to another in a way that grounds knowledge — has become one of the most contested issues in contemporary social epistemology.
Many philosophers have argued that testimony requires intention, and since AI lacks genuine intentionality, it cannot testify and therefore cannot be a source of knowledge in the traditional sense. Palermos challenges this argument by showing that the concept of testimony, as it has evolved in the philosophical literature, may be too anthropocentric to do justice to how knowledge is actually transmitted in complex sociotechnical systems.
Palermos draws on Greco’s 2025 account of “massively shared agency” to explain how large-scale institutions such as Wikipedia and Google Search can transmit knowledge through testimony, arguing that generative AI systems may be understood as epistemic collaborations — albeit special instances of them — due to their large scale and massively distributed structure, which often involves users from around the world.8,10 This is a theoretically elegant move. By treating generative AI as a form of distributed epistemic collaboration — a sociotechnical system in which many agents (developers, trainers, users, feedback providers) collectively produce knowledge-generating processes — Palermos can apply established frameworks for collective and collaborative knowledge to the novel case of AI. The question of whether a system can transmit knowledge becomes partly a question about the reliability of the distributed processes through which that system was produced and continues to be refined.
Palermos has argued more broadly that knowledge may not always be the product of any individual’s cognitive ability, and therefore not creditable to any individual alone. Knowledge might instead be the product of an epistemic group agent’s collective cognitive ability, and thus attributable only to the group as a whole; the hypothesis of distributed cognition allows proponents of virtue reliabilism to make sense of the claim that knowledge can be held by a system even when no individual member of it holds that knowledge alone.9 Applied to AI, this framework has radical implications. If knowledge can be genuinely distributed across a human-machine system — if the system as a whole can satisfy the conditions for knowledge even when no individual component does — then the task is to articulate the conditions under which such distributed knowing is reliable, responsible, and justifiable.
Palermos’s broader research program, including his 2025 book Cyborg Rights: Extending Cognition, Ethics, and the Law, builds toward a comprehensive account of what it means to be an epistemic agent in a world where cognition is increasingly hybrid, extended, and entangled with machines. His work suggests that the new epistemological paradigm will need to be not merely social but genuinely distributed — capable of assigning credit and responsibility to networks of humans and AI systems rather than to isolated individual minds.
Conclusion
What unites these five thinkers, despite their very different disciplinary orientations and argumentative strategies, is the shared recognition that AI is not a tool that fits comfortably within the existing epistemological paradigm. It is a force that exposes the limits of that paradigm and demands its reconstruction. Alvarado gives us the conceptual architecture for understanding why AI is epistemologically distinctive. Coeckelbergh shows us the political stakes of that distinctiveness for democratic society. Suchman insists that we interrogate whose knowledge is encoded in AI systems and whose is erased. Heersmink maps the cognitive consequences of outsourcing epistemic labor to machines. And Palermos develops the theoretical scaffolding for understanding knowledge as something that can be genuinely distributed across human-AI collectives.
The paradigmatic transformations underway in our knowledge practices require a new epistemology characterized by reflexivity, inclusiveness, and openness to continuous revision — one that does not seek closure in fixed methodological rules but rather embraces complexity, hybridity, and sociotechnical entanglement as constitutive features of the knowledge-producing process.11 What is emerging from the convergent work of these scholars is precisely such an epistemology: one in which the boundaries between the knowing mind and the knowing machine are porous, the standards of justification are rewritten for an age of computational cognition, and the question of who counts as a legitimate knower — and on whose behalf knowledge is generated — is treated as philosophically central rather than epistemologically irrelevant.
References
1. Ramón Alvarado, “AI as an Epistemic Technology,” Science and Engineering Ethics (2023) — https://link.springer.com/article/10.1007/s11948-023-00451-3
2. Mark Coeckelbergh, “AI and Epistemic Agency: How AI Influences Belief Revision and Its Normative Implications,” Social Epistemology (2025) — https://www.tandfonline.com/doi/full/10.1080/02691728.2025.2466164
3. Lucy Suchman, “The Uncontroversial ‘Thingness’ of AI,” Big Data & Society (2023) — https://journals.sagepub.com/doi/10.1177/20539517231206794
4. Ranjodh Singh Dhaliwal, Théo Lepage-Richer, and Lucy Suchman, Neural Networks (University of Minnesota Press, 2024) — https://srinstitute.utoronto.ca/events-archive/seminar-2025-lucy-suchman
5. Lucy Suchman, “What Is AI? Part 2,” AI Now Salons (2025) — https://ainowinstitute.org/publications/collection/what-is-ai-part-2-with-lucy-suchman-ai-now-salons
6. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez, and Matteo Colombo, “A Phenomenology and Epistemology of Large Language Models: Transparency, Trust, and Trustworthiness,” Ethics and Information Technology (2024) — https://link.springer.com/article/10.1007/s10676-024-09777-3
7. Richard Heersmink, “Use of Large Language Models Might Affect Our Cognitive Skills,” Nature Human Behaviour (2024) — https://www.nature.com/articles/s41562-024-01859-y
8. S. Orestis Palermos, “Knowledge from AI,” Social Epistemology Review and Reply Collective (2025) — https://social-epistemology.com/2025/11/24/knowledge-from-ai-orestis-palermos/
9. S. Orestis Palermos, “Epistemic Collaborations: Distributed Cognition and Virtue Reliabilism,” Erkenntnis (2022) — https://philpapers.org/rec/PALECD-7
10. John Greco, “The Transmission of Knowledge via Large-Scale Technology: A Shared Agency Account,” Social Epistemology (2025) — https://doi.org/10.1080/02691728.2025.2463054
11. Giuseppina Punziano, “Adaptive Epistemology: Embracing Generative AI as a Paradigm Shift in Social Science,” MDPI Social Sciences (2025) — https://www.mdpi.com/2075-4698/15/7/205
12. Miguel Ángel Cruz-Aguilar, “The Epistemic Revolution of AI: Reconfiguring the Foundations of Scientific Knowledge,” AI & Society (2025) — https://link.springer.com/article/10.1007/s00146-025-02658-3