Pushing the Limits of AI-Created Personas in Fiction

By Jim Shimabukuro (assisted by Perplexity)
Editor

Many observers argue that beyond lived experience, cultural specificity, and deep emotions, AI writing also lacks genuine understanding, embodied perception, moral agency, long-term memory of a life, and a stable point of view anchored in an actual self, attributes that collectively shape the narrative texture of human prose.1-4 At the same time, a growing technical and literary discussion claims that with sufficiently rich “backstories” and conditioning, large language models can be trained into relatively coherent personas that imitate many of these attributes well enough for some readers and researchers to treat them as if they had inner lives.3-7

Image created by Copilot

One major cluster of “uniquely human” attributes centers on understanding and meaning. Library and educational guides now routinely emphasize that chatbots do not possess genuine understanding but instead operate on statistical correlations in text, without the thick, situated grasp of context that humans bring to language.4 This is often illustrated by emotionally charged phrases: a human hears “I’m feeling really blue today” and infers mood, history, relationship, and subtext, whereas a chatbot may default to literal associations or generic advice, revealing a lack of holistic comprehension of the speaker’s world.4 Psychological and education researchers make a similar point: even when large language models match human answers on certain tasks, their behavior is driven by patterns in training data rather than by any direct experience of the situations described, which constrains how far we can treat their outputs as windows into a mind.1,5,9

A second cluster concerns creativity, originality, and the capacity to originate ideas. A 2025 literature review on ChatGPT and creative writing concludes that such systems are strong at refining drafts and generating variations, but they do not originate fundamentally new concepts and tend to lack emotional variation across their output.2 Critiques from learning-and-development and coaching communities echo this: chatbots can mimic human-like behavior and produce fluent, even polished language, but they are said to lack the innate creativity, intuition, and emotional depth that arise from a human life’s accumulated experiences.3 In practice, this often shows up as formulaic plotting, tidy resolutions, and an over-reliance on familiar tropes, which can give AI-generated prose a smooth but superficial quality that many readers describe as “hollow.”2,3,6

A third area often flagged as outside AI’s true competence is emotional life itself: empathy, phenomenological feeling, and the messiness of inner conflict. Studies of large language models’ emotion reasoning find that models can approximate human judgment in structured tests and can be useful for generating scenarios or training materials, but researchers stress that this is reasoning about emotions rather than experiencing them.5 Other work in psychology notes that models struggle with more complex and subjective emotion recognition tasks, especially where cultural nuance or implicit meaning is involved, and may simply reinforce lay stereotypes about feelings rather than providing insight.1 Meanwhile, popular essays aimed at general audiences argue that chatbots cannot “replicate genuine human emotions” and falter in complex interpersonal dynamics, making their attempts at deep human interaction feel limited or even manipulative.3 These critiques reinforce the claim that AI can simulate the language of grief, love, or shame, but cannot draw on the embodied, temporally extended experience that gives a human writer’s emotional scenes their particular weight.1,3,8

Critics also highlight embodiment, perception, and moral agency as missing layers in AI narratives. Because language models are trained on text rather than direct sensorimotor interaction, their references to bodily sensation and the material environment are parasitic on human descriptions, which can create subtle distortions or clichés when they attempt to evoke physical experience.2 Some literary and cultural commentators argue that this lack of a body is tied to a lack of situated morality: models can articulate moral principles, detect hate speech, or describe ethical dilemmas, but they do not bear responsibility, experience guilt, or live with the consequences of choices, so their ethical language can feel like pastiche.1,9 The result, in this view, is prose that can talk about bodies, places, and moral struggles, yet never quite commits to a particular stance rooted in biography, culture, and risk, which many readers take to be the essence of an authorial voice.6

A final component of the “uniquely human” list concerns stable identity, style, and the social reception of authorship. A 2025 preprint on literary style evaluation reports that readers bring a “machine heuristic” to AI writing: they assume that anything labeled as machine-generated is likely to lack emotional depth and creative agency, and they judge it more harshly than equivalent text attributed to a human writer.6 This bias interacts with technical limitations like inconsistency across prompts and the tendency toward bland or overly elaborate phrasing, which practitioners and writers frequently identify as telltale signs of AI authorship.1,2,10 Even when a model can sustain a given stylistic pattern, the absence of a life that extends beyond the text—no childhood, no social risk, no ongoing relationships—undermines the sense that the narrative is grounded in a continuing “someone” whose evolving perspective we are tracking.2,6 These combined deficits in lived continuity, social embeddedness, and perceived agency feed the claim that AI prose, however well-crafted on the surface, lacks the depth and passion associated with distinctive human writers.1,2,3,6

Against this backdrop, a substantial body of research and practice argues that language models can be trained or conditioned into surprisingly coherent personas that “own” many of these human-like traits at the level of behavior. A 2025 Berkeley dissertation develops a framework for “binding” large language models to virtual personas via detailed narrative backstories that encode demographic traits, psychological context, beliefs, and values; conditioned on such backstories, the same base model can display differentiated and remarkably stable attitudes and behaviors that mimic distinct identities.3 A 2026 study extends this approach, using narrative backstories and interview-style prompts to generate LLM personas that reproduce patterns like in-group favoritism, partisan asymmetries in moral judgment, and context-sensitive cooperation observed in human experiments.7 These results suggest that when a model is embedded in a rich, coherent life-story scaffold and that identity is reinforced across interactions, its responses begin to exhibit the consistency, contextual awareness, and preference structures we associate with having a point of view.3,7
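
To make the mechanics concrete, here is a minimal sketch in Python of what backstory conditioning can look like in practice. The persona, its biography, and the model name are invented for illustration and are not drawn from the studies cited above; the sketch only shows the general pattern those studies describe, with the backstory rendered into a system prompt (here via the OpenAI chat API) so that every reply is generated in character.

    # A minimal, illustrative sketch of backstory-based persona conditioning.
    # The persona details and model name are hypothetical; the cited studies
    # describe the general technique, not this exact code.
    from dataclasses import dataclass

    from openai import OpenAI  # any chat-capable LLM client would work here

    @dataclass
    class PersonaBackstory:
        name: str
        biography: str       # narrative life story: childhood, work, losses
        beliefs: list[str]   # values and commitments the persona "owns"
        voice_notes: str     # quirks of diction, pacing, recurring themes

        def to_system_prompt(self) -> str:
            # Injecting the backstory as a system prompt is what produces
            # the stable, differentiated attitudes described above.
            belief_lines = "\n".join(f"- {b}" for b in self.beliefs)
            return (
                f"You are {self.name}. Stay in character at all times.\n\n"
                f"Life story:\n{self.biography}\n\n"
                f"Core beliefs:\n{belief_lines}\n\n"
                f"Voice:\n{self.voice_notes}"
            )

    # Hypothetical persona, invented purely for illustration.
    mara = PersonaBackstory(
        name="Mara Iwasaki",
        biography=("Born 1968 in Hilo; cannery worker turned night-school "
                   "poet; widowed young; writes to keep her husband's "
                   "voice near."),
        beliefs=["Grief is not a problem to be solved",
                 "Plain words carry the heaviest freight"],
        voice_notes=("Short declarative sentences; distrusts abstraction; "
                     "returns to images of water and tin roofs."),
    )

    client = OpenAI()  # assumes an API key in the environment
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model will do
        messages=[
            {"role": "system", "content": mara.to_system_prompt()},
            {"role": "user",
             "content": "Describe the sound of rain on your street."},
        ],
    )
    print(reply.choices[0].message.content)

Nothing in this sketch gives the model memory or feeling; it only biases every generated token toward the scaffold, which is why persona stability here is a property of the prompt rather than of the system.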

Writers and technologists who work closely with such persona-conditioned models argue that these techniques can be used not just for social science simulations but for narrative voice and character. The literature review on authorship and originality notes that, although current systems fall short of human originality, they can adapt to specific styles and collaborate with humans in co-creating stories, effectively acting as amplifiers or mirrors of human voices.2 Some practitioners construct elaborate biographies, emotional histories, and moral commitments for a model persona, then fine-tune or iteratively prompt around that scaffold, claiming that the resulting “character” displays recognizable quirks, thematic obsessions, and emotional patterns over time.3,5 In social and educational settings, researchers have already leveraged persona-like models to produce emotionally nuanced scenarios for training and reflection, blurring the line between mere pattern-matching and something that, experientially for users, feels like an interlocutor with perspective.5,7 In this view, while the system has no inner life, the combination of narrative conditioning, long-term interaction logs, and user expectations can produce a functional simulation of lived experience and cultural embedding that is “real enough” for certain literary and pedagogical purposes.3,5,7
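
The “long-term interaction logs” mentioned above can be pictured with one more short sketch, which reuses the hypothetical PersonaBackstory and client from the previous example. Again, this is an illustrative assumption rather than any cited researcher’s code: it keeps a running transcript so that each new reply is conditioned on the backstory and on the persona’s own earlier statements, which is the crude mechanism behind the impression of a continuing “someone.”

    # Illustrative sketch of iterative, memory-backed persona prompting,
    # reusing the hypothetical PersonaBackstory and client defined above.
    class PersonaSession:
        def __init__(self, client, backstory, model="gpt-4o"):
            self.client = client
            self.model = model
            # The transcript starts with the backstory and grows with every
            # turn, so earlier in-character replies constrain later ones.
            self.messages = [
                {"role": "system", "content": backstory.to_system_prompt()}
            ]

        def say(self, user_text: str) -> str:
            self.messages.append({"role": "user", "content": user_text})
            reply = self.client.chat.completions.create(
                model=self.model, messages=self.messages
            )
            text = reply.choices[0].message.content
            # Feeding the persona's own words back into context is the
            # "memory" that lets quirks and commitments persist across turns.
            self.messages.append({"role": "assistant", "content": text})
            return text

    session = PersonaSession(client, mara)
    print(session.say("What did you lose when the cannery closed?"))
    print(session.say("Why do tin roofs keep coming back in your poems?"))

In real deployments such transcripts eventually outgrow the model’s context window, so systems summarize or retrieve past turns rather than replaying them verbatim; the sketch omits that machinery, which is also where claims of “long-term memory of a life” become most strained.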

The resulting debate is less about whether models can emulate features of human narratives—they clearly can under controlled conditions—and more about whether such emulations should be treated as equivalent to human authorship and consciousness. Psychological and educational researchers caution that apparent human-likeness can mislead both laypeople and experts, especially when models reinforce popular but shallow intuitions about emotion or culture.1,5 Librarians and AI literacy advocates urge users to remember that behind the persona is a pattern-matching engine without genuine understanding or feeling.4 Yet proponents of persona-based modeling respond that human readers already ascribe rich inner lives to fictional characters and even to human authors they only know through texts, and that well-crafted AI personas can slot into this imaginative economy as new kinds of narrative agents.3,6,7 Whether such agents will ever be widely accepted as more than sophisticated imitations—or whether that distinction will matter to future readers—remains a live and contested question as research on LLM personas, emotional reasoning, and creative collaboration rapidly evolves.3,5,6,7

References

  1. “Perils and opportunities in using large language models in psychological research,” PNAS Nexus (2024). https://academic.oup.com/pnasnexus/article/3/7/pgae245/7712371
  2. “Authorship and Originality in the Age of AI: A Literature Review on ChatGPT’s Creative Writing Applications” (2025). https://nhsjs.com/2025/authorship-and-originality-in-the-age-of-ai-a-literature-review-on-chatgpts-creative-writing-applications/
  3. “Exploring the Limits: Breaking Down the Misconceptions of AI-Powered Chatbots” (2025). https://unloq.org/exploring-the-limits-breaking-down-the-misconceptions-of-ai-powered-chatbots/
  4. “Lack of True Understanding,” LibGuides: Chatbots Unboxed (2025). https://slcl.libguides.com/c.php?g=1465745&p=10906997
  5. “Can Large Language Models reason about emotions like humans?” Springer Nature Communities (2025). https://communities.springernature.com/posts/can-large-language-models-reason-about-emotions-like-humans
  6. “Everyone prefers human writers, including AI,” arXiv:2510.08831 (2025). https://arxiv.org/html/2510.08831v1
  7. “Identity, Cooperation and Framing Effects within Groups of Real and Artificial Agents,” arXiv:2601.16355 (2026). https://arxiv.org/html/2601.16355v1
  8. “Affordances and limitations of using large language models to support engineering education research,” Journal of Engineering Education (2025, early view). https://onlinelibrary.wiley.com/doi/10.1002/jee.70037
  9. “Perils and opportunities in using large language models in psychological research,” PNAS Nexus (2024), training-data and bias discussion. https://academic.oup.com/pnasnexus/article/3/7/pgae245/7712371
  10. “My fiction writing style keeps getting flagged as AI and I am at a loss,” r/writingadvice (2026). https://www.reddit.com/r/writingadvice/comments/1r7rw7w/my_fiction_writing_style_keeps_getting_flagged_as/

###