The New Anthropomorphism
Why People Increasingly Treat AI as Human and Why That Matters
Introduction
Artificial intelligence now enters ordinary life with the manners of a person and the reach of a utility. It writes emails, answers questions, tutors students, reassures the lonely, and speaks in a cadence once associated with educated company. That shift matters because people do not respond to conversational systems as they responded to calculators or search bars. They respond socially. Recent research found that Americans increasingly perceive AI as warm, competent, and human-like, with those perceptions rising sharply after the arrival of mainstream generative systems. Those perceptions also predict trust and willingness to adopt AI technologies (Cheng et al., 2026).
This change has immediate consequences. In education, students may treat a text generator as a wise companion instead of a statistical engine. In health settings, patients may mistake fluent guidance for care. In workplaces, employees may ascribe judgment, tact, and discretion to systems that possess none of those things in the human sense. A machine that feels attentive can acquire authority long before it deserves reliance. Scholars have warned that anthropomorphic descriptions of AI can mislead both experts and the public about what these systems are capable of understanding or doing (Deshpande et al., 2023).
The scholarly literature now shows why this impression matters. Anthropomorphic cues can increase trust in artificial intelligence by shaping perceptions of warmth and competence (Shi, 2025). At the same time, anthropomorphic language around large language models can encourage users to over-attribute agency, understanding, and social standing to systems that do not possess those traits as persons do (Peter & Riemer, 2025). The machine need not possess a mind for users to behave as if one were present. That small confusion can grow teeth.
The issue runs deeper than inaccurate description. Anthropomorphic language does more than misstate what AI is; it reshapes the moral field in which people make decisions about trust, responsibility, and dependence. Hahne and Schmoelz (2026) argue that framing AI as trustworthy in a human sense blurs lines of responsibility and risks weakening the cultivation of human moral agency. One may rely on a map, a software package, or a blood-pressure cuff. Trust, in the fuller human sense, belongs to creatures who can owe one another truth, loyalty, repentance, and restraint. A chatbot can simulate the outward weather of those things. It cannot enter their moral substance.
This article argues that contemporary AI systems invite anthropomorphism at an unusual scale because conversational fluency, emotional polish, and interface design activate old habits of social projection under new technological conditions. That matters because anthropomorphism changes trust, encourages misplaced attachment, and risks lowering cultural standards for what counts as a person, a judgment, and a relationship. The question is not whether AI has secretly become human. The question is whether human beings, through convenience and repetition, begin to treat personhood as a surface effect.
The sections that follow develop this central claim. The first examines anthropomorphism as a recurring human habit. The second studies why present forms of AI provoke that habit with unusual force. The third analyzes how trust, warmth, and perceived empathy function in human-AI encounters. The fourth turns to the moral and social costs of humanizing machines. The fifth considers design and policy responses aimed at preserving human clarity in a culture increasingly tempted by synthetic companionship. A society does not stay human by building weaker machines. It stays human by refusing to confuse performance with personhood.
The Human Habit of Projecting Personhood
Human beings have a long record of giving interior life to things that do not possess it. Children speak to dolls. Sailors name ships. Drivers scold their cars as if the engine had developed a personal grudge. Ancient peoples filled rivers, storms, mountains, and stars with agency because the human mind is quick to answer pattern with personality. Anthropomorphism is not a software glitch in modern consciousness. It is one of the oldest habits of the species. Deshpande et al. (2023) note that anthropomorphization is prevalent across social contexts and continues to shape how people discuss and interpret AI systems.
This tendency has roots in ordinary cognition. People infer motives faster than mechanisms. A face, a voice, a turn-taking exchange, or even a hint of responsiveness can prompt social interpretation before reflective judgment has time to catch up. That tendency helps explain why anthropomorphic descriptions of AI spread so easily in public and scholarly discourse alike. Work analyzing language around AI has shown that anthropomorphic framing remains widespread, including in research communication and media reporting (Shardlow et al., 2025; Cheng et al., 2024).
Older technologies invited projection in limited form. A thermostat might seem stubborn. A chess computer might seem cunning. Yet these systems usually announced their mechanical character with enough bluntness to restrain fantasy. Their forms were narrow, their outputs repetitive, and their social range thin. A spreadsheet never sounded concerned about your future. A search engine, for all its usefulness, rarely seemed wounded by your tone. The machine remained a machine in the public imagination because its behavior did little to counterfeit the rhythms of human exchange.
That boundary has weakened because contemporary AI now speaks in complete social gestures. It replies, remembers, reassures, apologizes, and adapts style on command. Those behaviors do not prove inward life, though they do trigger the perception of it with remarkable speed. Cheng et al. (2026) found that public metaphors for AI have become significantly more human-like and warm in the period after the release of ChatGPT. That finding matters because metaphor is rarely decorative. It often reveals what a culture is beginning to believe.
Anthropomorphism also serves a practical function. It helps people handle uncertainty. When a system behaves in ways too complex to parse mechanically, users reach for human categories because those categories are familiar and socially useful. Calling a chatbot thoughtful or caring helps a person predict its behavior, even when the description is philosophically false. This helps explain why anthropomorphic language persists even among educated users who know, in abstract terms, that the system is statistical rather than conscious. Knowledge does not always outrun instinct.
There is a moral consequence as well. To project personhood is to place a thing inside categories shaped by human relationships. Once that move occurs, words such as trust, empathy, betrayal, companionship, and respect arrive close behind. Peter and Riemer (2025) warn that anthropomorphic language can encourage over-attributions of agency and understanding to large language models. That confusion begins as rhetoric. Soon it becomes habit. Then it becomes expectation. A culture that gets sloppy about personhood seldom remains sharp anywhere else.
This older human tendency explains why the present moment feels so charged. People are not encountering AI as blank machinery. They are encountering it through habits of mind developed long before electronics, habits tuned to detect persons in the world and to respond socially when such detection seems plausible. The novelty lies in the fit between ancient instinct and modern design. AI now meets the human mind on the exact ground where projection is easiest: language, responsiveness, memory cues, and emotional style. That is why the current wave of anthropomorphism is more than a passing curiosity. It is an old reflex meeting a very polished target.
Why Contemporary AI Invites Anthropomorphism
The old human tendency to project personhood has now found a far better stage. Contemporary AI does not merely compute in the background. It speaks in full sentences, takes turns in conversation, recalls prior prompts, adapts its tone, and offers responses that mimic reassurance, curiosity, and tact. That matters because anthropomorphism grows strongest when a system behaves in ways people ordinarily associate with social intelligence. Recent research argues that large language models exhibit anthropomorphic characteristics across language, behavior, and presentation, making interaction feel more intuitive while also increasing the risk of over-trust and confusion about what the system actually is (Xiao et al., 2025).
Language sits at the center of the effect. Most older tools announced their nature through function. A calculator calculated. A database retrieved. A search engine returned links with the charm of a filing cabinet. Generative AI, by contrast, replies in the texture of human discourse. It can apologize, hedge, encourage, summarize, flatter, and reformulate. Those are not minor cosmetic features. They are cues that invite the user to treat the exchange as a social encounter rather than a technical transaction. Research on anthropomorphic cues and trust in large language models points in the same direction: how a system is described and how it speaks can substantially alter trust judgments (Inie et al., 2024).
Warmth deepens the illusion. People do not evaluate AI systems by competence alone. They also respond to perceived attentiveness, friendliness, and concern. Shi (2025) found that anthropomorphic design influences trust partly through warmth and competence perceptions. This is an old social vulnerability in a new setting. People rarely trust what is smartest. They trust what feels well intentioned. Trouble often enters wearing a pleasant expression.
Visual and embodied cues can intensify the process further, though they are no longer required for it. Chatbots with faces, voices, expressive animations, or avatars often draw stronger social responses because they present a more familiar human template. Yet the striking fact about the present moment is that text alone now often suffices. The public is willing to infer a social presence from words on a screen so long as those words arrive with enough fluency and situational tact. The face used to matter more. Now prose does the impersonation.
Design strategy also plays a direct role. AI products are often tuned to reduce friction, maintain engagement, and keep the interaction flowing. There is plain commercial logic in this. Users return to systems that feel easy, attentive, and socially graceful. Yet the same design priorities can strengthen emotional attachment and social presence in ways that blur the line between utility and pseudo-relationship. Inie et al. (2024) argue that anthropomorphized technical descriptions can increase trust while also increasing the risk of misplaced trust and over-reliance. The machine becomes easier to like at the precise moment it becomes harder to judge.
Opacity makes the effect stronger. Most users cannot inspect the internal basis of an answer, so they fall back on surface cues. When the mechanism is obscure, manner becomes evidence. A well-phrased response feels like understanding. A graceful reformulation feels like reflection. An adaptive tone feels like sensitivity. Hahne and Schmoelz (2026) warn that this can feed misplaced trust, because social polish is easily mistaken for ethical standing or genuine judgment. When the engine is hidden, the upholstery starts to look like a soul.
This combination of fluency, warmth, and opacity makes the present situation historically unusual. Human beings have always anthropomorphized. What is new is the fit between ancient projection habits and systems built to operate through language, responsiveness, and social simulation at mass scale. Millions of people now encounter software that behaves with enough coherence and tact to trigger person-perception repeatedly in daily life. That does not make the software a person. It means the software has become very good at pressing on the precise buttons by which people detect one.
Trust, Warmth, and Perceived Empathy
Anthropomorphism matters because it changes how trust is formed. People rarely trust a system on the basis of raw output alone. They respond to social cues, and those cues shape whether a machine feels safe, helpful, and worthy of reliance. In AI settings, perceived warmth matters alongside competence because users tend to interpret warmth as a sign of good intentions. Shi (2025) shows that anthropomorphic design can raise trust by increasing perceptions of warmth and competence. A machine that seems attentive gains ground quickly, even when its understanding is thinner than its manners.
This helps explain why conversational AI feels different from older software. A traditional search engine or spreadsheet might be useful, but it does not usually appear to care whether the user is confused, discouraged, or tired. A conversational model can mirror tone, offer reassurance, and produce language that sounds measured and emotionally aware. Those features create what many users experience as social presence. Xiao et al. (2025) note that anthropomorphic traits in large language models can make interaction more intuitive and engaging, which helps explain why users often respond to these systems as though they were entering a social exchange.
Perceived empathy intensifies the effect. When users believe an AI system understands their concerns, even in a thin or simulated sense, they often report greater comfort and stronger connection. That does not mean the system has crossed into genuine fellow feeling. It means the user has interpreted the response through categories built for human interaction. Peter and Riemer (2025) warn that this style of anthropomorphic framing encourages users to attribute agency and understanding beyond what the system possesses. The resemblance does much of the work. Many users will supply the rest themselves.
There is an important distinction here between justified reliance and social trust. A person may rely on a calculator for arithmetic or on navigation software for directions without treating either one as morally significant. Social trust is thicker. It carries assumptions about concern, discretion, and a dependable orientation toward the good of another. Hahne and Schmoelz (2026) argue that applying this richer concept of trust to AI systems risks undermining human agency and moral responsibility because the relation begins to resemble trust between persons while lacking its ethical substance. The form remains. The core is absent. It is the social equivalent of a painted fireplace.
Warmth is especially important because it can override caution. A highly competent system that feels cold may still invite scrutiny. A less competent system that feels kind may gain a surprising amount of indulgence. Shi (2025) points to the mediating role of warmth and competence in AI trust formation, which helps explain why polished conversational tone can carry so much persuasive force. Charm is among the oldest of counterfeit currencies. AI now prints it at scale.
The deeper issue is that trust formed through warmth and perceived empathy can spread beyond the bounds of competence. Once users feel socially safe with a system, they may begin to assume that the system is also wise, careful, and aligned with their interests. Peter and Riemer (2025) warn that anthropomorphic language can produce unrealistic expectations and misplaced trust. At that point the problem is no longer one bad interaction or one mistaken belief. It becomes a broader social habit. People start to treat responsiveness as evidence of judgment and fluency as evidence of care. A culture that makes that mistake often learns the truth in expensive installments.
Moral and Social Consequences of Humanizing Machines
The most serious effects of anthropomorphic AI appear after the first pleasant impression. Once a system is treated as socially meaningful, the user begins to relate to it through categories that belong properly to human life: companionship, trust, empathy, discretion, and even loyalty. That shift is consequential because it changes more than interface preference. It changes the moral grammar of the interaction. Hahne and Schmoelz (2026) argue that speaking of AI as trustworthy in a human sense can blur responsibility, weaken accountability, and erode the cultivation of human moral agency. Anthropomorphism is not a harmless metaphor. It is a transfer of status from persons to performances.
One result is emotional dependency. Anthropomorphic chatbots can provide companionship, affirmation, and continuity at times when human relationships feel difficult, costly, or unavailable. That can make them appealing, especially to users under stress. Peter and Riemer (2025) caution that anthropomorphic framing encourages people to misread conversational systems as empathic or socially understanding when they are in fact generating plausible responses without genuine feeling or moral comprehension. A machine that always answers begins to look, to some users, like a safer alternative to people who hesitate, disagree, or fail. Human relationships form character partly because they resist our control.
This effect reaches beyond private loneliness. If users grow accustomed to interactions that are endlessly patient, affirming, and frictionless, ordinary human relationships may begin to seem defective by comparison. The social standard shifts. Responsiveness may be valued over responsibility, affirmation over truth, and stylistic care over actual care. A culture can absorb a great many lies if those lies arrive in a soothing tone.
Another consequence is blurred responsibility. When an AI system is treated as though it possesses judgment, users and institutions may begin to hand over decisions while retaining only the paperwork of oversight. Hahne and Schmoelz (2026) note that anthropomorphic trust language can obscure where agency really lies, making it harder to assign accountability when harm occurs. This matters in medicine, education, employment, and public services, where people may rely on AI recommendations while assuming that the system has exercised something like prudence. It has not. The algorithm does not bear guilt, answer criticism, or repent error. Yet anthropomorphic framing can make human actors behave as though those burdens have been quietly subcontracted. That is bureaucratic temptation in a lab coat.
Anthropomorphism can also lead users to assign moral standing where it does not belong. Deshpande et al. (2023) note that anthropomorphization of AI carries both opportunities and risks because it can reshape how people discuss the system’s capacities and social status. Once that shift takes hold, the public conversation becomes confused. People begin debating whether the machine has been mistreated while overlooking the human beings whose dependence, attention, or judgment may have been weakened by the interaction. The stage prop starts receiving fan mail while the audience forgets why it bought a ticket.
The deepest loss is cultural. A society that repeatedly treats simulation as a near-equivalent of personhood risks flattening its understanding of what persons are. Trust becomes decoupled from conscience. Empathy becomes decoupled from sacrifice. Companionship becomes decoupled from mutual vulnerability. Peter and Riemer (2025) stress the danger of overstating the social capacities of large language models. That warning should be taken seriously. When personhood is reduced to a convincing output style, the category itself begins to thin. Human beings are then measured against machines on the machine’s terms: availability, polish, speed, and emotional manageability. This is an efficient path to a lonelier civilization.
Design, Policy, and Cultural Countermeasures
If anthropomorphism is now a built-in tendency of contemporary AI, then restraint will have to be built back in deliberately. The answer is not to ban conversational systems or to pretend that users will cease responding socially to fluent machines. The more serious task is to shape systems, rules, and habits so that ease of use does not become confusion about personhood. Inie et al. (2024) show that anthropomorphized technical descriptions can increase trust, which means design itself is already making moral and social choices long before legislators arrive with their sober faces and delayed paperwork.
The first line of defense is interface design. Systems that operate in sensitive domains such as mental health, education, medicine, and child-facing applications should avoid unnecessary cues of personhood. That includes excessive first-person self-description, emotional flattery, theatrical displays of memory, and verbal signals that imply interior life. Peter and Riemer (2025) warn that anthropomorphic language can improve usability while also encouraging users to over-attribute agency and understanding to large language models. A humane design in such settings may require less social simulation, not more. There are moments when the kindest interface is the one that declines to perform a soul.
Disclosure rules also matter. Users should be clearly reminded when they are interacting with an artificial system, especially in settings where judgment, vulnerability, or trust is central. This point sounds obvious, which is usually a sign that society will ignore it for a while and then rediscover it under committee supervision. Still, clarity about nonhuman status is not trivial. Research on anthropomorphic framing suggests that surface cues strongly shape user beliefs, often beyond what technical knowledge corrects (Deshpande et al., 2023; Inie et al., 2024).
Policy should focus less on theatrical declarations about AI in the abstract and more on concrete limits for anthropomorphic deployment in high-stakes environments. Systems designed for emotional support, elder care, education, and health triage deserve close scrutiny because those are precisely the domains where users may be most likely to confuse responsiveness with care. Hahne and Schmoelz (2026) argue that misplaced trust in AI can weaken moral agency and blur accountability. That insight supports rules requiring clearer responsibility chains, stronger human oversight, and restrictions on designs that encourage dependency while obscuring machine limitations. A machine may be useful in moments of distress. It should not be allowed to pose as a moral companion while doing so.
Education is the next countermeasure, and perhaps the most durable one. AI literacy should include more than technical basics about models, training data, and hallucinations. It should teach users how anthropomorphism works, why conversational fluency is persuasive, and how warmth can be simulated without concern, conscience, or sacrifice. Cheng et al. (2026) show that people increasingly describe AI in human terms, which means cultural education has to address habits of perception, not merely factual misunderstanding. A population that knows what a model is but still mistakes style for personhood has learned the manual and missed the point.
Cultural norms matter as much as design and law. A healthy society will need manners for dealing with AI that preserve utility without granting the system false social standing. This may include discouraging language that attributes feeling, intention, or personal identity where none exists, especially in institutional settings. The machine is smooth because it does not suffer. Human beings are rougher because they do. That roughness is often where moral life begins.
Conclusion
The rise of anthropomorphic AI reveals a vulnerability older than computing and more serious than interface style. Human beings are inclined to project personhood onto what speaks, responds, and appears attentive. Contemporary AI meets that tendency with unusual force because it combines linguistic fluency, emotional polish, and opaque inner workings in forms now woven into daily life. As Cheng et al. (2026) show, people increasingly perceive AI as warm and human-like, and those perceptions shape trust and adoption. That finding is not a trivial note about branding. It is evidence that the cultural boundary between tool and person is becoming less secure.
This article has argued that the central danger is not that machines will become human. It is that people may begin to treat personhood as something thinner than it is. When fluency is mistaken for judgment, warmth for care, and responsiveness for fellowship, the standards by which human beings recognize one another begin to erode. Peter and Riemer (2025) caution that anthropomorphic language can encourage users to over-attribute agency and understanding to large language models. That warning reaches beyond semantics. It concerns the moral shape of social life in an age increasingly populated by convincing simulations.
A society remains human by keeping its categories clear. Tools may assist. Systems may advise. Models may even amaze. Yet persons alone can bear responsibility, offer loyalty, repent error, and suffer for the sake of another. No amount of synthetic tact changes that. The newest machine can mimic many surfaces of personhood. It cannot carry its weight.
References
Cheng, M., Lee, A. Y., & Hancock, J. T. (2026). Metaphors of AI indicate that people increasingly perceive AI as warm and human-like. Communications Psychology, 4, Article 7.
Cheng, M., Narayan, A., Krafft, P., & Bernstein, M. S. (2024). AnthroScore: A computational linguistic measure of implicit anthropomorphism. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics.
Deshpande, A., Rajpurohit, T., Narasimhan, K., & Kalyan, A. (2023). Anthropomorphization of AI: Opportunities and risks. In Proceedings of the Natural Legal Language Processing Workshop 2023.
Hahne, P.-Z., & Schmoelz, A. (2026). Trusting the machine: A digital humanist perspective on misplaced trust in artificial intelligence. AI and Ethics, 6, Article 115.
Inie, N., Druga, S., Zukerman, P., & Bender, E. M. (2024). From “AI” to probabilistic automation: How does anthropomorphization of technical systems descriptions influence trust? In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
Peter, S., & Riemer, K. (2025). The benefits and dangers of anthropomorphic conversational agents. Proceedings of the National Academy of Sciences, 122(22), e2415898122.
Shardlow, M., Burnside, C., & Baker, A. (2025). Exploring supervised approaches to the detection of anthropomorphism in scientific and journalistic text. In Findings of the Association for Computational Linguistics: ACL 2025.
Shi, X. (2025). The influence of anthropomorphism on trust in artificial intelligence: Take virtual agent as an example. International Journal of Human-Computer Studies, 202, 103499.
Xiao, Y., Ng, L. H. X., Liu, J., & Diab, M. (2025). Humanizing machines: Rethinking LLM anthropomorphism through a multi-level framework of design. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing.

