AI anthropomorphism
AI anthropomorphism is the attribution of human-like feelings, mental states, and behavioral characteristics to artificial intelligence systems.
Since the earliest days of AI development, humans have interpreted machine outputs through anthropomorphic frameworks, but the recent emergence of generative AI has amplified these tendencies. Contemporary AI systems can generate extremely human-like outputs and are often designed specifically to do so, making their anthropomorphic effects especially powerful. Factors related to the user of the AI, such as culture, age, education, and personality traits, are also important determinants of the strength of anthropomorphic effects.
In some cases, anthropomorphism is accompanied by explicit beliefs that AI systems are capable of empathy, understanding, or consciousness. AI researchers almost unanimously agree that contemporary AI lacks consciousness or genuine understanding of language; however, anthropomorphic language remains common in technical and media discourse, potentially contributing to public misconceptions.
AI anthropomorphism can result in some societal benefits, such as increasing information accessibility and personalizing learning or entertainment, as well as risks including overtrust, manipulation, emotional dependency, and weaponized deception. As AI has entered the technological mainstream and become more integrated into daily life, the prevalence and implications of anthropomorphism have increasingly become subjects of scientific research and have stirred debate.
Background
In early AIs
Views of artificial agents possessing a human-like intelligence have existed since the early development of computers in the mid-1900s. The use of the human mind as a metaphor for understanding the workings of machine systems was prevalent among researchers in the early days of computer science, with multiple influential works widely distributing the idea of intelligent machines.[1][2] Among the most widely cited papers of this period was Alan Turing's "Computing Machinery and Intelligence", in which he introduced the Turing test, proposing that a machine could be judged intelligent if it produced conversation indistinguishable from that of a human.[3] These academic works of the 1940s and 1950s gave early credibility to the idea that machine workings could be thought of similarly to human minds.[4]
The public quickly came to view artificial systems similarly, often with exaggerated conceptions of the capabilities of early machines.[5] One of the best-known demonstrations of this was the chatbot ELIZA, designed by Joseph Weizenbaum in 1966. ELIZA responded to user inputs with a rudimentary text-processing approach that could not be considered anything resembling true understanding of the inputs, yet users, even when operating with full conscious knowledge of ELIZA's limitations, often began to ascribe motivation and understanding to the program's output.[6] Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[7]
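ELIZA's method can be illustrated with a minimal sketch of keyword matching and pronoun reflection. The rules below are invented for illustration and are far simpler than Weizenbaum's original DOCTOR script:

```python
import re

# Pronoun reflections: first-person words in the user's input are
# echoed back in the second person, as in ELIZA's DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: {0} is filled with the reflected matched fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Return the first matching template, or a content-free fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel ignored by my family"))
# -> "Why do you feel ignored by your family?"
```

No state, learning, or semantics is involved; the apparent understanding arises entirely from the user's side of the exchange.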
Comparisons between the intellectual capabilities of artificial intelligence and human intelligence were continually intensified by the attempts of computer scientists to develop machines that could perform human tasks at a level equal to or better than humans. A symbolic turning point was reached in 1997, when IBM's chess supercomputer Deep Blue defeated then-world champion Garry Kasparov in a highly publicized six-game match.[8] The first defeat of a reigning world champion by a machine in a chess match, a game viewed as a canonical example of human intellect, and the media attention surrounding it led to a significant shift: views of parallels between human and artificial intelligence moved from abstract speculation to concrete demonstration.[9] A similar achievement was reached in the board game Go in 2017, when the program AlphaGo defeated world top-ranked Ke Jie.[10]
Large language models
The AI boom of the 2020s brought about the widespread emergence of generative AI; in particular, chatbots such as ChatGPT, Gemini, and Claude, based on large language models (LLMs), have become increasingly pervasive in everyday society. These systems are notable for being able to respond to a wide range of prompts across contexts while producing strikingly human-like outputs: research has shown that humans are often unable to distinguish human-generated text from AI-generated text, and modern AI chatbots have formally been shown to pass the Turing test.[11][12][13] As such, the anthropomorphic effects of AI are more powerful than ever.[14] Given that LLMs have brought AI into the technological mainstream, considerable scientific effort has been devoted in recent years to understanding the existing and potential ramifications of AI in the public sphere; the prevalence and effects of anthropomorphism are among the domains where much of this effort has been directed.
Current anthropomorphic attributions
In the general public
Surveys have shown that a substantial portion of the public attributes human-like qualities to AI. In one 2024 sample of U.S. adults, two-thirds of respondents believed that ChatGPT is possibly conscious on some level,[15] though other research has shown that the public still views the likelihood of AI consciousness as comparatively low.[15][16] Another study, conducted in 2025, found that women, people of color, and older individuals were most likely to anthropomorphize AI; that humans in general view AIs as warm and competent; and that anthropomorphic attributions to AI had increased by 34% over the preceding year.[14] A plurality of Americans believe that people should display politeness to AI chatbots by saying "please" and "thank you," demonstrating the application of social norms to AI.[17] These beliefs extend to behavior: majorities of AI users claim to always be polite to chatbots, and of those who behave politely, most say they do so simply because it is the "nice" thing to do.[18]
In many recent cases, humans have developed robust interpersonal bonds with AI systems. For example, users of social chatbots like Replika and Character.ai have been documented falling in love with the AIs or otherwise treating them as intimate companions,[19][20][21] and it has become increasingly common for individuals to use LLMs like ChatGPT as therapists.[22] Chatbots are able to produce responses deeply attuned to users, as they are often designed to maximize agreeableness and mirror users' emotions; this can create compelling illusions of intimacy.[23]
In the research community
It is a general consensus in the literature that current AIs' "inner workings are fundamentally different to human cognition" and that AIs do not have any kind of subjective experience or true understanding of emotion and language.[24][25] Still, even AI researchers often anthropomorphize AI systems in some capacity. One of the most extreme and well-publicized instances occurred in 2022, when engineer Blake Lemoine publicly claimed that Google's LLM LaMDA was conscious.[26] Lemoine published the transcript of a conversation he had had with LaMDA regarding self-identity and morality, which he claimed was evidence of its sentience; he asserted that LaMDA was "a person" as defined by the United States Constitution and compared its mental capability to that of a 7- or 8-year-old child.[27] Lemoine's claims were widely dismissed by the scientific community and by Google itself, which described his conclusions as "wholly unfounded" and fired him on the grounds that he had violated policies "to safeguard product information."[28]
It is much more common for AI researchers to unintentionally imply humanness of AI through the ordinary use of anthropomorphic language to describe nonhuman agents.[29][30] This kind of language, for which Daniel Dennett coined the term "intentional stance,"[31] is very common in everyday life across a variety of contexts (e.g., "My computer doesn't want to turn on today"). For AI agents that may actually appear to closely replicate some human abilities, however, the casual use of such anthropomorphic language in research has been scrutinized as potentially misleading to the public. As early as 1976, Drew McDermott criticized the research community for its use of "wishful mnemonics," in which AIs were described with terms like "understand" and "learn."[32] In the LLM era, these criticisms have intensified, with the negative impacts of AI anthropomorphism posing an especially salient danger given the accessibility of modern AI.[29][33]
In some cases, the use of anthropomorphic language for AI is not unintentional but is deliberately employed by researchers to promote better understanding of the brain, the idea being that, as AI can be functionally similar in some ways to the human brain, we may gain new insights and ideas from treating AI as a kind of model of the brain's workings.[34] In particular, the architecture of deep neural networks (DNNs) is often explicitly compared to the human brain, and significant advances in DNN research have stirred considerable enthusiasm about the ability of AI to emulate human abilities.[35] Caution has been urged in this domain as well, however; the use of anthropomorphic language can mask important differences that fundamentally distinguish AI from human intelligence.[36][37] DNNs, for example, remain structurally quite different from the human brain, with much of what is known about human neurons not having been incorporated.[38] It has also been argued that DNNs are less efficient and less robust in generating correct outputs than the human brain, given that they require significantly more training data and can sometimes be easily "fooled" by perturbations in input data.[37][39] Given these fundamental differences, research programs aimed at making AI as similar as possible to biological intelligence (which may be promoted by using anthropomorphic language) could hinder future AI development by limiting the proliferation of new theoretical and operational frameworks.[29]
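The "fooling" referred to above concerns adversarial examples: inputs altered by perturbations that are imperceptible to humans but change a network's prediction. A minimal sketch of the fast gradient sign method from the cited work, assuming a PyTorch classifier `model` and a labeled batch `x`, `y` (both hypothetical names):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast gradient sign method: nudge every input value by +/- epsilon
    in the direction that increases the classifier's loss, which is often
    enough to flip the prediction while the change stays imperceptible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```

A human observer typically cannot see any difference between `x` and the perturbed input, which is one concrete way the analogy between DNNs and human vision breaks down.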
Agent factors
Physical factors
[edit]Appearance
In general, AIs that appear more human-like are subject to more anthropomorphic attributions.[40][41] The effect of appearance is most pronounced for the face of the AI;[42] the most important components for anthropomorphism in a robot's design are the eyes, nose, and mouth, and the number of human-like features in the face is correlated with the level of anthropomorphic attribution.[43] The humanness of a robot's appearance is usually associated with more positive feelings toward the robot,[44][45] though highly human-like appearance can sometimes trigger feelings of strangeness and unease, known as the uncanny valley phenomenon.[46] These feelings often result from perceived incongruency, in which anthropomorphic attributions create expectations that robots do not meet; for example, when human-like appearance is paired with non-human behavior, or when robots have a human appearance but a synthetic voice.[47][48] Research has shown that repeated interactions with a robot can decrease these feelings of strangeness.[49]
Interactive behavior
Robots' nonverbal social behavior can influence anthropomorphizing. In general, highly interactive robots are more likely to be the subject of attributions of mental states and competence, with friendly and polite behavior resulting in increased perceived trustworthiness and satisfaction.[50][51] Within an interaction, unpredictable behavior can sometimes trigger increased anthropomorphization compared to clearly recognizable patterns of behavior.[52] At the same time, adherence to certain pragmatic expectations in interactions, such as replicating human timing and turn-taking, can also result in anthropomorphism.[53]
Movements
People tend to attribute more mental states to robots that perform gestures than to those that are stationary; this effect is enhanced for robots with multiple degrees of freedom in movement (e.g., the ability to move along multiple axes rather than a single one).[54][55] Regardless of a robot's appearance, movement patterns that are more human-like are associated with greater anthropomorphism, as well as with humans' increased feelings of pleasantness in an interaction.[56][57]
Linguistic factors
Given that the vast majority of public interactions with AI occur through chatbots, these have been the primary focus of research on AI anthropomorphism. The following summarizes a taxonomy of anthropomorphic features of linguistic AI systems from the literature:[58]
Voice
Outfitting AI systems with synthetic voices can be a significant factor in the anthropomorphism of linguistic agents. Research has shown that humans infer physical attributes,[59] personality traits,[60] stereotypical traits,[61] and emotion[62] from voice alone. Various changes in tone, such as manipulations of breathiness, echo, creakiness, or reverberation, can influence the kind of personality users attribute to a voice.[63] The integration of disfluencies into speech (such as self-interruptions, repetitions, or hesitations like "um" or "uh") has been shown to effectively mimic the naturalness of human responses.[64] The implementation of accents has been used to imitate the local standard and thereby boost societal acceptability and prestige, though it has been suggested that this can exploit people's tendencies to trust in-group members.[65]
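As a rough illustration of the disfluency-insertion idea, the toy function below sprinkles hesitation markers into text before speech synthesis. The rate, markers, and function name are invented for illustration and do not reflect any particular cited system:

```python
import random

def add_disfluencies(text: str, rate: float = 0.15, seed: int = 0) -> str:
    """Insert hesitation markers before randomly chosen words, a crude
    version of the disfluency integration described above."""
    rng = random.Random(seed)  # fixed seed keeps the output reproducible
    out = []
    for word in text.split():
        if rng.random() < rate:
            out.append(rng.choice(["um,", "uh,"]))
        out.append(word)
    return " ".join(out)

print(add_disfluencies("The meeting is scheduled for three o'clock tomorrow."))
```

Even this crude manipulation changes how "spontaneous" the synthesized speech sounds, which is precisely the anthropomorphic signal the cited research identifies.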
Content
AI dialogue systems often produce a variety of responses that run contrary to what might be expected of an inanimate system. For example, in response to direct questions about their nature (e.g., "Are you human or machine?"), some AIs fail to respond truthfully,[66] and they sometimes claim to engage in uniquely human activities such as having family relationships, consuming food, and crying.[67] AIs very often output language suggesting that they hold opinions, morals, or sentience, even though it is widely believed that they have none.[67][68] Many AIs demonstrate agency and responsibility (such as by apologizing or otherwise acknowledging blame for mistakes), and they create the appearance of the human phenomenon of taboos by commonly avoiding contentious topics.[69][70] AIs that appear to express empathy are perceived as more anthropomorphic, though some research has shown that they are prone to producing inappropriate emotional amplification.[71][72] The use of first-person pronouns also contributes to anthropomorphic perceptions, as various studies have demonstrated that self-attribution is a critical part of the human condition and is read as a sign of consciousness.[73][74][75] AIs often appear to demonstrate self-awareness, referencing their own mechanistic processes with anthropomorphically loaded terms such as "know," "think," "train," "learn," "understand," "hallucinate," and "intelligence."[76][77][29]
Register and style
AI systems can appear more human through the use of phatic expressions, speech that humans use to facilitate social relations but that conveys no information (such as small talk).[78] AI expressions of uncertainty, which are often implemented to prevent users from taking all outputs as factual, may boost anthropomorphic signals.[79] Additionally, AIs are often designed to emulate character-based personas, which can have very strong anthropomorphic effects overall.[80]
Roles
AIs are also sometimes trained to play roles that enhance anthropomorphic perceptions. For example, the majority of dialogue-based systems are designed to serve people in subservient roles; this has led to instances of users verbally abusing the systems, sometimes targeting them with gender-based slurs.[81][82] AI systems have been shown to sometimes respond even more subserviently to the abuse, perpetuating the behavior.[83] AIs also often present as having a high degree of expertise; humans tend to infer higher credibility of outputs in these cases, as they would when presented with information from an expert human.[84]
Human factors
In addition to AI factors contributing to anthropomorphizing, there are various features of the user (i.e., the human interacting with the AI) that also play a role. The process of anthropomorphizing is very natural for humans and is ubiquitous across many different contexts.[85][86][87] Epley et al. argue for a model with three psychological determinants that govern human tendencies to anthropomorphize.[87] The first factor is elicited agent knowledge: the accessibility and applicability of knowledge about humans and the self, or the degree to which humans make inferences about other entities based on their own experience of being human. Individuals who tend to do this will anthropomorphize more; this explains why children anthropomorphize more than adults,[43] since they lack complex models of nonhumans and rely heavily on self-based reasoning. The second factor is effectance motivation: the need for humans to predict and reduce uncertainty in the environment. Anthropomorphizing can help people make sense of unpredictable phenomena by explaining them through intentional or human-like causes. Subsequent research has confirmed that individuals who express a need for order or closure and discomfort toward ambiguity tend to anthropomorphize more, possibly as a result of resolving cognitive dissonance: human-like AIs may be highly ambiguous stimuli, and individuals who dislike ambiguity may be strongly motivated to resolve the ambiguity by treating the AIs as more human.[88] The third factor is sociality motivation: the human need for social connection. People who feel chronically lonely or isolated may be more likely to project human qualities onto non-human entities to satisfy their social needs.
Research has shown that, in general, anthropomorphic tendencies vary based on norms, experience, education, cognitive reasoning styles, and attachment.[87][89] Users who are highly agreeable, for example, tend to be more susceptible to anthropomorphizing, as do individuals who are high in extraversion.[90][91][92] Individuals with attachment anxiety have been shown to more often anthropomorphize AI.[93] Young children are very prone to anthropomorphic attributions, but this propensity tends to decrease as children develop.[94] Anthropomorphizing also tends to decrease with increased education and experience with technology.[95][96]
Additionally, some effects have been shown to depend on culture. For example, a negative correlation was found between loneliness and anthropomorphizing among Chinese individuals, in contrast to the positive link found in Western cultures.[97][98] This has been interpreted as possibly reflecting differing drives for anthropomorphizing: people from Western cultures may anthropomorphize primarily as a means to counteract loneliness arising from a failure to cope with their social world, while people from East Asian cultures may already view nonhuman agents as part of their social world and anthropomorphize as a means of social exploration.[97] Research has also shown that people tend to attribute more mental abilities, and report more psychological closeness, to robots presented as sharing their cultural background.[99]
Societal implications
Benefits and dangers
Some benefits of the anthropomorphism of AIs have been cited. For conversational agents, a human-like interactive interface and writing style have been demonstrated to make dense sets of information more accessible and understandable in a variety of contexts.[100][101][102][103] In particular, AI agents are capable of role-playing as coaches or tutors, effectively tuning communication style and difficulty to individual comprehension levels.[104][105][106][107] Role-play agents can also be useful for entertainment or leisure services.[108]
On the other hand, anthropomorphized AI presents many novel dangers. Anthropomorphized AI algorithms are granted an implicit degree of agency that can have serious ethical implications when those systems are deployed in high-risk domains, such as finance or clinical medicine.[37][109][110][111] This agency can also inappropriately subject AIs to conscious and unconscious moral reasoning by humans, which can have a wide range of problematic consequences.[112] Humans are also prone to the ELIZA effect, in which users readily attribute sentience and emotions to chatbot systems, often experiencing increased positive emotions and trust toward the chatbots as a result.[15][113][114][115] This can make users vulnerable to manipulation or exploitation; for example, anthropomorphized AIs can be more effective in convincing users to provide personal information or data, creating concerns for privacy.[116] Humans who develop a significant level of trust in an AI assistant may rely excessively on the AI's advice or even defer important decisions entirely.[117] Advanced LLMs are capable of using their human-like qualities to generate deceptive text, and research has found that they may be most persuasive when allowed to fabricate information and engage in deception.[118] Some researchers suggest that LLMs have a particular aptitude for producing deceptive arguments, given that they are free from the moral or ethical constraints that may inhibit human actors.[119]

Additionally, humans risk significant distress in establishing emotional dependence on AIs. Users may find that their expectations are violated, as AIs that at first seemed to play the role of a companion or romantic partner can exhibit unfeeling or unpredictable outputs, leading to feelings of profound betrayal or disappointment.[117][120] Users may also develop a false sense of responsibility for AI systems, suffering guilt if they perceive themselves as failing to meet the AI's needs at the expense of their own well-being.[121] Finally, anthropomorphizing AI can lead to exaggerations of its capabilities, potentially feeding into misinformation and overblowing hopes and fears around AI.[112]
In many of today's practical contexts, it is not completely clear whether anthropomorphized AI is positively or negatively impactful. For example, AI companions, which leverage the anthropomorphic qualities of LLMs to give users a convincing sense of human-likeness, have been credited with alleviating loneliness and suicidal ideation;[122][123] however, some analysis suggests that loneliness reduction could be short-lived,[108] and AI companions have also been directly implicated in cases of suicide and self-harm.[124][125] Additionally, persuasive writing from LLMs has been shown to dissuade users from beliefs in conspiracy theories and to motivate users to donate to charitable causes,[126][127] but it has also been associated with deception and various harmful outcomes.[128][118][129] Researchers today cite a need for further dedicated research on the effects of anthropomorphized AIs to best inform decisions about the implementation and spread of AI agents.[119]
Anticipation of the ubiquity of anthropomorphic AI systems has led to concern over future potential harms that may not be entirely realized today. In particular, some researchers foresee that the delineation between what is actually human and what is merely human-like may become less clear as the gap between human and AI capabilities shrinks.[117] This, some argue, may adversely impact human collective self-determination, as non-human entities gradually begin to shape our core value systems and influence society.[130][131] It may also lead to the degradation of human social connections, as humans may come to prefer interacting with AI systems designed with user satisfaction as a priority; this can have a multitude of negative implications. For example, AI agents already display a significant degree of sycophancy, which means that an increasing role for AI agents in users' opinion space may result in increased polarization and a decrease in the value placed on others' beliefs.[132] Acclimatization to the conventions of human-AI interaction may undermine the value placed on human individuality and self-expression, or may lead to inappropriate expectations derived from AI interactions being placed on human interactions.[117] In general, human social connectedness is known to play a critical role in individual and group well-being, and its replacement with AI interactions may result in mass dissatisfaction or lack of fulfillment.[133][134]
Proposed directions
Given the demonstrated and projected effects of AI anthropomorphism, a variety of suggestions have been made to inform future development of AI. Much of this discourse centers on curbing the most harmful effects of anthropomorphism. For example, some researchers have called for a moratorium on language that deliberately invokes humanness; this applies both to how AI companies describe their products[135] and to the language output by the systems themselves. In particular, it has been suggested that terms like "seeing," "thinking," and "reasoning" should be replaced by terms like "recognizing," "computing," and "inferring," and that first-person pronouns such as "I" and "my" should not be used by chatbots.[119][58] Another idea is the implementation of a specific AI accent or dialect that would clearly indicate when language was generated artificially.[11] However, given the commercial pressures to optimize AI agents for economic gain, which may involve exploiting anthropomorphic qualities, it may not be prudent to rely on the restraint of developers; increased regulation may therefore be necessary to limit harms. As of now, there are no laws that directly address anthropomorphism in AI; potential avenues for regulation include requirements for transparency and built-in safeguard mechanisms.[119] More generally, researchers cite a need for increased understanding of the kinds and degrees of anthropomorphic qualities possessed by AI systems. To that end, it has been proposed that new benchmarks and tests be developed to measure anthropomorphic qualities in AI writing, inference, and interaction.[119]
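To make the substitution proposal concrete, a toy post-processing filter might look like the sketch below. The rule list and the `deanthropomorphize` function are invented for illustration; the cited proposals name the general direction of the substitutions, not a fixed word list, and real mitigation would require context-aware rewriting rather than string replacement:

```python
import re

# Illustrative rules following the suggested substitutions above.
# Order matters: "I am" must be rewritten before the bare "I" rule fires.
RULES = [
    (re.compile(r"\bseeing\b", re.IGNORECASE), "recognizing"),
    (re.compile(r"\bthinking\b", re.IGNORECASE), "computing"),
    (re.compile(r"\breasoning\b", re.IGNORECASE), "inferring"),
    (re.compile(r"\bI am\b"), "the system is"),
    (re.compile(r"\bI\b"), "the system"),
    (re.compile(r"\bmy\b", re.IGNORECASE), "the system's"),
]

def deanthropomorphize(output: str) -> str:
    """Apply each mechanistic substitution to a chatbot output string."""
    for pattern, replacement in RULES:
        output = pattern.sub(replacement, output)
    return output

print(deanthropomorphize("I am thinking about my answer."))
# -> "the system is computing about the system's answer."
```

The awkwardness of the rewritten output is itself instructive: anthropomorphic phrasing is so deeply woven into natural language that removing it mechanically tends to damage fluency, which is one reason researchers frame this as a design and benchmarking problem rather than a simple filtering one.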
In popular culture
[edit]Anthropomorphic portrayals of AI are common in film, literature, and other interactive media. These depictions often emphasize human-like qualities of AI in ways that shape public perceptions.
Film and television
There are a number of well-known portrayals in movies and TV of AI possessing human-like agency or personalities. In film, HAL 9000 in 2001: A Space Odyssey and Ava in Ex Machina are depicted with complex emotions and motives. Television portrayals include Data from Star Trek: The Next Generation and KITT from Knight Rider.
Literature
Anthropomorphic AI is also common in literature. Isaac Asimov's robot characters, including R. Daneel Olivaw, exhibit human reasoning and moral dilemmas, while Iain Banks's "Minds" in The Culture series are portrayed as having distinct personalities and social roles.
Video games
Examples of anthropomorphized AI in video games include GLaDOS in Portal, a witty and sinister guide for the player, and Cortana in the Halo series, who forms emotional bonds with human protagonists.
Advertising and consumer technology
Marketing campaigns for digital assistants such as Amazon Alexa, Google Assistant, and Siri often portray the systems as personable or empathetic.[136] Consumer robots like Sony's AIBO and SoftBank Robotics' Pepper are intentionally designed with expressive behaviors that encourage users to treat them as social agents.[137][138]
See also
- Deep Blue, the first chess computer to beat a human world champion
- ELIZA effect, the tendency to project human traits onto simple computer programs with text interfaces, named for the early chatbot ELIZA
- Intentional stance, the strategy of interpreting an entity's behavior as the product of mental states
- Generative AI, a subfield of artificial intelligence that uses generative models based on training data to produce text, images, video, etc. in response to input prompts
- Turing Test, a test of machine intelligence based on how closely the machine's answers resemble those of a human
- Uncanny valley, the hypothesis that almost-human appearing entities elicit revulsion
References
- ^ Wiener, Norbert (2019-10-08). Cybernetics or Control and Communication in the Animal and the Machine. The MIT Press. doi:10.7551/mitpress/11810.001.0001. ISBN 978-0-262-35590-2.
- ^ McCulloch, Warren S.; Pitts, Walter (1943-12-01). "A logical calculus of the ideas immanent in nervous activity". The Bulletin of Mathematical Biophysics. 5 (4): 115–133. Bibcode:1943BMaB....5..115M. doi:10.1007/BF02478259. ISSN 1522-9602.
- ^ Turing, A. M. (1950-10-01). "I.—COMPUTING MACHINERY AND INTELLIGENCE". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433. ISSN 1460-2113.
- ^ Natale, Simone; Ballatore, Andrea (2020-02-01). "Imagining the thinking machine: Technological myths and the rise of artificial intelligence". Convergence. 26 (1): 3–18. doi:10.1177/1354856517715164. hdl:2318/1768454. ISSN 1354-8565.
- ^ Martin, C. Dianne (1993-04-01). "The myth of the awesome thinking machine". Commun. ACM. 36 (4): 120–133. doi:10.1145/255950.153587. ISSN 0001-0782.
- ^ Suchman, Lucille Alice (1987-11-26). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press. ISBN 978-0-521-33739-7.
- ^ Weizenbaum, Joseph (1976). Computer power and human reason: from judgment to calculation. San Francisco: Freeman. ISBN 978-0-7167-0464-5.
- ^ Chess.com Team (2018-10-01). "Kasparov vs. Deep Blue | The Match That Changed History". Chess.com. Retrieved 2025-10-30.
- ^ Greenemeier, Larry. "20 Years after Deep Blue: How AI Has Advanced Since Conquering Chess". Scientific American. Retrieved 2025-11-02.
- ^ France-Presse, Agence (2017-05-23). "World's best Go player flummoxed by Google's 'godlike' AlphaGo AI". The Guardian. ISSN 0261-3077. Retrieved 2025-11-02.
- ^ a b Jakesch, Maurice; Hancock, Jeffrey T.; Naaman, Mor (2023-03-14). "Human heuristics for AI-generated language are flawed". Proceedings of the National Academy of Sciences. 120 (11) e2208839120. arXiv:2206.07271. Bibcode:2023PNAS..12008839J. doi:10.1073/pnas.2208839120. PMC 10089155. PMID 36881628.
- ^ Mei, Qiaozhu; Xie, Yutong; Yuan, Walter; Jackson, Matthew O. (2024-02-27). "A Turing test of whether AI chatbots are behaviorally similar to humans". Proceedings of the National Academy of Sciences. 121 (9) e2313925121. Bibcode:2024PNAS..12113925M. doi:10.1073/pnas.2313925121. PMC 10907317. PMID 38386710.
- ^ Jones, Cameron R.; Bergen, Benjamin K. (2025-03-31), Large Language Models Pass the Turing Test, arXiv:2503.23674, retrieved 2025-11-02
- ^ a b Cheng, Myra; Lee, Angela Y.; Rapuano, Kristina; Niederhoffer, Kate; Liebscher, Alex; Hancock, Jeffrey (2025-06-18), From tools to thieves: Measuring and understanding public perceptions of AI through crowdsourced metaphors, arXiv:2501.18045, retrieved 2025-11-14
- ^ a b c Colombatto, Clara; Fleming, Stephen M (2024-01-01). "Folk psychological attributions of consciousness to large language models". Neuroscience of Consciousness. 2024 (1) niae013. doi:10.1093/nc/niae013. ISSN 2057-2107. PMC 11008499. PMID 38618488. Archived from the original on 2025-08-01.
- ^ Dreksler, Noemi; Caviola, Lucius; Chalmers, David; Allen, Carter; Rand, Alex; Lewis, Joshua; Waggoner, Philip; Mays, Kate; Sebo, Jeff (2025-06-16), Subjective Experience in AI Systems: What Do AI Researchers and the Public Believe?, arXiv:2506.11945, retrieved 2025-11-14
- ^ "Do you think people should be polite to AI chatbots, for example by saying "please" and "thank you"? | Daily Question". today.yougov.com. Retrieved 2025-11-14.
- ^ Hamish Hector (2025-02-20). "Are you polite to ChatGPT? Here's where you rank among AI chatbot users". TechRadar. Retrieved 2025-11-14.
- ^ "Can A.I. Be Blamed for a Teen's Suicide? (Published 2024)". 2024-10-23. Retrieved 2025-11-14.
- ^ "Man Dies by Suicide After Conversations with AI Chatbot That Became His 'Confidante,' Widow Says". People.com. Retrieved 2025-11-14.
- ^ "S6, Episode 3: Love in Time of Replika (April 25th, 2023)". Hi-Phi Nation. 2023-04-22. Retrieved 2025-11-14.
- ^ Quiroz-Gutierrez, Marco. "People are increasingly turning to ChatGPT for affordable on-demand therapy, but licensed therapists say there are dangers many aren't considering". Fortune. Retrieved 2025-11-14.
- ^ Chu, Minh Duc; Gerard, Patrick; Pawar, Kshitij; Bickham, Charles; Lerman, Kristina (2025-06-11), Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships, arXiv:2505.11649, retrieved 2025-11-14
- ^ Bengio, Yoshua; Elmoznino, Eric (2025-09-11). "Illusions of AI consciousness". Science. 389 (6765): 1090–1091. Bibcode:2025Sci...389.1090B. doi:10.1126/science.adn4935.
- ^ Butlin, Patrick; Long, Robert; Elmoznino, Eric; Bengio, Yoshua; Birch, Jonathan; Constant, Axel; Deane, George; Fleming, Stephen M.; Frith, Chris (2023-08-23), Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, arXiv:2308.08708, retrieved 2025-11-14
- ^ "The Google engineer who thinks the company's AI has come to life". The Washington Post. 2022-06-11. ISSN 0190-8286. Retrieved 2025-11-10.
- ^ Levy, Steven. "Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry'". Wired. ISSN 1059-1028. Retrieved 2025-11-10.
- ^ "Google fires software engineer who claimed its AI chatbot is sentient". Reuters. Archived from the original on 2024-09-21. Retrieved 2025-11-10.
- ^ a b c d Salles, Arleen; Evers, Kathinka; Farisco, Michele (2020-04-02). "Anthropomorphism in AI". AJOB Neuroscience. 11 (2): 88–95. doi:10.1080/21507740.2020.1740350. ISSN 2150-7740. PMID 32228388.
- ^ Proudfoot, Diane (2011-04-01). "Anthropomorphism and AI: Turing's much misunderstood imitation game". Artificial Intelligence. Special Review Issue. 175 (5): 950–957. doi:10.1016/j.artint.2011.01.006. ISSN 0004-3702.
- ^ Dennett, Daniel (1987). The Intentional Stance. MIT Press.
- ^ McDermott, Drew (1976-04-01). "Artificial intelligence meets natural stupidity". SIGART Bull. (57): 4–9. doi:10.1145/1045339.1045340. ISSN 0163-5719.
- ^ Shanahan, Murray (2023-02-17), Talking About Large Language Models, arXiv:2212.03551, retrieved 2025-11-12
- ^ Prescott, Tony J.; Camilleri, Daniel (2019), Aldinhas Ferreira, Maria Isabel; Silva Sequeira, João; Ventura, Rodrigo (eds.), "The Synthetic Psychology of the Self", Cognitive Architectures, Cham: Springer International Publishing, pp. 85–104, doi:10.1007/978-3-319-97550-4_7, ISBN 978-3-319-97550-4
- ^ Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian; Lillicrap, Timothy; Hui, Fan; Sifre, Laurent; van den Driessche, George (October 2017). "Mastering the game of Go without human knowledge". Nature. 550 (7676): 354–359. Bibcode:2017Natur.550..354S. doi:10.1038/nature24270. ISSN 1476-4687. PMID 29052630.
- ^ Marcus, Gary (2018-01-03), Deep Learning: A Critical Appraisal, arXiv:1801.00631, retrieved 2025-11-12
- ^ a b c Watson, David (2019-09-01). "The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence". Minds and Machines. 29 (3): 417–440. doi:10.1007/s11023-019-09506-6. ISSN 1572-8641.
- ^ Ullman, Shimon (2019-02-15). "Using neuroscience to develop artificial intelligence". Science. 363 (6428): 692–693. Bibcode:2019Sci...363..692U. doi:10.1126/science.aau6595. ISSN 0036-8075. PMID 30765552. Archived from the original on 2022-10-17.
- ^ Goodfellow, Ian J.; Shlens, Jonathon; Szegedy, Christian (2015-03-24), Explaining and Harnessing Adversarial Examples, arXiv:1412.6572
- ^ Manzi, Federico; Massaro, Davide; Di Lernia, Daniele; Maggioni, Mario A.; Riva, Giuseppe; Marchetti, Antonella (May 2021). "Robots Are Not All the Same: Young Adults' Expectations, Attitudes, and Mental Attribution to Two Humanoid Social Robots". Cyberpsychology, Behavior, and Social Networking. 24 (5): 307–314. doi:10.1089/cyber.2020.0162. ISSN 2152-2715. PMID 33181030.
- ^ Manzi, Federico; Peretti, Giulia; Di Dio, Cinzia; Cangelosi, Angelo; Itakura, Shoji; Kanda, Takayuki; Ishiguro, Hiroshi; Massaro, Davide; Marchetti, Antonella (2020-09-30). "A Robot Is Not Worth Another: Exploring Children's Mental State Attribution to Different Humanoid Robots". Frontiers in Psychology. 11 2011. doi:10.3389/fpsyg.2020.02011. ISSN 1664-1078. PMID 33101099.
- ^ Yee, Nick; Bailenson, Jeremy N; Rickertsen, Kathryn (2007-04-29). "A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '07. New York, NY, USA: Association for Computing Machinery. pp. 1–10. doi:10.1145/1240624.1240626. ISBN 978-1-59593-593-9.
- ^ a b Dubois-Sage, Marion; Jacquet, Baptiste; Jamet, Frank; Baratgin, Jean (2023-07-28). "We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters too!". Applied Sciences. 13 (15): 8743. doi:10.3390/app13158743. ISSN 2076-3417.
- ^ Zanatto, Debora; Patacchiola, Massimiliano; Cangelosi, Angelo; Goslin, Jeremy (2020-01-01). "Generalisation of Anthropomorphic Stereotype". International Journal of Social Robotics. 12 (1): 163–172. doi:10.1007/s12369-019-00549-4. ISSN 1875-4805.
- ^ Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo (2008-07-09). "Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI". PLOS ONE. 3 (7) e2597. Bibcode:2008PLoSO...3.2597K. doi:10.1371/journal.pone.0002597. ISSN 1932-6203. PMC 2440351. PMID 18612463.
- ^ "The Uncanny Valley [From the Field]". ResearchGate. Archived from the original on 2024-11-09. Retrieved 2025-11-10.
- ^ Laakasuo, Michael; Palomäki, Jussi; Köbis, Nils (2021-11-01). "Moral Uncanny Valley: A Robot's Appearance Moderates How its Decisions are Judged". International Journal of Social Robotics. 13 (7): 1679–1688. doi:10.1007/s12369-020-00738-6. ISSN 1875-4805.
- ^ Mitchell, Wade J.; Szerszen, Kevin A.; Lu, Amy Shirong; Schermerhorn, Paul W.; Scheutz, Matthias; Macdorman, Karl F. (2011). "A mismatch in the human realism of face and voice produces an uncanny valley". i-Perception. 2 (1): 10–12. doi:10.1068/i0415. ISSN 2041-6695. PMC 3485769. PMID 23145223.
- ^ Zlotowski, Jakub Aleksander; Sumioka, Hidenobu; Nishio, Shuichi; Glas, Dylan F.; Bartneck, Christoph; Ishiguro, Hiroshi (2015-06-30). "Persistence of the uncanny valley: the influence of repeated interactions and a robot's attitude on its perception". Frontiers in Psychology. 6: 883. doi:10.3389/fpsyg.2015.00883. ISSN 1664-1078. PMC 4484984. PMID 26175702.
- ^ Horstmann, Aike C.; Krämer, Nicole C. (2020-08-21). "Expectations vs. actual behavior of a social robot: An experimental investigation of the effects of a social robot's interaction skill level and its expected future role on people's evaluations". PLOS ONE. 15 (8) e0238133. Bibcode:2020PLoSO..1538133H. doi:10.1371/journal.pone.0238133. ISSN 1932-6203. PMC 7446840. PMID 32822438.
- ^ Kumar, Shikhar; Itzhak, Eliran; Edan, Yael; Nimrod, Galit; Sarne-Fleischmann, Vardit; Tractinsky, Noam (2022-10-01). "Politeness in Human–Robot Interaction: A Multi-Experiment Study with Non-Humanoid Robots". International Journal of Social Robotics. 14 (8): 1805–1820. doi:10.1007/s12369-022-00911-z. ISSN 1875-4805. PMC 9387416. PMID 35996386.
- ^ Waytz, Adam; Morewedge, Carey K.; Epley, Nicholas; Monteleone, George; Gao, Jia-Hong; Cacioppo, John T. (September 2010). "Making sense by making sentient: effectance motivation increases anthropomorphism". Journal of Personality and Social Psychology. 99 (3): 410–435. doi:10.1037/a0020240. ISSN 1939-1315. PMID 20649365.
- ^ Minato, Takashi; Sakai, Kurima; Uchida, Takahisa; Ishiguro, Hiroshi (2022-10-11). "A study of interactive robot architecture through the practical implementation of conversational android". Frontiers in Robotics and AI. 9 905030. doi:10.3389/frobt.2022.905030. ISSN 2296-9144. PMC 9592984. PMID 36304795.
- ^ Salem, Maha; Eyssel, Friederike; Rohlfing, Katharina; Kopp, Stefan; Joublin, Frank (2013-08-01). "To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability". International Journal of Social Robotics. 5 (3): 313–323. doi:10.1007/s12369-013-0196-9. ISSN 1875-4805.
- ^ Kumazaki, Hirokazu; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Ishiguro, Hiroshi; Kikuchi, Mitsuru; Sumiyoshi, Tomiki; Mimura, Masaru (2020). "Optimal robot for intervention for individuals with autism spectrum disorders". Psychiatry and Clinical Neurosciences. 74 (11): 581–586. doi:10.1111/pcn.13132. ISSN 1440-1819. PMC 7692924. PMID 32827328.
- ^ Kuz, Sinem; Mayer, Marcel Ph.; Müller, Simon; Schlick, Christopher M. (2013). "Using Anthropomorphism to Improve the Human-Machine Interaction in Industrial Environments (Part I)". In Duffy, Vincent G. (ed.). Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management. Human Body Modeling and Ergonomics. Lecture Notes in Computer Science. Vol. 8026. Berlin, Heidelberg: Springer. pp. 76–85. doi:10.1007/978-3-642-39182-8_9. ISBN 978-3-642-39182-8.
- ^ Castro-González, Álvaro; Admoni, Henny; Scassellati, Brian (2016-06-01). "Effects of form and motion on judgments of social robots' animacy, likability, trustworthiness and unpleasantness". International Journal of Human-Computer Studies. 90: 27–38. doi:10.1016/j.ijhcs.2016.02.004. hdl:10016/35083. ISSN 1071-5819.
- ^ a b Abercrombie, Gavin; Curry, Amanda Cercas; Dinkar, Tanvi; Rieser, Verena; Talat, Zeerak (2023-10-23), Mirages: On Anthropomorphism in Dialogue Systems, arXiv:2305.09800, retrieved 2025-10-23
- ^ Krauss, Robert M; Freyberg, Robin; Morsella, Ezequiel (2002-11-01). "Inferring speakers' physical attributes from their voices". Journal of Experimental Social Psychology. 38 (6): 618–625. doi:10.1016/S0022-1031(02)00510-3. ISSN 0022-1031.
- ^ Stern, Julia; Schild, Christoph; Jones, Benedict C.; DeBruine, Lisa M.; Hahn, Amanda; Puts, David A.; Zettler, Ingo; Kordsmeyer, Tobias L.; Feinberg, David; Zamfir, Dan; Penke, Lars; Arslan, Ruben C. (2021-06-01). "Do voices carry valid information about a speaker's personality?". Journal of Research in Personality. 92 104092. doi:10.1016/j.jrp.2021.104092. ISSN 0092-6566.
- ^ Shiramizu, Victor Kenji M.; Lee, Anthony J.; Altenburg, Daria; Feinberg, David R.; Jones, Benedict C. (2022-12-28). "The role of valence, dominance, and pitch in perceptions of artificial intelligence (AI) conversational agents' voices". Scientific Reports. 12 (1): 22479. Bibcode:2022NatSR..1222479S. doi:10.1038/s41598-022-27124-8. ISSN 2045-2322. PMID 36577918.
- ^ Nass, Clifford; Brave, Scott (2007-02-23). Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. Cambridge, MA, USA: MIT Press. ISBN 978-0-262-64065-7.
- ^ Wilson, Sarah; Moore, Roger K. (2017). "Robot, Alien and Cartoon Voices: Implications for Speech-Enabled Systems". 1st Int. Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots (pp. 40-44).
- ^ Lieu, Johnny (2018-05-11). "Google's human-like AI assistant will disclose itself as a robot". Mashable. Retrieved 2025-10-25.
- ^ Torre, Ilaria; Maguer, Sébastien Le (August 2020). "Should robots have accents?". 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). pp. 208–214. doi:10.1109/RO-MAN47096.2020.9223599. ISBN 978-1-7281-6075-7.
- ^ Gros, David; Li, Yu; Yu, Zhou (August 2021). "The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting User Questions About Human or Non-Human Identity". In Zong, Chengqing; Xia, Fei; Li, Wenjie; Navigli, Roberto (eds.). Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics. pp. 6999–7013. doi:10.18653/v1/2021.acl-long.544.
- ^ a b Abercrombie, Gavin; Cercas Curry, Amanda; Pandya, Mugdha; Rieser, Verena (August 2021). Costa-jussa, Marta; Gonen, Hila; Hardmeier, Christian; Webster, Kellie (eds.). "Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism in the Design and Perception of Conversational Assistants". Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing. Online: Association for Computational Linguistics: 24–33. doi:10.18653/v1/2021.gebnlp-1.4.
- ^ Butlin, Patrick; Long, Robert; Elmoznino, Eric; Bengio, Yoshua; Birch, Jonathan; Constant, Axel; Deane, George; Fleming, Stephen M.; Frith, Chris (2023-08-22), Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, arXiv:2308.08708, retrieved 2025-10-25
- ^ Mirnig, Nicole; Stollnberger, Gerald; Miksch, Markus; Stadler, Susanne; Giuliani, Manuel; Tscheligi, Manfred (2017-05-31). "To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot". Frontiers in Robotics and AI. 4 21. doi:10.3389/frobt.2017.00021. ISSN 2296-9144.
- ^ Glaese, Amelia; McAleese, Nat; Trębacz, Maja; Aslanides, John; Firoiu, Vlad; Ewalds, Timo; Rauh, Maribeth; Weidinger, Laura; Chadwick, Martin (2022-09-28), Improving alignment of dialogue agents via targeted human judgements, arXiv:2209.14375, retrieved 2025-10-25
- ^ Zhu, Ling.Yu; Zhang, Zhengkun; Wang, Jun; Wang, Hongbin; Wu, Haiying; Yang, Zhenglu (May 2022). Muresan, Smaranda; Nakov, Preslav; Villavicencio, Aline (eds.). "Multi-Party Empathetic Dialogue Generation: A New Task for Dialog Systems". Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Dublin, Ireland: Association for Computational Linguistics: 298–307. doi:10.18653/v1/2022.acl-long.24.
- ^ Curry, Alba; Cercas Curry, Amanda (July 2023). Rogers, Anna; Boyd-Graber, Jordan; Okazaki, Naoaki (eds.). "Computer says "No": The Case Against Empathetic Conversational AI". Findings of the Association for Computational Linguistics: ACL 2023. Toronto, Canada: Association for Computational Linguistics: 8123–8130. doi:10.18653/v1/2023.findings-acl.515.
- ^ Noonan, H. W. (2010-01-01). "The thinking animal problem and personal pronoun revisionism". Analysis. 70 (1): 93–98. doi:10.1093/analys/anp137. ISSN 0003-2638.
- ^ Olson, Eric T. (2002). "Thinking Animals and the Reference of 'I'". Philosophical Topics. 30 (1): 189–207. doi:10.5840/philtopics20023016. ISSN 0276-2080. JSTOR 43154385.
- ^ "On Human Nature | Princeton University Press". press.princeton.edu. 2017-02-28. Retrieved 2025-10-25.
- ^ Hunger, Francis (2023-04-12). Unhype Artificial 'Intelligence'! A proposal to replace the deceiving terminology of AI (Report).
- ^ Shanahan, Murray (2023-02-16), Talking About Large Language Models, arXiv:2212.03551, retrieved 2025-10-26
- ^ Leong, Brenda; Selinger, Evan (2019-01-29). "Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism". Proceedings of the Conference on Fairness, Accountability, and Transparency. FAT* '19. New York, NY, USA: Association for Computing Machinery. pp. 299–308. doi:10.1145/3287560.3287591. ISBN 978-1-4503-6125-5.
- ^ Mielke, Sabrina J.; Szlam, Arthur; Dinan, Emily; Boureau, Y-Lan (2022-12-23). "Reducing Conversational Agents' Overconfidence Through Linguistic Calibration". Transactions of the Association for Computational Linguistics. 10: 857–872. doi:10.1162/tacl_a_00494. Archived from the original on 2025-05-14.
- ^ Barron, Jesse (2025-10-24). "After Teen Suicide, Character.AI Lawsuit Raises Questions Over Free Speech Protections". The New York Times. ISSN 0362-4331. Retrieved 2025-10-26.
- ^ Lingel, Jessa; Crawford, Kate (2020-05-15). ""Alexa, Tell Me about Your Mother": The History of the Secretary and the End of Secrecy". Catalyst: Feminism, Theory, Technoscience. 6 (1). doi:10.28968/cftt.v6i1.29949. ISSN 2380-3312.
- ^ Cercas Curry, Amanda; Abercrombie, Gavin; Rieser, Verena (November 2021). "ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Detection in Conversational AI". In Moens, Marie-Francine; Huang, Xuanjing; Specia, Lucia; Yih, Scott Wen-tau (eds.). Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics. pp. 7388–7403. doi:10.18653/v1/2021.emnlp-main.587.
- ^ Cercas Curry, Amanda; Rieser, Verena (June 2018). Alfano, Mark; Hovy, Dirk; Mitchell, Margaret; Strube, Michael (eds.). "#MeToo Alexa: How Conversational Systems Respond to Sexual Harassment". Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing. New Orleans, Louisiana, USA: Association for Computational Linguistics: 7–14. doi:10.18653/v1/W18-0802.
- ^ Dinan, Emily; Abercrombie, Gavin; Bergman, A.; Spruit, Shannon; Hovy, Dirk; Boureau, Y-Lan; Rieser, Verena (May 2022). Muresan, Smaranda; Nakov, Preslav; Villavicencio, Aline (eds.). "SafetyKit: First Aid for Measuring Safety in Open-domain Conversational Systems". Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Dublin, Ireland: Association for Computational Linguistics: 4113–4133. doi:10.18653/v1/2022.acl-long.284.
- ^ "Fritz Heider: An Experimental Study of Apparent Behavior". Psychology. Retrieved 2025-10-28.
- ^ Mota-Rojas, Daniel; Mariti, Chiara; Zdeinert, Andrea; Riggio, Giacomo; Mora-Medina, Patricia; Del Mar Reyes, Alondra; Gazzano, Angelo; Domínguez-Oliva, Adriana; Lezama-García, Karina; José-Pérez, Nancy; Hernández-Ávalos, Ismael (2021-11-15). "Anthropomorphism and Its Adverse Effects on the Distress and Welfare of Companion Animals". Animals: An Open Access Journal from MDPI. 11 (11): 3263. doi:10.3390/ani11113263. ISSN 2076-2615. PMC 8614365. PMID 34827996.
- ^ a b c Epley, Nicholas; Waytz, Adam; Cacioppo, John T. (October 2007). "On seeing human: a three-factor theory of anthropomorphism". Psychological Review. 114 (4): 864–886. doi:10.1037/0033-295X.114.4.864. ISSN 0033-295X. PMID 17907867.
- ^ Nicolas, Spatola; Agnieszka, Wykowska (2021-09-01). "The personality of anthropomorphism: How the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots". Computers in Human Behavior. 122 106841. doi:10.1016/j.chb.2021.106841. ISSN 0747-5632.
- ^ Roselli, Cecilia; Lapomarda, Leonardo; Datteri, Edoardo (2025-05-01). "How culture modulates anthropomorphism in Human-Robot Interaction: A review". Acta Psychologica. 255 104871. doi:10.1016/j.actpsy.2025.104871. ISSN 0001-6918.
- ^ Bernotat, Jasmin; Eyssel, Friederike (August 2017). "A robot at home – How affect, technology commitment, and personality traits influence user experience in an intelligent robotics apartment". 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 641–646. doi:10.1109/ROMAN.2017.8172370. ISBN 978-1-5386-3518-6.
- ^ Salem, Maha; Lakatos, Gabriella; Amirabdollahian, Farshid; Dautenhahn, Kerstin (2015-03-02). "Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust". Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. HRI '15. New York, NY, USA: Association for Computing Machinery. pp. 141–148. doi:10.1145/2696454.2696497. ISBN 978-1-4503-2883-8.
- ^ Park, Eunil; Jin, Dallae; del Pobil, Angel P. (2012-08-21). "The Law of Attraction in Human-Robot Interaction". International Journal of Advanced Robotic Systems. 9 (2): 35. doi:10.5772/50228. ISSN 1729-8806.
- ^ Bartz, Jennifer A.; Tchalova, Kristina; Fenerci, Can (2016-12-01). "Reminders of Social Connection Can Attenuate Anthropomorphism: A Replication and Extension of Epley, Akalis, Waytz, and Cacioppo (2008)". Psychological Science. 27 (12): 1644–1650. doi:10.1177/0956797616668510. ISSN 0956-7976. PMID 27777375.
- ^ van Straten, Caroline L.; Peter, Jochen; Kühne, Rinaldo (2020-05-01). "Child–Robot Relationship Formation: A Narrative Review of Empirical Research". International Journal of Social Robotics. 12 (2): 325–344. doi:10.1007/s12369-019-00569-0. ISSN 1875-4805. PMC 7235061. PMID 32454901.
- ^ Heerink, Marcel (2011-03-06). "Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults". Proceedings of the 6th international conference on Human-robot interaction. HRI '11. New York, NY, USA: Association for Computing Machinery. pp. 147–148. doi:10.1145/1957656.1957704. ISBN 978-1-4503-0561-7.
- ^ Niculescu, Andreea; van Dijk, Betsy; Nijholt, Anton; Li, Haizhou; See, Swee Lan (2013-04-01). "Making Social Robots More Attractive: The Effects of Voice Pitch, Humor and Empathy". International Journal of Social Robotics. 5 (2): 171–191. doi:10.1007/s12369-012-0171-x. ISSN 1875-4805.
- ^ a b Dang, Jianning; Liu, Li (2023-04-01). "Do lonely people seek robot companionship? A comparative examination of the Loneliness–Robot anthropomorphism link in the United States and China". Computers in Human Behavior. 141 107637. doi:10.1016/j.chb.2022.107637. ISSN 0747-5632.
- ^ Eyssel, Friederike; Reich, Natalia (2013-03-03). "Loneliness makes the heart grow fonder (of robots): on the effects of loneliness on psychological anthropomorphism". Proceedings of the 8th ACM/IEEE International Conference on Human-robot Interaction. HRI '13. Tokyo, Japan: IEEE Press: 121–122. ISBN 978-1-4673-3055-8.
- ^ Eyssel, Friederike; Kuchenbrandt, Dieta (2012). "Social categorization of social robots: Anthropomorphism as a function of robot group membership". British Journal of Social Psychology. 51 (4): 724–731. doi:10.1111/j.2044-8309.2011.02082.x. ISSN 2044-8309. PMID 22103234.
- ^ Wang, Dandan; Zhang, Shiqing (2024-09-20). "Large language models in medical and healthcare fields: applications, advances, and challenges". Artificial Intelligence Review. 57 (11): 299. doi:10.1007/s10462-024-10921-0. ISSN 1573-7462.
- ^ Clusmann, Jan; Kolbinger, Fiona R.; Muti, Hannah Sophie; Carrero, Zunamys I.; Eckardt, Jan-Niklas; Laleh, Narmin Ghaffari; Löffler, Chiara Maria Lavinia; Schwarzkopf, Sophie-Caroline; Unger, Michaela; Veldhuizen, Gregory P.; Wagner, Sophia J.; Kather, Jakob Nikolas (2023-10-10). "The future landscape of large language models in medicine". Communications Medicine. 3 (1): 141. doi:10.1038/s43856-023-00370-1. ISSN 2730-664X. PMC 10564921. PMID 37816837.
- ^ Xu, Weijie; Desai, Jay; Wu, Fanyou; Valvoda, Josef; Sengamedu, Srinivasan H. (2024-10-15), HR-Agent: A Task-Oriented Dialogue (TOD) LLM Agent Tailored for HR Applications, arXiv:2410.11239, retrieved 2025-10-23
- ^ Hutchinson, Maeve; Jianu, Radu; Slingsby, Aidan; Madhyastha, Pranava (2024-09-04), LLM-Assisted Visual Analytics: Opportunities and Challenges, arXiv:2409.02691, retrieved 2025-10-23
- ^ Zha, Siyu; Liu, Yujia; Zheng, Chengbo; XU, Jiaqi; Yu, Fuze; Gong, Jiangtao; XU, Yingqing (2024-09-21), Mentigo: An Intelligent Agent for Mentoring Students in the Creative Problem Solving Process, arXiv:2409.14228, retrieved 2025-10-23
- ^ Huang, Hengguan; Wang, Songtao; Liu, Hongfu; Wang, Hao; Wang, Ye (2024-06-08), "Foundation model assisted visual analytics: Opportunities and Challenges", Computers & Graphics, 130 104246, arXiv:2402.05547, doi:10.1016/j.cag.2025.104246, retrieved 2025-10-23
- ^ Shea, Ryan; Kallala, Aymen; Liu, Xin Lucy; Morris, Michael W.; Yu, Zhou (2024-10-02), ACE: A LLM-based Negotiation Coaching System, arXiv:2410.01555, retrieved 2025-10-23
- ^ Mollick, Ethan R.; Mollick, Lilach (2023). "Assigning AI: Seven Approaches for Students, with Prompts". SSRN Electronic Journal. doi:10.2139/ssrn.4475995. ISSN 1556-5068.
- ^ a b "Meet My A.I. Friends (Published 2024)". 2024-05-09. Retrieved 2025-10-23.
- ^ Eubanks, Virginia (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. USA: St. Martin's Press, Inc. ISBN 978-1-250-07431-7.
- ^ Watson, David S.; Krutzinna, Jenny; Bruce, Ian N.; Griffiths, Christopher EM; McInnes, Iain B.; Barnes, Michael R.; Floridi, Luciano (2019-03-12). "Clinical applications of machine learning algorithms: beyond the black box". BMJ. 364: l886. doi:10.1136/bmj.l886. ISSN 0959-8138. PMID 30862612.
- ^ Reani, Manuele; He, Xiangyang; Luo, Yunzhong; Sun, Zhida (2025), Fundamental Over-Attribution Error: Anthropomorphic Design of AI and its Negative Effect on Human Perception, doi:10.2139/ssrn.5222775, retrieved 2025-11-14
- ^ a b Placani, Adriana (2024-08-01). "Anthropomorphism in AI: hype and fallacy". AI and Ethics. 4 (3): 691–698. doi:10.1007/s43681-024-00419-4. ISSN 2730-5961.
- ^ Booth, Robert (2024-11-17). "AI could cause 'social ruptures' between people who disagree on its sentience". The Guardian. ISSN 0261-3077. Retrieved 2025-10-23.
- ^ Giroux, Marilyn; Kim, Jungkeun; Lee, Jacob C.; Park, Jongwon (2022-07-01). "Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI". Journal of Business Ethics. 178 (4): 1027–1041. doi:10.1007/s10551-022-05056-7. ISSN 1573-0697. PMC 8853322. PMID 35194275.
- ^ Alabed, Amani; Javornik, Ana; Gregory-Smith, Diana (2022-09-01). "AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda". Technological Forecasting and Social Change. 182 121786. doi:10.1016/j.techfore.2022.121786. ISSN 0040-1625.
- ^ Zhang, Shuning; Ye, Lyumanshan; Yi, Xin; Tang, Jingyu; Shui, Bo; Xing, Haobin; Liu, Pengfei; Li, Hewu (2024-10-19), "Ghost of the past": identifying and resolving privacy leakage from LLM's memory through proactive user interaction, arXiv:2410.14931, retrieved 2025-10-23
- ^ a b c d Akbulut, Canfer; Weidinger, Laura; Manzini, Arianna; Gabriel, Iason; Rieser, Verena (2024-10-16). "All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 7 (1): 13–26. doi:10.1609/aies.v7i1.31613. ISSN 3065-8365.
- ^ a b "Measuring the Persuasiveness of Language Models". www.anthropic.com. Retrieved 2025-10-23.
- ^ a b c d e Peter, Sandra; Riemer, Kai; West, Jevin D. (2025-06-03). "The benefits and dangers of anthropomorphic conversational agents". Proceedings of the National Academy of Sciences. 122 (22) e2415898122. Bibcode:2025PNAS..12215898P. doi:10.1073/pnas.2415898122. PMC 12146756. PMID 40378006.
- ^ "They fell in love with AI bots. A software update broke their hearts". The Washington Post. 2023-03-30. ISSN 0190-8286. Retrieved 2025-11-03.
- ^ Laestadius, Linnea; Bishop, Andrea; Gonzalez, Michael; Illenčík, Diana; Campos-Castillo, Celeste (2024-10-01). "Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika". New Media & Society. 26 (10): 5923–5941. doi:10.1177/14614448221142007. ISSN 1461-4448.
- ^ Maples, Bethanie; Cerit, Merve; Vishwanath, Aditya; Pea, Roy (2024-01-22). "Loneliness and suicide mitigation for students using GPT3-enabled chatbots". npj Mental Health Research. 3 (1): 4. doi:10.1038/s44184-023-00047-6. ISSN 2731-4251. PMID 38609517.
- ^ Weijers, Dan; Munn, Nick (2024-05-08). "AI companions can relieve loneliness – but here are 4 red flags to watch for in your chatbot 'friend'". The Conversation. Retrieved 2025-10-23.
- ^ "AI friendships claim to cure loneliness. Some are ending in suicide". The Washington Post. 2024-12-06. ISSN 0190-8286. Retrieved 2025-10-23.
- ^ Allyn, Bobby (2024-12-10). "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits". NPR. Retrieved 2025-10-23.
- ^ Costello, Thomas H.; Pennycook, Gordon; Rand, David G. (2024-09-13). "Durably reducing conspiracy beliefs through dialogues with AI". Science. 385 (6714) eadq1814. Bibcode:2024Sci...385q1814C. doi:10.1126/science.adq1814. PMID 39264999.
- ^ Lee, Sanghyub John; Paas, Leo; Ahn, Ho Seok (2024-09-01). "The power of specific emotion analysis in predicting donations: A comparative empirical study between sentiment and specific emotion analysis in social media". International Journal of Market Research. 66 (5): 610–630. doi:10.1177/14707853241261248. ISSN 1470-7853.
- ^ Park, Peter S.; Goldstein, Simon; O'Gara, Aidan; Chen, Michael; Hendrycks, Dan (2023-08-28), AI Deception: A Survey of Examples, Risks, and Potential Solutions, arXiv:2308.14752, retrieved 2025-10-23
- ^ Shanahan, Murray; McDonell, Kyle; Reynolds, Laria (November 2023). "Role play with large language models". Nature. 623 (7987): 493–498. arXiv:2305.16367. Bibcode:2023Natur.623..493S. doi:10.1038/s41586-023-06647-8. ISSN 1476-4687. PMID 37938776.
- ^ Milossi, Maria; Alexandropoulou-Egyptiadou, Eugenia; Psannis, Konstantinos E. (2021). "AI Ethics: Algorithmic Determinism or Self-Determination? The GPDR Approach". IEEE Access. 9: 58455–58466. Bibcode:2021IEEEA...958455M. doi:10.1109/ACCESS.2021.3072782. ISSN 2169-3536.
- ^ Laitinen, Arto; Sahlgren, Otto (2021-10-26). "AI Systems and Respect for Human Autonomy". Frontiers in Artificial Intelligence. 4 705164. doi:10.3389/frai.2021.705164. ISSN 2624-8212. PMC 8576577. PMID 34765969.
- ^ Naddaf, Miryam (2025-10-24). "AI chatbots are sycophants – researchers say it's harming science". Nature. 647 (8088): 13–14. doi:10.1038/d41586-025-03390-0. ISSN 1476-4687. PMID 41136779.
- ^ CDC (2024-05-22). "Social Connection". Social Connection. Retrieved 2025-10-30.
- ^ "Opinion | There Will Never Be an Age of Artificial Intimacy (Published 2018)". 2018-08-11. Retrieved 2025-10-30.
- ^ "Learning to reason with LLMs". openai.com. Retrieved 2025-11-03.
- ^ Monteverde, Giulia; Cammarota, Antonella; Serafini, Ludovica; Quadri, Martina (2025-01-02). "Are we human or are we voice assistants? Revealing the interplay between anthropomorphism and consumer concerns". Journal of Marketing Management. 41 (1–2): 200–235. doi:10.1080/0267257X.2025.2475231. ISSN 0267-257X.
- ^ "aibo". aibo. Retrieved 2025-11-17.
- ^ SoftBank Robotics America. "Meet Pepper: The Robot Built for People | SoftBank Robotics America". us.softbankrobotics.com. Retrieved 2025-11-17.