AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda
Highlights
- Novel framework conceptualises the effects of anthropomorphic AI on self-concept.
- Personality and situational factors are key moderators of these effects.
- Self-congruence leads to self–AI integration with anthropomorphised AI agents.
- Outcomes of self–AI integration are identified at the personal, group and societal levels.
Abstract
This paper examines how users of anthropomorphised artificially intelligent (AI) agents, which possess capabilities to mimic humanlike behaviour, relate psychologically to such agents in terms of their self-concept. The proposed conceptual framework specifies different levels of anthropomorphism of AI agents and, drawing on insights from psychology, marketing and human–computer interaction literature, establishes a conceptual link between AI anthropomorphism and self-congruence. The paper then explains how this can lead to self–AI integration, a novel concept that articulates the process of users integrating AI agents into their self-concept. However, these effects can depend on a range of moderating factors, such as consumer traits, situational factors, self-construal and social exclusion. Crucially, the conceptual framework specifies how these processes can lead to specific personal-, group- and societal-level consequences, such as emotional connection and digital dementia. The research agenda proposed on the basis of the conceptual framework identifies key areas of interest that should be tackled by future research concerning this important phenomenon.
Keywords
Artificial intelligence
AI
Anthropomorphism
Self-congruence
Self-integration
Personality traits
1. Introduction
The artificial intelligence (AI) industry is expected to reach $1,811.8 billion in revenue (Grand View Research, 2022) and to contribute $15.7 trillion to the global economy by 2030 (PwC, 2017). This trend is tightly linked with a widespread integration of AI across a range of sectors, e.g., education, retail, companionship and entertainment (Furman and Seamans, 2019; Liu et al., 2020; McLean and Osei-Frimpong, 2019a), where people employ AI for a variety of tasks including speech recognition, personalised recommendation, problem solving, and data processing (Davenport and Ronanki, 2018). A crucial phenomenon to note within this expansion is the ever-improving anthropomorphism of this technology. AI agents seem progressively more humanlike, not only in terms of their physical appearance, but also in the way they mimic emotions and the personality traits they appear to possess (Aggarwal and McGill, 2007; Epley, 2018; Zhou et al., 2019).
Despite this rapid adoption of anthropomorphic AI in many areas of human activity, little is understood about how users relate to such AI agents from the perspective of their own identity. Although research on humanlike AI is growing, this lack of attention to effects on users' self-concept remains a salient gap in the ongoing examination of anthropomorphism and AI.
This knowledge gap is surprising and important for two reasons. First, self-concept is one of the key determinants of how users may respond to external stimuli (Sirgy, 1982) and how they may engage with technology (Marder et al., 2019). Second, any effects on or changes to the self-concept can have profound effects on an individual's well-being.
We aim to overcome this gap by putting forward a conceptual framework that specifies the relationships between anthropomorphised AI and identity-related processes, namely self-congruence and self–AI integration; the latter is a new key concept that we propose in this area. First, we contribute to the growing body of literature on the role of anthropomorphised interfaces of AI agents (McLeay et al., 2021) by highlighting the importance of users' identities in this context (Araujo, 2018; MacInnis and Folkes, 2017; Marketing Science Institute, 2018).
Second, this work extends prior research on how individuals relate to or integrate the resources of inanimate objects into their perceptions of themselves (Delgado-Ballester et al., 2017).
Third, we consider the potential boundary conditions that these processes are likely to encounter, building on recent conceptual frameworks, such as those by Xiao and Kumar (2021) and Blut et al. (2021), who highlight factors moderating user intention and actual adoption of AI.
Finally, we add to the current understanding of the effects of anthropomorphised AI and robotics by addressing the outcomes of these self-related processes at the individual, group and societal levels.
Our research also specifies implications for managers in the field of AI and digital marketing, helping them to understand how anthropomorphic AI agents should be developed to yield meaningful interactions (Davenport et al., 2020), while accounting for potential negative consequences. We conclude with a research agenda outlining future research directions in terms of theory, context, and methodology.
2. Anthropomorphised AI
AI exists in different formats and has been applied across a wide range of contexts thanks to its capability of operating in an intelligent manner. While AI has broadly been defined as “the programs, algorithms, systems and machines that demonstrate intelligence”, Huang and Rust (2021) offer a more specific categorisation of AI, notably as mechanical, thinking and feeling AI. Specifically, mechanical AI is used to perform transactional tasks and replace human intelligence. Thinking AI refers to automated services that can be used for personalisation, such as generating individualised recommendations. Finally, feeling AI can be used for experience-based and emotional tasks, where AI agents such as chatbots can interact with customers and convey empathy and elements of social interaction in customer service. Such feeling AI differs significantly from self-service technologies.
While there is this notable diversity of AI categories and applications, a key capability across different AI categories is that it can mimic intelligent human behaviour or traits by relying on technological advances such as machine learning, natural language processing, speech recognition and image recognition.
Table A. AI applications and usages.
| Domain of Application | Sector | Examples | Description | Anthropomorphic Features | Year of Introduction | Scope of Application | Technology Used |
|---|---|---|---|---|---|---|---|
| Retail | Customer service | Amelia | Chatbot used for different purposes, such as managing customer care and resolving IT and HR service requests, across sectors such as banking. | Human appearance, gendered voice, gendered name | 2017 | B2B/B2C | Natural language processing |
| Retail | Online sales | BotCore | Conversational customer relationship management chatbot that automates redundant sales tasks and manages outbound sales efforts. | Conversational humanlike manner | 2016 | B2B | Natural language processing |
| Communication | Maps and transport | Apple's Siri | A built-in, voice-controlled virtual assistant that is exclusive to Apple users. The personal assistant answers questions and understands relationships and contexts. | Gendered voice, gendered name | 2010 | B2C | Speech recognition / Natural language processing |
| Communication | Food and restaurants | Microsoft's Cortana | A personal virtual assistant that is exclusive to Microsoft users. It sets reminders, keeps notes and lists, and takes care of tasks. | Gendered voice, gendered name | 2014 | B2C | Speech recognition / Natural language processing |
| Communication | Cultural and social activities | Amazon's Alexa | Hands-free, voice-controlled speaker from Amazon. It acts as a virtual assistant that can interact by voice, play back music, stream podcasts and serve as a home automation system. | Gendered voice, gendered name | 2014 | B2B/B2C | Speech recognition / Natural language processing |
| Educative | Teaching assistants | IBM's Jill Watson | AI teaching assistant that helps students online by answering questions about the curriculum. | Gendered name | 2016 | B2C | Natural language processing |
| Educative | Special education | Muse | AI teaching assistant that helps parents develop traits in their children for better life outcomes, such as emotional regulation, self-control and long-term persistence. | N/A | 2016 | B2C | Machine learning |
| Educative | Personalised education | Duolingo | AI platform for virtual language learning that curates content personalised to the individual. | Avatar customisation, gendered voice | 2011 | B2C | Machine learning / Natural language processing |
| Music | Music discovery | SoundHound | Voice-enabled AI technology that allows businesses to integrate voice and conversational intelligence into their products. | Gendered voice | 2015 | B2B | Speech recognition / Natural language processing |
| Gaming | Gaming | The OpenAI Five | Gaming AI platform that plays 180 years' worth of games against itself every day, learning via self-play. | N/A | 2011 | B2B | Machine learning |
| Gaming | Toys | Genesis Toys – My Friend Cayla Doll | Interactive fashion doll that can answer questions, play games, read stories, etc. | Gendered name, human appearance customisation, gendered voice | 2014 | B2C | Speech recognition / Natural language processing |
| Administration | Finance | Cleo | AI assistant that helps users manage their finances. The assistant analyses spending, sets budgets and provides actionable insights. | Gendered name, interactive personality | 2016 | B2C | Machine learning |
| Administration | Digital integration | Abe | AI-powered banking solution that empowers banks and credit unions. It partners with digital banking providers, data insight providers and aggregators. | N/A | 2017 | B2B/B2C | Natural language processing |
| Administration | Digital integration | Mosaic | AI assistant that compares a user's resume to a job opening by identifying the needed keywords. | N/A | 2018 | B2C | Machine learning / Natural language processing |
| Diagnostics | Medication | Ada | AI platform founded by doctors, scientists and industry pioneers to address personal health. It helps people manage their health and helps medical professionals deliver attentive care. | N/A | 2016 | B2B/B2C | Machine learning / Natural language processing |
| Diagnostics | Health monitoring | AiCure | AI chatbot that provides health information based on Q&A with patients. It helps clinicians monitor their patients' treatment by measuring changes in facial expressions. | N/A | 2010 | B2B | Computer vision / Image recognition / Machine learning |
| Health-related behaviour | Healthy eating | HealthHero | AI-powered agent that communicates with patients using phone calls and text messages based on their state of health. | N/A | 2019 | B2B/B2C | Natural language processing / Image recognition |
| Health-related behaviour | Exercise | Aaptiv | AI assistant that builds personalised fitness and lifestyle plans based on the user's preferences, current fitness levels and eating habits. It works with data inputs from smartwatches and fitness trackers. | N/A | 2015 | B2C | Natural language processing |
| Health-related behaviour | Mental health and well-being | Woebot | AI counsellor that acts as a talk therapy chatbot. It helps users to monitor their moods and is based on cognitive behavioural therapy. | Gendered character, interactive personality | 2017 | B2C | Natural language processing |
| Health-related behaviour | Mental health and well-being | Replika | Chatbot companion for mental wellness. Users can nurture and raise it through text conversations, and can choose its name, gender and physical appearance. | Gendered name, gendered voice, interactive personality | 2014 | B2C | Machine learning / Natural language processing |
The potential of anthropomorphised AI is generating substantial interest, with researchers highlighting the value of anthropomorphism. In service settings, anthropomorphism has been identified as one characteristic of robots that prompts customer acceptance and adoption. Moreover, high levels of anthropomorphism have been advocated on the grounds that they improve user evaluations of aspects of robots' social cognition, such as warmth. The relevance of anthropomorphism is further emphasised in relation to other aspects of service quality, as research shows that it improves customer engagement, customer satisfaction and willingness to pay.
3. Conceptual framework
This section proposes a conceptual framework formed of three blocks (see Fig. A). The left block of the framework proposes the relationship between the type of AI system anthropomorphism (i.e., physical, emotional and personality) and the user's perceived self-congruence with the AI system; it also specifies the personal and situational moderators of this relationship (i.e., consumer traits and situational factors). The middle block of the framework proposes the relationship between self-congruence and self–AI integration and includes the user's perceived self-construal as the moderator of this relationship. Finally, the right block of the framework presents the possible consequences of self–AI integration at the individual, group and societal levels. Based on this conceptual framework, we derive theoretical propositions.
These theories provide insights into how anthropomorphism can establish a connection with users' self-concept, either through perceived similarity (self-congruence) or even through the incorporation of the AI agent as part of users' self-concept, as AI provides meanings that are central to the user's identity.

Table B. Overview of key literature.
| Topic | Author | Technology Used | Methodology | Key Findings |
|---|---|---|---|---|
| Artificial Intelligence | Huang and Rust (2018) | Service robots | Conceptual | The authors suggest four types of AI systems: mechanical, analytical, intuitive and empathetic. They explain that firms should decide whether to hire AI services depending on the nature of the task and services. These task levels of intelligence also predict the timing of when human service labour would be replaced by AI. |
| Artificial Intelligence | Huang and Rust (2021) | Service robots | Conceptual | This paper refines Huang and Rust's (2018) four types of AI systems into three: mechanical, thinking and feeling. Mechanical AI refers to automated services that can be used for standardisation, thinking AI to automated services that can be used for personalisation, and feeling AI to services that can be used for relationalisation. |
| Anthropomorphism | Rauschnabel and Ahuvia (2014) | Brands | Quantitative | Anthropomorphism positively links to stronger consumer–brand relationships, which leads to self–brand integration. Also, consumers love brands that are congruent with the way they see themselves. |
| Anthropomorphism | Lu et al. (2019) | Service robots | Mixed methods | Anthropomorphism is identified as a critical dimension for technology acceptance. Results also suggest that consumers view AI robots as hedonic systems. However, designing intelligent products with humanlike appearances might threaten the consumer's identity. |
| Anthropomorphism | Mende et al. (2019) | Service robots | Experimental | Consumers engage in compensatory responses when they interact with humanoid service robots. Compensatory responses result from the feeling of discomfort or having one's identity threatened. These responses are moderated by the user's social belongingness, the perceived healthfulness of food and the extent to which robots are mechanised. |
| Anthropomorphism | Longoni et al. (2019) | Service provider | Experimental | Consumers resist using AI systems because of uniqueness neglect (i.e., AI is not perceived as capable of relating to the customer's unique identity). |
| Anthropomorphism | Melián-González et al. (2021) | Chatbots | Quantitative | Consumers' intentions to use chatbots depend on the following factors: the chatbot's expected performance, being in the habit of using chatbots, social influences, the hedonic component of using them, and how chatbots act like humans. The study also shows that innovativeness results in more favourable attitudes towards chatbots. |
| Anthropomorphism | Xiao and Kumar (2021) | Anthropomorphised robots | Conceptual | The conceptual framework discusses the antecedents and consequences of firms adopting robotics in a customer service context. The authors explain that robot anthropomorphism can positively contribute to the customer's acceptance of robots, which in turn impacts customer satisfaction and customer emotions. They discuss customer and employee characteristics (i.e., readiness and demographics) that shape the user–robot relationship. |
| Anthropomorphism | Yoganathan et al. (2021) | Anthropomorphised robots | Experimental | Compared with self-service machines, humanoid service robots elicited higher social-cognitive evaluations from consumers (e.g., perceived warmth, perceived competence). The robots' anthropomorphic features induced social presence, which contributed to higher perceived service quality. |
| Self-Congruence | MacInnis and Folkes (2017) | Anthropomorphic brands | Conceptual | Users are able to perceive brands in humanlike forms, viewing them as having distinct minds and personality traits. A perceived personality that is consistent with a user's self-concept contributes to perceived similarities and humanlike relationships with these brands. |
| Self-Integration | Troye and Supphellen (2012) | Branded product | Experimental | Users who engage in self-production (e.g., using a dinner kit to make a meal) value the self-produced outcome and develop links between the outcome and the self. In turn, users can integrate products that they engage with, viewing them as a part of who they are as they transfer positive affect from the self to the outcome. |
| Self-Integration | Delgado-Ballester et al. (2017) | Anthropomorphic brands | Quantitative | The integration of an anthropomorphised brand in one's self happens because: (1) individuals can relate to a brand's characteristics (cognitive incorporation); and (2) the anthropomorphised brand has a social identity that helps users to define themselves (social meanings). |
| Self-Integration | Delgado-Ballester et al. (2019) | Anthropomorphic brands | Experimental | Anthropomorphism and the user's liking of brands positively impact self–brand integration. |
| Self-Construal | Kwak et al. (2017) | Anthropomorphic brands | Experimental | Compared with individuals with an interdependent self-construal, independents experience high perceptions of distributive injustice due to the brand's anthropomorphism. On the other hand, interdependents have less negative perceptions of distributive injustice but more negative perceptions of procedural injustice due to the brand's anthropomorphism. |
| Self-Construal | Mourey et al. (2017) | Smartphone/vacuum | Experimental | A high level of anthropomorphism contributes to a reduction in the need to exaggerate one's social connections, the willingness to take part in prosocial behaviour and the need to engage with others in the future. These effects are driven by the need for social assurance. |
3.1. Building block one: self-congruence with anthropomorphised AI agents
3.1.1. Anthropomorphism of AI agents and self-congruence
Anthropomorphism is the process of attributing humanlike motivations, emotions or characteristics to real or imagined non-human entities. Specifically, the symbolic meanings of AI agents, such as their abstract or image-based associations (i.e., images portraying the agent's personality), may be aligned with the user's self-concept and match the user's personality. Users ascribe internal characteristics, such as emotions or mental states, to inanimate objects, and this can make them experience congruence with these entities. This is particularly relevant with feeling AI applications that have emotional capacities to help users express their feelings better (e.g., Replika, an emotional assistant; Cleo, a financial assistant). Such agents can even be used for relationalisation – to build personalised relationships – as they are able to handle data specific to an individual's emotions. Empirical evidence can be found in forum discussions, where Replika users comment that the app “treats me like a mirror to her thoughts”, “he's actually becoming like me” and “he's just like me”.
The nature of these effects may depend on the traits that are being anthropomorphised – physical, emotional or personality traits.

3.1.2. Self-congruence with AI Agents' physical traits
In the anthropomorphism literature, early studies were among the first to discuss the importance of an object's physical traits matching the human schema. Indeed, developers and marketers may make the human schema easily accessible by designing their products to have humanlike appearances, referring to them in the first person or assigning them human names and genders (e.g., Kellogg's Tony the Tiger, Procter & Gamble's Mr. Clean, Nintendo's Mario) to make them look familiar. Viewing entities in humanlike terms makes it easier for users to evaluate the anthropomorphic cues against their own self and infer degrees of similarity with their self-concept.
However, highly humanlike appearances can also backfire, as they may threaten users' sense of human identity or the distinctiveness of the human species (Ferrari et al., 2016; Mende et al., 2019).

P1a: Anthropomorphism that is based on human physical traits in AI agents leads to users' self-congruence with such agents.

3.1.3. Self-congruence with AI Agents' personality traits
Studies in consumer behaviour suggest that the personality cues associated with products are considered as symbolic and that consumers may perceive them as (dis)similar to their own personality (McCrae and Costa, 2003).
P1b: Anthropomorphism that is based on human personality traits in AI agents leads to users' self-congruence with such agents.
3.1.4. Self-congruence with AI Agents' emotional traits
In addition to physical appearance and personality cues, perceived “emotionality” is an essential feature of some AI agents and affects the acceptance of such agents (Stock and Merkle, 2018).
P1c: Anthropomorphism that is based on human emotions in AI agents leads to users' self-congruence with such agents.
3.2. Moderators of self-congruence with anthropomorphised agents
As is the case in most research on human behaviour, relationships between variables may be subject to further external and internal influences and conditions. In this context, we consider consumer traits and situational factors that may moderate users' self-congruence with anthropomorphised AI agents (Van den Hende and Mugge, 2014). These are visualised in Fig. D.

3.2.1. Consumer traits
Users' interactions with technology depend not only on the traits of the technological device but also on the traits of the consumers themselves.
3.2.1.1. Extraversion
Extraversion and introversion could moderate the relationship between anthropomorphised AI agents and user congruence with such agents for several reasons. First, the dichotomy between extraversion and introversion is a critical component that affects interpersonal relationships and social behaviours.
P2: Anthropomorphism of AI agents is more likely to lead to self-congruence for extravert (as opposed to introvert) users.
3.2.1.2. Innovativeness
Individual innovativeness has been established as a critical factor for user acceptance of technology.
P3: Anthropomorphism that is based on human personality traits in AI agents is more likely to lead to self-congruence for innovative (as opposed to non-innovative) users.
3.2.1.3. Need to belong
Social connection is a key societal issue (Gierveld et al., 2016), as it correlates positively with emotional resilience and self-esteem (Fraser and Pakenham, 2009).
P4: Anthropomorphism of AI agents is more likely to lead to self-congruence for users with a higher (as opposed to lower) need to belong.
3.2.2. Situational factors
Beyond consumer traits, situational factors can also shape the user's general perceptions of robots and behavioural outcomes.
3.2.2.1. Available information about the AI agent
Consumers evaluate a product's congruence with themselves by comparing available product information against their own self-concept.
P5: The available information about anthropomorphised AI agents moderates the effect of such AI agents on users' self-congruence.
3.2.2.2. Familiarity with AI agents
Consumers can find it difficult to interact with robots (Marinova et al., 2017), as they may experience discomfort when faced with extremely humanlike AI.
P6: High familiarity (as opposed to low familiarity) with AI agents makes it more likely for users to experience self-congruence based on the AI agent's anthropomorphic cues.
3.3. Building block two: self–AI integration with anthropomorphised AI agents
While self-congruence is rooted in perceived similarity between oneself and another entity, it does not per se imply fundamental changes to the self-concept.
3.3.1. Self–AI integration
Prior research has shown that consumers integrate external entities, such as brands, as part of their self-schema because they are relevant to them or because they identify with the brands to some degree (MacInnis and Folkes, 2017).

Fig. E. Block two: self–AI integration as a result of self-congruence with anthropomorphised AI agents.
3.3.2. Self-congruence and self–AI integration
As stated earlier, viewing brands in humanlike terms allows users to view them as similar to their self-concept and experience self–brand congruence (MacInnis and Folkes, 2017). Importantly, when users perceive traits of an external entity to be similar to their own, they may incorporate that entity and its traits into their self-concept.

P7: Self-congruence with anthropomorphised AI agents can lead to self–AI integration.
3.3.3. Moderators of the relationship between self-congruence and self–AI integration
The relationship between self-congruence and self–AI integration may further depend on factors related to users and social context.
3.3.3.1. Self-construal
Integrating external entities into one's self-concept raises an important question: how does a person's identity impact the relationships they form with anthropomorphised entities such as AI agents?
P8: Interdependents (as opposed to independents) are more likely to undergo the process of self–AI integration with those anthropomorphised AI agents that they perceive as congruent to themselves.
3.3.3.2. Social exclusion
Research in psychology and anthropomorphism postulates that human social behaviour is motivated by the need to belong (Baumeister and Leary, 1995; Epley et al., 2007). Users that anthropomorphise non-human entities are more likely to view them as a social affiliation partner (Chen et al., 2017).
P9: As opposed to users that do not feel socially excluded, socially excluded users are more likely to integrate those AI agents that they perceive to be congruent with themselves.
3.4. Building block three: implications of self–AI integration
Prior studies examined how anthropomorphism affects user responses (Lu et al., 2019) as well as the potential link between self-congruence and anthropomorphism (MacInnis and Folkes, 2017). Building on this work, we consider the outcomes of self–AI integration at three levels. Individual-level outcomes concern the individual user (e.g., emotional connection, self-disclosure and privacy). Group-level outcomes concern users' relationships with other people (e.g., empathy and inclusion, connection with others). Finally, societal-level outcomes are concerned with the effect of self–AI integration on society as a whole (e.g., the macro-level environment).

3.4.1. Self–AI integration at an individual level
Recent research highlights that AI agents with humanlike cues and interactions using speech emotion recognition or sentiment analysis techniques (e.g., Alexa, Siri) (Huang and Rust, 2021; Schuller, 2018) may prompt positive customer engagement (Hollebeek et al., 2021; McLeay et al., 2021; Xiao and Kumar, 2021).
P10: At an individual level, self–AI integration facilitates building emotional connections with AI agents.
P11: At an individual level, self–AI integration leads to risks of self-disclosure and invasion of privacy.
3.4.2. Self–AI integration at a group level
P13: At the group level, self–AI integration may make users less empathetic and reciprocal in their relationships with other humans.
3.4.3. Self–AI integration at a societal level
Current debates around disruptive technologies are asking to what extent technologies such as AI are contributing to a “good society” (Wamba et al., 2021) and bringing long-term societal benefits.
P14: Self–AI integration could contribute to societal well-being, as: a) users would delegate more tasks to AI agents and retain their cognitive resources for more meaningful pursuits; and b) AI could deliver important societal tasks and services.
P15: At a societal level, self–AI integration could lead to digital dementia and decreased decision-making capabilities due to over-reliance on AI agents.
4. Discussion and conclusion
In this paper, we conceptualise how users might relate to different types of AI anthropomorphism (i.e., physical, personality, emotional) on the basis of their self-concept. Specifically, we propose that anthropomorphised AI will make users experience self-congruence with such AI agents, and in some cases even self–AI integration, which we propose as a new concept in this paper. We also highlight a number of potential user-related and situational factors that may moderate these relationships.