Customer experience is a major theme in contemporary marketing and management. Experiences vary greatly as they happen in different contexts and situations (e.g., shopping, product usage, service). Digital, online and mobile experiences have special characteristics, and as the digital sphere becomes more dominant, digital-driven experiences are getting more frequent and pervasive. But a growing force within the digital sphere, namely artificial intelligence (AI), looks set to have even greater and more far-reaching impacts on customer experiences, and on consumer behaviour more generally. Artificial intelligence is already increasingly involved in many processes, especially when facing consumers directly (e.g., chatbots, virtual assistants) or affecting interactions implicitly (e.g., personalisation, recommendations). Still, consumers seem to have too little understanding of the roles that AI may undertake, and of how AI can influence their decisions, behaviours, and the outcomes of their actions.
Customer experiences are closely linked to relationship marketing: positive, successful or enjoyable, and enhanced experiences tend to contribute markedly to the development of stronger relationships between customers and companies or brands, relationships that encourage customers to stay for longer. Yet consumers cannot be regarded as passive subjects in the experiences they go through: they experience subjectively the interactions and other activities they take part in (i.e., how they perceive, think and feel about the experience), and hence most often they influence and shape the way an experience proceeds. Customer experiences are multidimensional, encompassing cognitive, emotional, behavioural, sensorial and social aspects of consumer response. Relationship marketing is based on developing cooperative and collaborative relationships with customers to fulfil their needs and wants in the most satisfying manner. Designing experiences in which customers can actively and voluntarily participate in driving their course, and occasionally be creative (e.g., personal customisation, open innovation), coincides with a collaborative view of the relationships built with customers; it is also consistent with the service-dominant logic in marketing, wherein value is created during consumer experiences of tangible product usage and intangible service utilisation [1a to 1c]. Artificial intelligence offers tools and capabilities that can improve experiences for customers (e.g., increase the accuracy and timeliness of offers, provide assistance faster), yet it could also interfere with and break the connection between the customer and a company or brand.
In a clear and eye-opening review article, Puntoni, Walker Reczek, Giesler, and Botti (2021) [2] identify four types of consumer experiences with AI: data capture, classification, delegation, and social. These types also represent functional areas of AI through which companies engage with consumers, prospective and actual customers. The authors present the aspects and effects of AI's involvement in each type of experience, propose relevant research questions, and discuss their implications and possible consequences, for better and for worse. They clarify, however, that these experience types are inter-related (e.g., a capability of AI applied in data capture may influence the classification of customers); hence the types should not be treated independently but approached as different aspects of a whole customer journey. Puntoni and his colleagues provide practical, real-life examples of consumer experiences with AI that illustrate the issues they raise; the examples help readers grasp the theoretical issues and claims, and make the framework more vivid.
The phase of data capture can be very sensitive because the personal information captured about a customer may resurface and affect other activities and experiences throughout the customer journey. Puntoni and his colleagues refer to information provided intentionally by the consumer (although sometimes without fully understanding how it is being used) as well as information gathered as “shadows” of the activities and actions consumers engage in. Information on consumer preferences and habits can be applied, for instance, to aid the customer by creating better-fitting offers or automating routine or recurring decisions. However, having one’s information obtained or traced on different occasions can invoke, from a sociological perspective, an undesirable sense of surveillance, a feeling that one is being followed around (e.g., face images captured by cameras in a store, learning one’s home layout from the paths taken by an iRobot Roomba cleaner).
From a psychological perspective, a lack of transparency about which information is being used, and how, can lead consumers to feel exploited. That is, although they are likely to recognise the benefits to be gained from using their data for customisation, consumers may feel uneasy and irritated by not understanding how or for what other purposes the information could be employed by AI. This may lead consumers, for example, to reactance: a hostile response towards the entity suspected of misusing information about them, and a search for corrective action aimed at restoring their control. A customer may respond with reactance, for example, by refusing to accept subsequent recommendations; or as in the case of Danielle, who “felt invaded” by Amazon’s Alexa after it sent details of a private conversation to a person in her address book, and thereafter committed to unplugging the Alexa devices around her home. The authors stress how important it is for managers to understand how feelings of exploitation may be evoked, since such feelings might obscure from customers the value they can gain from letting their personal data be used by AI (e.g., presenting more personally relevant ads). They propose some additional measures and factors to consider for alleviating feelings of surveillance or exploitation (e.g., choice architecture, goals and motivation, the type of device powered by AI).
- In some cases consumers may find themselves in a dilemma because a service provider conditions the delivery of a service’s benefits on joining a programme (e.g., via a mobile app) that mandates access to, and use of, personal information of different kinds: one has to decide whether to agree and join or to forgo the service, depending on one’s trust in the service provider (e.g., city parking permits, membership in a loyalty club, financial services). More responsible businesses allow users of their website or app to choose the types of data, defined by purpose, they agree to share (e.g., through a website’s cookie settings or an app’s preferences).
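Purpose-based consent of this kind can be pictured with a minimal sketch. The following Python snippet is purely illustrative and not from the article; the purpose names and defaults are invented assumptions about how such settings might be grouped.

```python
# A minimal, hypothetical sketch of purpose-based consent settings,
# as offered in cookie banners or app preferences. Purpose names and
# defaults are invented for illustration.
from dataclasses import dataclass

@dataclass
class ConsentSettings:
    strictly_necessary: bool = True   # required for the service to function
    personalisation: bool = False     # tailoring offers and content
    analytics: bool = False           # measuring usage of the site or app
    advertising: bool = False         # sharing data with ad partners

def may_use(settings: ConsentSettings, purpose: str) -> bool:
    """Check, before any data use, that the customer consented to this purpose."""
    return getattr(settings, purpose, False)

# The customer opts in to personalisation only:
settings = ConsentSettings(personalisation=True)
assert may_use(settings, "personalisation")
assert not may_use(settings, "advertising")
```

The design point is that each data use is gated by a purpose the customer explicitly agreed to, rather than by a single all-or-nothing consent.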
The classification of customers also presents some critical and sensitive implications, socially and psychologically. Classification schemas are meant for targeting customers more accurately, more efficiently, and with greater relevancy. On the one hand, a consumer may be satisfied if the group he or she is assigned to matches his or her aspirational group. On the other hand, a consumer may be aggravated if he or she believes that the group according to which one is served or treated reflects prejudice and allows discrimination (e.g., by gender, race, medical condition). Algorithms that utilise data on past decisions made about individuals are prone to fall into the trap of perpetuating previous discrimination on various bases. A negative consequence is that individuals may be deprived of certain services (e.g., declined a loan) or offered worse terms (e.g., a lower credit allowance, a higher price or ‘risk’ premium). The authors relate to this problem in the sociological context as the ‘unequal worlds’ narrative.
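How past decisions can perpetuate discrimination may be clarified with a toy simulation. The following Python sketch is a hypothetical illustration, not from the article: the data are synthetic, and the variables and threshold are invented assumptions. A model trained on historical approval labels that penalised one group reproduces that penalty for new applicants with identical incomes.

```python
# A minimal, hypothetical sketch of bias perpetuation: a model trained on
# historically biased loan decisions reproduces that bias. All data are
# synthetic and the feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

income = rng.normal(50, 15, n)   # applicant income (arbitrary units)
group = rng.integers(0, 2, n)    # 0/1 membership in a social group

# Historical approvals depended on income BUT group 1 was also penalised:
# the recorded labels encode past discrimination, not creditworthiness.
past_approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, past_approved)

# Two applicants with identical income receive different predictions,
# because the model learned the discriminatory pattern from past labels.
print(model.predict_proba([[50, 0]])[0, 1])  # higher approval probability
print(model.predict_proba([[50, 1]])[0, 1])  # lower approval probability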
From the psychological perspective, Puntoni et al. raise the problem that consumers may conclude they are being misunderstood (e.g., their actual preferences to date are misidentified). They discuss the ways consumers may come to feel misunderstood and the implications thereof. One cause, for instance, arises when consumers believe that group membership overrides their uniqueness (e.g., idiosyncratic preferences) in making AI predictions. Another is the belief that discriminatory use of a social group membership results in biased predictions (as suggested in the ‘negative consequence’ above). Puntoni and his colleagues recommend that managers listen more carefully to customers’ concerns about the predictions and offers made to them, in order to remove objections and improve classification experiences. They also make the interesting suggestion that consumers be periodically invited to validate AI-based inferences and update the AI’s view of themselves, thus reducing potential frustration.
The AI delegation experiences are seemingly at the core of consumers’ relations with artificial intelligence. A ‘delegation experience’ is defined by the authors as “one in which consumers involve an AI solution in a production process to perform tasks they would have otherwise performed themselves”. Thus, the kind of AI solution employed may determine the extent to which it assists humans by augmenting their capabilities in performing tasks, or assumes their roles when the tasks are delegated to AI (e.g., making decisions or taking actions on their behalf). Delegation to AI permits consumers, to their advantage, to (a) dedicate more time and effort to tasks that are more satisfying or meaningful to them (e.g., leisure activities), or (b) concentrate on tasks that better match their skills (increasing self-efficacy).
In a sociological context, Puntoni and his colleagues relate to a rather futurist agenda, the ‘transhumanist narrative’, which concerns ways in which humans may be replaced by AI and robots, and the creation of a ‘useless class’ of people whose skills are no longer developed or required, which could lead to erosion of democracy and decreased social justice (a concept adopted from historian Yuval Noah Harari). The psychological perspective focuses more on the effects of replacement on individuals (‘the replaced consumer’) and touches more closely on the practical, everyday concerns of consumers. Puntoni et al. explain that while delegation experiences can help make consumers feel empowered, these experiences might also be perceived as threatening:
- First, delegation would not allow consumers the satisfaction of attributing their consumption outcomes to their own skills and effort, and furthermore could deprive them of a sense of accomplishment.
- Second, outsourcing a task to, or relying on, an AI-powered tool or device may keep consumers from practising and improving their skills, increase a ‘satisficing’ tendency, and reduce their sense of self-worth. The authors give the example of journalist John Seabrook, who became alarmed when he realised how relying on a tool such as Google Smart Compose to choose words for him could keep him from expressing his original thought, what he actually intended to say.
- Third, outsourcing a task to AI might cause consumers to feel a loss of self-efficacy: feeling less useful, less capable of performing the task, or less in control of its execution and outcome.
With regard to the first threat proposed, we may add here, closer to the context of decision-making, that consumers may also resent a reduction of their autonomy of choice: recognising that a choice decision was made out of free will, and furthermore was driven by their own judgement (‘causality’) [3]. Such a threat may emerge, for example, in recommendation engines that control the number and range of options presented to the consumer (especially when it turns out the options shown are commercially biased), and in virtual shopping assistants that over-simplify the choice problem and may even place automatic purchase orders on behalf of consumers.
A social experience arises, according to Puntoni et al., when AI is capable of engaging in reciprocal communication with humans. They refer to two cases: (1) consumers are aware from the outset that they are interacting with an AI agent, and (2) consumers start interacting with a representative of a company without knowing initially that it is an AI agent. Both scenarios have advantages and disadvantages. Customers may feel less comfortable and more reluctant to start a ‘chat’ interaction with a representative known to be AI-powered, but resistance can decrease if the agent proves informative and satisfactorily efficient (e.g., with simpler queries). The interaction may start better in the second case, yet the customer may feel alienated (‘cheated’) and angry upon discovering that the partner was non-human; the response can be worse if the identity is revealed prematurely by the malperformance of the AI agent.
- Social experiences with AI may take place through direct social interactions with consumers, such as conversations with virtual assistants (‘chatbots’) or aid and service received from physical robots. An enhanced capability of reciprocity may prove most crucial for robots trained to serve as assistants to elderly people in their homes.
The authors refer to sociological issues such as an AI agent learning ‘bad language’ from conversations between humans on social media platforms (e.g., sexist insults, racial slurs, extremist opinions) and using it in its own interactions with customers. From the psychological perspective, they describe situations in which customers may become alienated. First, an AI agent may reply in a way that seems insensitive to the customer and suggests that the agent does not understand what the human customer is telling it. Cases of failed automated response and service increase the discomfort of interacting with a ‘social robot’, and may explain resistance to even starting a conversation with one. A second type of alienation occurs when AI agents fail to interact with specific groups of customers. For instance, consumers with learning disabilities or dyslexia may find it greatly frustrating to interact with an AI agent unfit to communicate with them; in another example, AI agents were caught giving supremacist answers. Puntoni and his colleagues offer some remedies: address the customer by first name and provide an explanation for a malfunction; and ensure an easy and swift transition from the AI to a human agent when an interaction runs into difficulty or becomes aversive (e.g., through automatic detection by the system, at the customer’s request, or via a human agent ‘inspector’ who listens in).
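As a rough illustration of the transition measures just mentioned, the following Python sketch encodes one possible escalation rule for handing a chat over from an AI agent to a human. It is a minimal, hypothetical sketch, not the authors’ design: all names, thresholds, and signals are invented assumptions.

```python
# A minimal, hypothetical sketch of an AI-to-human handoff rule for a chatbot.
# All names, thresholds, and signals are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ChatState:
    failed_replies: int = 0        # consecutive "I didn't understand" turns
    sentiment: float = 0.0         # running sentiment score, -1 (angry) to +1
    transcript: list = field(default_factory=list)

def should_escalate(state: ChatState, user_message: str) -> bool:
    """Escalate on explicit request, repeated failures, or aversive tone."""
    explicit_request = any(
        phrase in user_message.lower()
        for phrase in ("human", "real person", "agent please")
    )
    return (
        explicit_request
        or state.failed_replies >= 2   # automatic detection of malfunction
        or state.sentiment <= -0.5     # the conversation has become aversive
    )

def handle_turn(state: ChatState, user_message: str) -> str:
    state.transcript.append(user_message)
    if should_escalate(state, user_message):
        # Hand the full transcript to the human agent so the customer
        # does not have to repeat themselves.
        return "Connecting you to a human agent now."
    return "AI reply ..."
```

Passing the transcript along at handoff reflects the point about an easy and swift transition: the customer should not have to restart the interaction from scratch.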
On a final note, it can be observed in the article by Puntoni et al. that some of the issues they raise entangle both sociological and psychological implications (e.g., ‘discrimination’ in classification experiences, ‘replacement’ in delegation experiences). Additionally, some issues may emerge in different types of experience throughout the customer journey, such as concerns about personal control, which are emphasised in both data capture and delegation AI experiences.
The benefits that can accrue from the proper application of AI’s analytic and operational capabilities are undeniable: increasing the efficiency of many processes in marketing, service, and so on; making better-informed decisions based on advanced analyses of data; and creating solutions, in goods and services, that fit more closely the needs and preferences of consumers. However, in order to achieve these benefits, and for customers to appreciate them more fully, firms as well as consumers need to learn and understand the limitations of AI from psychological and sociological perspectives, and the adverse effects it may inflict on consumers as individuals and on societies. This lesson is intelligently demonstrated by Puntoni and his colleagues. It is the responsibility of firms deploying AI methods to offer remedies and improve the performance of AI solutions, but it is nonetheless in the interest of consumers to learn and apply AI tools and devices sensibly and diligently. Acting accordingly could be conducive to making AI experiences more helpful and pleasant for consumers, and thereby improving their customer experiences as a whole.
Ron Ventura, Ph.D. (Marketing)
Notes:
[1a] The Domain and Conceptual Foundations of Relationship Marketing; Atul Parvatiyar and Jagdish N. Sheth, 2000; in Handbook of Relationship Marketing, J.N. Sheth and A. Parvatiyar (editors), Chapter 1, SAGE Publications
[1b] Understanding Customer Experience Throughout the Customer Journey; Katherine N. Lemon and Peter C. Verhoef, 2016; Journal of Marketing, 80 (6), pp. 69-96 (available at Academia.edu)
[1c] Evolving to a New Dominant Logic for Marketing; Stephen L. Vargo and Robert F. Lusch, 2004; Journal of Marketing, 68 (1), pp. 1-17
[2] Consumers and Artificial Intelligence: An Experiential Perspective; Stefano Puntoni, Rebecca Walker Reczek, Markus Giesler, & Simona Botti, 2021; Journal of Marketing, 85 (1), pp. 131-151 [Note: first published online in October 2020; a copy is available at ResearchGate.net, retrieved 29 December 2020; the article was published in a special issue of the Journal of Marketing in collaboration with the Marketing Science Institute (MSI)]
[3] Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data; Quentin André, Ziv Carmon, and colleagues, 2018; Customer Needs and Solutions, 5 (1-2), pp. 28-37 (available at Springer.com)