The influence of consumer word-of-mouth has grown over the past few decades, especially with the advance and spread of interactive Web 2.0 and social media platforms. Consumers tell their relatives, friends, and other peers about their experiences, and recommend products and services. Consumers arguably tend to rely more on recommendations from people like them than on messages in marketer-driven advertising. Human recommendations can also be given to consumers by sellers in stores, representatives in call centres, and the like. But in the last decade a new source of recommendations has become increasingly available: computer-based recommendation agents powered by engines of artificial intelligence (AI) algorithms (analytic methods and models) and Big Data (e.g., on products, consumer behaviour and revealed preferences). These AI-enabled agents are knowledgeable, sophisticated, fast, and always ready to recommend. This immediately raises an intriguing question: recommendations from which source would consumers prefer to accept, those given by a human (a peer consumer or an agent) or those given by an AI-based agent?
Chiara Longoni and Luca Cian recently investigated this research question, yet they added a condition that could make a difference to consumers when considering a recommendation from a human or an AI-based agent: whether the consumer's choice will be based more on utilitarian attributes or on hedonic attributes of the product. They further examined antecedents to the human-versus-AI preference and the circumstances in which consumer preferences may be altered [1]. Longoni and Cian refer to AI-based recommendation as 'word-of-machine', as compared with traditional word-of-mouth or human recommendation; the word-of-machine effect they propose is thereby defined as “the phenomenon by which hedonic/utilitarian attribute trade-offs determine preference for, or resistance to, AI-based recommendations” as opposed to human recommendations.
The logic of the proposition can be described as follows: Utilitarian attributes are associated with functional benefits; decisions on utilitarian consumption tend to be cognitively driven, with instrumental goals in mind. Consumers believe that AI recommenders are more competent at making value assessments based on factual, rational and logical evaluative criteria, which are especially relevant in a utilitarian context. Hence consumers perceive AI-based agents as more competent utilitarian recommenders and would prefer them in that role. In contrast, hedonic consumption is associated with sensory and experiential pleasure, and with emotional benefits; decisions tend to be more affectively driven, and could be impulsive. Consumers believe that humans are more capable of making value assessments with respect to sensory, emotional and experiential criteria. Hence, consumers perceive humans as more competent hedonic recommenders, and they would prefer human recommendations in this context. The researchers find evidence that supports this distinction in the perceived competence of AI and human recommenders, and demonstrate how it is revealed through consumers' preferences for recommendations.
- The proposition draws on documented beliefs people hold about algorithms: for example, that they approach the world as stable, orderly and rigid, do not appreciate the uniqueness of humans, and lack empathy. People also seem to doubt the ability of algorithms to learn and improve, which could stem from bad experiences consumers have had with mistaken AI-based interventions and from the feeling that the algorithms do not understand them well enough (e.g., they make wrong inferences from human behaviour).
Realistically, the consumption or use of most products entails a mix of utilitarian and hedonic aspects. It is hard to isolate and separate them. What should matter is which dimension, utilitarian or hedonic, is more dominant (e.g., based on attributes of the product), or which of them the consumer regards as more important in the consumption or use of a particular product. In view of differing approaches to delineating hedonic consumption, Alba and Williams suggest an encompassing but intuitive definition based on a single concept: pleasure. In their words: “A vital component of hedonic consumption is whether the experience of consuming the product or event is pleasurable.” They identify two general categories of sources and determinants of pleasure: (1) the product or event, based on its inherent qualities (e.g., design, aesthetics); (2) the consumer's personal experience or interpretation (e.g., enjoyment in anticipation of and during an experience, engagement, savouring of the experience in memory) [2].
In an initial test of preference, Longoni and Cian performed two similar experiments in different product domains (hair mask and real estate). They instructed participants to focus on attributes related either to a hedonic goal or to a utilitarian goal ('goal activation'), and then to choose between recommendations constructed by a human and by an AI algorithm (equivalent in their expertise). In both domains, the human recommender was preferred more frequently when a hedonic goal was activated, and the AI recommender was preferred more frequently when a utilitarian goal was activated. This confirms in principle the word-of-machine effect.
- In the real-estate domain, participants were asked to choose between lists of recommended houses for investment in the vacation and ski resort of Cortina d'Ampezzo in Northern Italy (host of the ski world championship in February 2021, and planned to host the Winter Olympics in 2026 together with Milano), one list devised by an AI expert-algorithm and the other by a human expert-agent. For the utilitarian goal, participants were advised of the great importance that the house invested in be functional, useful, and speak to one's rationality — overall, it should be practical; for the hedonic goal, participants were advised of the great importance that the house invested in be amusing (fun), enjoyable, and speak to one's emotions — overall, it should gratify the senses. When a utilitarian goal was activated, 60% chose the selection made by the AI algorithm (40% the human's selection). Conversely, when driven by a hedonic goal, 76% preferred the selection of the human recommender over the AI recommender's list (24%).
Longoni and Cian examined an implication of the effect involving an actual consumption experience, and showed that attribute perceptions are 'biased' in accordance with the type of recommender. Participants were asked to taste one of two similar-looking chocolate cakes, one based on the recipe of a human chocolatier and the other based on a recipe suggested by an AI chocolatier. It is fair to say that a cake is inherently hedonic-dominated — as in taste and aroma, and pleasure to the senses — with all due respect to its chemical ingredients and healthiness. When the cake was purportedly made with the AI chocolatier's recipe, the utilitarian attributes were perceived as of greater value, whereas when the cake was made following the recipe of a human chocolatier, the hedonic attributes were perceived as of higher value. These findings lend further support to a distinction in attribute perceptions.
An alternative approach to 'activating' a utilitarian or hedonic goal for consumer-participants is asking them which of these types of attributes they regard as more important to them (i.e., which they 'care more about') when choosing a given product (a winter coat). As expected, participants who cared more about utilitarian attributes chose the recommendation of the AI assistant-recommender more often than the human assistant's recommendation, whereas participants who cared more about hedonic attributes preferred the human assistant's recommendation. (The researchers verified that the type of recommender really matters by showing that, when choosing between two human assistants, participants were indifferent between them, whatever attributes they cared more about.)
The researchers substantiate a key component of their proposition, namely that the differing perceived competences of human and AI recommenders explain the different preferences in hedonic and utilitarian contexts. As a behavioural measure of recommendation acceptance, participants were asked whether they would like to download from an app the chocolate recommendation suggested by either an AI master chocolatier or a human master chocolatier (one recommender presented to each participant). First, the relation between source of recommendation (human/AI) and goal (hedonic/utilitarian) was confirmed again: when a utilitarian goal was activated, relatively more downloads of the AI master chocolatier's recommendation were requested (82%) than of the human master chocolatier's (63%); and vice versa, when a hedonic goal was activated, relatively more downloads of the human's recommendation were requested (88%) than of the AI's recommendation (52%). Second, the underlying motive was substantiated: the AI master chocolatier was perceived as more competent to recommend chocolates when evaluating them according to a utilitarian goal, whereas the human master chocolatier was perceived as the more competent recommender of chocolates when evaluating them with a hedonic goal in mind. (In parallel, an alternative explanation was excluded: from a consumer viewpoint, the preference for an AI recommender for a utilitarian goal is attributed to its competence in this context and not to the complexity of the attributes to be evaluated.)
The concern of consumers about attending to their unique preferences also finds expression in this research. When consumer-participants' sensitivity to matching their unique preferences is raised, their preference for the AI recommender decreases and they no longer see it as better than the human recommender with regard to a utilitarian goal — in fact, their preference reverses. Under 'usual' circumstances (control), 77% prefer the AI (realtor) recommender for a utilitarian goal, but when alerted to the salience of matching their unique preferences, just 40% prefer the AI recommender over the human (realtor) recommending agent. (With regard to a hedonic goal, the human realtor agent remains the preferred recommender over the AI recommender, and the preference does not grow significantly greater when the salience of matching one's unique preferences is raised.)
A pressing issue hangs over the relations between human recommenders and AI-based recommenders: is it a matter of utilising one or the other, or are there advantages to exploit in collaboration between them? Longoni and Cian address this issue, and show some intriguing findings. They examine in which circumstances, of utilitarian or hedonic goals, artificial intelligence may assist and amplify the capabilities of a human recommender, thus augmenting his or her intelligence. Augmented intelligence is exercised by letting an AI-based agent search for and suggest an initial set of recommended options, and then allowing the human agent to assess them and make the final recommendation to the consumer. (In the experiment the recommenders are once again human and AI master chocolatiers.)
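To make the division of labour concrete, the following is a minimal sketch (in Python) of such a hybrid, augmented-intelligence pipeline. It is an illustration only: the Option structure, its scoring fields and the judge callback are assumptions made for the example, not the researchers' implementation or any particular product's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Option:
    name: str
    utilitarian_score: float  # e.g., practicality, functionality (assumed fields)
    hedonic_score: float      # e.g., sensory appeal, enjoyment

def ai_shortlist(catalogue: List[Option], goal: str, k: int = 5) -> List[Option]:
    """AI step: rank the catalogue on the attribute matching the active goal
    and return an initial set of k candidate options."""
    score = (lambda o: o.utilitarian_score) if goal == "utilitarian" else (lambda o: o.hedonic_score)
    return sorted(catalogue, key=score, reverse=True)[:k]

def human_final_pick(shortlist: List[Option], judge: Callable[[Option], float]) -> Option:
    """Human step: the expert applies his or her own judgement (represented here
    by a scoring callback) to the AI-generated shortlist and makes the final call."""
    return max(shortlist, key=judge)

if __name__ == "__main__":
    catalogue = [
        Option("Dark 85% bar", utilitarian_score=0.9, hedonic_score=0.6),
        Option("Hazelnut praline", utilitarian_score=0.5, hedonic_score=0.95),
        Option("Milk truffle", utilitarian_score=0.4, hedonic_score=0.8),
    ]
    candidates = ai_shortlist(catalogue, goal="hedonic", k=2)
    final = human_final_pick(candidates, judge=lambda o: o.hedonic_score)
    print("Recommended to the consumer:", final.name)
```

The point of the sketch is the order of the steps: the AI narrows the field of options, while the human expert retains the final judgement that is presented to the consumer.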
In the case of a utilitarian goal, an AI recommender has an advantage (in terms of enhanced perceptions of utilitarian attributes) over the human recommender. However, when the two are combined (hybrid AI-human decision-making), together they do about as well as the AI recommender alone. The hybrid, augmented model can thus bolster the value of recommendations made by a human recommending alone. But one cannot ignore the implication that the AI-based recommender might work alone without requiring the human. The challenge this situation should elicit is to find ways in which human judgement can further increase the value of the recommendation, after or simultaneously with the AI, through collaboration.
In the case of a hedonic goal, a human recommender has an advantage (in terms of enhanced perceptions of hedonic attributes) over the AI recommender. Yet an AI and a human collaborating (hybrid) do much better than an AI recommender alone, and as well as the human recommender alone. A hybrid model evidently can bolster the value of the AI's recommendation, which is inferior on its own. In principle, we might conclude from this that the AI is redundant. Alternatively, however, there may be true gains in involving AI to facilitate the task of the human recommender, giving him or her an improved starting point, yet leaving the final judgement and decision to the human (expert) recommender. The final value of the resultant recommended set may thus be at least as high as one devised by the human alone, but it may be produced with greater efficiency, and may end up of higher quality. With regard to chocolates, this scenario seems much more relevant in helping a human chocolatier, assuming that a hedonic goal is more salient (though health considerations may also be important, particularly with dark, bitter chocolate).
The researchers have shown that augmented intelligence (i.e., joining human with AI) can 'break' the linkage predicted by the word-of-machine effect. Nevertheless, implementing collaboration between humans and AI, instead of replacement, will require work and effort to succeed; consumers, as well as professionals acting as recommenders, may have to be convinced more strongly of the added value that can be gained from augmented intelligence. As seen above, human recommenders can be more in demand when consumers expect their personally unique preferences to be matched with regard to a utilitarian goal (although companies insist that their algorithms can do this best all alone). The researchers also propose that consumers may be induced to leave behind their prior beliefs about the competences of humans and AI and to re-think how good each may be at making recommendations on utilitarian or hedonic goals; they illustrate how this approach succeeds in altering preferences to some extent.
Longoni and Cian look forward to seeing their contribution lead researchers to “prioritize new research focused on understanding the potential of AI in conjunction with humans rather than in contraposition”. They advise, for example, that companies whose customers are not in high need of customization may rely on AI, whereas companies whose customers desire more personalised recommendations may need to involve more humans. It can be added that in cases where utilitarian goals are more dominant or personal customization is less required, customers may be introduced directly to AI-based agents as recommenders; however, when hedonic goals are more dominant or personalised customization is more desired, AI-based algorithms may be employed in the background to assist a human agent who communicates with the customer directly as the recommender. In summary, algorithms of artificial intelligence can be very effective in retrieving preferences and devising recommended options for customers, yet in certain conditions a human touch and expertise could be very helpful in making the extra personal step towards the customer.
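As an illustration of the routing suggested in the previous paragraph, the logic could be expressed as a simple rule. This is a hypothetical sketch: the function and parameter names (route_recommendation, needs_customization) are assumptions for the example rather than anything proposed in the paper.

```python
def route_recommendation(goal: str, needs_customization: bool) -> str:
    """Decide which channel faces the customer, following the heuristic above:
    AI directly when a utilitarian goal dominates and customization needs are low,
    otherwise a human agent supported by AI working in the background."""
    if goal == "utilitarian" and not needs_customization:
        return "ai_agent"                 # customer interacts with the AI recommender directly
    return "human_agent_ai_assisted"      # AI prepares options; the human agent presents them

# Illustrative checks of the two routes described in the text.
assert route_recommendation("utilitarian", needs_customization=False) == "ai_agent"
assert route_recommendation("hedonic", needs_customization=True) == "human_agent_ai_assisted"
```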
In this research, AI-based recommenders are compared with human recommenders who hold a formal position and act from a professional standpoint (e.g., an expert or master chocolatier, a realtor agent). Even when the AI and human expert recommenders are said to rely on a knowledge base of judgements, reviews or recommendations by consumers and professionals in the field to devise their own recommendations, they effectively mediate between those other consumers and the consumer-participants who receive the resulting recommendations. That is, consumers do not receive recommendations as 'word-of-mouth' in the classic sense from peer consumers. Hence, using the term 'word-of-machine' could be a little confusing. From the perspective of comparing a human recommendation with an AI-based recommendation, however, the intention of the contrasting phrases is clear. Another issue concerns the number of options it is appropriate to recommend: a single option may be too restrictive, and even as few as three product options may leave many consumers feeling that their autonomy of choice has been limited too much (e.g., 'am I missing out', 'is it reliable enough / trustworthy'). Recommending between four and ten product options may give consumers a satisfactory array of choice while not burdening them with too much information on too many available alternatives.
The research of Longoni and Cian sheds light on interesting and important aspects of the relations and trade-offs in preference between human and AI-based recommenders. Building schemes of collaboration between AI and human recommenders, in hybrid or conjoint forms of decision-making, can produce valuable gains for both businesses and consumers — yet it will not come without effort and good will. The wisdom lies in combining the enhanced analytic capabilities and effectiveness of AI with human touch and judgement in the situations and contexts where they are most needed.
Ron Ventura, Ph.D. (Marketing)
Notes:
[1] Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The “Word-of-Machine” Effect; Chiara Longoni and Luca Cian, 2020; Journal of Marketing (forthcoming), published online November 2020 (available from ResearchGate.net, downloaded on 6 December 2020, publication 343750244; DOI: 10.1177/0022242920957347).
[2] Pleasure Principles: A Review of Research on Hedonic Consumption; Joseph W. Alba and Elanor F. Williams, 2013; Journal of Consumer Psychology, 23 (1) (online 2012; DOI: 10.1016/j.jcps.2012.07.003).