
A classic view of decision-making holds that attention serves foremost to acquire the information most relevant and important for choosing between alternatives. In this view, the role of attention is largely passive. However, an alternative view gaining traction in recent years, aided especially by eye-tracking research, argues that attention plays a more active role in decision processes, influencing the construction of decisions.

This is a key message delivered by Orquin and Mueller Loose (2013) in their review of the role of attention in decision-making, as learnt from tracking eye movements and subsequent fixations [1]. The approach taken by the researchers, however, is unusual: they do not confine themselves to the domain of decision-making; instead, they start their review and analysis of evidence from theories or models of tasks similar or related to decision-making (e.g., perception, information processing, visual search, working memory, top-down and bottom-up processes, problem solving). They then consider how the functions of attention in such tasks may project onto, or be expressed in, decision processes.

Furthermore, Orquin and Mueller Loose examine the extent to which the evidence coincides with four alternative theories and associated models of decision-making (i.e., whether empirical evidence substantiates or refutes assumptions or conclusions in each theory). They review evidence from previous research on similar or related tasks that could also be traced specifically in decision tasks, based on eye tracking in decision-making research, and evaluate this evidence in the context of the alternative decision-making theories.

The theories and related models considered are: (1) rational models; (2) bounded rationality models; (3) evidence accumulation models (e.g., the attentional drift diffusion model [aDDM] posits that a decision-maker accumulates evidence in favour of the alternative being fixated upon at a given time); and (4) parallel constraint satisfaction models (a type of dual process, neural network model based on the conception of System 1’s fast and intuitive thinking [first stage] and System 2’s slow and deliberate thinking [second stage]). Rational models as well as bounded rationality models more explicitly contend that the role of attention is simply to capture the information needed for making a decision. ‘Strong’ rational models hold that all relevant, available information about choice alternatives would be attended to and taken into account, whereas ‘relaxed’ rational models allow for the possibility of nonattendance to some of the information (e.g., attributes or object [product] features). Bounded rationality models suggest that information is acquired just as required by the decision rules applied. The other two categories of models are more flexible regarding how information is acquired and used, and its effect on the decision process and outcome. However, the authors argue that all four theories fall short to a smaller or larger degree in their consideration of the role and function of attention in decision processes, with at least some of their assumptions rejected by the evidence evaluated.
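To make the aDDM mechanism more concrete, here is a minimal simulation sketch in Python. The structure follows the model as described above (evidence for the currently fixated alternative accumulates at full weight while the unattended alternative is discounted by a factor theta), but the parameter values, fixation-duration assumptions and function names here are illustrative, not estimates from the literature.

```python
import random

def simulate_addm_trial(r_left, r_right, d=0.002, theta=0.3,
                        sigma=0.02, barrier=1.0, max_steps=20000):
    """Simulate one aDDM trial (illustrative sketch, not published parameters).

    r_left, r_right: subjective values of the two alternatives.
    Returns ('left' or 'right', number of time steps to decision)."""
    rdv = 0.0  # relative decision value (left minus right)
    fixation = random.choice(['left', 'right'])
    steps_left_in_fixation = random.randint(100, 400)  # assumed fixation length

    for t in range(max_steps):
        if steps_left_in_fixation == 0:
            # Switch gaze to the other alternative (alternating fixations).
            fixation = 'right' if fixation == 'left' else 'left'
            steps_left_in_fixation = random.randint(100, 400)
        steps_left_in_fixation -= 1

        # The fixated alternative's value enters at full weight;
        # the unattended alternative is discounted by theta < 1.
        if fixation == 'left':
            drift = d * (r_left - theta * r_right)
        else:
            drift = -d * (r_right - theta * r_left)
        rdv += drift + random.gauss(0.0, sigma)

        if rdv >= barrier:
            return 'left', t
        if rdv <= -barrier:
            return 'right', t
    return ('left' if rdv > 0 else 'right'), max_steps
```

Running many such trials shows the model's signature prediction: even between equally valued alternatives, the one fixated longer tends to accumulate an advantage and be chosen more often.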

Selected insights drawn from the review of Orquin and Mueller Loose are presented here briefly to shed light on the significance of attention in consumer decision-making.

A crucial question in decision-making is how information enters the decision process and is utilised in reaching a choice decision: information may be acquired through attention guided by a top-down (goal-driven) process, yet information may also be captured by a bottom-up (stimulus-based) attentional process. The entanglement of both types of processes when making a decision is a prime aspect in this domain and has multiple implications. A more efficient selection process may be driven by greater experience with a task (e.g., more important information cues have a higher probability of being fixated on) and by increased expertise in comprehending visualisations (e.g., more fixations to relevant areas, and inversely fewer fixations to irrelevant areas, shorter fixation durations, and longer saccades [‘jumps’ between more distant elements of information in a scene]). The interaction between bottom-up and top-down processing can amplify attention capture and improve the visual acuity of objects perceived. Bottom-up attention in particular is likely to be influenced by the saliency of a visual stimulus; however, it may not take effect when the task demands on attention are high, in which case priority is given to top-down directives for attention. Decision-making research has shown that visually salient alternatives or attributes are more likely to capture attention and, furthermore, to affect the decision in their favour.

An interplay occurs between working memory and ‘instant’ attention: as the load of information fixated becomes larger, more elements are passed to working memory, and information is accessed from there for processing; however, as the strain on working memory increases, consumers turn to re-fixating information elements, considering them instantly or just-in-time (i.e., fixations are thus used as external memory space). This type of interplay has been identified in problem-solving tasks. Toggling between working memory and fixations or re-fixations in decision tasks can be traced, for instance, in comparisons of alternatives. Greater demands imposed by information complexity and decision difficulty (due to greater similarity between alternatives) may require greater effort (operations) in acquiring and processing information, yet the process may, on the other hand, be shortened through learning.

  • Another area with interesting implications is the processing of visual objects: previous research has shown that visual objects are not encoded as complete representations (e.g., naturalistic product images) and that the binding of features is highly selective. Hence, encoding of particular features during an object-stimulus fixation may be goal-driven, and a re-fixation may be employed to refer just-in-time to specific object [product] features as needed in a decision task, thus saving on working memory capacity.

Consumers have a tendency to develop a bias during a decision task towards a favoured alternative. This alternative would get more fixations, and there is also a greater likelihood for the last alternative fixated to be the one chosen (put differently, consumers are likely to re-affirm the choice of their favourite alternative by re-fixating it just before making the decision). A desired or favoured attribute can also benefit from a similar effect by receiving more frequent attention (i.e., fixations). The authors point, however, to a difficulty in confirming evidence accumulation models: whether the greater likelihood of a more fixated alternative to be chosen is due to its higher utility or to greater exposure to it. They suggest a ‘soft’ model version in support of a greater effect of extended mere exposure leading to choice of an alternative. They add that a down-stream effect of attention from perception onto choice through a bottom-up process may play a gatekeeping role for the alternatives entering a consideration set. It is noted that a down-stream effect, arising from a bottom-up process, is clearly distinguishable from a utility effect, since the former is stimulus-driven and the latter is goal-driven.

Consistent with bounded rationality theory, heuristics shape patterns of attention, directed by the information that a heuristic calls for (e.g., by alternative or by attribute). Yet, eye-tracking studies conducted to trace the progression of decision processes could not corroborate the patterns of heuristics used as proposed in the literature. More formally, studies failed to substantiate the assumption that heuristics in use can be inferred from the patterns of attention recorded. Transitions of consumers between alternative-wise and attribute-wise rules during a decision task make such inferences especially difficult. Not only do decision rules influence what information is attended to; information cues encountered during the decision process can also modify the course of the decision strategy applied — consider the potential effect that salient stimuli captured unexpectedly in a bottom-up manner can have on the progression of the decision strategy.

In summary, regarding the decision-making theories, Orquin and Mueller Loose conclude: (a) firmer support for the relaxed rational model over the strong model (nonattendance is linked to down-stream effects); (b) a two-way relationship between decision rules and attention, where both top-down and bottom-up processes drive attention; (c) the chosen alternative has a higher likelihood of fixations during the decision task and also of being the last alternative fixated — they find confirmation for a choice bias but offer a different interpretation of the function of evidence accumulated; and (d) an advantage for favoured alternatives or the most important attributes in receiving greater attention, and an advantage for salient alternatives in receiving more attention and being more likely to be chosen (concerning dual process parallel constraint satisfaction models).

Following the review, I offer a few final comments below:

Orquin and Mueller Loose contribute an important and interesting perspective in projecting the role of [visual] attention from similar or related tasks onto decision-making and choice. Moreover, the relevance is increased because elements of those similar tasks are embedded in decision-making tasks. Nevertheless, we still need more research within the domain itself, because there could be aspects specific or unique to decision-making (e.g., objectives or goals, structure and context) that should be specified. Insofar as attention is concerned, this call is in alignment with the authors’ own conclusions. Furthermore, such research has to reflect the real-world situations and locations where consumers actually make decisions.


In retail stores, consider for example the research by Chandon, Hutchinson, Bradlow, and Young (2009) on the trade-off between visual lift (stimulus-based) and brand equity (memory-based); this research combined eye tracking with scanner purchase data [2]. However, it is also worth looking into the alternative approach of video tracking used by Hui, Huang, Suher, and Inman (2013) in their investigation of the relations between planned and unplanned considerations and actual purchases (video tracking was applied in parallel with path tracking) [3].

For tracing decision processes more generally, refer for example to a review and experiment with eye tracking (choice bias) by Glaholt and Reingold (2011) [4], but consider nonetheless the more critical view presented by Reisen, Hoffrage, and Mast (2008) following their comparison of multiple methods of interactive process tracing (IAPT) [5]. Reisen and his colleagues were less convinced that tracking eye movements was superior to tracking mouse movements (MouseLab-Web) for identifying decision strategies while consumers are acquiring information (they warn of superfluous eye re-fixations and random meaningless fixations that occur while people are contemplating the options in their minds).


 

It should be noted that a large part of the research in this field using eye-tracking measurement is applied to concentrated displays of information on alternatives and their attributes. The most frequent and familiar format is the information matrix (or board), although in reality we may also encounter other graphic formats such as networks, trees, layered wheels, and more artistic diagram illustrations. Indeed, concentrated displays can be found in shelf displays in physical stores and also in screen displays online and in mobile apps (e.g., retailers’ online stores, manufacturers’ websites, comparison websites). However, on many occasions of decision tasks (e.g., durables, more expensive products), consumers acquire information through multiple sessions while constructing their decisions. That is, the decision process extends over time. In each session consumers may keep some information elements or cues for later processing and integration, or they may execute an interim stage in their decision strategy. If information is eventually integrated, consumers may utilise aids like paper notes and electronic spreadsheets, but they do not necessarily do so.

Orquin and Mueller Loose refer to effects arising from the spatial dispersion of information elements in a visual display as relevant to eye tracking (i.e., the distance length of saccades), but these studies do not account for the temporal dispersion of information. Studies may need to bridge data from multiple sessions to achieve a more comprehensive representation of some decision processes. Yet smartphones today can help close this gap somewhat, since they permit shoppers to acquire information in-store while checking more information from other sources on their phones — mobile eye-tracking devices may be used to capture this link.

Finally, eye tracking provides researchers with evidence about attention to stimuli and information cues, but it cannot tell them directly about other dimensions, such as the meaning of the information and its valence. The importance of information to consumers can be inferred from measures such as the frequency and duration of fixations, but other methods are needed to reveal additional dimensions, especially from the conscious perspective of consumers (vis-à-vis unconscious biometric techniques such as coding of facial expressions). An explicit method (Visual Impression Metrics) can be used, for example, to elicit statements from consumers about which areas and objects in a freely observed visual display they like or dislike (or feel neutral about); applied in combination with eye tracking, it would make it possible to assign valence to the areas and objects consumers attend to (unconsciously) in a single session with no further probing.
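The fixation-based measures mentioned above (frequency and duration) are simple aggregations over an eye-tracker's fixation log. A minimal sketch, assuming a simplified record format of (area-of-interest, duration) pairs rather than any particular vendor's output format:

```python
from collections import defaultdict

def fixation_metrics(fixations):
    """Aggregate a fixation log into per-AOI attention measures.

    `fixations`: list of (aoi, duration_ms) tuples, where `aoi` names an
    area of interest (e.g., a brand logo or price label on a display).
    Returns per-AOI fixation count, total dwell time, and mean duration."""
    metrics = defaultdict(lambda: {'count': 0, 'dwell_ms': 0})
    for aoi, duration_ms in fixations:
        metrics[aoi]['count'] += 1
        metrics[aoi]['dwell_ms'] += duration_ms
    for m in metrics.values():
        m['mean_ms'] = m['dwell_ms'] / m['count']
    return dict(metrics)
```

Measures such as these indicate which elements attracted attention and how much; they still say nothing about valence, which is exactly the gap that explicit methods are meant to fill.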

The review of Orquin and Mueller Loose opens our eyes to the versatile ways in which [visual] attention may function during decision tasks: top-down and bottom-up processes working in tandem, toggling between fixations and memory, a two-way relation between decision strategies and visual attention, choice bias, and more. But foremost, we may learn from this review the dynamics of the role of attention during consumer decision-making.

Ron Ventura, Ph.D. (Marketing)

References: 

[1] Attention and Choice: A Review of Eye Movements in Decision Making; Jacob L. Orquin and Simone Mueller Loose, 2013; Acta Psychologica, 144, pp. 190-206.

[2] Does In-Store Marketing Work? Effects of the Number and Position of Shelf Facings on Brand Attention and Evaluation at the Point of Purchase; Pierre Chandon, J. Wesley Hutchinson, Eric T. Bradlow, & Scott H. Young, 2009; Journal of Marketing, 73 (November), pp. 1-17.

[3] Deconstructing the “First Moment of Truth”: Understanding Unplanned Consideration and Purchase Conversion Using In-Store Video Tracking; Sam K. Hui, Yanliu Huang, Jacob Suher, & J. Jeffrey Inman, 2013; Journal of Marketing Research, 50 (August), pp. 445-462.

[4] Eye Movement Monitoring as a Process Tracing Methodology in Decision Making Research; Mackenzie G. Glaholt and Eyal M. Reingold, 2011; Journal of Neuroscience, Psychology, and Economics, 4 (2), pp. 125-146.

[5] Identifying Decision Strategies in a Consumer Choice Situation; Nils Reisen, Ulrich Hoffrage, and Fred W. Mast, 2008; Judgment and Decision Making, 3 (8), pp. 641-658.


‘Experience’ has gained a prime status in the past decade — everything seems to revolve around experience in the universe of management and marketing, and even more specifically with respect to relationship marketing. It has become a sine qua non of operating in this universe. There can be multiple contexts for framing experience — customer experience, brand experience, user (or product) experience, and also employee experience. Nevertheless, these concepts are inter-linked, and customer experience may serve as the central point of reference, simply because all other forms of experience eventually contribute to the customer’s experience. After all, this is the age of the experience economy (cf. Pine and Gilmore).

This focus on the role of experience, primarily customer experience (CX), in contemporary marketing surely has not escaped the attention of companies involved with data-based marketing, particularly on the service side (e.g., technology, research, consulting). In mid-November 2018 the enterprise information technology company SAP announced a bold move: acquiring the research technology firm Qualtrics for $8 billion in cash (the deal was expected to close during the first half of 2019). Qualtrics started in 2002 by specialising in survey technology for conducting consumer and customer surveys online, and later broadened the spectrum of its software products and tools to address a range of experience domains, put in a framework entitled Experience Management (XM).

However, less visible to the public, Qualtrics made an acquisition of its own about two weeks before the SAP-Qualtrics deal was announced: Temkin Group, an expert company specialising in customer experience research, training and consulting. Qualtrics was reportedly engaged at the time of these deals in preparations for its IPO. Adding the knowledge and capabilities of Temkin Group to those of Qualtrics can fairly be viewed as a positive reinforcement of the latter prior to its IPO, and eventually the sale of Qualtrics to SAP. Therefore, it would be right to say that Qualtrics + Temkin Group and SAP are effectively joining forces in domain knowledge, research capabilities and data technologies. Yet since the original three entities (i.e., as before November 2018) were so unequal in size and power, major questions arise about how their union under the umbrella of SAP will work out.

SAP specialises in enterprise software applications for organisational day-to-day functions across the board, and in supporting software-related services (SAP was established in 1972 and is based in Germany). It operates today in 130 countries with 100+ innovation and development centres; its revenue in the 2017 financial year was $23.46 billion. Many of the company’s software applications can be deployed on premises, in the cloud, or in a hybrid of the two (SAP reports 150 million subscribers in its cloud user base). The two product areas of highest relevance to this story are the CRM & Customer Experience solutions and the Enterprise Resource Planning (ERP) solutions & Digital Core (featuring its flagship platform HANA). The two areas of solutions correspond with each other.

The S/4HANA platform is described as intelligent ERP software, a real-time solution suite. It enables, for example, delivering personally customised products ordered online (e.g., bicycles). For marketing activities and customer-facing services it requires data from the CRM and CX applications; the ERP platform, however, supports the financial planning and execution of the overall activities of a client organisation. The CRM & Customer Experience suite of solutions includes five key components: Customer Data Cloud (actually enabled by Gigya, another SAP acquisition from 2017); Marketing Cloud; Commerce Cloud; Sales Cloud; and Service Cloud. The suite covers a span of activities and functions: profiling and targeting at segment level and individual level, applicable, for instance, in campaigns or in tracking customer journeys (Marketing); product order and content management (Commerce); and comprehensive self-service processes plus field service management and remote service operations by agents (Service). In all these sub-areas we may find potential links to the kinds of data that can be collected and analysed with the tools of Qualtrics, while SAP’s applications run on operational data gathered within its system apparatus. The key strengths offered in the Customer Data Cloud are integrating data, securing customer identity and access to digital interfaces across channels and devices, and data privacy protection. SAP highlights that its marketing and customer applications are empowered by artificial intelligence (AI) and machine learning (ML) capabilities to personalise and improve experiences.

  • At the technical and analytic level, SAP’s Digital Platform is in charge of the maintenance of solutions and databases (e.g., ERP HANA) and management of data processes, accompanied by the suite of Business Analytics that includes the Analytics Cloud, Business Analytics, Predictive Analytics and Collaborative Enterprise Planning. Across platforms SAP makes use of intelligent technologies and tools organised in its Leonardo suite.

Qualtrics arrives from quite a different territory, nestled much closer to the field of marketing and customer research, as a provider of technologies for data collection through surveys of consumers and customers, and of data analytic tools. The company gained recognition thanks to its online survey software, whose use has expanded to make it one of the more popular platforms among businesses for survey research. Qualtrics now focuses on four research domains: Customer Experience, Brand Experience, Product Experience, and Employee Experience.

  • The revenue of Qualtrics in 2018 is expected to exceed $400 million (in first half of 2018 revenue grew 42% to $184m); the company forecast that revenue will continue to grow at an annual rate of 40% before counting its benefits from synergies with SAP (CNBC; TechCrunch on 11 November 2018).

Qualtrics organises its research methodologies and tools by context under the four experience domains mentioned above. The flagship survey software, PER, allows for data collection through multiple digital channels (e.g., e-mail, web, mobile app, SMS and more), and is accompanied by a collection of techniques and tools for data analysis and visualisation. The company emphasises that its tools are designed so that using them does not require one to be a survey expert or a statistician.

Qualtrics provides a range of intelligent assistance and automation capabilities; these can aid, guide and support the work of users according to their level of proficiency. Qualtrics has developed a suite of intelligent tools, named iQ, among them Stats iQ for statistical analysis, Text iQ for text analytics and sentiment scoring, and Predict iQ + Driver iQ for advanced statistical analysis and modelling. Additionally, it offers ExpertReview to help with questionnaire composition (e.g., by giving an AI-expert ‘second opinion’). In a marketing context, the company offers techniques for ad testing, brand tracking, pricing research, market segmentation and more. Some of these research methodologies and tools would be of less relevance and interest to SAP unless they can be connected directly to customer experiences that SAP needs to understand and account for through the services it offers.

The methods and tools of Qualtrics are dedicated to bringing in the subjective perspective of customers on their experiences. Under the topic of Customer Experience, Qualtrics covers customer journey mapping, Net Promoter Score (NPS), voice of the customer, and digital customer experience; user experience is covered in the domain of Product Experience, and various forms of customer-brand interactions are addressed as part of Brand Experience. SAP’s particular interest in Qualtrics, as stated by the firm, is complementing or enhancing its operational data (O-data) with customer-driven experience data (X-data) produced by Qualtrics (no mention is made of Temkin Group). The backing and wide business network of SAP should create new opportunities for Qualtrics to enlarge its customer base, as suggested by SAP. The functional benefits for Qualtrics are less clear; possible gains may be achieved by combining operational metrics in customer analyses as benchmarks, or by making comparisons between objective and subjective evaluations of customer experiences, assuming clients will subscribe to some of the services provided by the new parent company SAP.
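Of the measures listed above, the Net Promoter Score follows a simple, well-known formula: the percentage of promoters (ratings of 9 or 10 on the 0-10 ‘likelihood to recommend’ scale) minus the percentage of detractors (ratings of 0 to 6). A minimal sketch:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count in the
    denominator but neither add nor subtract. Result ranges -100 to +100."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)
```

Note that NPS is an aggregate X-data metric of exactly the kind SAP would want to set against its O-data (e.g., comparing scores with repeat-purchase rates from operational records).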

Temkin Group operated as an independent firm for eight years (2010-2018), headed by Bruce Temkin (with his wife Karen), until its acquisition by Qualtrics in late October 2018. It provided consulting, research and training activities on customer experience (customer experience was at its core, but it dealt with various dimensions of experience beyond, and in relation to, customers). A key asset of Temkin Group is its blog / website Experience Matters, a valued resource of knowledge; its content remains largely in place (viewed January 2019), and hopefully will stay on.

Bruce Temkin developed several strategic concepts and constructs of experience. The Temkin Experience Rating metric is based on a three-component construct of experience: Success, Effort and Emotion. His strategic model of experience includes four required competencies: (a) Purposeful Leadership; (b) Compelling Brand Values; (c) Employee Engagement; and (d) Customer Connectedness. He made important statements in emphasising how essential employee engagement is to delivering superior customer experience, and in including Emotion as one of the pillars upon which customer experience should be evaluated. The most prominent of the research reports published by Temkin Group were probably the annual Temkin Experience Ratings reports, covering 20 industries or markets with a selection of companies competing in each.

Yet Temkin apparently came to the realisation that he should not go it alone any longer. In a blog post on 24 October 2018, entitled “Great News: Temkin Group Joins Forces With Qualtrics”, Temkin explained the motivation for his deal with Qualtrics as a recognition he had reached during the last few years: “it’s become clear to me that Qualtrics has the strongest momentum in CX and XM”. Temkin will be leading the Qualtrics XM Institute, built on the foundations of the Temkin CX Institute dedicated to training. The new institute will sit on top of the Qualtrics XM platform. In his blog announcement Temkin states that the Qualtrics XM Institute will “help shape the future of experience management, establish and publish best practices, drive product innovation, and enable certification and training programs that further build the community of XM professionals” — a concise statement that can be viewed as the charter of the institute Temkin will head at Qualtrics. Temkin has not taken long to adopt the framework of Experience Management and support it in his writing for the blog.

The teams of Temkin and Qualtrics (led by CEO and co-founder Ryan Smith) may co-operate more closely in developing research plans on experience for clients and in initiating research reports similar to those Temkin Group has produced so far. Bruce Temkin should have easy and immediate access to the full range of Qualtrics tools and technologies to continue with research projects and improve on them. Qualtrics, in turn, should benefit much from Temkin’s knowledge and training experience in the new XM institute. It seems easier to foresee beneficial synergies between Temkin Group and Qualtrics than their expected synergies with SAP.

However, a great question now arises: how will this vision and these plans for Temkin and Qualtrics working together, and particularly their project of the Qualtrics XM Institute, be sustained following the acquisition of Qualtrics by SAP? One cannot overlook the possibility that SAP will develop its own expectations and may require changes to plans only recently made, or modifications to the Qualtrics CX Platform and XM Solutions, so as to satisfy the needs of SAP. According to TechCrunch (11 Nov. 2018) Qualtrics will continue to function as a subsidiary company and will retain its branding and personnel (note: it may be gradually assimilated into SAP while keeping Qualtrics-associated names, as seems to be the case with Israel-based Gigya). Much indeed can depend on giving Qualtrics + Temkin Group the autonomy to pursue their specialisations and vision on XM while sharing knowledge, data and technologies with SAP.

Bill McDermott, CEO of SAP, is aiming high: as quoted in the company’s news release from 11 November 2018, he describes bringing together SAP and Qualtrics as “a new paradigm, similar to market-making shifts in personal operating systems, smart devices and social networks”. But it is also evident that SAP still sees the move through the prism of technology: “The combination of Qualtrics and SAP reaffirms experience management as the ground-breaking new frontier for the technology industry”.

Temkin’s viewpoint is much more customer-oriented and marketing-driven vis-à-vis the technology-driven view of McDermott and SAP, which may put them in greater conflict over time about priorities and the future direction of XM. Qualtrics, headed by Ryan Smith, will have to decide how it prefers to balance the marketing-driven and technology-driven views on experience. Temkin, for example, has reservations about the orientation of the technology known as Enterprise Feedback Management (EFM), suggesting instead a different focus by naming this field “Customer Insight and Action (CIA) Platforms”. In his comments on the acquisition of Qualtrics by SAP (16 November 2018) he explains that organisations “succeed by taking action on insights that come from many sources, combining experience data (X-data) and operational data (O-data)”. In his arguments in favour of joining Qualtrics with SAP, Temkin recollects an observation he made in an award-winning report from 2002 while at Forrester Research: he argued then that “widespread disappointing results of CRM were a result of a pure technology-orientation and that companies needed to focus more on developing practices and perspectives that used the technology to better serve customers”; he claims that much has changed in the field since that time. Yet it is hard to be convinced that technology has much less influence now in shaping organisational, managerial and marketing processes, on both the service side (e.g., SAP) and the client side.

  • As an aside, if SAP gets the upper hand in setting the agenda and does not give sufficient autonomy to Qualtrics as suggested earlier, the sector with the most to lose from this deal would be ‘marketing and customer research’.

SAP and Qualtrics are both involved in the development and implementation of technology, yet SAP is focused on information technology enabling the overall day-to-day operations of an organisation, whereas Qualtrics is focused on technology enabling experience and marketing research. Qualtrics and Temkin Group are both engaged in domains of experience: Qualtrics specialises in the technology that enables the research, while Temkin Group brought strengths in conducting research plus strategic thinking and training (education) on customer experience. For their joint forces to succeed, all three will have to find ways to bridge the gaps between their viewpoints, to ‘live and let live’, and at the same time complement one another in areas of shared understanding and expertise.

Ron Ventura, Ph.D. (Marketing)

 


Health insurance, financial investments, telecom service plans — consumers frequently find it harder to make choice decisions in such exemplar domains. These domains tend to exhibit greater complexity: many technical details to account for, multiple options difficult to differentiate and choose from, and unclear consequences. Among products, we may point in particular to those involving digital technology and computer-based software, which some consumers are likely to find more cumbersome to navigate and operate. When consumers struggle to make any choice, they develop a stronger tendency to delay or avoid the decision altogether. They need assistance or guidance in making their way towards a choice that more closely matches their needs or goals and preferences.

Handel and Schwartzstein (2018) distinguish between two types of mechanism that obstruct or interfere with making rational decisions: frictions and mental gaps.

Frictions reflect costs in acquiring and processing information. They are likely to occur in the earlier stages of a decision process, when consumers encounter difficulties in searching for and sorting through relevant information (e.g., which options are more suitable, which attributes and values to look at) and have to invest time and effort in tracing the information and organising it. Furthermore, frictions may include the case where consumers fail to see in advance, or anticipate, the benefits of an available alternative (e.g., consider the difficulty older people may have in realising the benefits they could gain from smartphones).

Mental gaps are likely to make an impact at a more advanced stage: the consumer already has the relevant information set in front of him or her but misinterprets its meaning or does not correctly understand the implications and consequences of a given option (e.g., failing to map correctly the relation between insurance premium and coverage). Mental gaps pertain to “psychological distortions” that may occur generally during information gathering, attention and processing, but their significance is primarily in comprehension of the information obtained. In summary, a mental gap is “a gap between what people think and what they should rationally think given costs.”

In practice, it is difficult to identify which type of mechanism is acting as an obstacle on the consumer’s way to a rational decision. Research techniques are not necessarily successful in separating a friction from a mental gap as the source of a misinformed choice (e.g., choosing a dominated option instead of the dominating one apparent to a rational decision-maker). Notwithstanding, Handel and Schwartzstein are critical of research practices that focus on a single mechanism and ignore alternative explanations. In their view, disregard of the distinction between mechanisms can lead to spurious conclusions. They suggest using counterfactual approaches that test a certain mechanism, or a combination of explanations, and then argue against it with a ‘better’ prospective mechanism explanation. They also refer to survey-based and experimental research methods for distinguishing frictions from mental gaps. The aim of these methods is to track the sources of misinformed decisions.

Consumers often run into difficulty with financial investments and saving plans. In some countries policy makers are challenged with driving consumers-employees towards saving for retirement during the working years. Persuasion per se turns out to be ineffective and other approaches for directing or nudging consumers into saving are designed and implemented (e.g., encouraging people to “roll into saving” through a scheme known as ‘Save More Tomorrow’ by Thaler and Sunstein).

Confronting employees with a long list of saving plans or pension funds may deter them from duly attending to the alternatives in order to make a decision, and even risks their aborting the mission. When consumers-employees have a hard time recognising differences between the plans or funds (e.g., terms of deposit, assets invested in, returns), they are likely to turn to heuristics that crudely cut through the list. Crucially, even if information on key parameters is available for each option, decision-makers may use only a small part of it. Similar difficulties in choosing between options may arise in financial investments, for instance when choosing between equity or index funds and bond funds. Consumers may be assisted by suggesting a default plan (preferably, recommending a personally customised plan) or by sorting and grouping the proposed plans and funds into classes (e.g., by risk level or time horizon). However, it should be acknowledged that consumer responses as described above may harbour frictions as well as mental gaps, and it could help to identify which mechanism carries the greater weight in the decision process.

A key issue with health insurance concerns the mapping of the relationship between an insurance premium and the level of deductibles or cost-sharing between the insurer and the insured. For example, consumers fall into the trap of accepting an insurance policy offered at a lower premium while not noticing the higher deductible they would have to pay on a future claim. An additional issue consumers have to attend to is the coverage provided for different medical procedures such as treatments and surgeries (given also the deductible level or rate). Consumers may stumble in their decision process while studying health insurance plans as well as while evaluating them.

  • Public HMOs (‘Kupot Holim’) in Israel offer expanded and premium health insurance plans as supplementary to what consumers are entitled to by the State Health Insurance Act. Yet in recent years insurance companies have been prompting consumers to get an additional private health insurance plan from them — their argument is that, following changes over the years in the HMOs’ plans and reforms by the government, those plans offer inadequate coverage, or none at all, for more expensive treatments and surgeries. The coverage of private insurance plans is indeed more generous, but so are the much higher premiums, affordable to many only if paid for by the employer.

In addressing other aspects of healthcare, Handel and Schwartzstein raise the issue of consumer preference for a branded medication (non-prescription) over an equivalent and less costly generic or store-branded medication (e.g., buying Advil rather than a store-branded medication that contains the same active ingredient [ibuprofen] for pain relief). Another vital issue concerns the tendency of patients to underweight the benefits of treatment by medications prescribed to them, and consequently fail to take their medications as instructed by their physicians (e.g., patients with a heart condition, especially after a heart attack, who do not adhere as required to the prescribed medication regime).

Customers repeatedly get into feuds with their telecom service providers — mobile and landline phone communication, TV and Internet. Customers of mobile communications (‘cellular’), for example, often complain that the service plan they agreed to did not match their actual usage patterns, or that they did not properly understand the terms of the service contract they signed. As a result, they have to pay excessive charges (e.g., for minutes beyond quota), or they are paying superfluous fixed costs.

With the advancement of technology, the structure of mobile service plans has changed several times in the past twenty years. Mobile telecom companies today usually offer ‘global’ plans for smartphones that include, first of all, larger volumes of data (5GB, 10GB, 15GB, etc.), and then practically unlimited outgoing call minutes and SMS messages. While appealing at first, such plans often leave customers paying a fixed inclusive monthly payment that is too high relative to the traffic volume they actually use. On the one hand, customers refrain from keeping track of their usage patterns because it is costly (a friction). On the other hand, customers fail in estimating the actual usage needs that would match the plan assigned to them (a mental gap). In fact, information on actual usage volumes is more available now (e.g., on invoices) but is not always easily accessible (e.g., more detailed usage patterns). It should be noted, however, that companies are not quick to replace a plan, not to mention voluntarily notifying customers of a mismatch that calls for upgrading or downgrading the plan.
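To make the mismatch concrete, here is a toy calculation (all figures are hypothetical, not taken from any real tariff) of the overpayment a customer incurs when the plan’s allowance far exceeds actual usage:

```python
# Hypothetical illustration: a 'global' plan with a fixed monthly fee vs. the
# customer's actual usage. All figures are invented for the example.
plan_fee = 30.0          # fixed monthly payment for a 10GB plan
plan_data_gb = 10.0      # data allowance included in the plan

actual_usage_gb = 3.5    # what the customer actually consumes
smaller_plan_fee = 18.0  # fee of a 5GB plan that would still cover this usage

# The monthly overpayment is the gap between the chosen plan and the
# cheapest plan that covers actual usage.
overpayment = plan_fee - smaller_plan_fee
print(f"Unused allowance: {plan_data_gb - actual_usage_gb:.1f} GB/month")
print(f"Overpayment: {overpayment:.2f} per month, {12 * overpayment:.2f} per year")
```

Whether the customer stays on the larger plan because checking usage is costly (a friction) or because he or she misjudges the needed volume (a mental gap) cannot be read off the numbers alone — which is exactly the identification problem the authors raise.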

A final example is dedicated here to housing compounds of assisted living for seniors. As people enter their retirement years (e.g., past 70), they may look for comfortable accommodation that will relieve them of the worries and troubles of maintaining their home apartment or house and will also provide them with a safe and supportive environment. Housing compounds of assisted living offer residence units, usually of one or two rooms of moderate space, with an envelope of services: maintenance, medical supervision and aid, and social and recreational activities (e.g., sports, games, lectures on various topics). The terms for entering assisted living housing can nevertheless be consequential and demanding. The costs involve mainly a leasing payment for the chosen residence and monthly maintenance fee payments.

Making the decision can be stressful and confusing. First, many elderly people cannot afford to take up residence in such housing projects without selling their current home or possibly renting it out (e.g., to cover a loan). In addition, the value of the residence depreciates over the years. Second, the maintenance fee is usually much higher than the normal costs of living at home. Hence residents may need generous savings plus rental income in order to finance the luxury and comfort of assisted living. Beyond the frictions that are likely to occur while looking for an appropriate and affordable housing compound, prospective residents are highly likely to be affected by mental gaps in correctly understanding the consequences of moving into assisted living (and even their adult children may find the decision task challenging).

Methods of intervention from different approaches attempt to lead consumers to make decisions that better match their needs and provide them greater benefits or value. Handel and Schwartzstein distinguish between allocation policies, which aim to direct or guide consumers to a recommended choice without looking into the reasons or sources of the misinformed decisions (e.g., nudging techniques), and mechanism policies, which attempt to resolve a misguided or misinformed choice decision by tackling the specific reason causing it, such as a mechanism of friction or mental gap. From a perspective of welfare economics, the goal of an intervention policy of either type is to narrow the wedge between the value consumers obtain from actual choices subject to frictions and mental gaps, and the value obtainable from a choice free of frictions and mental gaps (i.e., assuming a rational decision). (Technical note: The wedge is depicted as the gap in value between a ‘demand curve’ and a ‘welfare curve’, respectively.)
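As a minimal sketch, assuming made-up valuations, the welfare wedge is simply the value forgone when frictions or mental gaps divert the consumer from the option a frictionless decision-maker would choose:

```python
# Hypothetical sketch of the 'welfare wedge': value of the option actually
# chosen (subject to frictions/mental gaps) vs. the best available option.
options = {            # true value of each plan to this consumer (invented)
    "plan_A": 100.0,   # the option a frictionless decision-maker would choose
    "plan_B": 70.0,    # the option actually chosen (e.g., a misread default)
    "plan_C": 55.0,
}

frictionless_value = max(options.values())
actual_choice = "plan_B"
actual_value = options[actual_choice]

# The wedge an intervention policy tries to narrow:
welfare_wedge = frictionless_value - actual_value
print(f"Welfare wedge: {welfare_wedge}")
```

An allocation policy would simply reassign the consumer to plan_A; a mechanism policy would first ask why plan_B was chosen and remove that specific cause.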

Policies and methods of either approach have their advantages and disadvantages. An allocation policy has potential for greater impact, that is, it can get farther in closing the welfare wedge. Yet it may be too blunt and excessive: while creating a welfare gain for some consumers, it may produce an undesirable welfare loss for consumers for whom the intervention is unfitting. Without knowing the source of the error consumers make, it is argued, a nudging-type method (e.g., simplifying the structure of the information display of options) could be insufficient or inappropriate for fixing the real consumer mistake. A particular fault of allocation policies, according to the authors, is that they ignore heterogeneity in consumer preferences. Furthermore, and perhaps as a consequence, such policies overlook the presence of informed consumers who may contribute by leading to the introduction of far better products at lower prices.

Mechanism policies can in principle be more precise and effective by targeting specific causes of consumers’ mistakes, and hence correcting the costs of misinformed decisions without generating unnecessary losses to some consumers. The impact could be more limited in magnitude, yet it would be measured. But achieving this outcome in practice, the authors acknowledge, can be difficult and complicated, requiring the application of costly research methods or complex modelling approaches. They suggest that “[as] data depth and scope improve, empirically disentangling mechanisms in a given context will become increasingly viable”.

The analysis by Handel and Schwartzstein of the effects of intervention policies — mechanism versus allocation — may come across as too theoretical, building on familiar concepts of economic theory and models, and moreover as difficult and complicated to implement. Importantly, however, the authors open a door for us to a wider view on the sources of mistakes consumers make in decision-making and the differences between approaches aimed at improving the outcomes of their decisions. First, they clarify the distinction between the mechanisms of frictions and mental gaps. Second, they contrast allocation policies (e.g., nudging) with the mechanism policies they advocate. Third, for those less accustomed to the concepts of economic analysis, they demonstrate their ideas with practical real-world examples. Handel and Schwartzstein present a perspective well worth learning from.

Ron Ventura, Ph.D. (Marketing)

Reference:

Frictions or Mental Gaps: What’s Behind the Information We (Don’t) Use and When Do We Care?; Benjamin Handel and Joshua Schwartzstein, 2018; Journal of Economic Perspectives, Vol. 32 (1 – Winter), pp. 155-178. (doi: 10.1257/jep.32.1.155)


Read Full Post »

Every so often, and in many different places, people take photos. The immediacy of access to cameras on smartphone devices has made photography a ubiquitous and more casual activity. People’s awareness of, and sensitivity to, visual scenes and materials has increased, and photo images play a greater role in our lives. When people take their own photos to capture their experiences, this activity may become an integral part of the experience. It therefore raises an interesting question: how might an experience be subjectively affected by the act of taking photos whilst the experience is happening?

Almost obviously, our tendency to take photos is stronger during touristic experiences away from home, while travelling in our own country and even more so on visits to foreign countries. The experience could take place on holiday in a major city when touring its main streets and famous sites, or on vacation in a holiday resort in nature, going on a trip to the top of mountains, near a lake or along the sea-shore. However, we may also take photos during more ordinary experiences such as dining in a restaurant (e.g., photo-taking of appetizing food dishes); at a party or family gathering; while playing (e.g., creative games like Lego); watching parades, sports events or other festivities; and even during a shopping tour. In those experiences we could be more passive observers or more active players, which may influence any additional involvement in photo-taking and its effect on the overall experience.

Ascona: Promenade on Lago Maggiore

Kristin Diehl, Gal Zauberman and Alixandra Barasch (2016) investigated in depth the effect that taking photos during an experience may have on consumers’ enjoyment of the experience: whether it amplifies enjoyment or instead dampens it, and how the level of enjoyment varies under different conditions. Furthermore, they examined a proposed mechanism whereby engagement in an experience mediates enjoyment: in positive experiences, when individuals are more intensively engaged or immersed in the experience, their enjoyment may be elevated; thereby, to the extent that taking photos increases engagement, it would also heighten enjoyment. The researchers consider two possible routes of influence: (a) photo-taking competes with the ‘source’ experience by causing attention shifts, thus reducing engagement and enjoyment of the experience; (b) photo-taking helps direct and focus more attention on visual aspects of the ‘source’ experience, leading to increased engagement and consequently heightened enjoyment.

The photos taken may have subsequent benefits for individuals, such as aiding memory of experienced events at a later time (i.e., serving as memory cues) and showing photos of their experiences to relatives and friends (i.e., a social benefit), but the researchers focus specifically on the effect of the act of taking a photo at the time of the experience. Their research entailed nine studies (3 field studies & 6 lab experiments), using a range of methodologies and experience-contexts.

A most typical touristic experience is a city bus tour — consider riding a double-decker bus on an open-air top floor. Diehl and her colleagues organised actual bus tours in Philadelphia for photo-takers and non-photo-takers. They succeeded in showing in this setting that photo-takers enjoyed their touristic experience more than those who did not take photos. They also obtained some evidence that the photo-takers may have felt more engaged during the experience, though the effect was statistically weak. (Note: In order to exclude any benefits from using photos after the bus tour, participants were not allowed to bring their personal cameras or smartphones with them; the assigned ‘photo-takers’ were instead given a digital camera with a new memory card, which they could not keep afterwards.)

The researchers conducted a second field study, this time in the context of a casual lunch (i.e., it was not suggested that the food was especially attractive to photograph). In this study the results were stronger. Consistent with the bus tour study, photo-takers enjoyed the lunch experience more than those not taking photos, and in addition the photo-takers were found to be significantly more engaged. The setting supported only in part the claim that greater engagement mediates the higher enjoyment felt by those taking photos. (Note: In this study no physical restrictions were imposed — those instructed to take photos could use their own cameras or smartphones.)

Lab experiments create less realistic experiences, since the experiences are only simulated and the act of taking a photo is simulated as well (i.e., a camera icon and a mouse click). However, a controlled experiment can help surface the effects of interest while testing for the influence of additional factors. Importantly, the researchers had already shown in real-life settings that there is ground for their expected effects on enjoyment.

A lab experiment of simulated bus tours (using videos of tours in Hollywood, California, and London, UK) found support that photo-takers enjoyed their bus tour experiences significantly more, and felt significantly more engaged in them, than those not taking photos. Furthermore, there was also support that engagement fully mediates, or connects positively between, taking photos and enjoyment. Moreover, the memory of the greater enjoyment of those taking photos persists as long as a week after the experience. (Note: Remembered enjoyment is to be distinguished from remembered content of the experience.)

So, does taking photos indeed work to focus greater attention on what people experience, thus enhancing their engagement and increasing their enjoyment? The researchers provide important evidence, with the help of eye tracking (a field study at a museum exhibit), that taking a photo channels more attention to the objects of interest in the experience. In particular, it directs more attention to the relevant visual aspects of the experience, that is, to the exhibit artifacts vis-à-vis other objects (e.g., information displays) in the exhibit hall. First, the significant effects of greater enjoyment and engagement by photo-takers, and the mediating function of engagement, are replicated. Second, taking photos leads to spending relatively more time fixating on the artifacts (as a proportion of the total duration of fixations) compared with visiting without taking photos. Visitors taking a photo of an artifact fixate on it for a longer duration compared with those who only watch it; no such differences were found for other objects. Third, it is not only the duration of fixations but also the number of fixations dedicated to artifacts that is relatively higher among those taking photos compared with those who do not. (It should be noted, however, that measures were aggregated across ‘exhibit artifacts’ versus ‘other objects’, and not verified for every single artifact being photographed or not.)
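The aggregated measures of this kind (share of total fixation time, and fixation counts, on artifacts versus other objects) can be sketched roughly as follows, using an invented fixation log rather than the study’s actual data:

```python
# Invented fixation log for one visitor: (object_type, duration_ms) per fixation.
fixations = [
    ("artifact", 420), ("other", 180), ("artifact", 610),
    ("artifact", 350), ("other", 240), ("artifact", 500),
]

artifact_time = sum(d for obj, d in fixations if obj == "artifact")
total_time = sum(d for _, d in fixations)
artifact_count = sum(1 for obj, _ in fixations if obj == "artifact")

share = artifact_time / total_time  # proportion of dwell time on artifacts
print(f"{artifact_count} fixations on artifacts, {share:.0%} of total fixation time")
```

Comparing such per-visitor shares between photo-takers and non-photo-takers is, in spirit, what the aggregated comparison in the museum study amounts to.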

Scenes for photography can be very different: some are rich in detail, light and colour (e.g., a lakeside landscape), others more monotonic or homogeneous (e.g., a vase or a person against a dark uniform background). This difference in experience seems to matter little with regard to enjoyment or engagement when taking photos. Comparing bus tours (Hollywood/London) with pop/rock concerts (performing against a plain, non-changing background), it was found that in both experiences those taking photos enjoy the experience more and feel more engaged than non-photo-takers, regardless of the type of experience (full mediation was also supported).

Any indication that participants in the experiment enjoyed the concert somewhat more than the bus tours did not lead to consistent conclusions; it may be due to a music concert being more energizing than a city bus tour, at least as an idea, especially if we take into account the experience of the music, which is not captured in a photograph. In real-life concerts, however, viewers today more usually record video clips, not still photographs, by simply raising the smartphone above the head and filming. It is hard to say in these circumstances how much of the experience they may lose if they watch it through the screen, and how it may affect their attention and enjoyment. Dealing with the smartphone or tablet to check the videos during the performance may distract them somewhat more. Yet it could be that viewers recording videos on their devices disturb other people in the audience more than they harm their own enjoyment of the experience.

Expo Milano 2015: Dining Bar (Argentina)

EXPO Milano 2015: For illustration of experience

We may find ourselves in different positions in experiences. Imagine taking a boat cruise on a lake, standing on the deck viewing the landscape around, or watching a parade on a main road, looking on from the roadside — in these events one is primarily a passive observer. However, one becomes an active participant when, for example, playing a creative game such as building Lego models, or possibly visiting a museum exhibit that allows learning by using interactive displays and tools. As Diehl and colleagues suggest, this may have two implications: (a) the ‘active’ experience is in origin more entertaining and enjoyable, so there is less to gain by additionally taking photos; (b) engaging in the task of taking photos interferes with participation in the main activity. The researchers applied creative arts-and-crafts projects (e.g., building an Eiffel Tower model from wafers and icing): to make conditions comparable, they assigned participants either to actively building the tower model or to passively observing someone else building the same kind of model.

Indeed, taking photos during the experience makes a difference, increasing engagement and enjoyment, only for those observing the project and not for those actively building the model. Photo-takers who actively built the model were also more inclined to report that taking photos during the experience interfered with their project, compared with those who only observed and took photos. On the other hand, the latter took more photos (about ten on average) than those who tried to build the project and take photos simultaneously (5.5 on average). Reasonably, observers were freer to take photos and enjoy doing so. While taking photos did not increase the enjoyment of the ‘builders’, there is also no evidence that it decreased it. This could be a little disappointing, as we might expect that taking photos as we progress would enhance our sense of pride and satisfaction with our creation taking form — a sort of ‘I Built It Myself’ effect (following the “I Designed It Myself” effect of Franke, Schreier and Kaiser, 2010). Two conditions may be needed: first, that the ‘builder’ is of course successful in his or her task, and second, that advancing with the project intermittently, stopping for a minute to take a photo when progress is made, helps to minimise interference or distraction.

This topic brings to mind a particular concern: when the task of photography intervenes in the ‘source’ experience, it potentially disrupts it. Diehl and her colleagues cleverly distinguish between the functional-physical act of taking photos (i.e., operating a camera) and the mental process driving it (i.e., planning the photos). It may be argued in this regard that the impact may differ between people taking photos with a smartphone or tablet device, a compact camera, or a more complex single-lens-reflex (SLR) camera. Also, more dedicated amateur photographers, with greater interest and photographic skills, may approach taking photos during an experience differently from others. This issue unfortunately does not receive an adequate answer in the research.

The researchers test two kinds of suspected interferences that may disrupt or distract photo-takers from the main experience they engage in: (1) physical — by assuming one would have to carry and hold a bulky digital SLR camera (represented in the experiment merely by a larger camera icon); and (2) functional — by enabling the photographer also to delete unsatisfactory photos right after taking them. The results show that with medium interference (‘holding the SLR’) the enjoyment of these photo-takers was in-between that of regular photo-takers and non-photo-takers, not significantly different from either. Yet with high interference (‘SLR + deletion’), enjoyment was close to, and not statistically different from, that of non-photo-takers, and lower compared with ‘regular’ photo-takers. Corresponding findings were obtained for engagement. Attending to deleting photos is the task that appears to truly distract photo-takers from the main experience (like checking one’s video during a concert). Holding an SLR camera should not disturb dedicated amateur photographers much (with some exceptions for extra equipment), but certain operations in taking photos may demand additional attention that could indeed compete with the subject experience.

Nevertheless, the researchers demonstrate in another experiment that the mental process of thinking about taking photos and planning them is more crucial than the functional act of taking the photos. Planning to take photos alone increased enjoyment just as much as actually taking photos, compared with not being involved in any way in taking photos. In other words, planning to take photos “led to similar levels of enjoyment as actually taking photos”. Reported engagement was similarly heightened when planning to take photos. For more dedicated amateur photographers, planning the photos to be taken is a key part of the activity and may not be easy to separate from certain functions (e.g., choices of composition, focal object, exposure and speed). Yet the photography-related activity may be viewed not as an interference but as an integral part of the whole experience, a way of living the experience more deeply and vividly.

When the experience is perceived as negative, taking photos would also increase engagement, but in this case it will result in lower enjoyment compared to those not taking photos. The increased engagement means more attention of the photo takers becomes focused on negative aspects of the experience.

The researchers study a specific mechanism of mediation by engagement between taking photos and enjoyment. But many consumers may derive their satisfaction and joy from recording their experience in order to refresh their memories later through the photos, perhaps more so if they are less interested in photography per se. Moreover, consumers increasingly take photos with the intention of uploading them to social media networks (e.g., Facebook, Instagram) for sharing with their acquaintances, close and far. Diehl and colleagues are not convinced, based on an initial survey, that people anticipate such benefits while taking the photos. Nevertheless, they do not exclude this possibility: they note that “individuals presumably take photos in part because they expect that reviewing those photos in the future will provide them with additional enjoyment”, and such forward-looking behaviour may enhance their immediate enjoyment of the experience. In their judgement, however, many consumers do not anticipate such an effect. They also note that many marketers forbid taking photos on their premises because they seem to believe that taking photos ruins individuals’ experiences.

The research of Diehl, Zauberman and Barasch is interesting and refreshing, on a topic not studied often. It shows from different angles how taking photos enhances the enjoyment of consumers in positive experiences through increased engagement (i.e., focusing more attention, feeling more deeply immersed in the experience). Taking photos could plausibly be seen as less interfering or disruptive the more people perceive this activity as complementary to the experience itself, especially those more interested in photography. Marketers should be less reluctant to let consumers take photos, since doing so is more likely to make them enjoy the experience better. Consumers, for their part, have to learn the best timing for taking photos so as to enjoy it the most as part of the whole experience.

Ron Ventura, Ph.D. (Marketing)

Reference:

How Taking Photos Increases Enjoyment of Experiences; Kristin Diehl, Gal Zauberman, and Alixandra Barasch, 2016; Journal of Personality and Social Psychology, 111 (2), pp. 119-140.

Read Full Post »

The strength, impact and value of a brand are embodied, fairly concisely, in the concept of ‘brand equity’. However, there are different views on how to express and measure brand equity, whether from a consumer (customer) perspective or a firm perspective. Metrics based on a consumer viewpoint (measured in surveys) raise particular concern as to what actual effects they have in the marketplace. Datta, Ailawadi and van Heerde (2017) answered the challenge and investigated how well Consumer-Based metrics of Brand Equity (CBBE) align with Sales-Based estimates of Brand Equity (SBBE). The CBBE metrics were adopted from the Brand Asset Valuator model (Y&R), whereas the SBBE estimates were derived from modelling market data of actual purchases. They also examined the association of CBBE with behavioural response to marketing mix actions [1].

In essence, brand equity expresses the incremental value of a product (or service) that can be attributed to its brand name above and beyond its physical (or functional) attributes. Alternatively, brand equity is conceived as the added value of a branded product compared with an identical version of that product if it were unbranded. David Aaker defined four main groups of assets linked to a brand that add to its value: awareness, perceived quality, loyalty, and associations beyond perceived quality. On the grounds of this conceptualization, Aaker subsequently proposed the Brand Equity Ten measures, grouped into five categories: brand loyalty, awareness, perceived quality / leadership, association / differentiation, and market behaviour. Kevin Keller broadened the scope of brand equity, whereby greater and more positive knowledge of customers (consumers) about a brand leads them to respond more favourably to marketing activities of the brand (e.g., pricing, advertising).

The impact of a brand may occur at three levels: customer market, product market and financial market. Accordingly, academics have followed three distinct perspectives for measuring brand equity: (a) customer-based — an attraction of consumers to the “non-objective” part of the product offering (e.g., ‘mindset’ as in beliefs and attitudes, a brand-specific ‘intercept’ in a choice model); (b) company-based — additional value accrued to the firm from a product because of its brand name versus an equivalent non-branded product (e.g., discounted cash flow); (c) financial-based — the brand’s worth is the price it brings or could bring in the financial market (e.g., materialised via mergers and acquisitions, stock prices) [2]. This classification is not universal: for example, discounted cash flows are sometimes described as ‘financial’; estimates of brand value derived from a choice-based conjoint model constitute a more implicit reflection of the consumers’ viewpoint. Furthermore, models based on stated-choice (conjoint) or purchase (market share) data may vary greatly in the effects they include, whether in interaction with each competing brand or independent of the brand ‘main effect’ (e.g., product attributes, price, other marketing mix variables).
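The brand-specific ‘intercept’ notion can be illustrated with a minimal multinomial logit sketch (brand names and all coefficients are invented, not estimates from the study): when observed attributes, here price, are held equal, differences in predicted share come only from the intercepts.

```python
import math

# Minimal multinomial logit sketch: utility = brand intercept + price effect.
# The intercept captures the brand-specific pull not explained by observed
# attributes -- one implicit reflection of brand equity in a choice model.
intercepts = {"brand_X": 1.2, "brand_Y": 0.4, "brand_Z": 0.0}  # brand_Z as baseline
price_coef = -0.8
prices = {"brand_X": 3.0, "brand_Y": 3.0, "brand_Z": 3.0}      # equal prices

utilities = {b: intercepts[b] + price_coef * prices[b] for b in intercepts}
denom = sum(math.exp(u) for u in utilities.values())
shares = {b: math.exp(u) / denom for b, u in utilities.items()}

# At equal prices, the share ordering mirrors the intercept ordering --
# the sales-based footprint of brand equity in this toy model.
for b, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{b}: {s:.1%}")
```

Sales-based estimates of the SBBE kind are, conceptually, estimated intercepts of this form recovered from actual purchase data, rather than assumed as here.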

A class of attitudinal (‘mindset’) models of brand equity may encompass a number of aspects and layers: awareness –> perceptions and attitudes about product attributes and functional benefits (+ overall perceived quality), ‘soft’ image associations (e.g., emotions, personality, social benefits) –> attachment or affinity –> loyalty (commitment). Two noteworthy academic studies have built upon the conceptualizations of Aaker and Keller in constructing and testing consumer-based measures:

  • Yoo and Donthu (2001) constructed a three-dimensional model of brand equity comprising brand loyalty, brand awareness / associations (combined), and perceived quality (the strength of associations was adopted from Keller’s descriptors of brand image). The multidimensional scale (MBE) was tested and validated across multiple product categories and cultural communities [3].
  • Netemeyer and colleagues (2004) demonstrated across products and brands that perceived quality, perceived value (for the cost), and uniqueness of a given brand potentially contribute to the willingness to pay a price premium for the brand, which in turn acts as a direct antecedent of brand purchase behaviour [4]. Price premium, an aspect of brand loyalty, is a common metric for assessing brand equity.

Datta, Ailawadi and van Heerde distinguish between two measurement approaches: the consumer-based brand equity (CBBE) approach measures what consumers think and feel about the brand, while the sales-based brand equity (SBBE) approach is based on choice or share of the brand in the marketplace.

The CBBE approach in their research is applied through data on metrics from the Brand Asset Valuator (BAV) model developed originally by the Young & Rubicam (Y&R) advertising agency (the brand research activity is now defined as a separate entity, BAV Group; both Y&R and BAV Group are part of the WPP group). The BAV model includes four dimensions: Relevance to the consumers (e.g., fits in their lifestyles); Esteem of the brand (i.e., how much consumers like the brand and hold it in high regard); Knowledge of the brand (i.e., consumers are aware of and understand what the brand stands for); and Differentiation from the competition (e.g., uniqueness of the brand) [5].

The SBBE approach is operationalised through modelling of purchase data (weekly scanner data from IRI). The researchers derive estimates of brand value in a market share attraction model (with over 400 brands from 25 categories, though just the 290 brands for which BAV data could be obtained were included in the subsequent CBBE-SBBE analyses) over a span of ten years (2002-2011). Notably, brand-specific intercepts were estimated for each year; an annual level is sufficient and realistic to account for the pace of change in brand equity over time. The model allowed for variation between brands in their sensitivity to marketing mix actions (regular prices, promotional prices, advertising spending, distribution (on-shelf availability) and promotional display in stores) — these measures are not taken as part of the SBBE values but nonetheless indicate where higher brand equity is expected to manifest (its impact); after being converted into elasticities, they play a key role in examining the relation of CBBE to behavioural outcomes in the marketplace.
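The paper's market share attraction model is far richer than can be shown here (brand-by-year intercepts, heterogeneous marketing-mix sensitivities, hundreds of brands), but the basic attraction logic can be sketched as follows — a minimal illustration with invented intercepts and coefficients, not the authors' specification:

```python
import math

def attraction_shares(intercepts, marketing, betas):
    """Toy multiplicative attraction model: each brand's attraction is
    exp(intercept + beta * marketing effort); its market share is that
    attraction divided by the sum of attractions over all brands."""
    attractions = [
        math.exp(a + b * m)
        for a, m, b in zip(intercepts, marketing, betas)
    ]
    total = sum(attractions)
    return [att / total for att in attractions]

# Hypothetical two-brand category: brand A carries a higher intercept
# (standing in for higher SBBE) than brand B, with identical marketing
# support and identical sensitivity to it.
shares = attraction_shares(intercepts=[0.5, 0.0],
                           marketing=[1.0, 1.0],
                           betas=[0.3, 0.3])
```

In this sketch the brand-specific intercept plays the role of the SBBE estimate: brand A's higher intercept yields a higher predicted share than brand B's under identical marketing support, and the shares always sum to one.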


  • Datta et al. seem to include in an SBBE approach estimates derived from (a) actual brand choices and sales data as well as (b) self-reported choices in conjoint studies and surveys. But subjective responses and behavioural responses are not quite equivalent bases. The authors may reasonably have aimed to distinguish ‘choice-based’ measures of brand equity from ‘attitudinal’ measures, but that still does not justify mixing the brands and products consumers say they would choose with those they actually choose to purchase. Conjoint-based estimates are more closely consumer-based.
  • Take for instance a study by Ferjani, Jedidi and Jagpal (2009), which offers a different angle on levels of valuation of brand equity. They derived brand values through a choice-based conjoint model (Hierarchical Bayes estimation at the individual level), regarded as consumer-level valuation. In parallel, the researchers constructed a measure of brand equity from a firm perspective based on expected profits (rather than discounted cash flows), presented as firm-level valuation. Nonetheless, in order to estimate sales volume they ‘imported’ predicted market shares from the conjoint study, thus linking the two levels [6].

 

Not all dimensions of BAV (CBBE) are the same in relation to SBBE: Three of the dimensions of BAV — relevance, esteem, and knowledge — are positively correlated with SBBE (0.35, 0.39, & 0.53), while differentiation is negatively although weakly correlated with SBBE (-0.14). The researchers reasoned in advance that differentiation could have a more nuanced and versatile market effect (a hypothesis confirmed) because differentiation could mean the brand is attractive to only some segments and not others, or that uniqueness may appeal to only some of the consumers (e.g., more open to novelty and distinction).

Datta et al. show that the correlations of relevance (0.55) and esteem (0.56) with the market shares of the brands are even higher, and the correlation of differentiation with market shares is less negative (-0.08), than their correlations with SBBE (the correlations of knowledge are about the same). The SBBE values capture a portion of the brand's attraction to consumers; market shares, on the other hand, also factor in additional marketing efforts, which the dimensions of BAV seem to account for.

Some interesting brand cases can be detected in a mapping of brands in two categories (for 2011): beer and laundry detergents. For example, among beers, Corona is positioned on SBBE much higher than expected given its overall BAV score, which places the brand among those better valued on a consumer basis (only one brand is considerably higher — Budweiser). However, with respect to market share the position of Corona is much less flattering and quite as expected relative to its consumer-based BAV score, even a little lower. This could suggest that too much power is credited to the name and other symbols of Corona, while the backing from marketing efforts to support and sustain it is lacking (i.e., the market share of Corona is vulnerable).  As another example, in the category of laundry detergents, Tide (P&G) is truly at the top on both BAV (CBBE) and market share. Yet, the position of Tide on SBBE relative to BAV score is not exceptional or impressive, being lower than predicted for its consumer-based brand equity. The success of the brand and consumer appreciation for it may not be adequately attributed specifically to the brand in the marketplace but apparently more to other marketing activities in its name (i.e., marketing efforts do not help to enhance the brand).

The degree of correlation between CBBE and SBBE may be moderated by characteristics of product category. Following the salient difference cited above between dimensions of BAV in relation to SBBE, the researchers identify two separate factors of BAV: relevant stature (relevance + esteem + knowledge) and (energized) differentiation [7].
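The extraction of two separate BAV factors can be illustrated with a small principal-components sketch. The data below are simulated purely for illustration (the paper works with real BAV survey measures): relevance, esteem and knowledge are generated to move together, while differentiation varies largely on its own, so the first component recovers 'stature' and the second recovers differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized BAV scores: a common 'stature' driver behind
# relevance, esteem and knowledge; differentiation is largely separate.
stature = rng.normal(size=n)
diff = rng.normal(size=n)
X = np.column_stack([
    stature + rng.normal(scale=0.4, size=n),  # relevance
    stature + rng.normal(scale=0.4, size=n),  # esteem
    stature + rng.normal(scale=0.4, size=n),  # knowledge
    diff + rng.normal(scale=0.4, size=n),     # differentiation
])

# Principal components = eigen-decomposition of the correlation matrix,
# reordered so the largest eigenvalue (most variance explained) comes first.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
```

With this construction, the first component loads heavily on relevance, esteem and knowledge (a 'relevant stature' factor) and hardly at all on differentiation, which dominates the second component — mirroring the two-factor structure the researchers report.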

In more concentrated product categories (i.e., where the four largest brands by market share hold a greater total share of the category), the positive effect of brand stature on SBBE is reduced. Relevance, esteem and knowledge may serve as particularly useful cues for consumers in fragmented markets, where it is more necessary for them to sort and screen among many smaller brands, thus simplifying the choice decision process. When concentration is greater, reliance on such cues is less required. On the other hand, when the category is more concentrated, controlled by a few big brands, it should be easier for consumers to compare between them and find aspects on which each brand is unique or superior. Indeed, Datta and colleagues find that in categories with increased concentration, differentiation has a stronger positive effect on SBBE.

For products characterised by greater social or symbolic value (e.g., more visible to others when used, shared with others), higher brand stature contributes to higher SBBE in the market. The researchers could not confirm, however, that differentiation manifests in higher SBBE for products of higher social value. The advantage of using brands better recognized and respected by others appears to be primarily associated with facets such as relevance and esteem of the brand.

Brand experience with hedonic products (e.g., leisure, entertainment, treats) builds on the enjoyment, pleasure and additional positive emotions the brand succeeds in evoking in consumers. Sensory attributes of the product (look, sound, scent, taste, touch) and a holistic image are vital in creating a desirable experience. Contrary to the expectation of Datta and colleagues, however, stature was not found to translate into higher SBBE for brands of hedonic products (rather the contrary). This is not good news for experiential brands in these categories that rely on enhancing relevance and appeal among consumers, who also understand the brands and connect with them, in order to create sales-based brand equity in the marketplace. The authors suggest in their article that being personally enjoyable (inward-looking) may overshadow the importance of broad appeal and status (outward-looking) for SBBE. Fortunately enough, though, differentiation does matter for highlighting the benefits of the experience of hedonic products, contributing to raised sales-based brand equity.

Datta, Ailawadi and van Heerde proceeded to examine how strongly CBBE corresponds with behavioural responses in the marketplace (elasticities) as manifestation of the anticipated impact of brand equity.

Results indicated that when the relevant stature of a brand is higher, consumers respond even more strongly to price discounts or deals (i.e., the elasticity of response to promotional prices is even more negative). Yet the expectation that consumers would be less sensitive (adverse) to increases in regular prices by brands of greater stature was not substantiated (i.e., the expected positive effect of a less negative elasticity did not emerge). (Differentiation was not found to have a positive effect on response to regular prices either, and could even be counterproductive for price promotions.)
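As a reminder of what these elasticities express, the textbook arc (midpoint) elasticity can be computed with made-up numbers — a promotional price elasticity more negative than -1 means the volume response is more than proportional to the price cut (the figures below are invented and not taken from the paper):

```python
def arc_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) elasticity: percentage change in quantity
    per percentage change in price, using midpoints as the base."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# A 10% price cut (2.00 -> 1.80) lifts weekly unit sales from 100 to 125:
e = arc_elasticity(100, 125, 2.00, 1.80)
```

Here the elasticity works out to roughly -2.1: the finding above says that for higher-stature brands this promotional elasticity is even more negative, i.e., deals work even harder for them.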

An important implication of brand equity should be that consumers are more willing to pay higher regular prices for a brand of higher stature (i.e., a larger price premium) relative to competing brands, and more forgiving when such a brand sees it necessary to update and raise its regular price. The brand may benefit from being more personally relevant to the consumer, better understood and more highly appreciated. A brand more clearly differentiated from competitors with respect to its advantages could also benefit from a protected status. All these properties are presumed to enhance attachment to a brand, and subsequently lead to greater loyalty, making consumers more ready to stick with the brand even as it becomes more expensive. This research disproves such expectations. Better responsiveness to price promotions can help to increase sales and revenue, but it testifies to the heightened level of competition in many categories (e.g., FMCG or packaged goods) and propensity of consumers to be more opportunistic rather than to the strength of the brands. This result, actually a warning signal, cannot be brushed away easily.

  • Towards the end of the article, the researchers suggest as explanation that they ignored possible differences in response to increases and decreases in regular prices (i.e., asymmetric elasticity). Even so, increases in regular prices by stronger brands are more likely to happen than price decreases, and the latter already are more realistically accounted for in response to promotional prices.

Relevant stature is positively related to responsiveness to feature or promotional displays (i.e., consumers are more inclined to purchase a higher-stature brand when it enjoys an advantaged display). Consumers are also more strongly receptive to a larger volume of advertising by brands of higher stature and better differentiation in their eyes (this analysis could not refer to actual advertising messages, which perhaps explains the weaker positive effects). Another interesting finding indicates that sensitivity to the degree of distribution (on-shelf availability) is inversely associated with stature: the higher a brand's stature in the consumers' view, the less its appeal depends on wide distribution. As the researchers suggest, consumers are more willing to look harder and farther (e.g., in other stores) for brands they regard as more important to have. Here, then, is positive evidence for the impact of stronger brands or higher brand equity.

The research gives rise to some methodological questions on measurement of brand equity that remain open for further deliberation:

  1. Should the measure of brand equity in choice models rely only on a brand-specific intercept (expressing intrinsic assets or value of the brand) or should it include also a reflection of the impact of brand equity as in response to marketing mix activities?
  2. Are attitudinal measures of brand equity (CBBE) too gross and not sensitive enough to capture the incremental value added by the brand or is the measure of brand equity based only on a brand-intercept term in a model of actual purchase data too specific and narrow?  (unless it accounts for some of the impact of brand equity)
  3. How should measures of brand equity based on stated-choice (conjoint) data and actual purchase data be classified with respect to a consumer perspective? (both pertain really to consumers: either their cognition or overt behaviour).

Datta, Ailawadi and van Heerde shed light in their extensive research on the relation of consumer-based brand equity (CBBE) to behavioural outcomes, manifested in brand equity based on actual purchases (SBBE) and in effects on response to marketing mix actions as an impact of brand equity. Attention should be paid to the positive implications of this research for practice, but nonetheless also to the warning signals it may raise.

Ron Ventura, Ph.D. (Marketing)

Notes:

[1] How Well Does Consumer-Based Brand Equity Align with Sales-Based Brand Equity and Marketing-Mix Response?; Hannes Datta, Kusum L. Ailawadi, & Harald J. van Heerde, 2017; Journal of Marketing, 81 (May), pp. 1-20. (DOI: 10.1509/jm.15.0340)

[2] Brands and Branding: Research Findings and Future Priorities; Kevin L. Keller and Donald R. Lehmann, 2006; Marketing Science, 25 (6), pp. 740-759. (DOI: 10.1287/mksc.1050.0153)

[3] Developing and Validating a Multidimensional Consumer-Based Brand Equity Scale; Boonghee Yoo and Naveen Donthu, 2001; Journal of Business Research, 52, pp. 1-14.

[4] Developing and Validating Measures of Facets of Customer-Based Brand Equity; Richard G. Netemeyer, Balaji Krishnan, Chris Pullig, Guangping Wang, Mehmet Yagci, Dwane Dean, Joe Ricks, & Ferdinand Wirth, 2004; Journal of Business Research, 57, pp. 209-224.

[5] The authors name this dimension ‘energised differentiation’ in reference to an article in which researchers Mizik and Jacobson identified a fifth pillar of energy, and suggest that differentiation and energy have since been merged. However, this change is not mentioned or revealed on the website of BAV Group.

[6] A Conjoint Approach for Consumer- and Firm-Level Brand Valuation; Madiha Ferjani, Kamel Jedidi, & Sharan Jagpal, 2009; Journal of Marketing Research, 46 (December), pp. 846-862.

[7] These two factors (principal components) extracted by Datta et al. are different from two higher dimensions defined by BAV Group (stature = esteem and knowledge, strength = relevance and differentiation). However, the distinction made by the researchers as corroborated by their data is more meaningful  and relevant in the context of this study.

 

Read Full Post »

‘Where do I find umbrellas?’ ‘How do I get to the shoe department?’ Questions like these are likely familiar to many consumers visiting large department stores. Walking long pathways on a floor and moving between floors in a quest to find a needed product can be time-consuming and annoying. Signposts are often too general and lack useful directions. Mobile mapping applications (‘apps’) for indoor environments, an evolving technological development of the last five years, can make the shopping experience in large stores smoother, more convenient and more enjoyable for consumers. A mapping app can be useful not only in department stores but also within large supermarkets, fashion, toys or DIY stores, to give just a few examples. Moreover, navigating complex structures like shopping malls, airports, hospitals, etc. may be made much easier with a mapping app.

Over the years large physical floor maps have been installed in some department stores (e.g., hung on the wall near a lift) — the problem is that the shopper has to try to keep in memory the route to a desired destination. Signage of product directories placed in front of escalators may help the shopper find on what floor a particular type of product (or a brand) is placed, but one may be left again to stroll a widespread floor until locating the requested product. Signs hung above aisles (e.g., in supermarkets) may not be seen until one approaches the relevant aisle. Some retailers and operators of shopping centres provide printed maps on cards or leaflets to guide their customers on the premises; the map is usually accompanied by index lists and codes for reference, and regions on the map diagram may be printed in different colours to facilitate navigation. Holding a map in the shopper’s hands can be a great relief. Holding a dynamic and interactive map displayed on the shopper’s mobile phone seems an even greater step forward.

Mapping applications of enclosed environments aim to provide people with spatial information and tools similar to those that facilitate their navigation on roads and in the streets of cities. One can search for an address, a business or an institute, and the mapping utility will show the user its location on the map. Additionally, when used on a mobile device, smartphone or tablet, the application can show the way and follow the user until he or she gets to the destination. In-store, the ‘address’ would typically be a product. An in-store mapping app may show the shopper the location of the product in the store, and perhaps give instructions step-by-step how to get there, yet it will not necessarily be able to follow the user to the destination — an additional layer of technology, a physical infrastructure, is required to locate the shopper on the map and automatically “advance” the map on display as he or she walks in the store.

  • A web-based mapping utility of Heathrow Airport (London), for example, allows a prospective traveller to look for a starting point and a destination in any of the five terminals and their facilities, and the online service will provide instructions, in text and on the map diagram, for how to get there.
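The kind of step-by-step routing such utilities provide can be illustrated with a toy breadth-first search over a floor-plan grid. Real indoor mapping products use far richer map data and routing algorithms; the grid, coordinates and layout below are invented for the sketch:

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search on a store floor grid: 0 = walkable aisle,
    1 = shelf/obstacle. Returns the list of cells from start to goal
    (inclusive), or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers in one dict
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Hypothetical mini floor plan: entrance at (0, 0), target shelf at (2, 2).
store = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
]
route = shortest_route(store, start=(0, 0), goal=(2, 2))
```

Because BFS explores cells in order of distance, the first time it reaches the goal it has found a shortest route — the list of cells a turn-by-turn display would walk the shopper through.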

The GPS technology that usually allows the positioning of users on a map of an outdoor space, and follows the user until he or she gets to a destination, stops working when one enters the enclosed environment of a building. It is additionally not accurate enough to pinpoint the location of a person in a relatively small area, and is especially impractical for distinguishing between floors in a building. Therefore, this technology cannot be applied in mapping applications either in shopping centres or in-store. Alternative technologies have been tested and utilised for indoor mapping: most notable is Bluetooth technology applied with beacons, but there are other options in the field, including Wi-Fi and LED light bulbs for signalling and transmitting location information. Effective positioning of shoppers is said to require a dense network of devices (transmitters) throughout the store, oftentimes an expensive enterprise. Therefore, retailers appear to be more interested in implementing select functions of in-store mapping applications (e.g., orientation, promotions) but are less in a hurry to adopt the capability of positioning shoppers on a map of the store as well.

A retailer can deliver via a mobile app promotional offers (e.g., digital coupons) to shoppers as well as updates on new products, services and events. A retail app may  include a bundle of services such as tools for mapping and managing a shopping list for the benefit of the customers. Some retailers already use a location functionality in their stores, independent of mapping, to improve the timing when offers are sent to shoppers during their visit, specific to their location in the store. But this functionality usually utilises fewer devices (e.g., beacons) than would be necessary for a full positioning capability. The mapping tools can produce several advantages: (1) deliver a helpful service to shoppers (e.g., using a shopping list with a map); (2) enhance navigation by location of the shopper on a dynamic map; (3) give a better incentive to shoppers to authorise an app to track their location in the store; (4) mount ‘flags’ of promotional offers for various products on the map near the relevant aisles or display shelves, particularly as the shopper approaches nearby (as a benchmark for illustration, think of information [icons & text] mounted on maps of Google or in an app like Waze).

The map is meant to provide, first of all, spatial information. Should mapping applications also be visuospatial, that is, display a visual image of the store’s appearance? It would be like taking a virtual, simulated tour of the store. The experience could be more entertaining (e.g., like gaming), but would it be more informative and useful? If the shopper is already in the store, he or she should not really need the enhanced display — it could be more confusing (screen and reality may interfere with each other) and time-consuming to navigate with such a display. The enhanced imagery display may be useful for planning a visit before entering the store, or perhaps for online shopping in a virtual store. Yet, once a shopper is at the physical store, a visuospatial display should be offered as an option at the shopper’s discretion, while the main display had better be a map diagram that matches the actual layout and organisation of the store.

  • Mobile marketing company aisle411, which specialises also in indoor mapping for retail stores, created in co-operation with Google’s Project Tango a 3D imaged environment (“3D mapping”) of a supermarket store with features of augmented reality (e.g., product information, rewards and coupons). [BusinessWire.com, 25 June 2014, see video demonstration — note that the application is operating on a tablet mounted on the shopping cart]

A study published last year (Ertekin, Pryor & Pelton, Spring 2017) sought to identify perceptions, attitudes or personality traits that could motivate consumers to use mobile in-store mapping applications (*). The study focused on consumers from generations X (born in 1961-1979) and Y (born in 1980-1999 — adults likely to be familiar with and orientated to using computer technology and its applications). In fact, 80% of the respondents in the sample were of generation Y. All respondents (n=258) had a device that could connect to the Internet (57% had a mapping application downloaded to their smartphone). The researchers considered factors regarding the use of in-store mapping application technology and how it would affect the shopping experience (30% of respondents reported having tried an in-store mapping application before).

The degree of ease-of-use of an in-store mapping app was found to have a positive effect on intention (or ‘propensity’) to use it while shopping. Perceived ease-of-use was defined as the “degree to which a person believes that using a particular system would be free of effort” (e.g., easy to use, clear and understandable, flexible to interact with). Usefulness of the app pertains specifically to the act of shopping, helping to enhance the ‘job performance’ (effectiveness) of shopping with the map. As expected, perceived usefulness also had a positive effect on the intention to use such an app.

In addition to those functional or utilitarian benefits of the application, the researchers addressed the app’s ability to make the shopping experience emotionally more entertaining (particularly inducing excitement associated with the novelty of the technology). Entertainment benefits (e.g., enjoyable learning about stores, fun, or merely a good pastime when bored) also strengthen the intention to use an in-store mapping app.

The willingness to use a mobile in-store mapping app is diminished by greater consumer concern about sacrificing security when using a networked computing application (i.e., emphasis on protection from malicious software or theft of personal information). Conspicuously, however, data security is only hinted at, and the sensitive matter of privacy is not properly covered, particularly the reluctance of consumers to let their movements be tracked. If the mapping app provides users more perceived benefits of the types cited above, they may be less resistant to allowing the retailer to track them.

A result that would probably be of interest to retailers shows that consumers who exhibit stronger deal proneness are more intent on using an in-store mapping app. In other words, consumers who lean more towards buying on discounts and deals are more likely to be attracted to the mapping app in the hope of finding there promotional offers that are easy to locate in the store. Yet retailers should be careful about this finding: if they focus too heavily on delivering promotional offers through their apps, they may attract mainly shoppers who seek deals and reward points more often than other shoppers. In order to encourage shoppers to extend their in-store visits and make more unplanned purchases, promotional offers should be put forward on the app in closer accordance with the store sections or aisles the shoppers access, as they pass through; where feasible, offers can be generated in association with products on a shopping list the shopper fills in on the app (i.e., help a shopper find more easily the products on his or her list while adding products that are more likely to be perceived as complements to them). Promotions are only one of the ways to encourage consumers to shop more, and that is true also for the ‘package’ offered in a retail mapping app.

The model analysed in this study did not provide support for a positive effect of being pressed for time on intention to use an in-store mapping app (i.e., the apps are not associated enough with saving time, or those pressed for time are no more interested in the mapping app than others with more free time). This does not seem to give ground to a concern of retailers that such an app might allow shoppers to shorten their shopping trips, but as suggested above, if needed there are ways to circumvent such behaviour. The model also did not support the hypothesis that consumers who like to gather more market information (e.g., products, prices, innovations) and share their knowledge with others, to advise or actually influence them, are more inclined to use an in-store mapping app to accomplish their goals.

The study makes early steps in investigating consumer behaviour pertaining to using retail mapping apps. It confirms that functional as well as emotional benefits are drivers of consumer use of a mapping app in-store. But the investigation has to proceed to validate and refine those findings and conclusions. While the study targeted young consumers of relevant generations Y and X, the sample consisted of university students (hence probably also the vast majority of millennials). It may be sufficient for establishing relations of the tested factors to the use of mapping apps, but further research should go beyond a student population to cover consumers of these generations to validate the relations or effects. Additional analyses and models (beyond the regression model applied in this study) will have to examine effects more thoroughly or with greater scrutiny (e.g., causality, mediators). Furthermore, consumer disposition towards the mapping apps has to be examined through actual experience and behaviour, for example by letting shoppers perform their shopping ‘naturally’ with an app or by giving them specific tasks to perform with a mapping app in their shopping trip. The study of Ertekin, Pryor and Pelton would serve as an instructive and helpful starting point.

Consumers may utilise a mental map of a store site that they hold in memory to guide them through locations in the store as in an auto-pilot mode. Mental maps can be constructed, however, only for stores that shoppers visit frequently or regularly. Digital mapping apps may change how consumers construct and utilise their own mental maps, stored in their long-term memory. People tend to favour digital information sources and to rely less on their own memory. A shopper may need no more than a graph as a spatial model to perform his or her shopping job, or perhaps a more detailed mental model of a drawing similar to a map. Yet the extent to which people also use picture-like mental imagery of the site depends on how useful the visual information is for performing their task, because visual imagery requires greater resources. So visual imagery may be re-constructed more selectively as needed — think of ‘photos’ of specific locations of importance or interest to the shopper (e.g., shelf displays of ‘target’ products) pinned to the mental drawing at the relevant places. A conception like this may be emulated in the digital in-store maps of mobile applications.

Mobile in-store mapping applications present a significant and promising development in re-shaping consumer shopping experiences. They could play an important role in the future of retailing, but there is still ambiguity about the extent to which large retailers will choose to implement mapping features and capabilities, particularly the real-time positioning of shoppers inside a physical store. Mapping applications for indoor retail sites may impact, for example, the balance of consumer preference between shopping online and offline (i.e., in brick-and-mortar stores).

Ron Ventura, Ph.D. (Marketing)

(*) An Empirical Study of Consumer Motivations to Use In-Store Mapping Application; Selcuk Ertekin, Susie Pryor, & Lou E. Pelton, 2017; Marketing Management Journal, 27 (1), pp. 63-74.

 

 

Read Full Post »

Fifteen years have passed since a Nobel Prize in economics was awarded to Daniel Kahneman; now (Fall 2017) another leading researcher in behavioural economics, Richard Thaler, wins this honourable prize. Thaler and Kahneman are no strangers — they have collaborated in research in this field from its early days in the late 1970s. Moreover, Kahneman, together with the late Amos Tversky, helped Thaler in his first steps in this field, or more generally in bringing economics together with psychology. Key elements of Thaler’s theory of Mental Accounting are based on the value function in Kahneman and Tversky’s Prospect theory.

In recent years Thaler has become better known for the approach of choice architecture and the tools of nudging that he devised, as co-author with Cass Sunstein of the book “Nudge: Improving Decisions About Health, Wealth and Happiness” (2008-9). However, at the core of Thaler’s contribution is the theory of mental accounting, through which he helped to lay the foundations of behavioural economics. The applied tools of nudging cannot be appropriately appreciated without understanding the concepts of mental accounting and the other phenomena he studied with colleagues, which describe deviations in judgement and behaviour from the rational economic model.

Thaler, originally an economist, was unhappy with the predictions of consumer choice arising from microeconomics: the principles of economic theory were not contested as a normative theory (e.g., regarding optimization), but the claim by economists that the theory can describe and predict actual consumer behaviour was put into question. Furthermore, Thaler and others argued early on that deviations from rational judgement and choice behaviour are predictable. In his ‘maverick’ paper “Toward a Positive Theory of Consumer Choice” from 1980, Thaler described and explained deviations and anomalies in consumer choice that stand in disagreement with economic theory. He referred to concepts such as framing of gains and losses, the endowment effect, sunk costs, search for information on prices, regret, and self-control (1).

The theory of mental accounting that Thaler developed thereafter is an integrated framework describing how consumers perform value judgements and decide which products and services to purchase, while recognising psychological effects in economic decisions (2). The theory is built around three prominent concepts (described here only briefly):

Dividing a budget into categories of expenses: Consumers metaphorically (and sometimes physically) allocate the money of their budget into buckets or envelopes according to the type or purpose of expenses, meaning that they do not transfer money freely between categories (e.g., food, entertainment). This concept contradicts the economic principle of fungibility, suggesting that one dollar is not valued the same in every category. A further implication is that each category has a sub-budget allotted to it; if expenses in a category surpass its limit during a period, a consumer will prefer to give up on the next purchase rather than add money from another category. Hence, for instance, Dan and Edna will not go out for dinner at a trendy restaurant if that requires taking money planned for buying shoes for their child. Managing the budget only against the total limit of monthly income, by contrast, is often unsatisfactory, though some purchases can still be made on credit without hurting other purchases in the same month. On the other hand, it can readily be seen how consumers get into trouble when they try to spread too many expenses across future periods with their credit cards and lose track of the category limits on their different expenses.

Segregating gains and integrating losses: In Kahneman and Tversky's model of the value function, value is defined over gains and losses as one departs from a reference point (a “status quo” state). Thaler explicated in turn how properties of the gain-loss value function would play out in practical evaluations of outcomes. The two general “rules”, demonstrated most clearly in “pure” cases, state: (a) when there are two or more gains, consumers prefer to segregate them (e.g., if Chris makes gains on two different shares on a given day, he will prefer to see them separately); (b) when there are two or more losses, consumers prefer to integrate them (e.g., Sarah is informed of the price for an inter-city train trip but then told there is a surcharge for travelling in the morning; she will prefer to consider the total cost of her requested journey). Thaler additionally proposed what consumers would prefer in more complicated “mixed” cases: segregating the gain from the loss (e.g., when the loss is much greater than the gain) or integrating them (e.g., when the gain is larger than the loss, so that one remains with a net gain).
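Both rules follow from the curvature of the gain-loss value function. A minimal numerical sketch, assuming the commonly cited Tversky-Kahneman (1992) parameter estimates (alpha = 0.88 for diminishing sensitivity, lambda = 2.25 for loss aversion); these figures are estimates, not part of the theory itself:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, convex and
    steeper over losses (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# (a) Two gains: concavity makes two separate gains feel better than their sum.
assert v(50) + v(50) > v(100)

# (b) Two losses: one combined loss hurts less than two separate losses.
assert v(-50) + v(-50) < v(-100)

# Mixed case with a net gain: integrating (cancellation) is preferred.
assert v(100 - 40) > v(100) + v(-40)
```

With these parameters, v(50) + v(50) is roughly 62.5 while v(100) is roughly 57.5, so two separate $50 gains feel better than one $100 gain; for losses the inequality reverses.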

Adding-up acquisition value with transaction value to evaluate product offers: A product or service offer generally embodies both benefits and costs to the consumer (e.g., the train ticket example above overlooked the benefit of the travel to Sarah). But value may also arise from the offering or deal itself, beyond the product per se. Thaler recognised that consumers may look at two sources of value, whose composition, or sum, yields the overall worth of a product purchase offer: (1) acquisition utility is the value of the difference between the [monetary] value equivalent of the product to the consumer and its actual price; (2) transaction utility is the value of the difference between the actual price and a reference price. The play of gains and losses hides within this calculus of value. This value concept was quickly adopted by consumer and marketing researchers in academia and implemented in means-end models that depict chains of value underlying consumers' purchase decision processes (mostly from the mid-1980s to the mid-1990s). Thaler's approach to ‘analysing’ value is becoming more widely acknowledged and applied in practice as well, as expressions of value of this kind can be found in consumer response to offerings in many domains of marketing and retailing.
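The composition of the two utilities can be sketched as follows, again assuming a prospect-theory value function with the commonly cited Tversky-Kahneman parameter estimates; the worth, price and reference-price figures are hypothetical:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (assumed parameter estimates)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def offer_value(worth, price, reference_price):
    """Overall worth of an offer = acquisition utility + transaction utility."""
    acquisition = v(worth - price)            # product's worth vs. price actually paid
    transaction = v(reference_price - price)  # price paid vs. reference price
    return acquisition + transaction

# Same product (worth $80 to the buyer) at the same $60 price, judged
# against two different reference prices:
deal = offer_value(80, 60, 70)     # reference above price: a perceived bargain
ripoff = offer_value(80, 60, 50)   # reference below price: the same $60 now
                                   # carries a transaction loss
print(deal > ripoff)  # True
```

Note how the product, the price and the acquisition utility are identical in both calls; only the reference price moves, yet the overall evaluation of the offer changes sign of appeal.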

A reference price may take different representations, for instance: the price last paid; a price recalled from a previous period; the average or median price in the same product class; a ‘normal’ or list price; a ‘fair’ or ‘just’ price (which is not so easy to specify). The transaction utility may vary considerably depending on the form of reference price a consumer uses, ceteris paribus, and hence so does its representation (i.e., as a gain or a loss, and its magnitude). This also suggests that marketers may hint to consumers a price to be used as a reference (e.g., an advertised price anchor) and thus influence consumers' value judgements.

We often observe and think of discounts as the difference between an actual price (‘only this week’) and a higher normal price; in this case we may construe the acquisition value and the transaction value as two concurrent ways of perceiving a gain on the actual price. But Thaler's model is more general because it recognizes a range of prices that consumers may employ as a reference. In addition, a list price may be suspected of being set higher deliberately, to invoke the perception of a gain vis-à-vis the actual discounted price, which in practice is charged more regularly than the list price. A list price or an advertised price may also serve primarily as a cue for the quality of the product (and perhaps also influence the product's equivalent value for less knowledgeable consumers), while the actual selling price provides the transaction value or utility. In the era of e-commerce, consumers also appear to use the price quoted on a retailer's online store as a reference; they may then visit one of its brick-and-mortar stores, hoping to obtain the desired product faster, and complain if they discover that the in-store price for the same product is much higher. As customers increasingly begrudge delivery fees and waiting times, a viable solution for securing customers is to offer a ‘click-and-collect at a store near you’ scheme. Moreover, as more consumers shop with a smartphone in hand, the use of competitors' prices, or even the same retailer's online prices, as references is likely to become ever more frequent and ubiquitous.


  • The next example may help further to illustrate the potentially compound task of evaluating offerings: Jonathan arrives at the showroom of a car dealer, intending to buy the new car model he favours, but there he finds out that the price on offer for that model is $1,500 higher than the price he saw in ads two months earlier. The sales representative claims that the carmaker has raised prices lately. However, when proposing a digital display system (e.g., entertainment, navigation, technical car info) as an add-on to the car, the seller also offers Jonathan a discount of $150 on its original price tag.
  • Jonathan appreciates this offer and is inclined to segregate this saving from the additional payment for the car itself (i.e., a ‘silver lining’). The transaction value may thus be expanded to include two components (separating the evaluations of the car offer and the add-on offer completely is less sensible because the add-on system is still contingent on the car).
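The silver-lining comparison can be sketched numerically, again assuming a prospect-theory value function with the commonly cited Tversky-Kahneman parameter estimates; whether segregation actually comes out ahead in the arithmetic is sensitive to the relative magnitudes and the curvature, not guaranteed by the rule alone:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (assumed parameter estimates)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

extra_cost, discount = -1500, 150  # the price rise and the add-on saving

segregated = v(extra_cost) + v(discount)  # 'silver lining': book the $150 apart
integrated = v(extra_cost + discount)     # fold the discount into the larger loss

print(segregated, integrated)
```

With these particular parameter values the integrated sum happens to come out less negative, a reminder that the silver-lining preference is predicted when the gain is small relative to the near-linear stretch of the loss curve, and that fitted parameters need not reproduce every behavioural tendency.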

Richard Thaler contributed to the revelation, understanding and assessment of implications of additional cognitive and behavioural phenomena that do not stand in line with rationality in the economic sense. At least some of those phenomena have direct implications in the context of mental accounting.

One of the most widely acknowledged phenomena by now is the endowment effect: the recognition that people value an object (product item) already in their possession more than when they have the option of acquiring the same object. In other words, the monetary compensation David would be willing to accept to give up a good he holds is higher than the amount he would agree to pay to acquire it; people fundamentally find it difficult to give up something they own or are endowed with (no matter how they originally obtained it). The effect has been most famously demonstrated with mugs, but to generalise it was also tested with other items, such as pens. It may well creep into consumers' considerations when they try to sell much more expensive property, like a car or an apartment, beyond the aim of making a financial gain. In his latest book on behavioural economics, ‘Misbehaving’, Thaler provides a friendly explanation, with a graphic illustration, of why fewer exchange transactions occur between individuals who receive a mug and those who do not than economic theory predicts, due to the endowment effect (3).

Another important issue of interest to Thaler is fairness, such as when it is fair or acceptable to charge consumers a higher price for an object that is in short supply or hard to obtain (e.g., shovels on the morning after a snow storm). Notably, the perception of “fairness” may be moderated depending on whether the rise in price is framed as a reduction in gain (e.g., a discount of $200 from list price being cancelled for a car in short supply) or as an actual loss (e.g., an explicit increase of $200 above the list price); the change in actual price is more likely to be perceived as acceptable in the former case than in the latter (4). He further investigated fairness games (e.g., Dictator, Punishment and Ultimatum). Additional noteworthy topics he studied are susceptibility to sunk costs and self-control.

  • More topics studied by Thaler can be traced by browsing his long list of papers over the years since the 1970s, and perhaps more leisurely through his illuminating book: “Misbehaving: The Making of Behavioural Economics” (2015-16).

The tactics of nudging, as part of choice architecture, are based on lessons from the anomalies and biases in consumers' procedures of judgement and decision-making studied by Thaler himself and others in behavioural economics. Thaler and Sunstein looked for ways to guide or lead consumers to make better choices for their own good (health, wealth and happiness) without attempting to reform or alter their rooted modes of thinking and behaviour, which most probably would be doomed to failure. Their clever idea was to work within the boundaries of human behaviour to modify it just enough, and in a predictable way, to put consumers on a better track to a choice decision. Nudging could mean diverting a consumer from his or her routine way of making a decision to arrive at a different, expectedly better, choice outcome. It may often involve taking a consumer out of his or her ‘comfort zone’. Critically, however, Thaler and Sunstein stipulated in their book ‘Nudge’ that: “To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates“. Accordingly, nudging techniques should not impose on consumers the choice of any designated or recommended options (5).

Six categories of nudging techniques are proposed: (1) defaults; (2) expect errors; (3) give feedback; (4) understanding “mappings”; (5) structure complex choices; and (6) incentives. In each of these techniques, the intention is to allow policy makers to direct consumers to choices that improve the consumers' state. Yet the approach they advocate, ‘libertarian paternalism’, is not received without contention: while libertarian, in that no choice is coerced, a question remains as to what gives an agency or policy maker the wisdom and the right to determine which options leave consumers better off (e.g., health plans, saving and investment programmes). Thaler and Sunstein discuss the implementation of nudging mostly in the context of public policy (i.e., by government agencies), but these techniques are applicable just as well to plans and policies of private agencies or companies (e.g., banks, telecom service providers, retailers in their physical and online stores). Nevertheless, public agencies, and even more so business companies, should devise and apply any measures of nudging to help consumers choose plans that fit them and leave them better off, not to manipulate consumers or take advantage of their human errors and biases in judgement and decision-making.

Richard Thaler reviews and explains in his book “Misbehaving” the phenomena and issues he has studied in behavioural economics through the story of his rich research career; it is an interesting, lucid and compelling story, told candidly stage by stage. Most conspicuously, the story also reflects the obstacles and resistance that behavioural economists faced for at least 25-30 years.

Congratulations to Professor Richard Thaler, and to the field of behavioural economics, to which he has contributed so substantially, in theory and in application.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Toward a Positive Theory of Consumer Choice; Richard H. Thaler, 1980/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 15: pp. 269-287], Cambridge University Press. (Originally published in Journal of Economic Behaviour and Organization.)

(2) Mental Accounting and Consumer Choice; Richard H. Thaler, 1985; Marketing Science, 4 (3), pp. 199-214.

(3) Misbehaving: The Making of Behavioural Economics; Richard H. Thaler, 2016; UK: Penguin Books (paperback).

(4) Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias; Daniel Kahneman, Jack L. Knetsch, & Richard H. Thaler, 1991/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 8: pp. 159-170], Cambridge University Press. (Originally published in Journal of Economic Perspectives).

(5) Nudge: Improving Decisions About Health, Wealth, and Happiness; Richard H. Thaler and Cass R. Sunstein, 2009; UK: Penguin Books (updated edition).

Read Full Post »

Older Posts »