Posts Tagged ‘Consumers’

A classic view of decision-making holds that attention serves foremost to acquire the information most relevant and important for choosing between alternatives. In this view, the role of attention is largely a passive one. However, an alternative view, gaining traction in recent years largely thanks to eye-tracking research, argues that attention plays a more active role in decision processes, influencing the construction of decisions.

This is a key message delivered by Orquin and Mueller Loose (2013) in their review of the role of attention in decision-making, as learnt from tracking eye movements and the resulting fixations [1]. The approach taken by the researchers, however, is somewhat unusual: they do not confine themselves to the domain of decision-making; instead, they start their review and analysis of evidence from theories or models of tasks similar or related to decision-making (e.g., perception, information processing, visual search, working memory, top-down and bottom-up processes, problem solving). They then consider how the functions of attention in such tasks may carry over to, or be expressed in, decision processes.

Furthermore, Orquin and Mueller Loose examine the extent to which the evidence coincides with four alternative theories and associated models of decision-making (i.e., whether empirical evidence substantiates or refutes assumptions or conclusions in each theory). They review evidence from previous research on similar or related tasks that could also be traced specifically in decision tasks, based on eye tracking in decision-making research, and evaluate this evidence in the context of the alternative decision-making theories.

The theories and related models considered are: (1) rational models; (2) bounded rationality models; (3) evidence accumulation models (e.g., the attention drift diffusion model [aDDM] posits that a decision-maker accumulates evidence in favour of the alternative being fixated upon at a given time); and (4) parallel constraint satisfaction models (a type of dual process, neural network model based on the conception of System 1’s fast and intuitive thinking [first stage] and System 2’s slow and deliberate thinking [second stage]). Rational models as well as bounded rationality models more explicitly contend that the role of attention is simply to capture the information needed for making a decision. ‘Strong’ rational models hold that all relevant, available information about choice alternatives would be attended to and taken into account, whereas ‘relaxed’ rational models allow for the possibility of nonattendance to some of the information (e.g., attributes or object [product] features). Bounded rationality models suggest that information is acquired just as required by the decision rules applied. The two other categories of models are more flexible in regard to how information is acquired and used, and its effect on the decision process and outcome. However, the authors argue that all four theories are deficient to a greater or lesser degree in their treatment of the role and function of attention in decision processes, with at least some of their assumptions rejected by the evidence evaluated.
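To make the aDDM idea more concrete, here is a minimal Python sketch of fixation-weighted evidence accumulation in the spirit of that model; the parameter values, the fixation-switching rule and the noise level are illustrative assumptions of mine, not taken from the review or from any published model fit:

    import random

    def addm_style_choice(value_left, value_right, theta=0.3, d=0.002,
                          noise=0.02, threshold=1.0, switch_prob=0.005):
        """One simulated trial: evidence drifts towards the currently fixated
        alternative, discounting the unfixated one by theta, until a boundary is hit.
        All parameter values are illustrative, not estimates."""
        rdv = 0.0                                   # relative decision value (left minus right)
        fixated = random.choice(["left", "right"])  # initial fixation chosen at random
        while abs(rdv) < threshold:
            if fixated == "left":
                drift = d * (value_left - theta * value_right)
            else:
                drift = d * (theta * value_left - value_right)
            rdv += drift + random.gauss(0.0, noise)
            if random.random() < switch_prob:       # occasionally shift gaze to the other option
                fixated = "right" if fixated == "left" else "left"
        return "left" if rdv > 0 else "right"

    # With a modest value advantage for 'left', it wins most simulated trials;
    # under this kind of model, extra fixation time on an option also tilts the drift in its favour.
    choices = [addm_style_choice(5.0, 4.0) for _ in range(1000)]
    print(choices.count("left") / len(choices))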

Selected insights drawn from the review of Orquin and Mueller Loose are presented briefly here to shed light on the significance of attention in consumer decision-making.

A crucial question in decision-making is how information enters the decision process and is being utilised in reaching a choice decision: information may be acquired through attention guided by a top-down (goal-driven) process, yet information may also be captured by a bottom-up (stimulus-based) attentional process. The entanglement of both types of processes when making a decision is a prime aspect in this domain and has multiple implications. A more efficient selection process may be driven by greater experience with a task (e.g., more important information cues have a higher probability of being fixated on) and increased expertise in comprehension of visualisations (e.g., more fixations to relevant areas, and inversely fewer fixations to irrelevant areas, requiring shorter fixation durations, and longer saccades [‘jumps’ between more distant elements of information in a scene]). The interaction between bottom-up and top-down processing can amplify attention capture and improve the visual acuity of objects perceived. Bottom-up attention in particular is likely to be influenced by the saliency of a visual stimulus; however, it may not take effect when the task demands on attention are high, wherein priority is given to top-down directives for attention. Decision-making research has shown that visually salient alternatives or attributes are more likely to capture attention and furthermore affect the decision in their favour.

An interplay occurs between working memory and ‘instant’ attention: As the load of information fixated becomes larger, more elements are passed to working memory, and information is accessed from there for processing; however, as the strain on working memory increases, consumers turn to re-fixating information elements and consider them instantly or just-in-time (i.e., fixations are thus used as external memory space). This type of interplay has been identified in tasks of problem solving. Toggling between working memory and fixations or re-fixations in decision tasks can be traced, for instance, in alternative comparisons. Greater demands imposed by information complexity and decision difficulty (due to greater similarity between alternatives) may require greater effort (operations) in acquiring and processing information, yet the process may be shortened on the other hand through learning.

  • Another area with interesting implications is the processing of visual objects: previous research has shown that visual objects are not encoded as complete representations (e.g., naturalistic product images) and that the binding of features is highly selective. Hence, encoding of particular features during an object-stimulus fixation may be goal-driven, and a re-fixation may be employed to refer just-in-time to specific object [product] features as needed in a decision task, thus saving on working memory capacity.

Consumers have a tendency to develop a bias during a decision task towards a favoured alternative. This alternative would get more fixations, and there is also a greater likelihood for the last alternative fixated to be the one chosen (put differently, consumers are likely to re-affirm the choice of their favourite alternative by re-fixating it just before making the decision). A desired or favoured attribute can also benefit from a similar effect by receiving more frequent attention (i.e., fixations). The authors point, however, to a difficulty in confirming evidence accumulation models: whether the greater likelihood of a more fixated alternative being chosen is due to its higher utility or to greater exposure to it. They suggest a ‘soft’ model version in support of a greater effect of extended mere exposure leading to choice of an alternative. They add that a down-stream effect of attention from perception onto choice through a bottom-up process may play a gatekeeping role for the alternatives entering a consideration set. It is noted that a down-stream effect, arising from a bottom-up process, is clearly distinguishable from a utility effect, since the former is stimulus-driven and the latter is goal-driven.

Consistent with bounded rationality theory, heuristics shape patterns of attention, directed by the information that a heuristic calls for (e.g., by alternative or by attribute). Yet, eye-tracking studies conducted to trace the progression of decision processes could not corroborate the patterns of heuristic use proposed in the literature. More formally, studies failed to substantiate the assumption that the heuristics in use can be inferred from the patterns of attention recorded. Transitions of consumers between alternative-wise and attribute-wise rules during a decision task make inferences especially difficult. Not only do decision rules influence what information is attended to; information cues encountered during the decision process can also modify the course of the decision strategy applied — consider the potential effect that salient stimuli captured unexpectedly in a bottom-up manner can have on the progression of the decision strategy.

In summary, regarding the decision-making theories, Orquin and Mueller Loose conclude: (a) firmer support for the relaxed rational model over the strong model (nonattendance is linked to down-stream effects); (b) a two-way relationship between decision rules and attention, where both top-down and bottom-up processes drive attention; (c) the chosen alternative has a higher likelihood of fixations during the decision task and also of being the last alternative fixated — they find confirmation for a choice bias but offer a different interpretation of the function of the evidence accumulated; (d) an advantage for the favoured alternatives or most important attributes in receiving greater attention, and an advantage for salient alternatives, which receive more attention and are more likely to be chosen (concerning dual process parallel constraint satisfaction models).

Following the review, I offer a few final comments below:

Orquin and Mueller Loose contribute an important and interesting perspective in the projection of the role of [visual] attention from similar or related tasks onto decision-making and choice. Moreover, relevance is increased because elements of the similar tasks are embedded in decision-making tasks. Nevertheless, we still need more research within the domain because there could be aspects specific or unique to decision-making (e.g., objectives or goals, structure and context) that should be specified. Insofar as attention is concerned, this call is in alignment with the conclusions of the authors. Furthermore, such research has to reflect real-world situations and locations where consumers practically make decisions.


In retail stores, consider for example the research by Chandon, Hutchinson, Bradlow, and Young (2009) on the trade-off between visual lift (stimulus-based) and brand equity (memory-based); this research combined eye tracking with scanner purchase data [2]. However, it is also worth looking into the alternative approach of video tracking used by Hui, Huang, Suher, and Inman (2013) in their investigation of the relations between planned and unplanned considerations and actual purchases (video tracking was applied in parallel with path tracking) [3].

For tracing decision processes more generally, refer for example to a review and experiment with eye tracking (choice bias) by Glaholt and Reingold (2011) [4], but consider nonetheless the more critical view presented by Reisen, Hoffrage, and Mast (2008) following their comparison of multiple methods of interactive process tracing (IAPT) [5]. Reisen and his colleagues were less convinced that tracking eye movements was superior to tracking mouse movements (MouseLab-Web) for identifying decision strategies while consumers are acquiring information (they warn of superfluous eye re-fixations and random, meaningless fixations that occur while people are contemplating the options in their minds).


 

It should be noted that a large part of the research in this field using eye-tracking measurement is applied to concentrated displays of information on alternatives and their attributes. The most frequent and familiar format is the information matrix (or board), although in reality we may also encounter other graphic formats such as networks, trees, layered wheels, and more artistic diagram illustrations. Indeed, concentrated displays can be found in shelf displays in physical stores and also in screen displays online and in mobile apps (e.g., retailers’ online stores, manufacturers’ websites, comparison websites). However, on many occasions of decision tasks (e.g., durables, more expensive products), consumers acquire information through multiple sessions while constructing their decisions. That is, the decision process extends over time. In each session consumers may keep some information elements or cues for later processing and integration, or they may execute an interim stage in their decision strategy. If information is eventually integrated, consumers may utilise aids such as paper notes and electronic spreadsheets, but they do not necessarily do so.

Orquin and Mueller Loose refer to effects arising from the spatial dispersion of information elements in a visual display as relevant to eye tracking (i.e., the length of saccades), but these studies do not account for the temporal dispersion of information. Studies may need to bridge data from multiple sessions to accomplish a more comprehensive representation of some decision processes. Smartphones today can help to close this gap somewhat, since they permit shoppers to acquire information in-store while checking additional information from other sources on their phones — mobile eye-tracking devices may be used to capture this link.

Finally, eye tracking provides researchers with evidence about attention to stimuli and information cues, but it cannot tell them directly about other dimensions such as the meaning of the information and its valence. The importance of information to consumers can be implied from measures such as the frequency and duration of fixations, but other methods are needed to reveal additional dimensions, especially from the conscious perspective of consumers (vis-à-vis unconscious biometric techniques such as coding of facial expressions). An explicit method (Visual Impression Metrics) can be used, for example, to elicit statements from consumers about which areas and objects in a visual display, observed freely, they like or dislike (or are neutral about); applied in combination with eye tracking, it would make it possible to attach valence to the areas and objects consumers attend to (unconsciously) in a single session with no further probing.
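For readers less familiar with how such measures are derived, the sketch below (in Python, with a made-up fixation log and hypothetical area-of-interest labels) shows how fixation frequency and total dwell time per area of interest are typically aggregated; commercial eye-tracking software exports far richer data, so this only illustrates the principle:

    from collections import defaultdict

    # Hypothetical fixation log: (area of interest, fixation duration in milliseconds)
    fixations = [
        ("brand_logo", 220), ("price_tag", 180), ("brand_logo", 260),
        ("ingredients", 450), ("price_tag", 200), ("brand_logo", 190),
    ]

    counts = defaultdict(int)   # how many times each AOI was fixated
    dwell = defaultdict(int)    # total fixation duration per AOI (ms)
    for aoi, duration in fixations:
        counts[aoi] += 1
        dwell[aoi] += duration

    for aoi in counts:
        print(f"{aoi}: {counts[aoi]} fixations, {dwell[aoi]} ms total dwell time")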

The review of Orquin and Mueller Loose opens our eyes to the versatile ways in which [visual] attention may function during decision tasks: top-down and bottom-up processes working in tandem, toggling between fixations and memory, a two-way relation between decision strategies and visual attention, choice bias, and more. But foremost, we may learn from this review the dynamics of the role of attention during consumer decision-making.

Ron Ventura, Ph.D. (Marketing)

References: 

[1] Attention and Choice: A Review of Eye Movements in Decision Making; Jacob L. Orquin and Simone Mueller Loose, 2013; Acta Psychologica, 144, pp. 190-206

[2] Does In-Store Marketing Work? Effects of the Number and Position of Shelf Facings on Brand Attention and Evaluation at the Point of Purchase; Pierre Chandon, J. Wesley Hutchinson, Eric T. Bradlow, & Scott H. Young, 2009; Journal of Marketing, 73 (November), pp. 1-17

[3] Deconstructing the “First Moment of Truth”: Understanding Unplanned Consideration and Purchase Conversion Using In-Store Video Tracking; Sam K. Hui, Yanliu Huang, Jacob Suher, & J. Jeffrey Inman, 2013; Journal of Marketing Research, 50 (August), pp. 445-462.

[4] Eye Movement Monitoring as a Process Tracing Methodology in Decision Making Research; Mackenzie G. Glaholt and Eyal M. Reingold, 2011; Journal of Neuroscience, Psychology and Economics, 4 (2), pp. 125-146

[5] Identifying Decision Strategies in a Consumer Choice Situation; Nils Reisen, Ulrich Hoffrage, and Fred W. Mast, 2008; Judgment and Decision Making, 3 (8), pp. 641-658


Health insurance, financial investments, telecom service plans — consumers frequently find it harder to make choice decisions in domains such as these. Such domains tend to exhibit greater complexity: many technical details to account for, multiple options that are difficult to differentiate and choose between, and unclear consequences. Among products, this applies in particular to those involving digital technology and computer-based software, which some consumers are likely to find more cumbersome to navigate and operate. When consumers are struggling to make any choice, they develop a stronger tendency to delay the decision or avoid it altogether. They need assistance or guidance in making their way towards a choice that more closely matches their needs or goals and preferences.

Handel and Schwartzstein (2018) distinguish between two types of mechanism that obstruct or interfere with making rational decisions: frictions and mental gaps.

Frictions reflect costs in acquiring and processing information. They are likely to occur in the earlier stages of a decision process, when consumers encounter difficulties in searching for and sorting through relevant information (e.g., what options are more suitable, what attributes and values to look at) and have to invest time and effort in tracing the information and organising it. Furthermore, frictions may include cases where consumers fail to see in advance or anticipate the benefits of an available alternative (e.g., consider the difficulty older people may have in realising the benefits they could gain from smartphones).

Mental gaps are likely to make an impact at a more advanced stage: the consumer already has the relevant information set in front of him or her but misinterprets its meaning or does not correctly understand the implications and consequences of a given option (e.g., failing to map correctly the relation between insurance premium and coverage). Mental gaps pertain to “psychological distortions” that may generally occur during information gathering, attention and processing, but their significance is primarily in the comprehension of the information obtained. In summary, it is “a gap between what people think and what they should rationally think given costs.”

In practice, it is difficult to identify which type of mechanism is acting as an obstacle on consumers’ way to a rational decision. Research techniques are not necessarily successful in separating a friction from a mental gap as the source of misinformed choices (e.g., choosing a dominated option instead of a dominating one apparent to the rational decision-maker). Nevertheless, Handel and Schwartzstein are critical of research practices that focus on a single mechanism and ignore alternative explanations. In their view, disregard for the distinction between mechanisms can lead to spurious conclusions. They suggest using counterfactual approaches that test a certain mechanism, or a combination of explanations, and then argue against it with a ‘better’ prospective mechanism explanation. They also refer to survey-based and experimental research methods for distinguishing frictions from mental gaps. The aim of these methods is to track the sources of misinformed decisions.

Consumers often run into difficulty with financial investments and saving plans. In some countries, policy makers are challenged with driving consumers-employees towards saving for retirement during their working years. Persuasion per se turns out to be ineffective, and other approaches for directing or nudging consumers into saving are designed and implemented (e.g., encouraging people to “roll into saving” through the ‘Save More Tomorrow’ scheme of Thaler and Benartzi).

Confronting employees with a long list of saving plans or pension funds may deter them from duly attending to the alternatives in order to make a decision, and even risks their abandoning the task altogether. When consumers-employees have a hard time recognising differences between the plans or funds (e.g., terms of deposit, assets invested in, returns), they are likely to turn to heuristics that crudely cut through the list. Crucially, even if information on key parameters is available for each option, decision-makers may use only a small part of it. Similar difficulties in choosing between options may arise in financial investments, for instance when choosing between equity and index funds or bond funds. Consumers may be assisted by a suggested default plan (preferably, a personally customised recommendation) or by sorting and grouping the proposed plans and funds into classes (e.g., by risk level or time horizon). However, it should be acknowledged that consumer responses as described above may harbour frictions as well as mental gaps, and it could help to identify which mechanism carries the greater weight in the decision process.

A key issue with health insurance concerns the mapping of the relationship between an insurance premium and the level of deductibles or cost-sharing between the insurer and the insured. For example, consumers fall into the trap of accepting an insurance policy offered with a lower premium while not noticing the higher deductible they would have to pay in a future claim. An additional issue consumers have to attend to is the coverage provided for different medical procedures such as treatments and surgeries (given also the deductible level or rate). Consumers may stumble in their decision process while studying health insurance plans as well as while evaluating them.

  • Public HMOs (‘Kupot Holim’) in Israel offer expanded and premium health insurance plans as supplementary to what consumers are entitled to by the State Health Insurance Act. Yet in recent years insurance companies have been prompting consumers to get an additional private health insurance plan from them — their argument is that, following changes over the years in the HMOs’ plans and reforms by the government, those plans offer inadequate coverage, or none at all, for more expensive treatments and surgeries. The coverage of private insurance plans is indeed more generous, but so are the premiums, which are much higher and affordable to many only if paid for by the employer.

In addressing other aspects of healthcare, Handel and Schwartzstein raise the issue of consumer preference for a branded medication (non-prescription) over an equivalent and less costly generic or store-branded medication (e.g., buying Advil rather than a store-branded medication that contains the same active ingredient [ibuprofen] for pain relief as Advil). Another vital issue concerns the tendency of patients to underweight the benefits of treatment by medications prescribed to them, and consequently not to take their medications as instructed by their physicians (e.g., patients with a heart condition, especially after a heart attack, who do not adhere as required to the medication regime administered to them).

Customers repeatedly get into feuds with their telecom service providers — mobile and landline phone communication, TV and Internet. Customers of mobile (‘cellular’) communications, for example, often complain that the service plan they had agreed to did not match their actual usage patterns, or that they did not properly understand the terms of the service contract they signed. As a result, they have to pay excessive charges (e.g., for minutes beyond quota), or they are paying superfluous fixed costs.

With the advancement of technology, the structure of mobile service plans has changed several times in the past twenty years. Mobile telecom companies today usually offer ‘global’ plans for smartphones that include, first of all, larger volumes of data (5GB, 10GB, 15GB etc.), plus practically unlimited outgoing call minutes and SMSs. While such plans appeal at first, customers end up paying a fixed, all-inclusive monthly payment that is too high relative to the traffic volume they actually use. On the one hand, customers refrain from keeping track of their usage patterns because it is costly (a friction). On the other hand, customers fail to estimate the actual usage that would match the plan assigned to them (a mental gap). In fact, information on actual usage volumes is more available now (e.g., on invoices) but is not always easily accessible (e.g., more detailed usage patterns). It should be noted, however, that companies are not quick to replace a plan, not to mention voluntarily notifying customers of a mismatch that calls for upgrading or downgrading the plan.
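A simple back-of-the-envelope calculation makes the mismatch tangible (the prices and volumes below are hypothetical, chosen only for illustration): the effective price per gigabyte actually consumed can be several times the nominal price per gigabyte of the plan.

    # Hypothetical mobile plan versus actual usage (all figures illustrative)
    plan_fee = 60.0        # fixed monthly payment for the plan
    plan_data_gb = 15.0    # data volume included in the plan
    actual_use_gb = 4.0    # data volume the customer actually uses

    nominal_price_per_gb = plan_fee / plan_data_gb      # 4.00 per GB if the allowance were fully used
    effective_price_per_gb = plan_fee / actual_use_gb   # 15.00 per GB actually consumed

    print(f"nominal: {nominal_price_per_gb:.2f} per GB, effective: {effective_price_per_gb:.2f} per GB")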

A final example is dedicated here to assisted-living housing compounds for seniors. As people enter their retirement years (e.g., past 70), they may look for comfortable accommodation that will relieve them of the worries and troubles of maintaining their home apartment or house and will also provide them with a safe and supportive environment. Assisted-living housing compounds offer residence units, usually of one or two rooms of moderate space, with an envelope of services: maintenance, medical supervision and aid, and social and recreational activities (e.g., sports, games, course lectures on various topics). The terms for entering assisted-living housing can nevertheless be consequential and demanding. The costs involve mainly a leasing payment for the chosen residence and monthly maintenance fee payments.

Making the decision can be stressful and confusing. First, many elderly people cannot afford to take up residence in such housing projects without selling their current home or possibly renting it out (e.g., to cover a loan). In addition, the value of the residence depreciates over the years. Second, the maintenance fee is usually much higher than normal costs of living at home. Hence residents may need generous savings plus rental income in order to finance the luxury and comfort of assisted living. Beyond the frictions that are likely to occur while looking for an appropriate and affordable housing compound, prospective residents are highly likely to be affected by mental gaps in correctly understanding the consequences of moving into assisted living (and even their adult children may find the decision task challenging).

Methods of intervention from different approaches attempt to lead consumers to make decisions that better match their needs and provide them greater benefits or value. Handel and Schwartzstein distinguish between allocation policies, which aim to direct or guide consumers to a recommended choice without looking into the reasons or sources of misinformed decisions (e.g., nudging techniques), and mechanism policies, which attempt to resolve a misguided or misinformed choice decision by tackling the specific reason causing it, such as a friction or a mental gap. From a perspective of welfare economics, the goal of an intervention policy of either type is to narrow the wedge between the value consumers obtain from actual choices subject to frictions and mental gaps, and the value obtainable from a choice free of frictions and mental gaps (i.e., assuming a rational decision). (Technical note: the wedge is depicted as a gap in value between a ‘demand curve’ and a ‘welfare curve’, respectively.)
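In rough notation (mine, not the paper’s exact formulation), if x̂ denotes the option a consumer actually chooses under frictions and mental gaps, x* the option that would be chosen free of them, and v(·) the value delivered, then the quantity an intervention of either type tries to shrink is:

    \text{wedge} \;=\; v(x^{*}) \;-\; v(\hat{x}) \;\ge\; 0

Graphically, as the technical note above indicates, this appears as the vertical gap in value between the ‘demand curve’ (willingness to pay revealed by actual choices) and the ‘welfare curve’ (value actually delivered).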

Policies and methods of either approach have their advantages and disadvantages. An allocation policy has the potential for greater impact; that is, it can get farther in closing the welfare wedge. Yet it may be too blunt and excessive: while creating a welfare gain for some consumers, it may produce an undesirable welfare loss for consumers for whom the intervention is unfitting. Without knowing the source of the error consumers make, it is argued, a nudging-type method (e.g., simplifying the structure of the information display of options) could be insufficient or inappropriate for fixing the real consumer mistake. A particular fault of allocation policies, according to the authors, is that they ignore heterogeneity in consumer preferences. Furthermore, and perhaps as a consequence, such policies overlook the presence of informed consumers, who may contribute by leading to the introduction of far better products at lower prices.

Mechanism policies can in principle be more precise and effective in targeting specific causes of consumers’ mistakes, and hence in correcting the costs of misinformed decisions without generating unnecessary losses to some consumers. The impact could be more limited in magnitude, yet it would be more measured. But achieving this outcome in practice, the authors acknowledge, can be difficult and complicated, requiring the application of costly research methods or complex modelling approaches. They suggest that “[as] data depth and scope improve, empirically disentangling mechanisms in a given context will become increasingly viable”.

The analysis by Handel and Schwartzstein of the effects of intervention policies — mechanism versus allocation — may come across as too theoretical, building on familiar concepts of economic theory and models, and furthermore difficult and complicated to implement. Importantly, however, the authors open a door for us to a wider view of the sources of mistakes consumers make in decision-making and of the differences between approaches aimed at improving the outcomes of their decisions. First, they clarify a distinction between the mechanisms of frictions and mental gaps. Second, they contrast allocation policies (e.g., nudging) with mechanism policies, which they advocate. Third, for those less accustomed to the concepts of economic analysis, they demonstrate their ideas with practical real-world examples. Handel and Schwartzstein present a perspective well worth learning from.

Ron Ventura, Ph.D. (Marketing)

Reference:

Frictions or Mental Gaps: What’s Behind the Information We (Don’t) Use and When Do We Care?; Benjamin Handel and Joshua Schwartzstein, 2018; Journal of Economic Perspectives, Vol. 32 (1 – Winter), pp. 155-178. (doi: 10.1257/jep.32.1.155)

 

 


Consumer purchases from Internet retailing websites continue to expand, and their share of total retail sales increases. Yet there is no real reason to declare the demise of physical, bricks-and-mortar stores and shops any time soon. Online purchases from e-stores (including through apps) indeed pose a pressing challenge to many physical stores, but the latter still hold a solid and dominant majority share of retail sales. Nonetheless, owners of physical stores will have to make changes to their mission and approach to retailing in order to respond effectively and successfully to the challenges of electronic retailing (‘e-tailing’).

The share of sales revenues from online retailing varies across categories (e.g., from groceries to electronics), yet its overall share of total retailing revenues still floats around 12%-15% on average; there is also important variation between countries. Tensions are high particularly because of the threat from dominant e-tailers such as Amazon and Alibaba, who grew their businesses in the virtual online environment. However, retailers do not have to choose to be either in the physical domain or the virtual domain: many large and even medium bricks-and-mortar retailers already operate in parallel through their physical stores and through Internet and mobile channels. Moreover, Amazon, the master of Western e-tailing, is stepping into the physical world with the establishment of its Amazon Go food stores, its venture into physical bookstores in selected US locations, and notably the acquisition of the food retail chain Whole Foods — what better testimony to the recognition that physical stores are still needed. All these observations should tell us that: (1) the lines between physical and virtual (electronic) retailing are blurred and the domains are not mutually exclusive; (2) it is a matter of linking the domains, where one can operate as an extension of the other (regardless of which is the domain of origin); and (3) the domains are linked primarily by importing technology powered with data into the physical store’s space.

Technology alone, however, is not enough to resolve the challenges facing physical stores. Focusing on technology is like putting the cart before the horse. The true and crucial question is: what will the consumers of the coming future be looking for in stores? This is important because consumers, especially the younger generations born after 1980, still have an interest in shopping in bricks-and-mortar stores, but they could be looking for something different from past decades, especially given the digital options now available to them. The answers will have to come through rethinking and modifying the mission and strategy set for physical stores. The direction that seems most compelling for the mission is to shift emphasis from the merchandise offered in a store to the kind of experience offered in the store. The strategy may involve reconsideration and new planning of: (a) the product variety and volume of merchandise made available in the store; (b) interior design and visual merchandising; (c) the scope and quality of service; and (d) the technologies applied in the store, all tailored to the convenience and pleasure of the shoppers.

This article will focus primarily on aspects of store design, including interior design and decoration, layout, and visual merchandising (i.e., the visual display of products); together with additional sensory elements (e.g., lighting, music, texture, scent), they shape the atmosphere in the store or shop. Yet it should be noted that the four strategy components suggested above are interrelated and influence each other in creating the kind of experience a retailer desires the customers-shoppers to have while in-store.

Shopping experiences in a store rely essentially on the emotions the store evokes in the consumers-shoppers. Beyond the sensory and cognitive reactions of shoppers to the interior scene of the store, the positive and pleasant emotions the shoppers feel will most likely be those that motivate them to stay longer and choose more products to purchase (further desired behaviours may include recommendations to friends and posting photos from the store on social media). Prior and close enough to consumption itself, the personal shopping and purchasing experience may evoke a range of positive emotions such as joy, optimism, love (non-romantic), peacefulness, and surprise; of course there are also potential negative emotions that retailers would wish to reduce (e.g., anger, worry, sadness) [*].

The need for a shift in emphasis in physical stores is well stated by Lara Marrero, a strategy director at the design firm Gensler: “It used to be a place where people bought stuff. Now it is a state where a person experiences a brand and its offerings”. Marrero, who leads the firm’s global retail practice, predicts a future change in the mentality of shoppers from ‘grab and go’ to ‘play and stay’ (“Retail 2018: Trends and Predictions”, Retail Focus, 15 December 2017). This predicted shift is still inconsistent with the current retail interpretation of linking the digital and physical domains through schemes of ‘click-and-collect’ online orders at a physical store. Additionally, consumers nowadays conduct more research online on products they are interested in before coming to a store: the question is whether a retailer should be satisfied with letting the consumer just ask for his or her preferred product at the store, or encourage the consumer-shopper to engage and interact more in-store, whether with assistance from human staff or digital utilities, before making a purchase — the push may have to come first from the consumers. Marrero further notes the social function of stores: retail environments become a physical meeting point for consumers to share brand experiences. Retailers will have to allow sufficient space for this in the store.

In order to generate new forms of shopper experiences the setting of a store’s scene also has to change and adapt to the kind of experience one seeks to create. New styles and patterns of in-store design are revealed through photo images of retail design projects, and the stories the images accompany, on websites of design magazines (e.g., VMSD of the US, Retail Focus of the UK). They demonstrate changes in the designing approach to the interior environment of stores and shops.

A striking aspect in numerous design exemplars is the tendency to create more spacious store scenes. This does not necessarily mean that the area of the stores is larger, but that the store’s layout and furnishing are organised to make it feel more spacious, for example by making it look lighter and allowing shoppers to move around more easily. Additionally, it implies ‘loading’ the areas of the store accessible to customers with less merchandise. First, merchandise would be displayed mostly on fixtures attached to walls around the perimeter of the store, and even then it should not look too crowded (i.e., in appreciation that oftentimes ‘less is more’ for consumers). Second, fewer desks and other display fixtures are positioned across the floor, to leave enough room for shoppers to walk around conveniently (and possibly feel more ‘free’). In fashion stores, for instance, this would also apply to ‘islands’ of dressing displays. Third, desks should not be packed with merchandise; furthermore, at least one desk should be left free of merchandise — leaving enough surface for shoppers and sellers to present and look at merchandise and to converse about the options. In some cases, it may allow shoppers to socialise and consult among themselves around a desk at the store (e.g., inspired by Apple stores). Opportunities to socialise can be enhanced in larger stores by allocating space for a coffee and wine bar, for instance, which may also serve sandwiches, pastries and additional drinks. Stores would be designed to look and feel more pleasant and enjoyable for consumers-shoppers to hang around in, contemplate their options and make purchase decisions.

  • Large stores that spread over multiple floors, with facades facing the street, may fit those facades with glass sheets; in order not to block natural daylight from entering the store, they would place desks, mobile hangers or other low shelf fixtures along the windows.

(Image: Modissa fashion store set for Christmas)

In the new-era store, not all the merchandise the store may offer to sell needs to be displayed in the ‘selling areas’ accessible to shoppers. Retailers may have to retreat from the decades-long paradigm that everything on display is the inventory, and vice versa. It is worth considering: first, some merchandise can be displayed as video on screens, which also adds to the ‘show’ in the store; second, shoppers can use digital catalogues in the store to find items currently not on display — such items may still be available in stock on the premises, or they may be ordered within 24 hours. Furthermore, customers may be able to coordinate online or through an app with a store near them to see certain products at a set time; up-to-date analyses of page visits and sales on a retailer’s online store can tell which products are most popular, subsequently helping to ensure that the physical stores keep extra items of them in stock on the premises.
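As a minimal illustration of the kind of analysis implied (in Python, with made-up visit and sales figures and an assumed weighting; a real retailer would draw on its web-analytics and transaction systems), popular products could be flagged for extra in-store stock roughly like this:

    # Hypothetical weekly online data per product: page visits and units sold
    online_stats = {
        "running_shoes_A": {"visits": 5200, "sales": 310},
        "running_shoes_B": {"visits": 1400, "sales": 45},
        "rain_jacket_C":   {"visits": 3900, "sales": 260},
    }

    SALES_WEIGHT = 10   # assumption: a sale signals popularity more strongly than a visit

    def popularity(stats):
        return stats["visits"] + SALES_WEIGHT * stats["sales"]

    ranked = sorted(online_stats, key=lambda p: popularity(online_stats[p]), reverse=True)
    keep_extra_stock = ranked[:2]   # e.g., the top two products get extra items held in-store
    print(keep_extra_stock)         # ['running_shoes_A', 'rain_jacket_C']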

Here are references to a few exemplars, for illustration, of actual store design projects published on design magazines’ websites:

Burberry, London — The flagship store of the luxury fashion brand Burberry on Regent Street is highlighted both for the use of space in its design and for the employment of digital technology in the store. A large open-space atrium (of a former theatre) occupies the centre of the store (four floors, 3000 sqm), impressive in that Burberry chose to keep it. The digitally integrated store is commended for its fusion of a ‘digital world’ into its bricks-and-mortar environment: a large high-resolution screen plays video in the atrium, synchronised with a hundred digital screens around the store, some 160 iPads (e.g., for finding items in the catalogue that may not be on display), and RFID tags attached to garments (VMSD, 18 December 2012).

Hogan, Milano — The footwear ’boutique’ store (277 sqm in via Montenapoleone) is designed to reflect the brand, “luxury but accessible”. The store’s mission has been described as follows: “Hogan is a lifestyle brand, championing contemporary culture. The store therefore needed to be dynamic, working hard to adapt from retail space to live event or gallery space”. Characteristic of the store: tilted surfaces for display, lying on top of each other like fallen-down domino bricks; and an animated display of patterns by LED lighting behind frosted glass walls — they both reflect movement, the former just symbolically while the latter more dynamically, to “express the dynamism of the city”. The store of Hogan also fosters social activity around its host bar and customization bar (Retail Focus, 15 February 2018).

Black by Dixon’s, Birmingham (UK) — The technology retail concept aspires to make “the geeky more stylish and exciting”. Digital technology is “dressed” in fashionable design, aiming at the more sophisticated Apple-generation (distinctive in the images are the mannequins “sitting” on desks as props, and colour contrasts on a dark background). (VMSD, 24 May 2011.)

Stella McCartney, Old Bond Street, London — The re-established flagship store resides in an 18th century historic-listed building (four floors, 700 sqm). Products such as dresses and handbags are displayed (sampled) across the store in different halls. The design and lighting give a very loose feeling. Refreshingly, the ground floor features an exhibit of black limestones and “carefully selected rocks” from the family’s estate, a piece of nature in-store (Retail Focus, 14 June 2018).

Admittedly, some of the more distinctive and impressive design exemplars belong to up-scale and luxury stores, but they do give direction and ideas for creating different experiences in retail spaces, even if less lavishly. Furthermore, technology can enrich the store and add a dimension of activity in it. Yet it is part of the whole design plan, not necessarily its central pillar, if at all.

Installing digital technology in a store does not mean importing the Internet and the e-store into the physical store. Features of digital technology can be employed in-store in a number of ways, and the use of an online catalogue is just one of them. There is little wisdom in the physical store trying to mimic Internet websites or compete with them. It should find ways, instead, to implement digital technologies that best suit the store’s space and transform the experience of its visiting shoppers.

Moreover, the store owner should identify those aspects that are lacking in the virtual online store and leverage them in the bricks-and-mortar store (e.g., immediacy, non-intermediated interaction with products, sensory stimulation other than visual and audio, feeling fun or relaxed). Accordingly, the store should borrow certain technological amenities that can help to link the domains and make the experience in-store more familiar, convenient, interesting, entertaining or exciting. According to an opinion article in Retail Focus on “The Future of High Street” (Lyndsey Dennis, 25 April 2018): “To draw customers back to brick-and-mortar, [retailers] need to rethink how they use their physical space and store formats. The key is to give customers something they can’t get online, whether that’s information, entertainment, or service”. Advanced technologies such as Virtual Reality (VR) and Augmented Reality (AR) are part of the repertoire increasingly being introduced in high-street stores [e.g., AR applied in the fitting rooms of Burberry’s store, triggered by the RFID tags].

Matt Alderton, writing in ArchDaily, a magazine of architecture and design (25 November 2015), details key technologies and how they are implemented in stores to create new possibilities and enhance shopper experiences. One group of technologies can provide vital data to retailers, which in turn can be applied to interact with shoppers and return useful information to them (e.g., beacons, RFID tags, visible light communication). The second group includes display technologies that may be informative and entertaining to shoppers: for example, VR and AR, touch screens, and media projected on a surface such as a table-top, which thus becomes a touch screen. Alderton clearly sees a consumer need for physical stores; the question is how consumers will want them: “What the data says is that shoppers want to move forward by going back: Like their forebears who visited Harrods, they crave emporiums that are experiential, not transactional, in nature”. (See also the images in this article, as they portray new-fashioned designs in space and layout; notably these stores feel less crowded by merchandise, and some show in-store digital displays.)

These are challenging times for bricks-and-mortar stores. New possibilities are emerging for physical stores to grow and thrive, yet they will have to adapt to the changed shopping and purchasing patterns of consumers and develop new kinds of experiences that appeal to them. It should be a combined effort, with contributions from the interior design of stores and visual merchandising, utilities and amenities based on digital technologies implemented in the store, and the support and assistance of human personnel. The in-store design is especially important in setting the scene — in appearance, comfort and appeal — that will shape shoppers’ experiences. Retailing could evolve as far as into new forms of ‘experiential shopping’.

Ron Ventura, Ph.D. (Marketing)

Reference: [*] Measuring Emotions in the Consumption Experience; Marsha L. Richins, 1997; Journal of Consumer Research, Vol. 24 (September), pp. 127-149.


When evaluating a restaurant, the quality of the food is not like the other factors considered — it has a special status. The same goes just as much for other food establishments like coffee-houses. Customers or patrons may trade off several factors, including the food, service, venue, price and location, yet food quality usually gets a much greater weight than the other attributes, suggesting that the decision process is practically not fully compensatory. The quality of the food, its taste and how much we enjoy it, is a “pre-condition” to dining at a restaurant. However, the balance with other attributes is important; in some cases, failure on those other attributes can be detrimental to the willingness of consumers to return to a restaurant or a coffee-house.

  • Some coffee-houses effectively function as ‘coffee-restaurant’ establishments by serving meals of a variety of food items suitable for every time of day (from eggs, salads and toasts to soups, pasta, hamburgers or chicken-breast schnitzel with side dishes).

Suppose that Dina and Mark, a fictional couple, are dining at a restaurant and find the dishes served to them well prepared, and they enjoy the food’s taste very much. However, they are very unhappy with the sluggish service they get and the inappropriate answers of the waiter, and they feel the atmosphere in the restaurant is not pleasant (e.g., too dark or too noisy). The experience of Dina and Mark can be greatly hampered by factors other than food. How superior should the food be for our diners to be ready to tolerate bad service or a place they do not feel comfortable being in for an hour or two?

On the other hand, Dina and Mark would likely expect the food (e.g., a dish like ‘risotto ai funghi’ [with mushrooms]) to meet a certain gratifying standard (i.e., that the ingredients are genuine, the texture is right, and the dish is overall tasty). If the food is not perceived to be good enough and diners do not enjoy it, there is no point in considering dining at the restaurant at all. But if the food is good though not so special or great, yet the patrons Dina and Mark feel the staff truly welcome them, treat them warmly and cater to sensitivities they may have, they could still be happy to dine at such a restaurant again, and again. When the food is already satisfactory, additional facets of the experience, such as great service and a pleasing ambience, can substantially increase the desirability of a restaurant or coffee-house as a place consumers would like to patronize. We may be looking at a decision process where at first food is a non-compensatory criterion, yet above a certain perceived threshold the balance customers-patrons strike between food and other attributes of their experience becomes more intricate and complex.
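One way to picture such a two-stage process is the short Python sketch below; the threshold, the attribute weights and the ratings are purely illustrative assumptions, not derived from the reviews discussed here. Food acts first as a non-compensatory cut-off, and only restaurants that clear it are compared on a weighted balance of all attributes:

    WEIGHTS = {"food": 0.5, "service": 0.25, "ambience": 0.15, "price": 0.10}  # illustrative weights

    def evaluate_restaurant(ratings, food_threshold=3.5):
        """Two-stage rule: screen out if food falls below the threshold (non-compensatory);
        otherwise score the restaurant by a weighted sum of attributes (compensatory).
        Ratings are on a 1-5 scale; all numbers are assumptions for illustration."""
        if ratings["food"] < food_threshold:
            return None   # rejected regardless of service, ambience or price
        return sum(WEIGHTS[attr] * ratings[attr] for attr in WEIGHTS)

    trattoria = {"food": 4.0, "service": 4.5, "ambience": 4.0, "price": 3.0}
    bistro    = {"food": 3.0, "service": 5.0, "ambience": 5.0, "price": 5.0}   # charming, but weak food

    print(evaluate_restaurant(trattoria))   # about 4.0 -> stays in contention
    print(evaluate_restaurant(bistro))      # None -> screened out despite its other strengths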

Browsing reviews of restaurants that are shared on TripAdvisor’s traveller website can provide helpful clues on how customers-patrons relate to food and additional factors in their appraisals of their experiences at restaurants. Reviews were sampled of Italian and Asian restaurants in Tel-Aviv and London (members-reviewers may be city locals, national and international travellers — examples are quoted anonymously so that reviewers and the specific restaurants they review are not identified by name).

Reviewers most often open by referring to the food they have had at the restaurant; next they may give their assessment of the service they have received, the design and atmosphere, the price or value, and the location of the restaurant. Thus, a review may start by appraising the food as good / great / delicious, and then state that the service was good / nice / efficient. Nonetheless, it is not uncommon for diners-reviewers to open with an assessment of the service they have received at the restaurant. There seems to be a greater propensity to open the review with service when it is superb, but also, conversely, when it is terrible. Occasionally a review will refer first to the atmosphere in the restaurant, which is formed by aspects such as interior design or décor, lighting, music and overall ambience. Atmosphere will appear first, or at least early in the review, particularly when it is superior or inferior.

Additionally, we can distinguish between reviews that are composed of a few short, argument-like statements about the food, service and other attributes, and reviews that tell a story (i.e., a narrative-like review). There are diners-reviewers who go into particular detail about the dishes or items of food they, and possibly their companions, have ordered, and their opinion of the food. Yet reviewers may also describe how they were treated by the serving staff, particularly when they felt exceptionally welcome and cared for, or annoyed and unwanted. Reviews that have a narrative give a stronger impression of the course of the dinner to the reader, who can more easily visualize it.

It seems that when diners-reviewers say the food is ‘good’, they do not say so offhandedly; they do mean that the food is truly good, fresh and tasty. This appraisal should be interpreted as a base threshold for being satisfied with the food. When the food is more than ‘good’, reviewers explicitly express it with adjectives like ‘great’, ‘delicious’, ‘fabulous’ or ‘amazing’. Conversely, descriptions of the food as ‘average’, ‘OK’, and moreover as ‘mediocre’, are certainly not compliments, more likely suggesting the food was barely satisfactory. Unless there was something else especially good about the experience in that restaurant, like its service or venue, the reviewer would probably have little motivation to return. Consider for example a reviewer who said about an Italian restaurant in Tel-Aviv: “The ONLY redeeming factor is, in my opinion, the ambience, which is really cozy and relaxed. Too bad they don’t serve food to match” (capitals in the original, rating: 2 ‘rings’ out of 5). Similarly, a reviewer of an Asian restaurant in London complimented it for its “friendly and attentive” waiting staff, but concluded: “So there were a lot of positives about this place, but I’m afraid the food just wasn’t good quality. It was very bland and boring” (rating: 2 ‘rings’). On the other hand, a review of an Asian restaurant in Tel-Aviv offers the opposite case, wherein the reviewer states “AMAZING food, OUTRAGEOUS service” (title, capitals in the original), and ends with the conclusion “basically terrible service which was definitely the opposite of the wonderful tasty food we were served” — the rating for this restaurant experience: also 2 ‘rings’.

  • A prospective diner who looks for a restaurant to try for the first time may find the choice task confusing and daunting when reviews of the same restaurant are quite the opposite of each other in their content. Still, it usually does not take too long to gauge the ratio of positive to negative reviews given to a restaurant, in addition to the chart of the distribution of ratings it has received.

Service appears to be the second most important factor after food in a restaurant. Patrons want the waiting staff to be friendly and respectful (this of course is a two-way street), to be attentive and not let them feel forgotten, and to be flexible and kind enough to accommodate their personal sensitivities or preferences (e.g., less spicy, nut-free, replace polenta with rice as a side). Less pleasant or efficient service will not necessarily make diners-reviewers reject the restaurant if its food is excellent, but they could drop one grade off its rating (e.g., from 5 to 4). Conversely, when the diners-reviewers are happy with the quality and taste of the food, then also meeting a warm and helpful waitress — or sitting in comfort in a beautifully designed venue — can make the whole experience so much better. Reviewers repeatedly emphasise when, on top of their pleasure in the food, they are impressed by a waiter or waitress who smiled at them, was friendly, attentive and helpful, and made them feel at home. A reviewer of an Italian restaurant in London explains why it is her favourite: “Quite simply, the food is absolutely gorgeous. Wonderful ingredients and very well cooked. But most of all the welcome that we received and service that we got from everyone is great” (rating: 5).

A particular aspect of service is the length of time a customer has to wait, either to be seated at a table or while dining. Many restaurants take table reservations, but not all do. Not taking reservations is legitimate, but it is far less acceptable, and even offensive, when staff at a restaurant (including coffee-restaurants) run a waiting list at the doorstep and appear pleased with letting prospective customers gather and wait outside, as if to show off how popular their establishment is; if you complain, they may even hint that they do not really need your patronage. Such past experience may have made a British reviewer visiting an Italian restaurant in Tel-Aviv thankful that: “The staff were very pleasant and found us a seat on a very busy afternoon without behaving as if they were doing us an enormous favour”. In a different case, at an Asian restaurant in London, a reviewer commented: “Long wait to be seated, despite the place being half empty, as the servers were running around serving tables but not seating people”. Considerate restaurant proprietors may keep seats reserved for people waiting (e.g., next to the bar), and may even offer them a free drink if the wait is extended.

While at the table, diners dislike it when waiters appear to forget them or somehow lose sight of them (e.g., waiting for menus, for orders to be taken and courses to be brought, for the cheque). A reviewer in Tel-Aviv was pointedly critical of servers who, “it seems, lost interest”, and started chatting with their colleagues or playing on their phones. Waiting staff are expected to stand by, ready to answer requests or voluntarily enquire whether diners need anything. An American reviewer at another Italian restaurant in the city, coming “late one night”, appreciated that “my waitress made an effort to check on me regularly”. At an Italian restaurant in London, a reviewer noted that on arriving early for a meeting, “I was offered a newspaper to read while I waited which I thought a rather nice touch”; overall, he commended the service, whereby “the staff proficiently and effortlessly ensured everyone felt special and were looked after”. Seemingly little touches matter!

In restaurants of fine cuisine, it seems justified to wait patiently longer for an order (e.g., 20 minutes for a main course), as it could mean that the dish is being freshly prepared with care for you in those very moments, from start to finish [advice received from my father]. In many ‘popular’ or casual restaurants, however, the expectation would be much lower, though it could depend on the type of food and how complicated the dish is perceived to be to prepare. Furthermore, the sensitivity of customers-patrons to time spent could be subject to the occasion (e.g., meeting and dining leisurely in the evening vs. a pre-theatre dinner or a lunch break).

Reviews tend not to address directly the time until a dish ordered is served, but relate more generally to waiting time at any stage of the visit. Some relevant references were traced in reviews of Asian restaurants in London: (a) a reviewer noted that “service can be slow” and “a bit hit and miss” (although the food and atmosphere were good); (b) waiting for food was raised by another reviewer as an issue of concern: the waitresses seemed “understaffed” and had “stressed looking faces”, with the result that “We sat around with no food or drink for over 20 minutes before we could grab a waitresses’ attention” (the food was “fantastic” and the rating given could otherwise have been 5 rather than 4; the reviewer “would defiantly” [sic] return); (c) a reviewer who was overall happy with the friendly and efficient service and “freshly cooked and tasty delicious” food particularly remarked that the “food came quickly”.

The aesthetics of the interior design of a restaurant or coffee-house can also have an impact on consumers’ attitude towards the place and on their behaviour. The style, materials, colours, surrounding decorations, furnishing, lighting, etc. are instrumental in creating a certain atmosphere and mood (e.g., cold or warm; traditional or cutting-edge modern; quiet, ‘cool’ or energetic).

John Barnett and Anna Burles of ‘JB/AB Design’, a London-based agency specialising in the design of coffee shops, offer six instructive guidelines on the ways design, at different levels, can contribute to brand experience. They start with creating a happening in the coffee shop (‘The shop is a stage’), followed by using appetising imagery of food (‘customers eat with their eyes’); being authentic and relevant; persuasive visual merchandising; creative ambience; and giving customers good reasons to come and ‘gather around a table’ in the coffee shop. Most if not all of their recommendations sound adaptable to other types of food and drink establishments, including restaurants. On setting an authentic design, they advise to ‘say it like you mean it’ all round the shop: “The whole shop is a canvas for imagery and messaging that forms the basis of a conversation with your customers”.

Reviewers-diners talk less frequently about specific aspects of interior design or descriptions of the venue’s space; broader references are made to atmosphere or ambience. In the case of an Italian restaurant in the Tel-Aviv area with an elegant modern design, three different reviewers noted that it has “a very nice décor”, that it is “very spacious and modern”, and that the “interior is beautiful, a lot of air”. A reviewer relating to an Italian restaurant in London wrote: “The décor seems a little dated, but there were some fun touches”. This reviewer also credited the music played with creating a pleasing atmosphere (“alternated nicely between Frank Sinatra and Luciano Pavarotti — perfect!”). A reviewer-diner mentioned earlier, who was impressed by the newspaper gesture, also said of that Italian restaurant: “The ambience was extremely relaxed and the décor is comfortable, plush and smart”. An Asian restaurant in Tel-Aviv was described by a reviewer as “pleasant, with very informal atmosphere, soft background music, and industrial/downtown décor”.

Some appraisals of design and atmosphere sound somewhat more reserved, though still positive. For example, a reviewer said of a luxury Asian restaurant in London that it is “very dark inside, but somehow it is also very cooling place”. A reviewer at another luxury Asian restaurant was very impressed by a modern-futuristic design yet felt uncomfortable with it: “The place is playing with your perception, slightly disorienting with its colours and stairs and reflecting surfaces”. The reviewers quoted above were largely very happy with the food as well as the service. In just one case observed, a reviewer of an Asian restaurant in Tel-Aviv became very upset with the food and proclaimed “Sorry! But when we decide to go to the restaurant, we wish to have a good meal, NOT ONLY a trendy design” (capitals in the original, rating: 1). In this case the “rather nice designed place” could not compensate for a poor food experience. Customers-patrons welcome inspiring and modern designs, but a design must also feel pleasing to the eye and comfortable: be creative, not excessive.

A top priority for restaurants, and to a similar degree for coffee-houses, remains taking the utmost care over the quality and taste of the food they serve. However, it is essential also to look after the additional factors or facets that shape the customer’s experience, such as service, design and atmosphere, and price or value. The kind of service customers-patrons experience is especially a potential ‘game-changer’. Additionally, consumers may not come to a restaurant or coffee-house for its design, but if it looks appealing the design and atmosphere can make the stay more comfortable and enjoyable, and encourage patrons to stay longer, order more, and return. Food is the central pivot of customer appraisals, yet other facets of the experience can tilt it either way: spoil and even ruin the experience, or instead support and enhance it.

Ron Ventura, Ph.D. (Marketing)

 

Read Full Post »

Revelations about the Facebook – Cambridge Analytica affair last month (March 2018) provoked a heated public discussion about data privacy and users’ control over their personal information in social media networks, particularly in the domain of Facebook. The central allegation in this affair is that personal data from social media was misused for the winning presidential campaign of Donald Trump. It offers ‘juicy’ material for all those interested in American politics. But the importance of the affair goes much beyond that, because the concerns it has raised radiate to the daily lives of millions of users-consumers socially active on the Facebook platform; it could potentially touch a multitude of commercial marketing contexts (i.e., products and services) in addition to political marketing.

Having a user account as a member of the Facebook social media network is free of charge, a boon hard to resist. In Q2 2017 Facebook surpassed the mark of two billion monthly active users, double the former record of one billion reached five years earlier (Statista). No monetary price is explicitly demanded of users. Yet users are subject to alternative prices, embedded in their activity on Facebook, implicit and less noticeable as a cost to bear.

Some users may realise that the advertisements they receive and see are the ‘price’ they have to tolerate for not having to pay ‘in cash’ for socialising on Facebook. It is less of a burden if the content is informative and relevant to the user. What users are much less likely to realise is how personally related data (e.g., profile, posts and photos, other activity) is used to produce personally targeted advertising, and possibly to create other forms of direct offerings or persuasive appeals to take action (e.g., a user receives an invitation from a brand, based on a post by his or her friend about a product purchased or photographed). The recent affair exposed, in news reports and in the testimony of CEO Mark Zuckerberg before Congress, not only the direct involvement of Facebook in advertising on its platform but also how permissive it has been in allowing third-party apps to ‘borrow’ users’ information from Facebook.

According to reports on this affair, the psychologist Aleksandr Kogan developed with colleagues, as part of academic research, a model to deduce personality traits from the behaviour of users on Facebook. Aside from his position at Cambridge University, Kogan started a company named Global Science Research (GSR) to advance commercial and political applications of the model. In 2013 he launched an app on Facebook, ‘this-is-your-digital-life’, in which Facebook users would answer a self-administered questionnaire on personality traits and some personal background. In addition, the GSR app prompted respondents to give consent to pull personal and behavioural data related to them from Facebook. Furthermore, at that time the app could get access to limited information on friends of respondents, a capability Facebook removed no later than 2015 (The Guardian [1], BBC News: Technology, 17 March 2018).

Cambridge Analytica (CA) contracted with GSR to use its model and the data it collected. The app was able, according to initial estimates, to harvest data on as many as 50 million Facebook users; by April 2018 Facebook had updated the estimate to 87 million. It is unclear how many of these users were involved in the project for Trump’s campaign, because for that project CA was specifically interested in eligible voters in the US; CA is said to have applied the model with the data in other projects (e.g., pro-Brexit in the UK), and GSR made its own commercial applications of the app and model.

In simple terms, as can be learned from a more technical article in The Guardian [2], the model is constructed around three linkages:

(1) Personality traits (collected with the app) —> data on user behaviour on the Facebook platform, mainly ‘likes’ given by each user (possibly additional background information was collected via the app and from the users’ profiles);

(2) Personality traits —> behaviour in the target area of interest — in the case of Trump’s campaign, past voting behaviour (CA associated geographical data on users with statistics from the US electoral registry).

Since model calibration was based on data from a subset of users who responded to the personality questionnaire, the final stage of prediction applied a linkage:

(3) Data on Facebook user behaviour (—> predicted personality) —> predicted voting intention or inclination (applied to the greater dataset of Facebook users-voters).
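Read as a pipeline, these three linkages amount to a two-stage prediction scheme: calibrate a likes-to-traits model and a traits-to-vote model on the questionnaire respondents, then chain the two models over the much larger pool of users for whom only ‘likes’ are available. The sketch below illustrates the general idea only; the data is synthetic and all model choices and variable names are assumptions, not the actual GSR/CA implementation (which has never been published).

```python
# Hypothetical sketch of the three linkages as a two-stage prediction pipeline.
# Synthetic data; model choices and variable names are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)
n_respondents, n_pool, n_pages = 3_000, 50_000, 200

# Data available only for app respondents: 'likes', questionnaire traits, and matched past voting
likes_resp = rng.integers(0, 2, size=(n_respondents, n_pages)).astype(float)
traits_resp = rng.normal(size=(n_respondents, 5))                      # e.g., Big Five scores
voted_resp = (traits_resp @ rng.normal(size=5) + rng.normal(size=n_respondents) > 0).astype(int)

# Linkage (1): personality traits <-> 'likes' behaviour on the platform
likes_to_traits = Ridge(alpha=1.0).fit(likes_resp, traits_resp)

# Linkage (2): personality traits <-> behaviour in the target area (here, past voting)
traits_to_vote = LogisticRegression(max_iter=1000).fit(traits_resp, voted_resp)

# Linkage (3): chain the two models over the much larger pool for whom only 'likes' exist
likes_pool = rng.integers(0, 2, size=(n_pool, n_pages)).astype(float)
predicted_traits = likes_to_traits.predict(likes_pool)
predicted_inclination = traits_to_vote.predict_proba(predicted_traits)[:, 1]

# Users with the highest predicted inclination would then be singled out for tailored messages
targets = np.argsort(predicted_inclination)[::-1][:5_000]
```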

The Guardian [2] suggests that ‘just’ 32,000 American users responded to the personality-political questionnaire for Trump’s campaign (while at least two million users from 11 states were initially cross-referenced with voting behaviour). The BBC gives an estimate of as many as 265,000 users who responded to the questionnaire in the app, which corresponds to the larger pool of 87 million users-friends whose data was harvested.

A key advantage credited to the model is that it requires only data on ‘likes’ by users and does not have to use other detailed data from posts, personal messages, status updates, photos, etc. (The Guardian [2]). However, the modelling concept raises some critical questions: (1) How many repeated ‘likes’ of a particular theme are required to infer a personality trait? (i.e., the model should account for a stable pattern of behaviour in response to a theme or condition across different situations or contexts); (2) ‘Liking’ is frequently spurious and casual: ‘likes’ do not necessarily reflect thought-out agreement or strong identification with content or with another person or group (e.g., ‘liking’ content on a page may not imply it personally applies to the user who likes it); (3) Since the app was allowed to collect only limited information on a user’s ‘friends’, how much of it could be truly relevant and sufficient for inferring personality traits? On the other hand, for whatever traits could be deduced, data analyst and whistleblower Christopher Wylie, who brought the affair to public attention, suggested that the project for Trump had picked up on various sensitivities and weaknesses (‘demons’ in his words). Personalised messages were then devised to persuade or lure users-voters likely to favour Trump into voting for him. This is probably not the way users would want sensitive and private information about them to be utilised.

  • Consider users in need of help who follow and ‘like’ the content of pages of support groups for bereaved families (e.g., of soldiers killed in service), for combatting illnesses, or for facing other types of hardship (e.g., economic or social distress): making use of such behaviour for commercial or political gain would be unethical and disrespectful.

Although the GSR app may have properly received users’ consent to draw information about them from Facebook, it is argued that deception was committed on three counts: (a) consent was given for academic use of the data, not for participation in a political or commercial advertising campaign; (b) data on associated ‘friends’, according to Facebook, was allowed at the time only for the purpose of learning how to improve users’ experiences on the platform; and (c) GSR was not permitted at any time to sell or transfer such data to third-party partners. We are in the midst of a ‘blame game’ among Facebook, GSR and CA over the transfer of data between the parties and how it has been used in practice (e.g., to what extent Kogan’s model was actually used in Trump’s campaign). It is a spectacular mess, but this is not the place to delve into its finer details. The greater question is what lessons will be learned and what corrections will be made following the revelations.

Mark Zuckerberg, founder and CEO of Facebook, gave testimony before the US Congress in two sessions: a joint session of the Senate Commerce and Judiciary Committees (10 April 2018) and a hearing of the House of Representatives Energy and Commerce Committee (11 April 2018). [Zuckerberg declined a call to appear in person before a parliamentary committee of the British House of Commons.] Key issues about the use of personal data on Facebook are reviewed below in light of the opening statements and replies given by Zuckerberg to explain the policy and conduct of the company.

Most pointedly, Facebook is charged that, despite receiving reports concerning GSR’s app and CA’s use of data in 2015, it failed to ensure in time that personal data in the hands of CA was deleted from their repositories and that users were warned about the infringement (before the 2016 US elections), and that it took at least two years for the social media company to confront GSR and CA more decisively. Zuckerberg answered in his defence that Cambridge Analytica had told them “they were not using the data and deleted it, we considered it a closed case”; he immediately added: “In retrospect, that was clearly a mistake. We shouldn’t have taken their word for it”. This line of defence is acceptable coming from an individual acting privately. But Zuckerberg is not in that position: he is the head of a network of two billion users. Despite his candid admission of a mistake, such conduct is not becoming of a company of the size and influence of Facebook.

At the start of both hearing sessions Zuckerberg voluntarily and clearly took personal responsibility and apologised for mistakes made by Facebook, while committing to take measures (some already taken) to prevent such mistakes from being repeated. A very significant admission made by Zuckerberg in the House was his concession that “We didn’t take a broad view of our responsibility, and that was a big mistake”; it goes right to the heart of the problem in Facebook’s approach to the personal data of its users-members. Privacy of personal data may not seem to be worth money to the company (i.e., vis-à-vis revenue coming from business clients or partners), but the company’s whole network business apparatus depends on its user base. Zuckerberg committed that Facebook under his leadership will never give priority to advertisers and developers over the protection of users’ personal information. He will surely be held to these words.

Zuckerberg argued that the advertising model of Facebook is misunderstood: “We do not sell data to advertisers”. According to his explanation, advertisers describe to Facebook the target groups they want to reach; Facebook traces those users and then places the advertising items. It is less clear who composes and designs the advertising items, which also needs to be based on knowledge of the target consumers-users. However, there seems to be even greater ambiguity and confusion in distinguishing between the use of personal data in advertising by Facebook itself and the access to and use of such data by third-party apps hosted on Facebook, as well as in distinguishing between types of data about users (e.g., profile, content posted, responses to others’ content) that may be used for marketing actions.

Zuckerberg noted that the ideal of Facebook is to offer people around the world free access to the social network, which means it has to feature targeted advertising. He suggested in the Senate that there will always be a free version of Facebook, yet refrained from saying whether there will ever be a paid, advertising-free version. It remained unclear from his testimony what information is exchanged with advertisers and how. Zuckerberg insisted that users have full control over their own information and how it is being used. He added that Facebook will not pass personal information to advertisers or other business partners, to avoid an obvious breach of trust, but that it will continue to use such information to the benefit of advertisers because that is how its business model works (NYTimes.com, 10 April 2018). It should be noted that whereas users can choose who is allowed to see information like the posts and photos they upload for display, that does not seem to cover other types of information about their activity on the platform (e.g., ‘likes’, ‘shares’, ‘follow’ and ‘friend’ relations) and how it is used behind the scenes.

Many users would probably want to continue to benefit from being exempt from paying a monetary membership fee, but they can still be entitled to some control over which adverts they value and which they reject. The smart systems used for targeted advertising could be less intelligent than they purport to be. Hence more feedback from users may help to assign them well-selected adverts that are of real interest, relevance and use to them, and thereby increase efficiency for advertisers.

At the same time, while Facebook may not sell information directly, the greater problem appears to be the information it allows apps of third-party developers to collect about users without their awareness (or rather without their attention). In a late wake-up call at the Senate, Zuckerberg said that the company is reviewing app owners who obtain a large amount of user data or use it improperly, and will act against them. Following Zuckerberg’s effort to go into the details of the terms of service and to explain how advertising and apps work on Facebook, and especially how they differ, Issie Lapowsky reflects in Wired: “As the Cambridge Analytica scandal shows, the public seems never to have realized just how much information they gave up to Facebook”. Zuckerberg emphasised that an app can get access to raw user data from Facebook only by permission, yet this standard, according to Lapowsky, is “potentially revelatory for most Facebook users” (“If Congress Doesn’t Understand Facebook, What Hope Do Its Users Have”, Wired, 10 April 2018).

Much depends on how an app asks for users’ permission or consent to pull their personal data from Facebook, and on how clearly and explicitly the request is presented so that users understand what they agree to. The new General Data Protection Regulation (GDPR) of the European Union, coming into effect within a month (May 2018), is specific on this matter: it requires explicit ‘opt-in’ consent for sensitive data and unambiguous consent for other data types. The request must be clear and intelligible, in plain language, separated from other matters, and include a statement of the purpose of the data processing attached to the consent. It is yet to be seen how well this ideal standard is implemented, and whether it is extended beyond the EU. Users are of course advised to read such requests for permission to use their data carefully, on whatever platform or app they encounter them, before they proceed. However, even if no information is concealed from users, they may not be attentive enough to comprehend the request correctly. Consumers engaged in shopping often attend to only some prices, remember them inaccurately, and rely on a more general ‘feeling’ about the acceptable price range or its distribution. If applying users’ data for personalised marketing is a form of price they are expected to pay, a company taking this route should approach the data fairly, just as with setting monetary prices, regardless of how well its customers are aware of the price.

  • The GDPR specifies that personal data related to an individual is to be protected if it “can be used to directly or indirectly identify the person”. This leaves room for interpretation of what types of data about a Facebook user are ‘personal’. If data is used, and even transferred, at an aggregate level of segments, there is little risk of identifying individuals; but personally targeted advertising or marketing needs data at the individual level.

Zuckerberg agreed that some form of regulation over social media will be “inevitable” but qualified that “We need to be careful about the regulation we put in place” (Fortune.com, 11 April 2018). Democratic House Representative Gene Green posed a question about the GDPR, which “gives EU citizens the right to opt out of the processing of their personal data for marketing purposes”. When Zuckerberg was asked “Will the same right be available to Facebook users in the United States?”, he replied “Let me follow-up with you on that” (The Guardian, 13 April 2018).

The willingness of Mark Zuckerberg to take responsibility for mistakes and apologise for them is commendable. It is regrettable, nevertheless, that Facebook under his leadership did not act a few years earlier to correct those mistakes in its approach and conduct. Facebook should be ready to act in time on its responsibility to protect its users from harmful uses of data personally related to them. It can be optimistic and trusting, yet also realistic and vigilant. Facebook will need to care as much for the rights and interests of its users as it does for its other stakeholders in order to retain the continued trust of all.

Ron Ventura, Ph.D. (Marketing)

 

Read Full Post »

When the fashion house Maskit originally flourished in the 1950s and 1960s, probably no one thought about it as a brand; indeed, not many back then thought about ‘brands’ in general, at least not in the Israel of those years. Yet if we look at Maskit retrospectively according to the standards of brands known today, it would be acknowledged as a name brand in fashion. The contemporary fashion house of Maskit, revived after a long recess of two decades, has adopted not only the name but also the genuine styling ideation and design creativity of the former fashion house, thus deserving the ‘license’ to exist again. The Maskit of our days has been planned from the outset as a luxury brand, based on current knowledge in marketing and management.

Maskit was unlikely to be regarded as a brand in the 1950s-1960s for two conspicuous reasons: first, brands and their functions in modern marketing came to be recognised some thirty years later; second, Israel had a heavily laden socialist economy with little competitiveness and a consumer culture that was only nascent (evolving through the 1960s). Furthermore, Maskit was not run in its prime years as a business enterprise: it started in 1954 as a government agency and turned a decade later (1964) into a government-owned company. Only in the 1970s did the government loosen its hold on the company and gradually hand it over to private hands. However, that move, more than anything, led to the decline and demise of the former Maskit in 1994.

Maskit is very much the story of the people who built it, then and now. The fashion house was founded in 1954 by Ruth Dayan almost incidentally, but with a great spirit of initiative. She was actually asked by government officials to help in identifying and creating employment opportunities in agriculture for new Jewish immigrants from the Middle East and North Africa. However, Dayan noticed that women from North African countries had special talent and skills in weaving, sewing and embroidery; she also identified that men from Yemen excelled in jewellery making. From there the idea of a fashion house employing immigrants started to take form. Since Dayan was not a fashion designer herself, she teamed up with Fini Leitersdorf, who was appointed the house’s chief designer. Together they developed a unique and genuine concept for fashion design that is at once multi-cultural and Israeli-native. Despite the unusual circumstances of her enterprise, Ruth Dayan was, by our current understanding, an early woman entrepreneur in the Israel of that period. The privatised company did not manage to continue in the footsteps of Dayan and Leitersdorf following their retirement from the fashion house in the late 1970s. Dayan, who celebrated her 101st birthday in mid-March this year (2018), nonetheless also belongs to the present of Maskit, as she has helped in creating the newly born fashion house.

  • ‘Maskit’ can have multiple meanings, such as ‘image’ and ‘figure’, but the most appropriate meaning of this old Hebrew word in relation to what the fashion house does would be ‘ornament’.

Sharon Tal, a fashion designer, re-founded Maskit together with her husband Nir Tal in 2014, following more than two years of preparation, research and planning. Sharon Tal is the fashion house’s chief designer, whereas Nir Tal (CEO) is in charge of the business side, specialising in entrepreneurship. Sharon Tal is a graduate in fashion design from the Shenkar College of Engineering, Design & Art in Israel. She subsequently worked as an intern for Lanvin in Paris and for Alexander McQueen in London, where she acquired experience in international fashion design. At McQueen in particular she learned, and later advanced to specialise in, embroidery, which would prove especially relevant and important for her professional and business venture of re-launching Maskit. On her return to Israel in 2010 she developed an interest in starting a fashion house, and with the help of her husband Nir they discovered that the ideals and goals she had been aspiring to in a fashion house had existed in the Maskit of Dayan and Leitersdorf.

Sharon Tal met with Ruth Dayan to talk about her interest in reviving Maskit, and it seems that they connected quite quickly: their first meeting extended into several hours, and they continued to work closely together on the initiative thereafter. It appears that shared thinking, Sharon Tal’s commitment to respect and maintain the original vision of Maskit, and the relevance of Tal’s specialisation and international exposure for continuing Maskit’s heritage helped to convince Dayan that Tal was the right person to revive the fashion house. Ruth Dayan gave her blessing to the Tal couple and joined them in guidance during the research and planning process. Indeed, the success of Maskit in re-establishing itself depends greatly on reviving the heritage of Maskit, which Sharon Tal seems to fully recognise and appreciate, as she also respects the personal legacy of Ruth Dayan.

Maskit made different types of garments in the days of Leitersdorf and Dayan. The concept that was special in many of them was mounting quality fabrics with motifs of different ethnic cultures in embroidery. They combined modern styles of the times with design traditions of embroidery embellishments “made by immigrants, as well as by Druze, Bedouin, Palestinian, Lebanese and Syrian women” [E1; also see Maskit.com: About]. They used decorative articles like buttons (e.g., made from river stones and shells), some of which were originally brought by immigrants from their countries of birth. Maskit also produced jewellery, pillow covers, and other home artifacts. Silver and gold used for jewellery also served in decorating garments. The Hungarian-born Leitersdorf integrated the Western (European) practices, materials, and design styles known to her with the ethnic styles of the different communities she became familiar with in Israel. It was a unique way of adopting cross-cultural ethnic fashion styles and designs, fabrics and colours, and fitting them to the Israeli habitat (nature, climate, and contemporary culture), hence making their clothing and other products ‘Israeli native’.

  • Ruth Dayan provided employment to the immigrants and hence gave them an opportunity to assimilate in the country, as well as helping them to preserve their traditions. It should be noted, however, that immigrants fleeing from Arab countries were at a great disadvantage, with limited choices, compared with more veteran immigrants, mostly from European countries, who formed the dominant classes in the young state. Dayan benefitted from belonging to the latter (‘elite’) classes and was also close to ruling political circles (married at the time to General and later Defence Minister Moshe Dayan), which further helped in obtaining funding.

Sharon Tal has the will and intention to proceed along the same guiding lines of design and craftsmanship set by Dayan and Leitersdorf. But the aim of the renewed Maskit is not to relive the past; instead, the Tals strive to fit the concepts and practices of the former Maskit to the contemporary styles and tastes of our days. Their priority is to keep the fashion house Israeli-native, representing its culture and nature, but that also means expressing the multiple original ethnic cultures that make up Israeli society. Their emphasis also appears to be on handwork production and authenticity in everything they do. These implied ‘values’ could be key to achieving high quality, uniqueness and a luxury positioning. Authenticity is seen as a basis for differentiation of the fashion brand; it is also approached as a way of establishing luxury, in the sense that authenticity has become hard to find in many areas, and in fashionable clothing in particular. Maskit may be authentic in the fabrics and other materials it uses, the methods it applies, and the personal and attentive treatment and service it would provide to its customers (including personally customised designs).

Here are some aspects in which Sharon Tal works to continue the heritage of Maskit. The fashion house uses, for instance, soft fabrics as in the past (including silk, linen as well as leather). Weaving in-house is no longer feasible as in the past, so quality fabrics are imported (e.g., from the same suppliers as those Lanvin and McQueen work with). Yet Tal still sees hope that it will be possible to acquire quality fabrics made locally, and perhaps to produce them at Maskit, in the future [H1]. Among the creations of Leitersdorf, one that has given Maskit greater fame is the desert coat (or cloak); Sharon Tal designed a new ‘desert collection‘ that is “re-interpreted for today’s woman and her lifestyle”. One difference between today’s desert coat and the original is that it is made of linen rather than wool [E1]. Embroidery designed and prepared in-house remains an identifying signature of Maskit. However, the renewed Maskit is ready to give more credit to artisans working with the fashion house, unlike in the past.

Sharon and Nir Tal are clear about their high ambitions. They want Maskit to be a leading international luxury fashion brand. It is meant to compete on a world stage against international fashion super-brands and to challenge renowned fashion retail chains. They do not see themselves as competing against fashion designers in Israel, since they look forward to seeing more Israeli designers succeed and the whole fashion industry in the country develop [H2]. That may sound a little condescending, but it can also be interpreted as saying that they hope Maskit will be able to pull the fashion industry in Israel up with it, as Maskit did before in its earlier life. Accordingly, while they aspire to reach overseas, they intend to extend their efforts to global markets only after establishing Maskit in Israel [E1], and wish to be able to return Maskit into being an international fashion house operating from Tel-Aviv [E2], apparently keeping this home base as their anchor.

The Maskit led by Dayan had already reached overseas, mainly to the United States. From 1956 the fashion house presented at fashion exhibitions in New York and other American cities. Its designs sold at the department stores of Neiman Marcus, Bergdorf Goodman, and Saks Fifth Avenue, and they featured in leading magazines like Vogue. Sharon and Nir Tal expect to take the renewed Maskit in the same direction, and their emphasis, at least at the start, is also on the US. Targets are shifting with time, however: many female customers turn to fashion chains for their casual and less costly clothing, then invest in more special dress, of higher quality and more enduring, from name designers or specialty boutiques; the latter is where Sharon Tal seems to be aiming. As a luxury brand, Maskit would also target women who buy primarily from famed designers [H2]. In addition, the Maskit of the past attracted, in Israel, tourists visiting the country and their relatives (i.e., mostly Jewish, American, and wealthier customers). Yet Israeli customers also used to buy gifts from Maskit, mostly when they wanted to bring or send them to their relatives abroad to leave a good impression. This should be as valid today as it was then. Maskit may also be able to tap a growing desire in Israel to return to its roots (‘authentic Israeli’), or to connect generations of customers wearing Maskit then and now.

The prices of Maskit to end customers are in the mid- to high range, not for every occasion. Their blouses or dresses can be relatively expensive even for their categories. Evening dresses or gowns may cost, for instance, from just below 2,000 shekels ($570, €465) up to a few tens of thousands of shekels (e.g., a dress with handmade embroidery in a unique technique was sold for 25,000 shekels, more than $7,000) [H2]. A bridal dress may cost (purchase only) in the range of 7,500 to 25,000 shekels (~$2,000-7,000) [H3]. Bridal dresses and customised dresses are the most expensive items on offer. A blouse could cost, for example, 900 shekels (a leather-trimmed tunic blouse, ~$260, €185) [E1]. The items of Maskit, according to Nir Tal, are made to appeal to women who are “pretty sophisticated, and appreciate the art of this clothing” [E1]. The prices are clearly set to support the perceived high quality of the garments, and in particular the investment in craftsmanship and dedicated handwork.

  • The flagship shop and studio of Maskit are located in the American-German Colony in the old city of Yaffo, adjacent to Tel-Aviv. The place is designed to resemble an atelier of many years in business, and includes museum-like displays next to the selling areas (also see photos in [H3]).

From the business perspective, the Tals approached the launching of Maskit as if creating a start-up, guided primarily by Nir Tal. They wanted the revival of Maskit to be special and different, following the model of the revival of brands like Burberry and Lanvin [E1]; it had to reflect the significant achievements of Maskit as a leading fashion house in the country in past years [H2]. That meant that greater effort and resources would have to be invested in the initiative, as in a start-up. The Tal couple gained major funding from the key Israeli industrialist Stef Wertheimer, together with his invaluable business wisdom. Launching Maskit as a start-up sounds reasonable in order to recruit the energy needed and to concentrate financial and organisational resources on launching the business. However, soon enough comes the time when the fashion house is established and has to realign itself to run for the long term. There are good indications that Maskit could be near that time, if it has not passed it already, and this does not require being established off-shore first. For a long-running fashion house, sustained creativity and innovation are as important as persistence and discipline. Maskit would be wise not to push itself too far too fast, so as not to burn itself out like a start-up.

  • Note: Start-ups in hi-tech, particularly in Israel, do not have a good reputation for lasting long; hence it would not be wise to use them as a model if the fashion house desires to exist for the long haul and does not plan an ‘exit’.

The Maskit fashion brand was not properly valued or appreciated by the establishment in Israel more than forty years ago (Ruth Dayan noted jokingly in interviews that she lives on a monthly pension of 5,000 shekels as a former worker of the Labour Ministry). But Dayan, together with Leitersdorf, demonstrated that a successful brand can be created even without setting one’s mind to it. Sharon and Nir Tal now have the opportunity to show how high Maskit can reach, and to develop and strengthen its brand, with the much greater marketing and management knowledge and best practices they can now employ. The reborn Maskit is positioned as a luxury brand for women with fine taste in fashion and an appeal to nostalgia. The brand’s distinction remains dependent on its commitment to an Israeli-native identity with original, creative, high-quality design, and on keeping its base in Israel even as an international brand.

Ron Ventura, Ph.D. (Marketing)

References in Hebrew:

[H1] Interview with Ruth Dayan & Sharon Tal at Maskit Studio, Xnet, 18 October 2015 (Xnet is an online ‘magazine’ section of Ynet news website, fashion section)

[H2] The New Life of Maskit, Calcalist (economics and business newspaper), 13 December 2017

[H3] New home for Maskit fashion house, Xnet, 28 June 2016

References in English:

[E1] “A Ready-to-Wear Fashion House in Israel’s Ethnic Past“, Jessica Steinberg, Times of Israel, 26 May 2014

[E2] “How the Israeli Fashion Brand Maskit Delivers Authentic Luxury“, Joseph DeAcetis, Forbes’ Opinions, 16 May 2017

 

Read Full Post »

The strength, impact and value of a brand are embodied, fairly concisely, in the concept of ‘brand equity’. However, there are different views on how to express and measure brand equity, whether from a consumer (customer) perspective or a firm perspective. Metrics based on a consumer viewpoint (measured in surveys) raise particular concern as to what actual effects they have in the marketplace. Datta, Ailawadi and van Heerde (2017) have answered this challenge and investigated how well Consumer-Based Brand Equity (CBBE) metrics align with Sales-Based Brand Equity (SBBE) estimates. The CBBE metrics were adopted from the Brand Asset Valuator model (Y&R), whereas the SBBE estimates were derived from modelling market data on actual purchases. They also examined the association of CBBE with behavioural response to marketing mix actions [1].

In essence, brand equity expresses the incremental value of a product (or service) that can be attributed to its brand name, above and beyond its physical (or functional) attributes. Alternatively, brand equity is conceived as the added value of a branded product compared with an identical version of that product if it were unbranded. David Aaker defined four main groups of assets linked to a brand that add to its value: awareness, perceived quality, loyalty, and associations beyond perceived quality. On the grounds of this conceptualization, Aaker subsequently proposed the Brand Equity Ten measures, grouped into five categories: brand loyalty, awareness, perceived quality / leadership, association / differentiation, and market behaviour. Kevin Keller broadened the scope of brand equity, positing that greater and more positive knowledge of customers (consumers) about a brand leads them to respond more favourably to its marketing activities (e.g., pricing, advertising).

The impact of a brand may occur at three levels: the customer market, the product market and the financial market. Accordingly, academics have followed three distinct perspectives for measuring brand equity: (a) customer-based: the attraction of consumers to the “non-objective” part of the product offering (e.g., ‘mindset’ as in beliefs and attitudes, a brand-specific ‘intercept’ in a choice model); (b) company-based: the additional value accrued to the firm from a product because of its brand name versus an equivalent but non-branded product (e.g., discounted cash flow); and (c) financial-based: the brand’s worth is the price it brings or could bring in the financial market (e.g., materialised via mergers and acquisitions, stock prices) [2]. This classification is not universal: for example, discounted cash flows are sometimes described as ‘financial’, while estimates of brand value derived from a choice-based conjoint model constitute a more implicit reflection of the consumers’ viewpoint. Furthermore, models based on stated-choice (conjoint) or purchase (market share) data may vary greatly in the effects they include, whether in interaction with each competing brand or independent of the brand ‘main effect’ (e.g., product attributes, price, other marketing mix variables).
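To make the brand-intercept idea concrete, the utility a consumer derives from a branded alternative in a choice model is often decomposed along the following lines (a generic, textbook-style formulation, not the exact specification of any of the studies cited here):

```latex
U_{ib} \;=\; \beta_b \;+\; \mathbf{x}_{ib}^{\top}\boldsymbol{\gamma} \;+\; \varepsilon_{ib}
```

where U_{ib} is the utility of brand b for consumer i, x_{ib} collects attributes, price and other marketing-mix variables, and the brand-specific intercept β_b captures the residual attraction attributable to the brand name itself (the quantity read as brand equity in this class of models).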

A class of attitudinal (‘mindset’) models of brand equity may encompass a number of aspects and layers: awareness → perceptions and attitudes about product attributes and functional benefits (plus overall perceived quality) and ‘soft’ image associations (e.g., emotions, personality, social benefits) → attachment or affinity → loyalty (commitment). Two noteworthy academic studies have built upon the conceptualizations of Aaker and Keller in constructing and testing consumer-based measures:

  • Yoo and Donthu (2001) constructed a three-dimensional model of brand equity comprising brand loyalty, brand awareness / associations (combined), and perceived quality (the strength of associations was adopted from Keller’s descriptors of brand image). The multidimensional scale (MBE) was tested and validated across multiple product categories and cultural communities [3].
  • Netemeyer and colleagues (2004) demonstrated across products and brands that perceived quality, perceived value (for the cost), and uniqueness of a given brand potentially contribute to willingness to pay a price premium for the brand which in turn acts as a direct antecedent of brand purchase behaviour [4]. Price premium, an aspect of brand loyalty, is a common metric used for assessing brand equity.

Datta, Ailawadi and van Heerde distinguish between two measurement approaches: the consumer-based brand equity (CBBE) approach measures what consumers think and feel about the brand, while the sales-based brand equity (SBBE) approach is based on choice or share of the brand in the marketplace.

The CBBE approach in their research is applied through data on metrics from the Brand Asset Valuator (BAV) model developed originally by the Young & Rubicam (Y&R) advertising agency (the brand research activity is now defined as a separate entity, BAV Group; both Y&R and BAV Group are part of the WPP media group). The BAV model includes four dimensions: Relevance to consumers (e.g., the brand fits in their lifestyles); Esteem of the brand (i.e., how much consumers like the brand and hold it in high regard); Knowledge of the brand (i.e., consumers are aware of and understand what the brand stands for); and Differentiation from the competition (e.g., uniqueness of the brand) [5].

The SBBE approach is operationalised through modelling of purchase data (weekly scanner data from IRI). The researchers derive estimates of brand value in a market share attraction model (covering over 400 brands from 25 categories, though only the 290 brands for which BAV data could be obtained were included in the subsequent CBBE-SBBE analyses) over a span of ten years (2002-2011). Notably, brand-specific intercepts were estimated for each year; an annual level is sufficient and realistic to account for the pace of change in brand equity over time. The model allowed for variation between brands in their sensitivity to marketing mix actions (regular prices, promotional prices, advertising spending, distribution {on-shelf availability} and promotional display in stores). These measures are not part of the SBBE values but nonetheless indicate an expected manifestation of higher brand equity (impact); after being converted into elasticities, they play a key role in examining the relation of CBBE to behavioural outcomes in the marketplace.
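For readers who want a more tangible picture of the SBBE side, the sketch below shows, in very reduced form, how yearly brand intercepts can be recovered from a log-centred market share attraction regression. The data is synthetic and the specification is deliberately simplistic (no brand-specific mix response or other refinements of the actual model in the article); all variable names are assumptions for illustration.

```python
# Minimal sketch of a market share attraction (MCI-type) regression with yearly brand intercepts.
# Synthetic data only; the actual SBBE model in Datta, Ailawadi & van Heerde (2017) is far richer.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
brands = [f"brand_{b}" for b in range(4)]
weeks = pd.date_range("2002-01-04", periods=520, freq="W")    # roughly ten years of weekly data

rows = []
for week in weeks:
    attraction = rng.lognormal(mean=rng.normal(size=len(brands)), sigma=0.2)
    shares = attraction / attraction.sum()                     # market shares sum to 1 each week
    for brand, share in zip(brands, shares):
        rows.append({"brand": brand, "year": week.year, "week": week, "share": share,
                     "log_price": rng.normal(0.0, 0.1), "display": int(rng.integers(0, 2))})
df = pd.DataFrame(rows)

# Log-centring turns the multiplicative attraction model into a linear regression
df["log_share"] = np.log(df["share"])
df["log_share_c"] = df["log_share"] - df.groupby("week")["log_share"].transform("mean")

# Brand-by-year intercepts stand in for SBBE; price and display stand in for marketing-mix response
df["brand_year"] = df["brand"] + "_" + df["year"].astype(str)
fit = smf.ols("log_share_c ~ C(brand_year) + log_price + display", data=df).fit()
sbbe_estimates = fit.params.filter(like="brand_year")          # one intercept per brand per year
print(sbbe_estimates.head())
```

In this kind of setup the filtered intercepts play the role of the SBBE estimates, while the price and display coefficients stand in for the marketing-mix response that is later converted into elasticities.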


  • Datta et al. seem to include under the SBBE approach estimates derived both from (a) actual brand choices and sales data and from (b) self-reported choices in conjoint studies and surveys. But subjective responses and behavioural responses are not quite equivalent bases. The authors may have reasonably aimed to distinguish ‘choice-based’ measures of brand equity from ‘attitudinal’ measures, but that still does not justify mixing the brands and products consumers say they would choose with those they actually choose to purchase. Conjoint-based estimates are more closely consumer-based.
  • Take, for instance, the research by Ferjani, Jedidi and Jagpal (2009), who offer a different angle on levels of valuation of brand equity. They derived brand values through a choice-based conjoint model (Hierarchical Bayes estimation at the individual level), regarded as a consumer-level valuation. In parallel, the researchers constructed a measure of brand equity from a firm perspective based on expected profits (rather than discounted cash flows), presented as a firm-level valuation. Nonetheless, in order to estimate sales volume they ‘imported’ predicted market shares from the conjoint study, thus linking the two levels [6]. A sketch of how conjoint part-worths can be converted into a monetary brand value appears below.
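One common way (not necessarily the exact procedure used by Ferjani et al.) to turn conjoint part-worths into a monetary, consumer-level brand value is to scale the brand intercept by the price coefficient:

```latex
\mathrm{BE}_{ib} \;\approx\; \frac{\beta_{ib} - \beta_{i,\mathrm{base}}}{\lvert \beta_{i,\mathrm{price}} \rvert}
```

where β_{ib} is consumer i’s part-worth for brand b, β_{i,base} the part-worth of a reference (or unbranded) alternative, and β_{i,price} the price coefficient; the ratio reads as a willingness-to-pay premium attributable to the brand.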

 

Not all dimensions of BAV (CBBE) relate to SBBE in the same way: three of the BAV dimensions (relevance, esteem, and knowledge) are positively correlated with SBBE (0.35, 0.39, and 0.53, respectively), while differentiation is negatively, although weakly, correlated with SBBE (-0.14). The researchers reasoned in advance that differentiation could have a more nuanced and versatile market effect (a hypothesis that was confirmed), because differentiation could mean the brand is attractive only to some segments and not others, or that uniqueness may appeal to only some of the consumers (e.g., those more open to novelty and distinction).

Datta et al. show that the correlations of relevance (0.55) and esteem (0.56) with the brands’ market shares are even higher, and the correlation of differentiation with market shares is less negative (-0.08), than their correlations with SBBE (the correlations of knowledge are about the same). The SBBE values capture only a portion of a brand’s attraction to consumers; market shares, on the other hand, also factor in additional marketing efforts that the dimensions of BAV seem to account for.

Some interesting brand cases can be detected in a mapping of brands in two categories (for 2011): beer and laundry detergents. For example, among beers, Corona is positioned much higher on SBBE than expected given its overall BAV score, which places the brand among those better valued on a consumer basis (only one brand, Budweiser, is considerably higher). However, with respect to market share the position of Corona is much less flattering and quite as expected relative to its consumer-based BAV score, even a little lower. This could suggest that too much power is credited to the name and other symbols of Corona, while the backing from marketing efforts to support and sustain it is lacking (i.e., the market share of Corona is vulnerable). As another example, in the category of laundry detergents, Tide (P&G) is truly at the top on both BAV (CBBE) and market share. Yet the position of Tide on SBBE relative to its BAV score is not exceptional or impressive, being lower than predicted for its consumer-based brand equity. The success of the brand, and consumer appreciation of it, may not be adequately attributed specifically to the brand in the marketplace but apparently more to other marketing activities in its name (i.e., the marketing efforts do not help to enhance the brand).

The degree of correlation between CBBE and SBBE may be moderated by characteristics of product category. Following the salient difference cited above between dimensions of BAV in relation to SBBE, the researchers identify two separate factors of BAV: relevant stature (relevance + esteem + knowledge) and (energized) differentiation [7].

In more concentrated product categories (i.e., where the four largest brands by market share hold a greater total share of the category), the positive effect of brand stature on SBBE is reduced. Relevance, esteem and knowledge may serve as particularly useful cues for consumers in fragmented markets, where it is more necessary for them to sort and screen among many smaller brands, thus simplifying the choice decision process. When concentration is greater, reliance on such cues is less required. On the other hand, when the category is more concentrated, controlled by a few big brands, it should be easier for consumers to compare between them and find aspects on which each brand is unique or superior. Indeed, Datta and colleagues find that in categories with higher concentration, differentiation has a stronger positive effect on SBBE.
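The moderation pattern described here can be illustrated with a simple second-stage regression in which SBBE is explained by the two BAV factors, category concentration, and their interactions. This is only a sketch on synthetic data; the variable names and the generated coefficients are assumptions, not the estimates reported in the article.

```python
# Sketch of a second-stage moderation analysis: SBBE regressed on the two BAV factors,
# category concentration, and their interactions. Synthetic data; names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 290  # roughly the number of brands with both SBBE and BAV data in the study

df = pd.DataFrame({
    "stature": rng.normal(size=n),                    # relevance + esteem + knowledge factor
    "differentiation": rng.normal(size=n),
    "concentration": rng.uniform(0.3, 0.9, size=n),   # e.g., four-brand concentration of the category
})
# Generate an outcome whose interaction signs mimic the pattern reported by Datta et al.:
# stature helps less where concentration is high, differentiation helps more.
df["sbbe"] = (0.5 * df["stature"]
              - 0.3 * df["stature"] * df["concentration"]
              + 0.4 * df["differentiation"] * df["concentration"]
              + rng.normal(0, 0.5, size=n))

fit = smf.ols("sbbe ~ stature*concentration + differentiation*concentration", data=df).fit()
print(fit.params)   # inspect the stature:concentration and differentiation:concentration terms
```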

For products characterised by greater social or symbolic value (e.g., more visible to others when used, shared with others), higher brand stature contributes to higher SBBE in the market. The researchers could not confirm, however, that differentiation manifests in higher SBBE for products of higher social value. The advantage of using brands better recognized and respected by others appears to be primarily associated with facets such as relevance and esteem of the brand.

Brand experience with hedonic products (e.g., leisure, entertainment, treats) builds on the enjoyment, pleasure and additional positive emotions the brand succeeds in evoking in consumers. Sensory attributes of the product (look, sound, scent, taste, touch) and a holistic image are vital in creating a desirable experience. Contrary to the expectation of Datta and colleagues, however, it was not found that stature translates into higher SBBE for brands of hedonic products (if anything, the contrary). This is not such good news for experiential brands in these categories, which rely on enhancing relevance and appeal to consumers, who also understand the brands and connect with them, in order to create sales-based brand equity in the marketplace. The authors suggest in their article that being personally enjoyable (inward-looking) may overshadow the importance of broad appeal and status (outward-looking) for SBBE. Nevertheless, fortunately enough, differentiation does matter for highlighting the benefits of the experience of hedonic products, contributing to raised sales-based brand equity (SBBE).

Datta, Ailawadi and van Heerde proceeded to examine how strongly CBBE corresponds with behavioural responses in the marketplace (elasticities), as a manifestation of the anticipated impact of brand equity.
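As a reminder of the metric involved (a standard definition, not a formula quoted from the article), the elasticity of a brand’s share with respect to a marketing-mix variable is the percentage change in share associated with a one per cent change in that variable:

```latex
\epsilon_{b,x} \;=\; \frac{\partial \ln(\mathrm{share}_b)}{\partial \ln(x_b)}
```

So a promotional-price elasticity that becomes even more negative means a given discount lifts the brand’s share by more, whereas a regular-price elasticity closer to zero would have signalled consumers’ willingness to absorb price increases.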

Results indicated that when the relevant stature of a brand is higher, consumers respond even more strongly and favourably to price discounts or deals (i.e., the elasticity of response to promotional prices is even more negative, or inverse). Yet the expectation that consumers would be less sensitive (averse) to increases in regular prices by brands of greater stature was not substantiated (i.e., the expected positive effect of a less negative elasticity). (Differentiation was not found to have a positive effect on response to regular prices either, and could be counter-productive for price promotions.)

An important implication of brand equity should be that consumers are more willing to pay higher regular prices for a brand of higher stature (i.e., a larger price premium) relative to competing brands, and are more forgiving when such a brand sees it necessary to update and raise its regular price. The brand may benefit from being more personally relevant to the consumer, better understood and more highly appreciated. A brand more clearly differentiated from competitors with respect to its advantages could also benefit from a protected status. All these properties are presumed to enhance attachment to a brand, and subsequently lead to greater loyalty, making consumers more ready to stick with the brand even as it becomes more expensive. This research disproves such expectations. Better responsiveness to price promotions can help to increase sales and revenue, but it testifies to the heightened level of competition in many categories (e.g., FMCG or packaged goods) and to the propensity of consumers to be more opportunistic, rather than to the strength of the brands. This result, actually a warning signal, cannot be brushed away easily.

  • Towards the end of the article, the researchers suggest as an explanation that they ignored possible differences in response to increases versus decreases in regular prices (i.e., asymmetric elasticity). Even so, increases in regular prices by stronger brands are more likely to happen than price decreases, and the latter are already more realistically accounted for in the response to promotional prices.

Relevant stature is positively related to responsiveness to feature or promotional display (i.e., consumers are more inclined to purchase from a higher-stature brand when it is given an advantaged display). Consumers are also more strongly receptive to a larger volume of advertising by brands of higher stature and better differentiation in their eyes (this analysis could not refer to the actual advertising messages, hence perhaps the weaker positive effects). Another interesting finding indicates that sensitivity to the degree of distribution (on-shelf availability) is inversely associated with stature: the higher the brand’s stature from the consumer viewpoint, the less attractive larger distribution is to consumers. As the researchers suggest, consumers are more willing to look harder and farther (e.g., in other stores) for those brands they regard as more important to have. So here is positive evidence for the impact of stronger brands or higher brand equity.

The research gives rise to some methodological questions on measurement of brand equity that remain open for further deliberation:

  1. Should the measure of brand equity in choice models rely only on a brand-specific intercept (expressing intrinsic assets or value of the brand) or should it include also a reflection of the impact of brand equity as in response to marketing mix activities?
  2. Are attitudinal measures of brand equity (CBBE) too gross and not sensitive enough to capture the incremental value added by the brand or is the measure of brand equity based only on a brand-intercept term in a model of actual purchase data too specific and narrow?  (unless it accounts for some of the impact of brand equity)
  3. How should measures of brand equity based on stated-choice (conjoint) data and actual purchase data be classified with respect to a consumer perspective? (both pertain really to consumers: either their cognition or overt behaviour).

Datta, Ailawadi and van Heerde throw light, in their extensive research, on the relation of consumer-based brand equity (CBBE) to behavioural outcomes, manifested both in brand equity based on actual purchases (SBBE) and in effects on the response to marketing mix actions as an impact of brand equity. Attention should be paid to the positive implications of this research for practice, but also to the warning signals it may raise.

Ron Ventura, Ph.D. (Marketing)

Notes:

[1] How Well Does Consumer-Based Brand Equity Align with Sales-Based Brand Equity and Marketing-Mix Response?; Hannes Datta, Kusum L. Ailawadi, & Harald J. van Heerde, 2017; Journal of Marketing, 81 (May), pp. 1-20. (DOI: 10.1509/jm.15.0340)

[2] Brands and Branding: Research Findings and Future Priorities; Kevin L. Keller and Donald R. Lehmann, 2006; Marketing Science, 25 (6), pp. 740-759. (DOI: 10.1287/mksc.1050.0153)

[3] Developing and Validating a Multidimensional Consumer-Based Brand Equity Scale; Boonghee Yoo and Naveen Donthu, 2001; Journal of Business Research, 52, pp. 1-14.

[4] Developing and Validating Measures of Facets of Customer-Based Brand Equity; Richard G. Netemeyer, Balaji Krishnan, Chris Pullig, Guangping Wang, Mehmet Yagci, Dwane Dean, Joe Ricks, & Ferdinand Wirth, 2004; Journal of Business Research, 57, pp. 209-224.

[5] The authors name this dimension ‘energised differentiation’ in reference to an article in which researchers Mizik and Jacobson identified a fifth pillar of energy, and suggest that differentiation and energy have since been merged. However, this change is not mentioned or revealed on the website of BAV Group.

[6] A Conjoint Approach for Consumer- and Firm-Level Brand Valuation; Madiha Ferjani, Kamel Jedidi, & Sharan Jagpal, 2009; Journal of Marketing Research, 46 (December), pp. 846-862.

[7] These two factors (principal components) extracted by Datta et al. are different from two higher dimensions defined by BAV Group (stature = esteem and knowledge, strength = relevance and differentiation). However, the distinction made by the researchers as corroborated by their data is more meaningful  and relevant in the context of this study.

 

Read Full Post »

Older Posts »