Fifteen years have passed since the Nobel Prize in economics was awarded to Daniel Kahneman, and now (Fall 2017) another leading researcher in behavioural economics, Richard Thaler, wins this honourable prize. Thaler and Kahneman are no strangers; they have collaborated on research in this field since its early days in the late 1970s. Moreover, Kahneman, together with the late Amos Tversky, helped Thaler take his first steps in this field, or more generally in bringing economics together with psychology. Key elements of Thaler's theory of mental accounting are based on the value function in Kahneman and Tversky's Prospect Theory.

In recent years Thaler has become better known for the approach of choice architecture and the tools of nudging he devised as co-author, with Cass Sunstein, of the book "Nudge: Improving Decisions About Health, Wealth and Happiness" (2008-9). However, at the core of Thaler's contribution is the theory of mental accounting, through which he helped to lay the foundations of behavioural economics. The applied tools of nudging cannot be properly appreciated without understanding the concepts of mental accounting and the other phenomena he studied with colleagues, which describe deviations in judgement and behaviour from the rational economic model.

Thaler, originally an economist, was unhappy with the predictions of consumer choice arising from microeconomics: the principles of economic theory were not contested as a normative theory (e.g., regarding optimization), but the claim by economists that the theory can describe actual consumer behaviour and predict it was put into question. Furthermore, Thaler and others argued early on that deviations from rational judgement and choice behaviour are predictable. In his 'maverick' paper "Toward a Positive Theory of Consumer Choice" from 1980, Thaler described and explained deviations and anomalies in consumer choice that stand in disagreement with economic theory. He referred to concepts such as the framing of gains and losses, the endowment effect, sunk costs, search for information on prices, regret, and self-control (1).

The theory of mental accounting that Thaler developed thereafter provides an integrated framework describing how consumers perform value judgements and make choices about which products and services to purchase, while recognising psychological effects on economic decisions (2). The theory is built around three prominent concepts (described here only briefly):

Dividing a budget into categories of expenses: Consumers metaphorically (and sometimes physically) allocate the money in their budget to buckets or envelopes according to the type or purpose of expenses, and they do not transfer money freely between categories (e.g., food, entertainment). This concept contradicts the economic principle of fungibility, suggesting that one dollar is not valued the same in every category. A further implication is that each category has a sub-budget allotted to it; if expenses in a category surpass its limit during a period, a consumer will prefer to give up the next purchase rather than add money from another category. Hence, for instance, Dan and Edna will not go out for dinner at a trendy restaurant if that requires taking money planned for buying shoes for their child. However, managing the budget only by the total limit of monthly income often proves unsatisfactory, and some purchases can still be made on credit without hurting other purchases in the same month. On the other hand, it is easy to see how consumers get into trouble when they spread too many expenses across future periods with their credit cards and lose track of the category limits for their different expenses.

Segregating gains and integrating losses: In Kahneman and Tversky's model of the value function, value is defined over gains and losses as one departs from a reference point (a "status quo" state). Thaler explicated in turn how properties of the gain-loss value function would play out in practical evaluations of outcomes. The two general "rules", demonstrated most clearly in "pure" cases, state: (a) if there are two or more gains, consumers prefer to segregate them (e.g., if Chris makes gains on two different shares on a given day, he will prefer to see them separately); (b) if there are two or more losses, consumers prefer to integrate them (e.g., Sarah is informed of a price for an inter-city train trip but then told there is a surcharge for travelling in the morning; she will prefer to consider the total cost of her requested journey). Thaler additionally proposed what consumers would prefer to do in more complicated cases of "mixed" gains and losses: whether to segregate the gain from the loss (e.g., if the loss is much greater than the gain) or integrate them (e.g., if the gain is larger than the loss so that one remains with a net gain).
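These preferences follow from the shape of the gain-loss value function: concave over gains, convex and steeper over losses. A minimal Python sketch, assuming the power-function form with the parameter estimates reported by Kahneman and Tversky (exponent 0.88, loss-aversion coefficient 2.25) and purely hypothetical $50 outcomes, illustrates both rules:

```python
def value(x: float) -> float:
    """Illustrative gain-loss value function in the Prospect Theory form.

    The exponent (0.88) and loss-aversion coefficient (2.25) are the
    estimates Kahneman and Tversky reported; treating them as a typical
    consumer's parameters is an assumption of this sketch.
    """
    if x >= 0:
        return x ** 0.88           # concave over gains
    return -2.25 * ((-x) ** 0.88)  # convex and steeper over losses

# (a) Two gains feel better kept apart: v(50) + v(50) > v(100)
assert value(50) + value(50) > value(100)

# (b) Two losses hurt less lumped together: v(-50) + v(-50) < v(-100)
assert value(-50) + value(-50) < value(-100)
```

Diminishing sensitivity does the work here: a second $50 gain adds less on top of the first than it is worth on its own, while a second $50 loss hurts less when folded into an already larger loss.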

Adding up acquisition value and transaction value to evaluate product offers: A product or service offer generally carries both benefits and costs for the consumer (e.g., the example of the train ticket above overlooked the benefit of the travel to Sarah). But value may also arise from the offer or deal itself, beyond the product per se. Thaler recognised that consumers may look at two sources of value, and adding them together yields the overall worth of a product purchase offer: (1) acquisition utility is the difference between the [monetary] value equivalent of the product to the consumer and its actual price; (2) transaction utility is the difference between the actual price and a reference price. In this calculus of value hides the play of gains and losses. The concept was quite quickly adopted by consumer and marketing researchers in academia and implemented in means-end models that depict chains of value underlying consumers' purchase decision processes (mostly from the mid-1980s to the mid-1990s). Thaler's approach to 'analysing' value is becoming more widely acknowledged and applied in practice as well, as expressions of value in consumer response to offerings can be found in so many domains of marketing and retailing.
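This two-component evaluation can be sketched in a few lines. The sketch takes each difference linearly for simplicity (in the full treatment each component would itself pass through the gain-loss value function), and all the dollar figures are hypothetical:

```python
def acquisition_utility(equivalent_value: float, price: float) -> float:
    # Worth of the product to the consumer, net of what is actually paid.
    return equivalent_value - price

def transaction_utility(reference_price: float, price: float) -> float:
    # Merit of the deal itself: paying below the reference price
    # registers as a gain, paying above it as a loss.
    return reference_price - price

def total_utility(equivalent_value: float, reference_price: float,
                  price: float) -> float:
    # Overall worth of the offer is the sum of the two components.
    return (acquisition_utility(equivalent_value, price)
            + transaction_utility(reference_price, price))

# Hypothetical offer: a ticket worth $60 to the consumer, with a $55
# reference (e.g., list) price, actually sold for $45.
print(total_utility(60, 55, 45))  # → 25 (acquisition 15 + transaction 10)
```

Note that the same actual price enters both components with opposite roles: it reduces acquisition utility but, when below the reference, contributes a gain to transaction utility.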

A reference price may take different representations, for instance: the price last paid; a price recalled from a previous period; the average or median price in the same product class; a 'normal' or list price; a 'fair' or 'just' price (which is not so easy to specify). The transaction value may vary considerably depending on the form of reference price a consumer uses, ceteris paribus, affecting whether it is represented as a gain or a loss and its magnitude. It also suggests that marketers may hint to consumers a price to be used as a reference (e.g., an advertised price anchor) and thus influence consumers' value judgements.
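Because the transaction component depends entirely on which reference the consumer adopts, the very same selling price can register as a gain or as a loss. A small illustration, with hypothetical reference prices for a $45 selling price:

```python
price = 45.0

# Hypothetical candidate reference prices a consumer might hold in mind.
reference_prices = {
    "price last paid": 40.0,
    "price recalled from a previous period": 50.0,
    "median price in the product class": 47.0,
    "list price": 55.0,
}

for label, ref in reference_prices.items():
    tv = ref - price  # positive → perceived gain; negative → perceived loss
    print(f"{label}: transaction value {tv:+.1f}")
```

Against the list price the $45 deal looks like a $10 gain, yet against the price last paid it reads as a $5 loss, which is precisely why an advertised anchor can sway the judgement.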

We often observe and think of discounts as a difference between an actual price ('only this week') and a higher normal price; in this case we may construe the acquisition value and transaction value as two concurrent ways of perceiving a gain on the actual price. But Thaler's model is more general because it recognizes a range of prices that consumers may employ as a reference. In addition, a list price may be suspected of being set higher on purpose to invoke the perception of a gain vis-à-vis the actual discounted price, which in practice is charged more regularly than the list price. A list price or an advertised price may also serve primarily as a cue for the quality of the product (and perhaps also influence the product's equivalent value for less knowledgeable consumers), while the actual selling price provides the transaction value or utility. In the era of e-commerce, consumers also appear to use the price quoted on a retailer's online store as a reference; they may then visit one of its brick-and-mortar stores, where they hope to obtain their desired product faster, and complain if they discover that the in-store price for the same product is much higher. As customers increasingly begrudge delivery fees and delivery times, a viable solution for securing customers is to offer a 'click-and-collect at a store near you' scheme. Moreover, as more consumers shop with a smartphone in hand, the use of competitors' prices, or even the same retailer's online prices, as references is likely to become even more frequent and ubiquitous.


  • The next example may help to further illustrate the potentially compound task of evaluating offerings: Jonathan arrives at the agency of a car dealer where he intends to buy his favoured new car, but there he finds out that the price on offer for that model is $1,500 higher than a price he saw two months earlier in ads. The sales representative claims the carmaker's prices have risen lately. However, when proposing a digital display system (e.g., entertainment, navigation, technical car info) as an add-on to the car, the seller also offers Jonathan a discount of $150 on its original price tag.
  • Jonathan appreciates this offer and is inclined to segregate this saving from the additional payment for the car itself (i.e., a 'silver lining'). The transaction value may be expanded to include two components (separating the evaluations of the car offer and the add-on offer completely is less sensible because the add-on system is contingent on the car).

Richard Thaler contributed to the revelation, understanding and assessment of implications of additional cognitive and behavioural phenomena that do not stand in line with rationality in the economic sense. At least some of those phenomena have direct implications in the context of mental accounting.

One of the most widely acknowledged of these phenomena by now is the endowment effect. It is the recognition that people value an object (product item) already in their possession more than the option of acquiring the same object. In other words, the monetary compensation David would be willing to accept to give up a good he holds is higher than the amount he would agree to pay to acquire it: people fundamentally find it difficult to give up something they own or are endowed with (no matter how they originally obtained it). This effect has been most famously demonstrated with mugs, but for generalisation it was also tested with other items, such as pens. The effect may well enter consumers' considerations when they try to sell much more expensive properties, like a car or an apartment, beyond the aim of making a financial gain. In his latest book on behavioural economics, 'Misbehaving', Thaler provides a friendly explanation with a graphic illustration of why, due to the endowment effect, fewer exchange transactions occur between individuals who receive a mug and those who do not than economic theory would predict (3).

Another important issue of interest to Thaler is fairness, such as when it is fair or acceptable to charge consumers a higher price for an object that is in shortage or hard to obtain (e.g., shovels for clearing snow on the morning after a snow storm). Notably, the perception of "fairness" may be moderated by whether the rise in price is framed as a reduction in gain (e.g., a discount of $200 from the list price being cancelled for a car in short supply) or as an actual loss (e.g., an explicit increase of $200 above the list price); the change in actual price is more likely to be perceived as acceptable in the former case than in the latter (4). He further investigated fairness games (e.g., Dictator, Punishment and Ultimatum). Additional noteworthy topics he studied are susceptibility to sunk costs and self-control.

  • More topics studied by Thaler can be traced by browsing his long list of papers over the years since the 1970s, and perhaps more leisurely through his illuminating book: “Misbehaving: The Making of Behavioural Economics” (2015-16).

The tactics of nudging, as part of choice architecture, are based on lessons from the anomalies and biases in consumers' judgement and decision-making procedures studied by Thaler himself and others in behavioural economics. Thaler and Sunstein looked for ways to guide or lead consumers to make better choices for their own good (health, wealth and happiness) without attempting to reform or alter their rooted modes of thinking and behaviour, an attempt that would most probably be doomed to failure. Their clever idea was to work within the boundaries of human behaviour, modifying it just enough, and in a predictable way, to put consumers on a better track to a choice decision. Nudging could mean diverting a consumer from his or her routine way of making a decision to arrive at a different, expectedly better, choice outcome. It is likely to involve taking a consumer out of his or her 'comfort zone'. Critically, however, Thaler and Sunstein stipulated in their book 'Nudge' that: "To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates." Accordingly, nudging techniques should not impose on consumers the choice of any designated or recommended options (5).

Six categories of nudging techniques are proposed: (1) defaults; (2) expect errors; (3) give feedback; (4) understand "mappings"; (5) structure complex choices; and (6) incentives. With any of these techniques, the intention is to allow policy makers to direct consumers to choices that improve the consumers' state. Yet the approach of 'libertarian paternalism' that they advocate has not been received without contention: while it is libertarian, in that no choice is coerced, a question remains as to what gives an agency or policy maker the wisdom and the right to determine which options would make consumers better off (e.g., health plans, saving and investment programmes). Thaler and Sunstein discuss the implementation of nudging mostly in the context of public policy (i.e., by government agencies), but these techniques are just as applicable to the plans and policies of private agencies or companies (e.g., banks, telecom service providers, retailers in their physical and online stores). Nevertheless, public agencies, and even more so business companies, should devise and apply any measures of nudging to help consumers choose the plans that are better for them and fit them well; nudging is not for manipulating consumers or taking advantage of their human errors and biases in judgement and decision-making.

Richard Thaler reviews and explains in his book "Misbehaving" the phenomena and issues he has studied in behavioural economics through the story of his rich research career; it is an interesting, lucid and compelling story. He tells in a candid way about the stages he has gone through in his career. Most conspicuously, this story also reflects the obstacles and resistance that behavioural economists faced for at least 25-30 years.

Congratulations to Professor Richard Thaler, and to the field of behavioural economics, to which he has contributed so substantially in theory and in application.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Toward a Positive Theory of Consumer Choice; Richard H. Thaler, 1980/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 15: pp. 269-287], Cambridge University Press. (Originally published in Journal of Economic Behaviour and Organization.)

(2) Mental Accounting and Consumer Choice; Richard H. Thaler, 1985; Marketing Science, 4 (3), pp. 199-214.

(3) Misbehaving: The Making of Behavioural Economics; Richard H. Thaler, 2016; UK: Penguin Books (paperback).

(4) Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias; Daniel Kahneman, Jack L. Knetsch, & Richard H. Thaler, 1991/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 8: pp. 159-170], Cambridge University Press. (Originally published in Journal of Economic Perspectives).

(5) Nudge: Improving Decisions About Health, Wealth, and Happiness; Richard H. Thaler and Cass R. Sunstein, 2009; UK: Penguin Books (updated edition).



The discipline of consumer behaviour is by now well versed in the distinction between System 1 and System 2 modes of thinking, relating in particular to consumer judgement and decision making, with implications for marketing and retail management. Much gratitude is owed to Nobel Prize laureate in economics Daniel Kahneman for bringing the concept of these thinking systems to the knowledge of the wider public (i.e., beyond academics) in his book "Thinking, Fast and Slow" (2012). 'System 1' and 'System 2' were identified and elaborated by psychologists before Kahneman's book, though not always under these labels, as the author himself notes. However, Kahneman succeeds in crystallising the concepts of these different modes of thinking while linking them to phenomena studied in his own previous research, most notably in collaboration with the late Amos Tversky.

In a nutshell: System 1’s type of thinking is automatic, associative and intuitive; it tends to respond quickly, but consequently it is at higher risk of jumping to wrong conclusions. It is the ‘default’ type of thinking that guides human judgement, decisions and behaviour much of the time. On the other hand, System 2’s type of thinking is deliberative, logical, critical, and effortful; it involves deeper concentration and more complex computations and rules. System 2 has to be called to duty voluntarily, activating rational thinking and careful reasoning. Whereas thinking represented by System 1 is fast and reflexive, that of System 2 is slow and reflective.

Kahneman describes and explains the role, function and effect of System 1 and System 2 in various contexts, situations and problems. In broad terms, thinking of the System 1 type comes first; System 2 either passively adopts the impressions, intuitive judgements and recommendations of System 1 or actively kicks in for a more orderly examination and correction (alas, it tends to be lazy, in no hurry to volunteer). Just to give a taste, below is a selection of situations and problems in which Kahneman demonstrates the important differences between these two modes of thinking, how they operate and the outcomes they effect:

  • Illusions (e.g., visual, cognitive)
  • Use of memory (e.g., computations, comparisons)
  • Tasks requiring self-control
  • Search for causal explanations
  • Attending to information ("What You See Is All There Is")
  • Sets and prototypes (e.g., 'average' vs. 'total' assessments)
  • Intensity matching
  • 'Answering the easier question' (simplifying by substitution)
  • Predictions (also see correlation and regression, intensity matching, representativeness)
  • Choice in opt-in and opt-out framing situations (e.g., organ donation)
  • Note: In other contexts presented by Kahneman (e.g., the validity illusion [stock-picking task], choice under Prospect Theory), the author does not connect them explicitly to System 1 or System 2, so their significance may only be indirectly implied by the reader.

In order to gain a deeper understanding of System 1 and System 2, we should inspect the detailed aspects differentiating these thinking systems. The concept of the two systems actually emerges from binding multiple dual-process theories of cognition together, thus appearing to be a larger, cohesive theory of modes of thinking. Each dual-process theory usually focuses on a particular dimension that distinguishes between two types of cognitive processes the human mind may utilise. However, those dimensions 'correlate' or 'co-occur', and a given theory often adopts aspects from other, similar theories or adds supplementary properties; the dual-system conception is hence built on this convergence. The aspects or properties used to describe the process in each type of system are extracted from those dual-process theories. A table presented by Stanovich (2002) helps to see how System 1 and System 2 contrast in various dual-process theories. Some of those theories are (for brevity, S1 and S2 are used below to refer to each system):

  • S1: Associative system / S2: Rule-based system (Sloman)
  • S1: Heuristic processing / S2: Analytic processing (Evans)
  • S1: Tacit thought process / S2: Explicit thought process (Evans and Over)
  • S1: Experiential system / S2: Rational system (Epstein)
  • S1: Implicit inference / S2: Explicit inference (Johnson-Laird)
  • S1: Automatic processing / S2: Controlled processing (Shiffrin and Schneider)

Note: Evans and Wason related to Type 1 vs. Type 2 processes already in 1976.

  • Closer to consumer behaviour: Central processing versus peripheral processing in the Elaboration Likelihood Model (Petty, Cacioppo & Schumann) posits a dual-process theory of routes to persuasion.

Each dual process theory provides a rich and comprehensive portrayal of two different thinking modes. The theories complement each other but they do not necessarily depend on each other. The boundaries between the two types of process are not very sharp, that is, features of the systems are not all exclusive in the sense that a particular property associated with a process of System 1 may occur in a System 2 process, and vice versa. Furthermore, the processes also interact with one another, particularly in a way where System 2 relies on products of thought from System 1, either approving them or using them as a starting-point for further analysis. Nevertheless, occasionally System 2 may generate reasons for us merely to justify a choice made by System 1 (e.g., a consumer likes a product for the visual appearance of its packaging or its design).

Stanovich follows the table of theories with a comparison of properties describing System 1 versus System 2 as derived from a variety of dual process theories, but without attributing them to any specific theory (e.g., holistic/analytic, relatively fast/slow, highly contextualized/decontextualized). Comparative lists of aspects or properties have been offered by other researchers as well. Evans (2008) formed a comparative list of more than twenty attributes which he divided into four clusters (describing System 1/System 2):

  • Cluster 1: Consciousness (e.g., unconscious/conscious, automatic/controlled, rapid/slow, implicit/explicit, high capacity/low capacity)
  • Cluster 2: Evolution (e.g., evolutionary old/recent, nonverbal/linked to language)
  • Cluster 3: Functional characteristics (e.g.,  associative/rule-based, contextualized/abstract, parallel/sequential)
  • Cluster 4: Individual differences (universal/heritable, independent of/linked to general intelligence, independent of/limited by working memory capacity).

Listings of properties collated from different sources (models, theories), interpreted as integrative profiles of System 1 and System 2 modes of thinking, may yield a misconception of the distinction between the two systems as representing an over-arching theory. Evans questions whether it is really possible and acceptable to tie the various theories of different origins under a common roof, suggested as an over-arching cohesive theory of two systems (he identifies problems residing mainly with ‘System 1’). It could be more appropriate to approach the dual-system presentation as a paradigm or framework to help one grasp the breadth of aspects that may distinguish between two types of cognitive processes and obtain a more comprehensive picture of cognition. The properties are not truly required to co-occur altogether as constituents of a whole profile of one system or the other. In certain domains of judgement or decision problems, a set of properties may jointly describe the process entailed. Some dual process theories may take different perspectives on a similar domain, and hence the aspects derived from them are related and appear to co-occur.

  • Evans confronts a more widely accepted ‘sequential-interventionist’ view (as described above) with a ‘parallel-competitive’ view.

People use a variety of procedures and techniques to form judgements, make decisions or perform any other kind of cognitive task. Stanovich relates the structure, shape and level of sophistication of the mental procedures or algorithms of thought humans can apply, to their intelligence or cognitive capacity, positioned at the algorithmic level of analysis. Investing more effort in more complicated techniques or algorithms entailed in rational thinking is a matter of volition, positioned at the intentional level (borrowed from Dennett’s theorizing on consciousness).

However, humans do not engage, much of the time, in thought close to the full extent of their cognitive capacity (e.g., in terms of depth and efficiency). According to Stanovich, we should distinguish between cognitive ability and thinking dispositions (or styles). The styles of thinking a person applies do not necessarily reflect everything one is cognitively capable of. Put succinctly, the fact that a person is intelligent does not mean that he or she has to think and act rationally; one has to choose to do so and invest the required effort in it. When one does not, the door opens for smart people to act stupidly. Furthermore, the way a person is disposed to think is most often selected and executed unconsciously, especially when the thinking disposition or style is relatively fast and simple. Cognitive styles entailed in System 1, characterised as intuitive, automatic, associative and fast, serve to ease the cognitive strain on the brain, and they are most likely to occur unconsciously or preconsciously. Still, being intuitive and using heuristics need not imply that a person will end up acting stupidly; some would argue that his or her intuitive decision could be more sensible than one made when trying to think rationally. It may depend on how thinking in the realm of System 1 happens: if one rushes while applying an inappropriate heuristic or relying on an unfitting association, he or she becomes more likely to act stupidly (or plainly, to 'be stupid').

Emotion and affect are more closely linked to System 1. Yet emotion should not be viewed simply as a disruptor of rationality. As proposed by Stanovich, emotions may fulfil an important adaptive regulatory role: serving as interrupt signals necessary to achieve goals, avoiding entanglement in complex rational thinking that only keeps one away from a solution, and reducing a problem to manageable dimensions. In some cases emotion does not disrupt rationality but rather helps one choose when it is appropriate and productive to apply a rational thinking style (e.g., use an optimization algorithm, initiate counterfactual thinking). By switching between the two modes of thinking described as System 1 and System 2, one has the flexibility to choose when and how to reason or be rational, and emotion may play the positive role of a guide.

The dual-system concept provides a way of looking broadly at the cognitive processes that underlie human judgement and decision making. System 1's mode of thinking is particularly adaptive in that it allows a consumer to quickly sort out large amounts of information and navigate through complex and changing environments. System 2's mode of thinking is the 'wise counselor' that can be called upon to analyse a situation more deeply and critically and provide a 'second opinion', like an expert; however, it intervenes 'on request', when it receives persuasive signals that its help is required. Considering the aspects that distinguish these two modes of thinking can help marketing and retail managers better understand how consumers conduct themselves and cater to their needs, concerns, wishes and expectations. This viewpoint can especially help, for instance, in the area of 'customer journeys': studying how thinking styles direct or lead the customer or shopper through a journey (including emotional signals), anticipating reactions, and devising methods that can alleviate conflicts and reduce friction in interactions with customers.

Ron Ventura, Ph.D. (Marketing)

References:

(1)  Thinking, Fast and Slow; Daniel Kahneman, 2012; Penguin Books.

(2) Rationality, Intelligence, and Levels of Analysis in Cognitive Science (Is Dysrationalia Possible); Keith E. Stanovich, 2002; in Why Smart People Can Be So Stupid (Robert J. Sternberg editor)(pp. 124-158), New Haven & London: Yale University Press.

(3) Dual-Processing Accounts of Reasoning, Judgment and Social Cognition; Jonathan St. B. T. Evans, 2008; Annual Review of Psychology, 59, pp. 255-278. (Available online at psych.annualreviews.org, doi: 10.1146/annurev.psych.59.103006.093629).


One of the more difficult and troublesome decisions in brand management arises when entering a product category that is new to the company: whether to start up a new brand for the product or to endow it with the identity of an existing brand, that is, extending a company's established brand from its original product category to a product category of a different type. The first question that would probably pop up is "how different is the new product?", acting as a prime criterion for judging whether the parent-brand fits the new product.

Nevertheless, the choice is not completely 'black or white', since intermediate solutions are possible through the intricate hierarchy of brand (naming) architecture. But focusing on the two more distinct strategic branding options above helps to see more clearly the different risk and cost implications of launching a new product brand versus using the name of an existing brand from an original product category. Notably, manufacturers, retailers and consumers all perceive risks, albeit each party from a different perspective given its role.

  • Note: Brand extensions represent the transfer of a brand from one type of product to a different type, to be distinguished from line extensions that pertain to the introduction of variants within the same product category (e.g., flavours, colours).

This is a puzzling marketing and branding problem from an academic perspective as well. Multiple studies have attempted, in different ways, to identify the factors that best explain or account for successful brand extensions. While this stream of research helpfully points to major factors, some more commonly agreed upon than others, a gap remains between the sorts of extensions predicted to succeed according to the studies and the extensions performed by companies that actually succeed or fail in the markets. A plausible reason for missing the outcomes of actual extensions, as argued by the researchers Milberg, Sinn, and Goodstein (2010), is neglect of the competitive settings in the categories that are the target of brand extension (1).

Perhaps one of the most famous examples of an audacious brand extension is the case of Virgin (UK), which moved from music to cola (drink), airline, train transport, and mobile communication (ironically, the origin of the brand in Virgin Music has since been given up). The success of Virgin's distant extensions is commonly attributed to the personal character of Richard Branson, the entrepreneur behind the brand: his boldness, initiative, willingness to take risks, and adventurism. These traits seem to have transferred to his business activities and helped to make the extensions more credible and acceptable to consumers.

Another good example relates to Philips (originating in the Netherlands). Starting from lighting (bulbs, now mostly LED), the brand extended over the years to personal care (e.g., face shavers for men, hair removal for women), sound and vision (e.g., televisions, DVD and Blu-ray players, originally radio sets), PC products, tablets and phones, and more. Still, looking overall at the different products, systems and devices sharing the Philips brand, they can mostly be linked as members of a broad category of 'electrics and electronics', a primary competence of the company. As the company grew over time and launched more types of products while advancing with technology, the Philips brand came to be perceived as having greater experience and a good record in brand extensions, which could facilitate market acceptance of further extensions to additional products.

  • In the early days, from the 1930s to the 1950s, radio and TV sets relied on vacuum tubes for operation, later moving to electronic circuits with transistors or digital components. Hence, historically there was an apparent physical-technological connection between those products and the brand’s origin in light bulbs; such a connection is much harder to find now between category extensions, except for the broad category linkage suggested above.

Academic research has examined a range of ‘success factors’ of brand extensions, such as: perceived quality of the parent-brand; fit between the parent-brand and the extension category; degree of difficulty in making an extension (challenge undertaken); parent-brand conviction; parent-brand experience; marketing support; retailer acceptance; perceived risk (for consumers) in adopting the brand extension; consumer innovativeness; consumer knowledge of the parent-brand and category extension; the stage of entry into another category (i.e., as an early or a late entrant). The degree of fit of the parent-brand (and original product) with the extension category is revealed as the most prominent factor contributing to better acceptance and evaluation (e.g., favourability) of the extension in consumer studies.

Aaker and Keller specified in a pioneering article (1990) two requirements for fit: (a) the extension product category is a direct complement or a substitute of the original category; (b) the company, with its people and facilities, is perceived as having the knowledge and capability of manufacturing the product in the extension category. These requirements reflect a similarity between the original and extension product categories that is necessary for successful transfer of a favourable attitude towards the brand to the extension product type (2). A successful transfer of attitude may occur, however, also if the parent-brand has values, purpose or image that seem relevant to the extension product category, even when the technological linkage is less tight or apparent (as the case of Virgin suggests).

  • Aaker and Keller found that fit, based especially on competence, stands out as a contributing factor to higher consumer evaluation (level of difficulty is a secondary factor while perceived quality plays more of a ‘mediating’ role).

Volckner and Sattler (2006) worked to sort out the contributions of ten factors, retrieved from the academic literature, to the success of brand extensions; the relations were refined with the aid of expert advice from brand managers and researchers (3). Contribution was assessed in their model in terms of (statistical) significance and relative importance. The researchers found fit to be the most important factor driving (perceived) brand extension success in their study, followed by marketing support, parent-brand conviction, retail acceptance, and parent-brand experience. The complete model tested for more complex structural relationships represented through mediating and moderating (interacting) factors (e.g., the effect of marketing support on extension success ‘passes’ through fit and retailer acceptance).
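The structural model itself is beyond the scope of this post, but the core idea of a mediated effect (e.g., marketing support influencing extension success ‘through’ fit) can be illustrated with a minimal sketch. The data below are simulated and the coefficients are hypothetical, not taken from Volckner and Sattler; the sketch only shows how a simple decomposition of total, direct and indirect effects works with ordinary least squares.

```python
import numpy as np

# Hypothetical simulated data: marketing support -> fit -> extension success
rng = np.random.default_rng(0)
n = 2000
support = rng.normal(size=n)                                # marketing support (standardised)
fit = 0.6 * support + rng.normal(size=n)                    # fit partly driven by support
success = 0.7 * fit + 0.1 * support + rng.normal(size=n)    # success: mostly via fit

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([support], fit)[1]                     # path: support -> fit
b, direct = ols([fit, support], success)[1:3]  # paths: fit -> success, support -> success
total = ols([support], success)[1]             # total effect of support on success

# For linear OLS on the same sample the decomposition is exact: total = direct + a*b
print(f"total={total:.3f}, direct={direct:.3f}, indirect={a * b:.3f}")
```

The indirect effect (a·b) dominating the direct path is what ‘passing through’ a mediator means in such models; moderation would instead add an interaction term.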

For brand extensions to be accepted by consumers and garner a positive attitude, consumers should recognise a connectedness or linkage between the parent-brand and the category extension. The fit between them can be based on attributes of the original and extension types of product or on a symbolic association. Keller and Lehmann (2006) conclude in this respect that “consumers need to see the proposed extension as making sense” (emphasis added). They identify product development, applied via brand (and line) extensions, as a primary driver of brand growth, thereby adding to parent-brand equity. Parent-brands do not tend to be damaged by unsuccessful brand extensions; yet the authors point to circumstances where greater fit may result in a negative effect on the parent-brand, and inversely where joining a new brand name with the parent-brand (as its endorser) may protect the parent-brand from adverse outcomes of extension failure (4).

When assessing the chances of success of a brand extension, it is nevertheless important to consider which brands are already present in the extension category that a company is about to enter. Milberg, Sinn, and Goodstein claim that this factor has not received enough attention in research on brand extensions. In particular, one has to take into account the strength of the parent-brand relative to the competing brands incumbent in the target category. As a starting point for entering the extension category, they chose to focus on how familiar consumers are with the competitor brands vis-à-vis the extending brand. Milberg and her colleagues proposed that a brand extension can succeed despite a worse fit with the category extension due to an advantage in brand familiarity, and vice versa. Consumer response to brand extensions was tested on two aspects: evaluation (attitude) and perceived risk (5).

First, it should be noted, the researchers confirm the positive effect of better fit on consumer evaluation of the brand extension when no competitors are considered. The better fitting extension is also perceived as significantly less risky than a worse fitting extension. However, Milberg et al. obtain supportive evidence that in a competitive setting, facing less familiar brands can improve the fortunes of a worse fitting extension, compared with being introduced in a noncompetitive setting: when the incumbent brands are less familiar relative to the parent-brand, the evaluation of the brand extension is significantly higher (more favourable) and purchasing its product is perceived as less risky than if no competition is referred to.

  • A reverse outcome is found in the case of better fit where the competitor brands are more highly familiar: A disadvantage in brand familiarity can dampen the brand extension evaluation and increase the sense of risk in purchasing from the extended brand, compared with a noncompetitive setting.

The two studies performed show how considering differences in brand familiarity can change the picture of the effect of brand extension fit from that often found when competing brands in the extension category are not accounted for.

When comparing different competitive settings, the research findings provide more qualified support, though in the direction expected by Milberg and colleagues. The conditions tested entailed a trade-off between (a) a worse fitting brand extension competing with less familiar brands; and (b) a better fitting brand extension competing with more familiar brands. In regard to competitive settings:

The first study showed that the evaluation of a worse fitting extension competing with relatively unfamiliar brands is significantly more favourable than that of a better fitting extension facing more familiar brands. Furthermore, the product of the worse fitting brand extension is preferred more frequently over its competition than the better fitting extension product is (chosen by 72% vs. 6%, respectively). Also, purchasing a product from the worse fitting brand extension is perceived as significantly less risky compared with the better fitting brand. These results indicate that the relative familiarity of the incumbent brands an extension faces can be more consequential for its odds of success than its degree of fit.

The second study aimed to generalise the findings to different parent-brands and product extensions. It challenged the brand extensions with somewhat more difficult conditions: it included categories that are all relevant to the respondents (students), so competitor brands in the extension categories were also relatively more familiar to them than in the first study. The researchers acknowledge that the findings are less robust with respect to comparisons of the contrasting competitive settings. Evaluation and perceived risk related to the worse fitting brand competing with less familiar brands are equivalent to those of the better fitting brand extension facing more familiar brands. The gap in choice shares is reduced, though in this case it is still statistically significant (45% vs. 15%, respectively). Facing less familiar brands may not improve the response of consumers to the worse fitting brand extension (i.e., not overcoming the effect of fit), but at least it is in as good a position as the better fitting brand extension competing in a more demanding setting.
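The article’s exact cell sizes are not reproduced here, but the claim that a 45% vs. 15% gap in choice shares is statistically significant can be checked with a standard two-proportion z-test. The group sizes below (100 respondents per condition) are an assumption for illustration only, not the study’s actual samples.

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-statistic using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 45% vs. 15% choice shares, assuming (hypothetically) 100 respondents per group
z = two_prop_z(45, 100, 15, 100)
print(f"z = {z:.2f}")  # well above the 1.96 threshold for p < .05, two-tailed
```

Even with such modest group sizes, a 30-point gap in shares is comfortably significant; the study’s reported significance is therefore unsurprising.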

  • Perceived risk intervenes in a more complicated relationship as a mediator of the effect of fit on brand extension evaluation, and also in mediating the effect of relative familiarity in competitive settings. Mediation implies, for example, that a worse fitting extension evokes greater risk which is responsible for lowering the brand extension evaluation; consumers may seek more familiar brands to alleviate that risk.

A parent-brand can assume an advantage in an extension category even though it encounters brands that are familiar within that category, and may even be considered experts in the field: if the extending brand leads its original category and is better known beyond it, this can give it leverage over incumbents that are more ‘local’ or specific to the extension category. For example, it would be easier for Nikon, a leading brand of cameras, to extend to binoculars (better fit), where it meets brands like Bushnell and Tasco, than to extend to scanners (also better fit), where it has to face brands like HP and Epson. In the case of worse fitting extensions, it could matter for Nikon whether it extends to CD players and competes with Sony and Pioneer or extends to laser pointers and faces Acme and Apollo; in the latter case it may enjoy the kind of leverage that can overcome a worse fit. (Product and brand examples are borrowed from Study 1.) Further research may enquire whether this would work better for novice consumers than for experts. Milberg, Sinn and Goodstein recommend considering additional characteristics on which brands may differ (e.g., attitude, image, country of origin), suggesting more potential bases of strength.

Entering a new product category is often a difficult challenge for a company, and choosing the more appropriate branding strategy for launching the product can be even more delicate and consequential. If the management chooses to make a brand extension, it should consider aspects of the relative strength of its parent-brand, such as familiarity, against the incumbent brands of the category it plans to enter, in addition to a variety of other characteristics of the product types and its brand identity. However, managers can also take advantage of intermediate solutions in brand architecture that combine a new brand name with an endorsement by an established brand (e.g., a higher-level brand for a product range). Choosing the better branding strategy may be helped by a better understanding of the differences and relations (e.g., hierarchy) between product categories as perceived by consumers.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. Consumer Reactions to Brand Extensions in a Competitive Context: Does Fit Still Matter?; Sandra J. Milberg, Francisca Sinn, & Ronald C. Goodstein, 2010; Journal of Consumer Research, 37 (October), pp. 543-553.

2.  Consumer Evaluations of Brand Extensions; David A. Aaker and Kevin L. Keller, 1990; Journal of Marketing, 54 (January), pp. 27-41.

3.  Drivers of Brand Extension Success; Franziska Volckner and Henrik Sattler, 2006; Journal of Marketing, 70 (April), pp. 18-34.

4. Brands and Branding: Research Findings and Future Priorities; Kevin L. Keller and Donald R. Lehmann, 2006; Marketing Science, 25 (6), pp. 740-759.

5. Ibid. 1.

Read Full Post »

A new film this year, “Sully”, tells the story of US Airways Flight 1549 that landed safely onto the water surface of the Hudson River on 15 January 2009 following drastic damage to the plane’s two engines. This article is specifically about the decision process of Captain Chesley (Sully) Sullenberger, with the backing of his co-pilot (first officer) Jeff Skiles; the film helps to highlight some instructive and interesting aspects of human judgement and decision-making in an acute crisis situation. Furthermore, the film shows how those cognitive processes contrast with computer algorithms and simulations and why the ‘human factor’ must not be ignored.

There were altogether 155 people on board the Airbus A320 aircraft on its flight 1549 from New York to North Carolina: 150 passengers and five crew members. The story unfolds whilst following Sully in the aftermath of the incident during the investigation of the US National Transportation Safety Board (NTSB), which he was facing together with Skiles. The film (directed by Clint Eastwood, featuring Tom Hanks as Sully and Aaron Eckhart as Skiles, 2016) is based on Sullenberger’s autobiographical book “Highest Duty: My Search for What Really Matters” (2009). Additional resources such as interviews and documentaries were also used in the preparation of this article.

  • The film is excellent, recommended for its way of delivering the drama of the story during and after the flight, and for the acting of the leading actors. A caution to those who have not seen the film: the article includes some ‘spoilers’. On the other hand, facts of this flight and the investigation that followed were essentially known before the film.

This article is not explicitly about consumers, although the passengers, as customers, were obviously directly affected by the conduct of the pilots as it saved their lives. The focus, as presented above, is on the decision process of Captain Sullenberger. We may expect that such an extraordinarily positive outcome of the flight, rescued from a dangerous circumstance, would have a favourable impact on the image of the airline US Airways that employs such talented flight crew members. But improving corporate image or customer service and relationships were not the relevant considerations during the flight; saving lives was.

Incident Schedule: Less than two minutes after take-off (at ~15:27) a flock of birds (Canada geese) struck both engines of the aircraft. It is vital to realise that from that moment, the flight lasted less than four minutes! The captain took control of the plane from his co-pilot immediately after the impact with the birds, and then had between 30 seconds and one minute to make a decision where to land. Next, just 151 seconds passed from the impact with the birds until the plane was approaching right above the Hudson river for landing on the water. Finally, impact with the water occurred 208 seconds after the impact with the birds (at ~15:30).

Using Heuristics: The investigators of the NTSB told Sully (Hanks) about flight calculations performed in their computer simulations, and argued that according to the simulation results landing on the Hudson river, a highly risky type of crash-landing, had not been inevitable. In response, Sully said that it had been impossible for himself and Skiles to perform all those detailed calculations during the four minutes of the flight after the birds hit the aircraft’s engines; he was relying instead on what he saw with his eyes in front of him: the course of the plane and the terrain below them as the plane was gliding with no engine power.

The visual guidance Sully describes as using to navigate the plane resembles a type of ‘gaze heuristic’ identified by professor Gerd Gigerenzer (1). In the example given by Gigerenzer, a player who tries to catch a ball flying in the air does not have time to calculate the trajectory of the ball, considering its initial position, speed and angle of projection. Moreover, the player would also have to take into account wind, air resistance and ball spin. The ball would be on the ground by the time the player completed the necessary estimations and computation. An alternative intuitive strategy (heuristic) is to ‘fix gaze on the ball, start running, and adjust one’s speed so that the angle of gaze remains constant’. The situation of the aircraft flight is of course different, more complex and perilous, but a similar logic seems to hold: navigating the plane safely towards the terrain surface (land or water) when there is no time for any advanced computation (the pilot’s gaze would have to be fixed on the terrain beneath, towards a prospective landing ‘runway’). Winter winds in New York City on that freezing day probably made the landing task even more complicated. But in those few minutes available to Sully, he found this type of ‘gaze’ or eyesight guiding rule the most practical and helpful.
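Gigerenzer’s ball-catching version of the heuristic can be sketched in a few lines of simulation. The numbers below (launch speed, player position) are arbitrary assumptions, and the player’s running-speed limits are ignored; the point is only that a player who keeps the gaze angle to the descending ball constant arrives at the landing spot with no trajectory computation at all.

```python
from math import atan2, tan, sqrt

G, DT = 9.8, 0.001  # gravity (m/s^2) and simulation time step (s)

# Ball trajectory (projectile, no air resistance), launched from (0 m, 2 m)
x_b, y_b, vx, vy = 0.0, 2.0, 8.0, 12.0

# Advance the ball to its apex; the heuristic is applied on the descending leg
while vy > 0:
    x_b += vx * DT; y_b += vy * DT; vy -= G * DT

player_x = 30.0                    # player starts beyond the apex
gaze = atan2(y_b, player_x - x_b)  # fix the gaze angle at this moment

# Descent: each step, the player moves so the gaze angle stays constant,
# i.e. keeps distance-to-ball equal to height / tan(gaze)
while y_b > 0:
    x_b += vx * DT; y_b += vy * DT; vy -= G * DT
    if y_b > 0:
        player_x = x_b + y_b / tan(gaze)

# Where the ball actually lands (closed form, for checking the heuristic)
t_land = (12.0 + sqrt(12.0**2 + 2 * G * 2.0)) / G
x_land = 8.0 * t_land

print(f"player at {player_x:.2f} m, ball lands at {x_land:.2f} m")
```

The player never solves the projectile equations, yet ends up essentially at the landing point, which is the appeal of such a heuristic under time pressure.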

Relying on Senses: Sullenberger made extensive use of his senses (visual, auditory, olfactory) to collect every piece of information he could get from his surrounding environment. To start with, the pilots could see the birds coming in front of them right before some of them crashed into the engines; this evidence was crucial to identifying instantly the cause of the problem, though they still needed some time to assess the extent of the damage. In an interview on CBS’s programme 60 Minutes (with Katie Couric, February 2009), Sully says that he saw the smoke coming out from both engines, smelled the burned flesh of the birds, and subsequently heard a hushing noise from the engines (i.e., made by the remaining blades). He could also feel the trembling of the broken engines. This multi-modal sensory information contributed to convincing him that the engines were lost (i.e., unable to produce thrust), in addition to the failure to restart them. Sully also utilised throughout that time information from the various meters and gauges on the cockpit dashboard in front of him (while Skiles was reading to him from the manuals). The captain was thus attentive to multiple visual stimuli (including and beyond using a visual guidance heuristic) in his decision process, from early judgement to action on his decision to land on the water of the Hudson river.

Computer algorithms can ‘pick up’ and process all the technical information of the aircraft displayed to the pilots in the cockpit. The algorithms may also apply in their computations additional measurements (e.g., climate conditions) and perhaps data from sensors installed in the aircraft. But computer algorithms cannot ‘experience’ the flight event like the pilots. Sully could ‘feel the aircraft’, almost simultaneously and rapidly perceive the sensory stimuli he received in the cockpit, within and outside the cabin, and respond to them (e.g., make judgement). Information available to him seconds after the impact with the birds gave him indications about the condition of the engines that the algorithms used in the simulations could not receive. That point was made clear in the dispute that emerged between Sully and the investigating committee with regard to the condition of one of the engines. The investigators claimed that early tests and simulations suggested one of the engines was still functioning and could have allowed the pilots to bring the plane to land at one of the nearby airports (returning to La Guardia or diverting to Teterboro in New Jersey). Sully (Hanks) disagreed and argued that his indications were clear that the engine referred to was badly damaged and non-functional; both engines had no thrust. Sully was proven right: the committee eventually reported that missing parts of the disputed engine had been found, showing that the engine was indeed non-functional and disproving the early tests.

Timing and the Human Factor: Captain Sullenberger furthermore had a strong argument with the investigating committee of the NTSB about their simulations attempting to reconstruct or replicate the sequence of events during the flight. The committee argued that pilots in a flight simulator ‘virtually’ made a successful landing at both La Guardia and Teterboro airports when the simulator computer was given the data of the flight. Sully (Hanks) found a problem with those live but virtual simulations. The flight simulation was flawed because it assumed the pilots could immediately know where it was possible to land, and they were instructed to do so. Sully and Skiles indeed knew immediately the cause of the damage but still needed time to assess its extent before Sully could decide how to react. Therefore, they could not actually have turned the plane towards one of those airports right after the bird impact as the simulating pilots did. The committee ignored the human factor, as argued by Sully, that had required him up to one minute to realise the extent of the damage and his decision options.

The conversation of Sully with the air controllers demonstrates his step-by-step assessments in real time that he could not make it to La Guardia or alternatively to Teterboro (both were effectively considered) before concluding that the aircraft might find itself in the water of the Hudson. Then the captain directed the plane straight above the river in approach for the crash-landing. One may also note how brief his response statements to the air controller were. Sully was confident that landing on the Hudson was “the only viable alternative”. He said so in his interview to CBS. In the film, Sully (Hanks) told Skiles (Eckhart) during a recuperating break outside the committee hall that he had no question left in his mind that they had done the right thing.

Given the strong resistance of Sully, the committee ordered additional flight simulations in which the pilots were “held” waiting for 35 seconds to account for the time needed to assess the damage before attempting to land anywhere. Following this minimum delay, the simulating pilots failed to land safely at either La Guardia or Teterboro. It was evident that those missing seconds were critical to arriving in time to land at those airports. Worse than that, the committee had to admit (as shown in the film) that the pilots needed multiple attempts (17) in their simulations before ‘landing’ successfully at those airports. The human factor of evaluation before making a sound decision in this kind of emergency situation must not be ignored.

Delving a little deeper into the event helps to realise how difficult the situation was. The pilots were trying to execute a three-part checklist of instructions. They were not told, however, that those instructions were written for a situation of loss of both engines at a much higher altitude than they were at just after completing take-off. The NTSB’s report (AAR-10-03) finds that the dual engine failure at a low altitude was critical: it allowed the pilots too little time to fulfil the existing three-part checklist. In an interview with Newsweek in 2015, Sullenberger said of that challenge: “We were given a three-page checklist to go through, and we only made it through the first page, so I had to intuitively know what to do.” The NTSB committee further accepts in its report that landing at La Guardia could have succeeded only if started right after the bird strike, but as explained earlier, that was unrealistic; importantly, they note the realisation made by Sullenberger that an attempt to land at La Guardia “would have been an irrevocable choice, eliminating all other options”.

The NTSB also commends Sullenberger in its report for operating the Auxiliary Power Unit (APU). The captain asked Skiles to try operating the APU after their failed attempt to restart the engines. Sully decided to take this action before they could reach the item on the APU in the checklist. Operating the APU was most beneficial, according to the NTSB, in maintaining electrical power on board.

Notwithstanding the judgement and decision-making capabilities of Sully, his decision to land on the waters of the Hudson river could have ended up miserably without his experience and skills as a pilot to execute it properly. He had 30 years of experience as a commercial pilot in civil aviation since 1980 (with US Airways and its predecessors), and before that had served in the US Air Force in the 1970s as a pilot of military jets (the F-4 Phantom). The danger in landing on water is that the plane may not come down parallel to the water surface; one of the wings might then hit the water, break up and cause the whole plane to capsize and break apart in the water (as happened in a flight in 1996). That Sully succeeded in safely “ditching” on the water surface is not obvious.

The performance of Sullenberger from decision-making to execution seems extraordinary. His judgement and decision capacity in these flight conditions may be exceptional; it is unclear if other pilots could perform as well as he has done. Human judgement is not infallible; it may be subject to biases and errors and succumb to information overload. It is not too difficult to think of examples of people making bad judgements and decisions (e.g., in finance, health etc.). Yet Sully has demonstrated that high capacity of human judgement and sound decision-making exists, and we can be optimistic about that.

It is not straightforward to extend conclusions from flying airplanes to other areas of activity. In one aspect, however, there can be some helpful lessons to learn from this episode in thinking more deeply and critically about the replacement of human judgement and decision-making with computer algorithms, machine learning and robotics. Such algorithms work best in familiar and repeated events or situations. But in new and less familiar situations, and in less ordinary and more dynamic conditions, humans are able to perform more promptly and appropriately. Computer algorithms can often be very helpful, but they are not always and necessarily superior to human thinking.

This kind of discussion is needed, for example, in respect to self-driving cars. It is a very active field in industry these days, connecting automakers with technology companies for installing autonomous computer driving systems in cars. Google is planning on creating ‘driverless’ cars without a steering wheel or pedals; their logic is that humans should not be involved anymore in driving: “Requiring a licensed driver be able to take over from the computer actually increases the likelihood of an accident because people aren’t that reliable” (2). This claim is excessive and questionable. We have to carefully distinguish between computer aid to humans and replacing human judgement and decision-making with computer algorithms.

Chesley (Sully) Sullenberger has allowed himself as the flight captain to be guided by his experience, intuition and common sense to land the plane safely and save the lives of all passengers and crew on board. He was wholly focused on “solving this problem” as he told CBS, the task of landing the plane without casualties. He recruited his best personal resources and skills to this task, and in his success he might give everyone hope and strength in belief in human capacity.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Gut Feelings: The Intelligence of the Unconscious”, Gerd Gigerenzer, 2007, Allen Lane (Penguin Books).

(2) “Some Assembly Required”, Erin Griffith, Fortune (Europe Edition), 1 July 2016.

 

Read Full Post »

Consumers form impressions of a product’s beauty or aesthetics from its visual appearance. They may also interpret physical features embedded in the product form (e.g., handles, switches, curvature) as cues for proper use of the product. But there is an additional, hidden layer of the design that may influence consumers’ judgement: the intention of the product designer(s). The intention could be an idea or a motive behind the design, what the designer wanted to achieve. However, intentions, only implicit in product appearance, may not be clear or easy to infer.

The intention of a designer may correspond to the artistic creativity of the product’s visual design (i.e., aesthetic appeal), its purpose and mode of use, and furthermore to extended symbolic meanings (e.g., social values, the self-image of target users). For a consumer, judgement could be a question of what one infers and understands from the product’s appearance, and how closely that matches the intention of the designer. For example, a consumer can make inferences from cues in the product form (e.g., of an espresso machine) about its appropriate function (e.g., how to insert a coffee capsule in order to make a drink); but a consumer may ask herself, is that the way the designer intended the product to be used? These inferences are interrelated and complementary in determining the ‘correct’ purpose, function or meaning of a product. There are original and innovative products for which such answers are more difficult to produce from appearance alone than for others.

  • Note: Colours and signs on the surface of a product may be informative in regard to function as well as symbolic associations of a product.

The researchers da Silva, Crilly and Hekkert (2015) investigated if and how consumers’ knowledge of designers’ intentions can influence their appreciation of the respective products. In acknowledgement that consumers are likely to derive varied inferences about intention (some of them mistaken) from visual images of products, the researchers presented verbal statements of intentions in addition to images. Moreover, their studies show that the verbal statements, explicitly informing consumer-respondents of designers’ intentions, contribute significantly to influencing (improving) consumers’ appreciation of products (1).

To begin with, consumers usually have different conceptions and understanding of design than professionals in the field. Thus, most consumers are not familiar with the terminology of the design domain (e.g., typicality/novelty, complexity, unity, harmony) and may use their own vocabulary to describe attributes of appearance; even when the same terms are used, they may not carry the same meaning or interpretation among designers and ordinary consumers (2). Nevertheless, consumers have innate tastes for design (e.g., based on principles of Gestalt), and with time they may develop better comprehension, appraisal skills, and refined preferences for the design of artefacts (as well as buildings, paintings, photographs, etc.). The preferences of individuals may progress as they develop greater design acumen and accumulate more experience in reacting to designed objects, while preferences may also be affected by one’s personality traits. Design acumen, in particular, pertains to the aptitude or approach of people to visual design, which may be characterised by quicker sensory connections, greater sophistication of preferences, and a stronger propensity for processing visual versus verbal information (3). The gaps prevailing between consumers and designers in domain knowledge and experience may cause divergences when making inferences directly about a product, as well as when ‘reading’ the designer’s intention from the product’s appearance.

The starting point of da Silva, Crilly and Hekkert posits that “the designer’s intention can intuitively be regarded as the essence of a product and that knowledge of this intention can therefore affect how that product is appreciated” (p. 22). The ‘essence’ describes how a product is supposed to behave or perform as foreseen by the designer; thinking about it by consumers can give them pleasure as much as perceiving the product’s features.

Appreciation in Study 1 is measured as a composite of five scale items (liking, beauty, attractiveness, pleasingness, and niceness); it is a form of ‘valence judgement’ but with a strong “flavour” of aesthetics, a seeming remnant of its origin as a scale of aesthetic appreciation adapted by the researchers to represent general product appreciation.

  • Note: The degree to which the researchers succeeded in expanding the meaning of ‘appreciation’ may have some bearing on the findings where respondents make judgements beyond aesthetics (e.g., the scale lacks an item on ‘usefulness’).

The researchers first established that knowledge of the designers’ explicit intentions, relating to 15 products in Study 1, influenced appreciation of the designed products for good or bad (i.e., in absolute terms) vis-à-vis appreciation based on pictures alone. They subsequently found support for an overall increase in appreciation (i.e., a positive effect) following exposure to explicit statements of the designers’ intentions.

A deeper examination of the results revealed, however, that for three products there was a more substantial improvement; for ten products a moderate or minor increase was found due to intention knowledge; and two products suffered a decrement in appreciation. Furthermore, the less a product was appreciated based only on its image, the more it could gain in appreciation after consumers were informed of the designer’s intention. Products do not receive higher post-appreciation merely because they were appreciated better in the first place. More conspicuously, for products that were more difficult to interpret and judge based on their visual image, knowledge of the designer’s intention could help consumer respondents better realise and appreciate their purpose and why they were designed in that particular way, considering both their visual appeal and function (but there is a qualification to that, explained later).

The second study examined reasons for changes in appreciation following being informed of designers’ intentions. Study 2 aimed to distinguish between appreciation that is due to appraisal of the intention per se and appreciation attributed to how well a product fulfills a designer’s intention, independent of whether a consumer approves of the intention itself. This study concentrated on three of the products used in Study 1, described briefly with their stated intentions (images included in the article):

  • A cross-cultural memory game (Product B) — The game “was designed with the aim of making the inhabitants of The Netherlands aware of their similarities instead of their differences” (i.e., comparing elements of Dutch and Middle Eastern cultures). [Product B gained the most in post-appreciation in Study 1.]
  • A partially transparent bag (Product C) — Things that are no longer in need, but are still in good condition, can be left in this bag on the street for anyone interested: “It was designed with the aim of enabling people to be generous towards strangers.” [Moderate gain.]
  • A “fitted-form” kitchen cupboard (Product G) — In this cupboard everyday products can be stored in fitted compartments according to their exact shapes. The designer’s intention said the product “was designed with the aim of helping people appreciate the comfortable predictability of daily household task”. [Product G gained the least in post-appreciation in Study 1.]

Consistent with Study 1, these three products were appreciated similarly and to a high degree based on images alone, and their appreciation increased to large, medium and small degrees after being informed of intentions. It is noted, however, that overall just half of respondents reported that knowing an intention changed how much they liked the respective product (about two-thirds for B, half for C, and a third for G). Subsequently respondents were probed about their reasons for changes in appreciation (liking) and specifically about their assessment of the product as means to achieve the stated intention. Three themes emerged as underlying the influence of intention knowledge on product appreciation: (a) perception of the product; (b) evaluation of the intention; and (c) evaluation of the product as a means to fulfill its intention (as explicitly queried).

Knowledge of the designer’s intention can change the way consumers perceive the product, its form and features. Firstly, it can make the product appear more interesting, such as by adding an element of surprise, an unexpected insight about its form (found especially for product B). In some cases it simply helps to comprehend the product’s form. The insight gained from knowing the designer’s intention may be expressed in revealing a new meaning of the product that improves appreciation (e.g., a more positive social ‘giving’ meaning of product C). But here is a snag — if the intention consumers are told of contradicts the meaning they assigned to the product when initially perceiving its image, it may instead decrease their appreciation. For example, the ‘form-fitted’ cupboard (G) may seem nicely chaotic, but if the way a consumer-participant interpreted it does not agree with the intention given by the designer (it ‘steals’ something from its attraction), the consumer becomes disappointed.

Upon being informed of the designer’s intention, a consumer may appreciate an idea or cause expressed in the intention itself (e.g., on merit of being morally virtuous, products B and C). The positive attitude towards the intention would then be transferred to the product (e.g., ‘helping people is a very beautiful thing’ in reference to C). On the downside, knowing an intention may push consumers away from a product (e.g., disliking the ‘predictability’ of one’s behaviour underlying product G). A product may thus gain or lose consumers’ favour in so far as the intention reflects on its essence.

But relying on a (declared) intention for the idea, cause or aim it conveys is not a sufficient criterion for driving appreciation higher or lower. Consumers also consider, as expected of them, whether the product is an able means to implement an idea or fulfill its aim. It is not just about what the designer intended to achieve but also how well the product was designed to achieve that goal. Participants in Study 2 were found to hold a product in favour for its capacity to fulfill its intended aim, even when they did not judge the intention itself as virtuous or worthy. There were also opposite cases where appreciation decreased but participants pointed out that the fault was not in the intention, rather in its implementation (e.g., “I think it’s a good idea [intention] but this [product C] won’t really work”). The authors suggest that participants use references in their judgements, including alternative known or imagined products which they believe to be more successful at fulfilling a similar aim, or alternative aims or causes they could think of as appropriate for the same product.

The researchers find evidence in participants’ explanations suggesting they see how efficiency can be beautiful (e.g., how materials are used optimally and aesthetically). They relate this notion to a design principle of obtaining ‘maximum-effect-from-minimum-means’. Participants also endorsed novel or unusual means to realise the intention behind a product. Hekkert defined the principle above as one of the goals to pursue for a pleasing design.  It means conveying more information through fewer and simpler features, creating more meanings through a single construct, and applying metaphors. Hekkert also recommended a sensible balance between typicality and novelty (‘most advanced, yet acceptable’) that will inspire consumers but not intimidate them (4).

  • This research was carried out as part of the Project UMA: “Unified Model of Aesthetics” for designed artefacts at the Department of Industrial Design, Delft University of Technology, The Netherlands. (See how the model depicts a balance in meeting safety needs versus accomplishment needs for aesthetic pleasure: connectedness-autonomy, unity-variety, typicality-novelty).

Knowledge of the intentions of designers can elucidate for consumers why a product was designed to appear and to be used in a particular way. It contributes motivation or cause (e.g., social solidarity, energy-saving) for obtaining and using the designed product. But the intention should be reasonable and agreeable to consumers, and the product design in practice has to convince consumers it is fit and capable to fulfill the intention. It is nevertheless desirable that the product is visually pleasing, as an object of aesthetic appeal and as a communicator of functional and symbolic meanings.

When marketers assess that consumers are likely to have greater difficulty interpreting a product’s visual design and inferring the intention behind it, they may wisely accompany a presentation of the product with a statement by the designer. This would apply, for instance, to innovative products, early products of their type, or original concepts for known products. The designer may introduce the design concept, his or her intention or aim, and perhaps how it was derived; this introduction may be delivered in text as well as video in assorted media as suitable (print, online, mobile). On the part of consumers, exposure to the designer’s viewpoint would enrich their shopping and purchasing experience, helping them to develop better-tuned visual impressions and judgements of products.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) How People’s Appreciation of Products Is Affected by Their Knowledge of the Designers’ Intentions; Odette da Silva, Nathan Crilly, & Paul Hekkert, 2015; International Journal of Design, 9 (2), pp. 21-33.

(2) How Consumers Perceive Product Appearance: The Identification of Three Product Appearance Attributes; Janneke Blijlevens, Marielle E.H. Creusen, & Jan P.L. Schoorman, 2009; International Journal of Design, 3 (3), pp. 27-35.

(3) Seeking the Ideal Form: Product Design and Consumer Response; Peter H. Bloch, 1995; Journal of Marketing, 59 (3), pp. 16-29.

(4) Design Aesthetics: Principles of Pleasure in Design; Paul Hekkert, 2006; Psychology Science, 48 (2), pp. 157-172.

Read Full Post »

There can hardly be a doubt that Internet users would be lost and unable to exploit the riches of information in the World Wide Web (WWW), and the Internet overall, without the aid of search engines (e.g., Google, Yahoo!, Bing). Anytime information is needed on a new concept or in an unfamiliar topic, one turns to a search engine for help. Users search for information for various purposes in different spheres of life — formal and informal education, professional work, shopping, entertainment, and others. While on some tasks the relevant piece of information can be quickly retrieved from a single source chosen from the results list, oftentimes a rushed search that relies on results in immediate sight is simply not enough.

And yet users of Web search engines, as revealed in research on their behaviour, tend to consider only results that appear on the first page (a page usually includes ten results). They may limit their search task even further by focusing on just the first “top” results that can be viewed on the screen, without scrolling down to the bottom of the first page. Users then also tend to proceed to view only a few webpages by clicking their links on the results list (usually up to five results)[1].

  • Research in this field is based mostly on analysis of query logs, but researchers also apply lab experiments and observation of users in-person while performing search tasks.     

Internet users refrain from going through results pages and stop short of exploring information sources located on subsequent pages that are nonetheless potentially relevant and helpful. It is important, however, to distinguish between search purposes, because looking farther than the first page is not necessary and beneficial for every type of search. Firstly, our interest is in a class of informational search whose purpose in general is to learn about a topic (other recognized categories are navigational search and transactional / resource search) [2]. Secondly, we may distinguish between a search for more specific information and a search for learning more broadly about a topic. The goal of a directed search is to obtain information regarding a particular fact or a list of facts (e.g., the UK’s prime minister in 1973, state secretaries of the US in the 20th century). Although we could likely find answers to such questions from a single source (e.g., Wikipedia) found on the first page of results, it is advisable to verify the information with a couple of additional sources; that usually would be sufficient. An undirected search, on the other hand, aims to learn more broadly about a topic (e.g., the life and work of architect Frank Lloyd Wright, online shopping behaviour). The latter type of search is our main focus, since in this case ending a search too soon can be the more damaging and harmful to our learning or knowledge acquisition [3]. This may also be true for other types of informational search identified by Rose and Levinson, namely advice seeking and obtaining a list of sources to consult [2].

With respect to Internet users especially in the role of consumers, and to their shopping activities, a special class of topical search is associated with learning about products and services (e.g., features and attributes, goals and uses, limitations and risks, expert reviews and advice). Negative consequences of inadequate learning in this case may be salient economically or experientially to consumers (though perhaps not as serious for our knowledge base compared with other domains of education).

The problem starts even before the stage of screening and evaluating information based on its actual content. That is, the problem is not one of selectively choosing sources that appear reliable or whose information seems relevant and interesting; nor is it one of selectively favouring information that supports our prior beliefs and opinions (i.e., a confirmation bias). The problem has to do with the tendency of people to consider and apply only the portion of information that is put in front of them. Daniel Kahneman pointedly labeled this human propensity WYSIATI — What You See Is All There Is — in his excellent book Thinking, Fast and Slow [4]. Its roots may be traced to the availability heuristic, the tendency of people to rely on the exemplars of a category presented, or on the ease of accessing the first category instances from memory, in order to make judgements about the frequency or probability of categories and events. The heuristic’s effect extends also to errors in assessing size (e.g., using only the first items of a data series to assess its total size or sum). However, WYSIATI is better viewed in the wider context of the distinction Kahneman explains and elaborates between what he refers to as System 1 and System 2.

System 1 is intuitive and quick to respond whereas System 2 is more thoughtful and deliberate. While System 2 is effortful, System 1 invests as little effort as possible to make a judgement or reach a conclusion. System 1 is essentially associative (i.e., it draws on quick associations that come to mind), but it consequently also tends to jump to conclusions. System 2, on the other hand, is more critical and specialises in asking questions and seeking further required information (e.g., for solving a problem). WYSIATI is due to System 1 and can be particularly linked with other possible fallacies related to this system of fast thinking (e.g., representativeness, reliance on ‘low numbers’ or insufficient data). However, the slow-thinking System 2 is lazy — it does not hurry to intervene, and even when it is activated on the call of System 1, often enough it only attempts to follow and justify the latter’s fast conclusions [5]. We need to enforce our will in order to make System 2 think harder and improve, where necessary, on poorly-based judgements made by System 1.

Several implications of WYSIATI when using a Web search engine become apparent. It is appealing to follow a directive which says: the search results you see are all there is. It is in the power of System 1 to tell users of a search engine: there is no need to look further — consider the links to search hits immediately accessible on the first page, preferably those seen on screen from the top of the page, perhaps scrolling down to its bottom. Users should pause to ask whether the information proposed is sufficient or they need to look for more input.

  • Positioning a “ruler” at the bottom of any page with page numbers and a Next button that searchers can click-through to proceed to additional pages (e.g., Google) is not helpful in this regard — such a ruler should be placed also at the top of a page to encourage or remind users to check subsequent pages, whether or not one observes all the results on a given page.

Two major issues in employing sources of information are relevance and credibility of their content. A user can take advantage of the text snippet quoted from a webpage under the hyperlinked heading of each result in order to initially assess if it is relevant enough to enter the website. It is more difficult, however, to judge the credibility of websites as information sources, and operators of search engines may not be doing enough to help their users in this respect. Lewandowski is critical of an over-reliance of search engines on popularity-oriented measures as indicators of quality or credibility to evaluate and rank websites and their webpages. He mentions: the source-domain popularity; click and visit behaviour of webpages; links to the page in other external pages, serving as recommendations; and ratings and “likes” by Internet users [6]. Popularity is not a very reliable, guaranteed indicator of quality (as known for extrinsic cues of perceived quality of products in general). A user of a search engine could be misguided in relying on the first results suggested by the engine in confident belief that they have to be the most credible. Search engines indeed use other criteria for their ranking like text-based tests (important for relevance) and freshness, but with respect to credibility or quality, the position of a webpage in the list of results could be misleading.

  • Searchers should consider on their own if the source (company, organization or other entity) is familiar and has good reputation in the relevant field, then judge the content itself. Yet, Lewandowski suggests that search engines should give priority in their ranking and positioning of results to entities that are recognized authorities appreciated for their knowledge and practice in the domain concerned [7]. (Note: It is unverified to what extent search engines indeed use this kind of appraisal as a criterion.) 

Furthermore, organic results are not immune to marketing-driven manipulations. Paid advertised links now normally appear in a side bar or at the top or bottom of pages, mainly the first one, and they may also be flagged as “ads”. Searchers can thus easily distinguish them and choose how to treat them. Yet the position of a webpage in the organic results list may be “assisted” by techniques of search engine optimization (SEO) that increase its frequency of retrieval, for example through popular keywords or tags in webpage content or promotional (non-ad) links to the page. Users should be wary of satisficing behaviour, relying only on early results, and be willing to look somewhat deeper into the results list on subsequent pages (e.g., at least 3-4 pages, sometimes reaching page 10). Surprisingly instructive and helpful information may be found in webpages that appear on later results pages.

  • A principal rule of information economics may serve users well: keep browsing results pages and consider links proposed until additional information seems marginally relevant and helpful and does not justify the additional time continuing to browse results. Following this criterion suggests no rule-of-thumb for the number of pages to view — in some cases it may be sufficient to consider two results pages, while in others it could be worth considering even twenty pages. 
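The stopping rule in the bullet above can be sketched in a few lines of code. This is a minimal illustration, not a real search client: the per-page relevance scores and the time cost are hypothetical numbers chosen only to show the marginal-value comparison.

```python
# Information-economics stopping rule: view another results page only while
# the marginal relevance gained from it still justifies the time it costs.
# All numbers here are illustrative assumptions, not measured data.

def pages_worth_viewing(page_values, cost_per_page=1.0):
    """Return how many results pages to view: stop at the first page whose
    marginal relevance no longer covers the time cost of browsing it."""
    viewed = 0
    for value in page_values:
        if value < cost_per_page:
            break  # marginal value has dropped below marginal cost
        viewed += 1
    return viewed

# Declining marginal relevance per page (hypothetical scores):
print(pages_worth_viewing([5.0, 3.2, 1.8, 0.6, 0.2], cost_per_page=1.0))  # -> 3
```

As the bullet notes, this yields no fixed rule of thumb: with slowly declining relevance the same rule could justify twenty pages, with steeply declining relevance only one or two.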

Another aspect of search behaviour concerns the composition of queries and the transition between queries during a session. It is important to balance sensibly and efficiently between the number of queries used and the number of results pages viewed on each search trial. Web searchers tend to compose relatively short queries, about 3-4 keywords on average in English (in German, queries are 1-2 words long since German includes many composite words). Users make relatively little use of logical operators. However, users update and change queries when they run into difficulty in finding the information they seek. It becomes a problem if they abandon a query too quickly because the needed information did not surface immediately. Users also switch between strings of keywords and phrases in natural language. Yet updating the query (e.g., replacing or adding a word) frequently changes the results list only marginally. The answer to a directed search may sometimes be found just around the corner, that is, in a webpage whose link appears on the second or third results page. And as said earlier, it is worth checking 2-3 answers or sources before moving on. Therefore, it is wise to at least eye-scan the results on 2-4 pages (e.g., based on heading and snippet) before concluding that the query was not accurate or effective enough.

  • First, users of Web search engines may apply logical operators to define and focus their area of interest more precisely (as well as other criteria features of advanced search, for example time limits). Additionally, they may try the related query strings suggested by the search engine at the bottom of the first page (e.g., in Google). Users can also refer to special domain databases (e.g., news, images) shown on the top-tab. Yahoo! Search, furthermore, offers on the first page a range of results types from different databases mixed with general Web results. And Google suggests references to academic articles from its Google Scholar database for “academic” queries.

The way Internet users perceive their own experience with search engines can be revealing. In a 2012 survey by the Pew Research Center on Internet & American Life, 56% of respondents (adults) expressed strong confidence in their ability to find the information they need by using a search engine, and an additional 37% said they were somewhat confident. Also, 29% said they are always able to find the information they look for and 62% said they can find it most of the time, together a vast majority of 91%. Additionally, American respondents were mostly satisfied with the information found, saying that it was accurate and trustworthy (73%), and thought that the relevance and quality of results had improved over time (50%).

Internet users appear to set themselves modest information goals and become satisfied with the information they have gathered, perhaps too quickly. They may not appreciate enough the possibilities and scope of information that search engines can lead them to, or may simply be over-confident in their search skills. As suggested above, a WYSIATI approach could drive searchers of the Web to end their search too soon. They need to make the effort, willingly, to overcome this tendency as the task demands, getting System 2 to work.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) As cited by Dirk Lewandowski (2008), Search Engine User Behaviour: How Can Users Be Guided to Quality Content, Information Service & Use, 28, pp. 261-268 http://eprints.rclis.org/16078/1/ISU2008.pdf ; also see for example research by Bernard J. Jansen and Amanda Spink (2006) on How Are We Searching the World Wide Web.

(2) Daniel E. Rose & Danny Levinson (2004), Understanding User Goals in Web Search, ACM WWW Conference, http://facweb.cs.depaul.edu/mobasher/classes/csc575/papers/www04–rose.pdf

(3) Dirk Lewandowski (2012), Credibility in Web Search Engines, In Online Credibility and Digital Ethos: Evaluating Computer-Mediated Communication, S. Apostel & M. Fold (Eds.) Hershey, PA: IGI Global (viewed at: http://arxiv.org/ftp/arxiv/papers/1208/1208.1011.pdf, 8 July ’14)

(4) Daniel Kahneman (2011), Thinking, Fast and Slow, Penguin Books.

(5) Ibid. 4.

(6) Ibid. 3.

(7) Ibid. 1 (Lewandowski, 2008).


Read Full Post »

The Rolling Stones most probably need no introduction. At least those born between 1950 and 1980 should know the band, with Mick Jagger as its lead singer, and some of its widely known hits like (I Can’t Get No) Satisfaction, Start Me Up, Jumpin’ Jack Flash, and Paint It Black. By continuing to perform after the 1970s the band has given younger generations a better chance to become fans as well. It is the longest-active rock band ever (performing since 1962, albeit with some changes to its original line-up).

  • They are now four: Mick Jagger, Keith Richards, Charlie Watts, and Ron Wood (the first two have written most of the band’s songs). Wood replaced Mick Taylor in 1975; Taylor has recently returned to perform with the band as a guest.

Therefore, when it was announced this spring that the Rolling Stones would perform in Israel for the first time, in a concert on 4 June 2014, the news was received with great excitement and anticipation. But then came a snag: the announced ticket prices were higher than Israeli rock fans apparently expected. The concert took place in the city park of Tel-Aviv in an area shaped like an “amphitheatre”. There were three types of tickets. A small portion were allocated for standing on the lawn in a close area in front of the stage (the “Golden Ring”), priced at 1600 NIS (US$460) per ticket. The vast majority of tickets allowed standing on the lawn stretching from behind the Golden Ring to the back slopes of the “amphitheatre”; each cost 700 NIS (US$200). Additional VIP tickets with extra perks offered seats on a staircase-balcony on the right-hand side facing the stage, for 2700 NIS (US$770). A total of 50,000 tickets were offered.

Rock fans made mainly two types of complaints: (a) tickets were more expensive than those for other concerts of foreign artists performing in Israel this year and in the past few years; (b) they were more expensive compared with prices charged for concerts of the Rolling Stones in other countries (e.g., the 2012-2013 “50 & Counting” tour and the current 2014 “On Fire” tour). Those price comparisons served as a basis for consumers to claim that the ticket prices in Israel were unfair. The anger was directed towards both the local organizing agent and the Rolling Stones. Social activists ran a protest campaign in social media to persuade fans not to buy tickets, which most likely explains the sluggish progress of ticket sales until the day of the concert. All that time in the run-up to the concert there was talk that not enough people were buying tickets. Eventually, the amphitheatre was filled up with 48,000 spectators, including the VIP balcony (a sigh of relief is permitted).

Consumers frequently judge the fairness or unfairness of a price based on comparisons to prices paid by others (e.g., friends), to prices they paid on previous occasions, and to prices paid in other outlets for the same or similar products or services. Such comparisons are not easy to make, varying in accuracy and relevance. A key criterion for the relevance of a comparison is the degree of similarity between the cases compared — the more similar the cases are in their non-price aspects while their prices differ, the stronger the judgement of unfairness is expected to be.

  • When comparing with the prices of other rock or pop concerts consumers attended in the past, we should take into account factors such as: (1) the other artists used as reference; (2) when the other concerts took place (e.g., this year, three years ago); (3) the venue (e.g., a park, a football/basketball stadium, a concert hall). Further attributes extend from a difference in venue: seating or standing tickets, distance from the stage, and flat versus rising ground or balcony. For example, standard tickets for standing in the same park at the concert of Paul McCartney cost 500 NIS, but that was five years ago. Neil Young, however, will be performing at that park later this summer, and standing tickets cost less than 400 NIS. In another case, Cliff Richard performed last year at the Tel-Aviv basketball stadium: tickets for sitting on the flat floor of the basketball court cost about 1000-1500 NIS while tickets in the first rows of the tier balcony facing the stage cost about 650 NIS. Arguing for unfairness is therefore not straightforward.
  • In comparisons to concerts of the Rolling Stones in other countries, differences associated with the venue are again important. In addition, one may also need to account for differences in standard of living and purchasing-power parity (PPP) between countries. Fans in Israel, for instance, were angered that tickets in countries like the US or UK, where the standard of living is higher than in Israel, actually cost less when translated into shekels. Consider a few examples: (1) Ticket prices for concerts in Rome (22 June) and Paris (13 June) range from a “standard” €78 (~US$110) to a “premium” €150 (~US$210), nominally and relatively less expensive; (2) For the concert at Perth Arena in Australia, scheduled for 29 October this year, tickets for standing in the Tongue Pit adjacent to the stage or for seating in the flat area at the centre of the arena cost A$580 (~US$540), whereas tickets for sitting in the lower rows of the tier balconies more distant from the stage cost A$376 (~US$350) — while some place arrangements may be more convenient in Perth, overall the tickets are not less expensive than in Israel; (3) In fact, complaints about the relatively high prices the Rolling Stones charge have also been voiced in other countries — for instance, an article in The Telegraph criticised the high prices for the band’s concerts in London in November 2012 during their 50 & Counting tour (prices ranged between £95 and £375 [~US$150-600], with VIP Hospitality tickets priced at £950! [~US$1520]), requiring the Rolling Stones to defend the prices they charge (Ron Wood explained they invested millions in arranging the stage). Truly, there are not many active bands like them today.
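The cross-country comparison above can be made slightly more rigorous by relating ticket prices to local incomes. The sketch below uses the approximate US-dollar ticket prices quoted in the text, but the per-capita income figures are rough, illustrative assumptions; a serious comparison would use actual PPP-adjusted income data.

```python
# Affordability sketch: ticket price as a share of monthly per-capita income.
# Ticket prices (USD) are the approximate figures quoted in the text; the
# income figures are rough assumptions used only to illustrate the adjustment.

def affordability(price_usd, annual_income_usd):
    """Ticket price as a fraction of monthly per-capita income."""
    return price_usd / (annual_income_usd / 12)

standard_ticket_usd = {"Israel": 200, "France": 110, "Australia": 350}
annual_income_usd = {"Israel": 35000, "France": 42000, "Australia": 60000}  # assumed

for country, price in standard_ticket_usd.items():
    share = affordability(price, annual_income_usd[country])
    print(f"{country}: {share:.1%} of monthly income")
```

On these (assumed) income figures, the nominally cheaper French ticket also takes a smaller bite of monthly income, which is the substance of the Israeli fans’ complaint.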

In a cognitive, calculated decision process, according to the theory of mental accounting (Thaler), a consumer would evaluate the value of attending a rock concert based on attributes or benefits of the band performing (e.g., how much its songs are liked, its singing and music-playing, and the show given at live concerts). Expressed in monetary terms, this value is the highest price the consumer is willing to pay, equivalent to the psychological value to him or her (similar to the concept of a reservation price in economic theory). The difference between this monetary value of equivalence and the (normal) price the consumer is asked to pay denotes the acquisition utility for the consumer.

  • The normal or ‘list’ price is often not the actual price paid due to special deals and discounts put forward — a difference between the normal price and the discounted actual price denotes the additional transaction utility a consumer can gain. For instance, customers of an Israeli mobile telecom company could buy their tickets for the Rolling Stones concert at prices 100 NIS lower than the official prices. (Some fans had a chance to buy standard tickets at half price of 350 NIS in a contest organised by the band.)
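The two utilities can be illustrated with the concert figures. The list price (700 NIS) and the 100 NIS telecom-customer discount come from the text; the consumer’s monetary value of equivalence (here 800 NIS) is a hypothetical assumption, since it differs from person to person.

```python
# Thaler's mental-accounting arithmetic, applied to the concert tickets.
# List price (700 NIS) and discount (100 NIS) are from the text; the
# 'equivalent value' of 800 NIS is a hypothetical willingness-to-pay.

def acquisition_utility(equivalent_value, list_price):
    """Monetary value of the experience, net of the normal (list) price."""
    return equivalent_value - list_price

def transaction_utility(list_price, actual_price):
    """Extra gain from paying less than the reference (list) price."""
    return list_price - actual_price

equivalent_value = 800  # NIS, assumed for one consumer
list_price = 700        # NIS, official standard ticket price
actual_price = 600      # NIS, after the telecom customers' discount

print(acquisition_utility(equivalent_value, list_price))  # -> 100
print(transaction_utility(list_price, actual_price))      # -> 100
```

A fan whose equivalent value falls below 700 NIS would see a negative acquisition utility, which the 100 NIS transaction utility might or might not offset — consistent with the sluggish sales described earlier.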

This methodical way of deriving a (perceived) value and reaching a decision may break down when applied to a rock or pop concert. Music as a form of art evokes emotions that are likely to disrupt sensible calculations of value. Moreover, for devoted fans of a singer or a band, adoration and affective attachment are likely to influence the decision process more strongly. Fans may find it difficult and disturbing to analyse their experience of listening to the music or attending a live concert in the way required to derive a well-founded value or utility. When the experience is about enjoyment, excitement, and getting carried away by the music, the monetary value or the price fans are willing to pay can be expected to receive a boost upwards. They could perceive a reasonable acquisition utility even for the premium near-stage or VIP tickets.

But many other fans who feel close to rock and pop music, and who may be greater fans of other artists in these genres, could also be strongly attracted to the concert by the extraordinary opportunity to see and hear the Rolling Stones performing live in Tel-Aviv. Consumers may sense the historical significance of such an event, not to be missed. That could act as an emotional inducement for these fans to raise the price they are willing to pay high enough to buy at least a standard ticket. It took until an hour before the concert to ascertain that there were indeed enough of them to fill the amphitheatre in the park (with some help from discounts).

Two important ways of approaching price were considered above: one is directed inwards and focuses on the perceived value of the target service, the rock concert; the other is directed outwards and compares the target price with prices in other cases or episodes that seem similar to consumers, through which they judge the (un)fairness of the target price. Both avenues introduced challenging problems for the rock concert; it probably could not have occurred without the emotional component of the decision process. However, this need not spoil the event itself for those who bought tickets. Price may continue to preoccupy customers' minds in the gap between the time of buying the ticket and the day of the concert. When the event arrives, customers "close" the mental account; they may either settle the value obtained from their ticket acquisition or shift their attention fully to the event and the benefits it delivers, the more desirable way for them to avoid conflicts of value.

The concert of the Rolling Stones was wonderful. Mick Jagger was fantastically energetic on the stage (admirable at 70+), and Keith Richards looked especially joyful. Jagger also demonstrated nice skills in expressing himself in Hebrew, to the delight of the local audience. The band performed the songs mentioned above, among others (19 in total); unfortunately Jagger did not sing their beautiful song Ruby Tuesday, but he did perform another ballad from their repertoire that does not appear regularly in their concerts, Angie.

  • Given the enthusiasm of the audience, the spectators did not let price issues spoil the celebration. Two other factors, however, threatened to hinder the enjoyment. First, there was a heat wave that evening with high humidity, which could not be anticipated and was beyond human control; it simply had to be tolerated while drinking lots of water. The second factor was entirely due to human behaviour: spectators lifting smartphones above their heads in an attempt to record videos of episodes from the concert. The quality of images captured on the little screen (e.g., from a distance of 200m+), and the enjoyment spectators derive from doing so, are left for debate elsewhere. Meanwhile, the screens waved overhead "stole" pieces of the field of view from the spectators behind, who tried in vain to escape them. What a shame.

The Rolling Stones did everything in their power, and they had the power, to make spectators happy for the money they had paid, to the last shekel. Price did matter in the decision to purchase, and it even threatened to spoil the concert. However, that was true only during the run-up period, until the concert started on 4 June at 21:15. As the performance went on, the spectators could easily forget about the price. The price effect was mitigated or vanished, leaving the spectators with the pleasure of the music and the performance of the Rolling Stones, and particularly of Mick Jagger. One may think of other artists who can achieve this outcome, but the Rolling Stones are definitely near the top of the list. It remains an especially good experience to remember.

Ron Ventura, Ph.D. (Marketing)
