
Posts Tagged ‘Judgement’

One of the more difficult and troublesome decisions in brand management arises when entering a product category that is new to the company: whether to start a new brand for the product or to endow it with the identity of an existing brand — that is, extending a company’s established brand from its original product category to a product category of a different type. The first question that will probably pop up is “how different is the new product?”, which acts as a prime criterion for judging whether the parent-brand fits the new product.

Nevertheless, the choice is not completely ‘black or white’, since intermediate solutions are possible through the intricate hierarchy of brand (naming) architecture. But focusing on the two more distinct strategic branding options above helps to see more clearly the different risk and cost implications of launching a new product brand versus using the name of an existing brand from an original product category. Notably, manufacturers, retailers and consumers all perceive risks, albeit each party from a different perspective given its role.

  • Note: Brand extensions represent the transfer of a brand from one type of product to a different type, to be distinguished from line extensions that pertain to the introduction of variants within the same product category (e.g., flavours, colours).

This is a puzzling marketing and branding problem from an academic perspective as well. Multiple studies have attempted in different ways to identify the factors that best explain or account for successful brand extensions. While the stream of research on this topic helpfully points to major factors, some more commonly agreed upon than others, a gap remains between the sorts of extensions the studies predict will succeed and the extensions that actually succeed or fail when companies carry them out in the market. A plausible reason for missing the outcomes of actual extensions, as argued by the researchers Milberg, Sinn, and Goodstein (2010), is the neglect of competitive settings in the categories targeted by brand extensions (1).

Perhaps one of the most famous examples of an audacious brand extension is the case of Virgin (UK), which stretched from music to cola (drink), an airline, train transport, and mobile communication (ironically, the brand’s original business, Virgin Music, has since been divested). The success of Virgin’s distant extensions is commonly attributed to the personal character of Richard Branson, the entrepreneur behind the brand: his boldness, initiative, willingness to take risks, and sense of adventure. These traits seem to have transferred to his business activities and helped to make the extensions more credible and acceptable to consumers.

Another good example relates to Philips (which originated in the Netherlands). Starting from lighting (bulbs, nowadays mostly LED), the brand extended over the years to personal care (e.g., shavers for men, hair-removal devices for women), sound and vision (e.g., televisions, DVD and Blu-ray players, originally radio sets), PC products, tablets and phones, and more. Still, when looking overall at the different products, systems and devices sharing the Philips brand, they can mostly be linked as members of a broad category of ‘electrics and electronics’, a primary competence of the company. As the company grew over time, launched more types of products and advanced technologically, the Philips brand came to be perceived as having greater experience and a good record in brand extensions, which could facilitate market acceptance of further extensions to additional products.

  • In the early days, from the 1930s to the 1950s, radio and TV sets relied on vacuum tubes for their operation, later moving to electronic circuits with transistors and digital components. Hence, historically there was an apparent physical-technological connection between those products and the brand’s origin in light bulbs, a connection much harder to find now between category extensions, except for the broad category linkage suggested above.

Academic research has examined a range of ‘success factors’ of brand extensions, such as: perceived quality of the parent-brand; fit between the parent-brand and the extension category; degree of difficulty in making an extension (challenge undertaken); parent-brand conviction; parent-brand experience; marketing support; retailer acceptance; perceived risk (for consumers) in adopting the brand extension; consumer innovativeness; consumer knowledge of the parent-brand and category extension; the stage of entry into another category (i.e., as an early or a late entrant). The degree of fit of the parent-brand (and original product) with the extension category is revealed as the most prominent factor contributing to better acceptance and evaluation (e.g., favourability) of the extension in consumer studies.

Aaker and Keller specified in a pioneering article (1990) two requirements for fit: (a) the extension product category is a direct complement or a substitute of the original category; (b) the company, with its people and facilities, is perceived as having the knowledge and capability to manufacture the product in the extension category. These requirements reflect a similarity between the original and extension product categories that is necessary for a successful transfer of a favourable attitude towards the brand to the extension product type (2). However, a successful transfer of attitude may also occur if the parent-brand has values, a purpose or an image that seems relevant to the extension product category, even when the technological linkage is less tight or apparent (as the case of Virgin suggests).

  • Aaker and Keller found that fit, based especially on competence, stands out as a contributing factor to higher consumer evaluation (level of difficulty is a secondary factor while perceived quality plays more of a ‘mediating’ role).

Volckner and Sattler (2006) worked to sort out the contributions of ten factors, retrieved from the academic literature, to the success of brand extensions; the relations were refined with the aid of expert advice from brand managers and researchers (3). Contribution was assessed in their model in terms of (statistical) significance and relative importance. The researchers found fit to be the most important factor driving (perceived) brand extension success in their study, followed by marketing support, parent-brand conviction, retailer acceptance, and parent-brand experience. The complete model tested for more complex structural relationships represented through mediating and moderating (interacting) factors (e.g., the effect of marketing support on extension success ‘passes’ through fit and retailer acceptance).

For brand extensions to be accepted by consumers and garner a positive attitude, consumers should recognise a connectedness or linkage between the parent-brand and the category extension. The fit between them can be based on attributes of the original and extension types of product or on a symbolic association. Keller and Lehmann (2006) conclude in this respect that “consumers need to see the proposed extension as making sense” (emphasis added). They identify product development, applied via brand (and line) extensions, as a primary driver of brand growth, thereby adding to parent-brand equity. Parent-brands do not tend to be damaged by unsuccessful brand extensions, yet the authors point to circumstances where greater fit may result in a negative effect on the parent-brand, and conversely where joining a new brand name with the parent-brand (as its endorser) may protect the parent-brand from adverse outcomes of extension failure (4).

When assessing the chances of success of a brand extension, it is nevertheless important to consider which brands are already present in the extension category that a company is about to enter. Milberg, Sinn, and Goodstein claim that this factor has not received enough attention in research on brand extensions. In particular, one has to take into account the strength of the parent-brand relative to competing brands incumbent in the target category. As a starting point, they chose to focus on how familiar consumers are with the competitor brands vis-à-vis the extending brand. Milberg and her colleagues proposed that a brand extension can succeed despite a worse fit with the extension category thanks to an advantage in brand familiarity, and vice versa. Consumer response to brand extensions was tested on two aspects: evaluation (attitude) and perceived risk (5).

First, it should be noted, the researchers confirm the positive effect of better fit on consumer evaluation of the brand extension when no competitors are considered. The better fitting extension is also perceived as significantly less risky than a worse fitting extension. However, Milberg et al. obtain supportive evidence that in a competitive setting, facing less familiar brands can improve the fortune of a worse fitting extension compared with being introduced in a noncompetitive setting: when the incumbent brands are less familiar relative to the parent-brand, the evaluation of the brand extension is significantly higher (more favourable) and purchasing its product is perceived as less risky than when no competition is referenced.

  • A reverse outcome is found in the case of better fit where the competitor brands are more highly familiar: A disadvantage in brand familiarity can dampen the brand extension evaluation and increase the sense of risk in purchasing from the extended brand, compared with a noncompetitive setting.

The two studies performed show how accounting for differences in brand familiarity can change the picture of the effect of brand extension fit from the one often found when competing brands in the extension category are ignored.

When comparing different competitive settings, the research findings provide more qualified support, but in the direction expected by Milberg and colleagues. The conditions tested entailed a trade-off between (a) a worse fitting brand extension competing with less familiar brands; and (b) a better fitting brand extension competing with more familiar brands. In regard to these competitive settings:

The first study showed that the evaluation of a worse fitting extension competing with relatively unfamiliar brands is significantly more favourable than that of a better fitting extension facing more familiar brands. Furthermore, the product of the worse fitting brand extension is preferred over its competition more frequently than the better fitting extension product is (chosen by 72% vs. 6% of respondents, respectively). Also, purchasing a product from the worse fitting brand extension is perceived as significantly less risky compared with the better fitting brand. These results indicate that the relative familiarity of the incumbent brands an extension faces can matter more to its odds of success than how well it fits the category.

The second study aimed to generalise the findings to different parent-brands and product extensions. It challenged the brand extensions with somewhat more difficult conditions: it included categories that are all relevant to the respondents (students), so competitor brands in the extension categories are also relatively more familiar to them than in the first study. The researchers acknowledge that the findings are less robust with respect to comparisons of the contrasting competitive settings. Evaluation and perceived risk related to the worse fitting brand competing with less familiar brands are equivalent to those of the better fitting brand extension facing more familiar brands. The gap in choice shares is reduced, though in this case it is still statistically significant (45% vs. 15%, respectively). Facing less familiar brands may not improve the response of consumers to the worse fitting brand extension (i.e., not overcoming the effect of fit), but at least it is in as good a position as the better fitting brand extension competing in a more demanding setting.

  • Perceived risk enters a more complicated relationship as a mediator of the effect of fit on brand extension evaluation, and also as a mediator of the effect of relative familiarity in competitive settings. Mediation implies, for example, that a worse fitting extension evokes greater risk, which in turn is responsible for lowering the brand extension evaluation; consumers may seek more familiar brands to alleviate that risk (a conventional way of writing such a mediation structure is sketched below).
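To make the notion of mediation concrete, here is a minimal regression sketch of the structure described above, in the spirit of standard mediation analysis. It is an illustration of what mediation means, not the authors’ exact model; the coefficients and error terms are generic symbols:

$$
\begin{aligned}
\text{Evaluation} &= \beta_0 + \beta_1\,\text{Fit} + \varepsilon_1 \\
\text{Risk} &= \gamma_0 + \gamma_1\,\text{Fit} + \varepsilon_2 \\
\text{Evaluation} &= \delta_0 + \delta_1\,\text{Fit} + \delta_2\,\text{Risk} + \varepsilon_3
\end{aligned}
$$

Mediation is indicated when better fit lowers perceived risk ($\gamma_1 < 0$, with higher values of Risk meaning more perceived risk), higher risk lowers evaluation ($\delta_2 < 0$), and the direct effect of fit ($\delta_1$) shrinks relative to $\beta_1$ once risk is included in the model.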

A parent-brand can assume an advantage in an extension category even when it encounters brands that are familiar within that category, and may even be considered experts in the field: if the extending brand leads its original category and is better known beyond it, this can give it leverage over incumbents that are more ‘local’ or specific to the extension category. For example, it would be easier for Nikon, a leading brand of cameras, to extend to binoculars (better fit), where it meets brands like Bushnell and Tasco, than to extend to scanners (also better fit), where it has to face brands like HP and Epson. In the case of worse fitting extensions, it could be significant for Nikon whether it extends to CD players and competes with Sony and Pioneer or extends to laser pointers and faces Acme and Apollo — in the latter case it may enjoy the kind of leverage that can overcome a worse fit. (Product and brand examples are borrowed from Study 1.) Further research may enquire whether this works better for novice consumers than for experts. Milberg, Sinn and Goodstein recommend considering additional characteristics on which brands may differ (e.g., attitude, image, country of origin), suggesting more potential bases of strength.

Entering a new product category is often a difficult challenge for a company, and choosing the more appropriate branding strategy for launching the product can be even more delicate and consequential. If the management chooses to make a brand extension, it should consider aspects of the relative strength of its parent-brand, such as familiarity, against the incumbent brands of the category it plans to enter, in addition to a variety of other characteristics of the product types and its brand identity. Managers can also take advantage of intermediate solutions in brand architecture that combine a new brand name with the endorsement of an established brand (e.g., a higher-level brand for a product range). Choosing the better branding strategy is helped by a clearer understanding of the differences and relations (e.g., hierarchy) between product categories as perceived by consumers.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. Consumer Reactions to Brand Extensions in a Competitive Context: Does Fit Still Matter?; Sandra J. Milberg, Francisca Sinn, & Ronald C. Goodstein, 2010; Journal of Consumer Research, 37 (October), pp. 543-553.

2. Consumer Evaluations of Brand Extensions; David A. Aaker and Kevin L. Keller, 1990; Journal of Marketing, 54 (January), pp. 27-41.

3. Drivers of Brand Extension Success; Franziska Volckner and Henrik Sattler, 2006; Journal of Marketing, 70 (April), pp. 18-34.

4. Brands and Branding: Research Findings and Future Priorities; Kevin L. Keller and Donald R. Lehmann, 2006; Marketing Science, 25 (6), pp. 740-759.

5. Ibid. 1.

Read Full Post »

A new film this year, “Sully”, tells the story of US Airways Flight 1549, which landed safely on the water of the Hudson River on 15 January 2009 following severe damage to the plane’s two engines. This article is specifically about the decision process of the captain, Chesley (‘Sully’) Sullenberger, with the backing of his co-pilot (first officer) Jeff Skiles; the film helps to highlight some instructive and interesting aspects of human judgement and decision-making in an acute crisis situation. Furthermore, the film shows how those cognitive processes contrast with computer algorithms and simulations, and why the ‘human factor’ must not be ignored.

There were altogether 155 people on board the Airbus A320 aircraft on its Flight 1549 from New York to North Carolina: 150 passengers and five crew members. The story unfolds while following Sully in the aftermath of the incident, during the investigation of the US National Transportation Safety Board (NTSB), which he faced together with Skiles. The film (directed by Clint Eastwood, featuring Tom Hanks as Sully and Aaron Eckhart as Skiles, 2016) is based on Sullenberger’s autobiographical book “Highest Duty: My Search for What Really Matters” (2009). Additional resources such as interviews and documentaries were also used in preparing this article.

  • The film is excellent, recommended for its way of delivering the drama of the story during and after the flight, and for the acting of the leading actors. A caution to those who have not seen the film: the article includes some ‘spoilers’. On the other hand, facts of this flight and the investigation that followed were essentially known before the film.

This article is not explicitly about consumers, although the passengers, as customers, were obviously directly affected by the conduct of the pilots, which saved their lives. The focus, as presented above, is on the decision process of the captain, Sullenberger. We may expect that such an extraordinarily positive outcome, a flight rescued from dangerous circumstances, would have a favourable impact on the image of US Airways, the airline that employs such talented flight crew members. But improving corporate image or customer service and relationships was not the relevant consideration during the flight; only saving lives was.

Incident Schedule: Less than two minutes after take-off (at ~15:27), a flock of birds (Canada geese) struck both engines of the aircraft. It is vital to realise that from that moment the flight lasted less than four minutes. The captain took control of the plane from his co-pilot immediately after the impact with the birds, and then had between 30 seconds and one minute to decide where to land. Next, just 151 seconds passed from the impact with the birds until the plane was approaching right above the Hudson River for landing on the water. Finally, impact with the water occurred 208 seconds after the impact with the birds (at ~15:30).

Using Heuristics: The NTSB investigators told Sully (Hanks) about flight calculations performed in their computer simulations, and argued that according to the simulation results landing on the Hudson River, a highly risky type of crash-landing, had not been unavoidable. In response, Sully said that it had been impossible for himself and Skiles to perform all those detailed calculations during the four minutes of flight after the birds struck the aircraft’s engines; he relied instead on what he saw with his eyes in front of him — the course of the plane and the terrain below as the plane glided with no engine power.

The visual guidance Sully describes using to navigate the plane resembles a type of ‘gaze heuristic’ identified by Professor Gerd Gigerenzer (1). In the example given by Gigerenzer, a player who tries to catch a ball flying in the air does not have time to calculate the trajectory of the ball from its initial position, speed and angle of projection. Moreover, the player would also have to take into account wind, air resistance and ball spin. The ball would be on the ground by the time the player finished the necessary estimations and computation. An alternative intuitive strategy (heuristic) is to ‘fix gaze on the ball, start running, and adjust one’s speed so that the angle of gaze remains constant’. The situation of the aircraft is of course different, more complex and perilous, but a similar logic seems to hold: navigating the plane safely towards the terrain surface (land or water) when there is no time for any advanced computation (the pilot’s gaze would have to be fixed on the terrain beneath, towards a prospective landing ‘runway’). Winter winds in New York City on that freezing day probably made the landing task even more complicated. But in the few minutes available to Sully, he found this type of ‘gaze’ or eyesight-guided rule the most practical and helpful.
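As a rough illustration of the logic of the gaze heuristic in its original ball-catching form, here is a minimal sketch in Python. The function name, the proportional speed adjustment and the gain value are illustrative assumptions, not taken from Gigerenzer’s description; the point is only that the rule uses a single observed quantity (the gaze angle) instead of computing a trajectory.

```python
import math

def gaze_heuristic_step(runner_x, runner_speed, ball_x, ball_height,
                        reference_angle, gain=0.5):
    """One control step of the ball-catching gaze heuristic: adjust running
    speed so that the angle of gaze to the ball stays constant. No trajectory,
    wind or spin is computed; only the currently observed angle is used."""
    # Current gaze angle from the runner to the ball (radians)
    angle = math.atan2(ball_height, ball_x - runner_x)
    # If the ball appears to sink in the visual field (angle falling),
    # run faster; if it appears to rise, slow down.
    runner_speed += gain * (reference_angle - angle)
    return max(runner_speed, 0.0)

# Example step: the ball looks a bit low relative to the reference angle,
# so the runner's speed is nudged upward.
print(gaze_heuristic_step(0.0, 3.0, 20.0, 10.0, reference_angle=0.6))
```

The same single-cue, adjust-as-you-go logic is what the article suggests guided the glide towards the river, with the terrain ahead playing the role of the ball.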

Relying on Senses: Sullenberger made extensive use of his senses (visual, auditory, olfactory) to collect every piece of information he could get from the surrounding environment. To start with, the pilots could see the birds coming at them right before some of them struck the engines — this evidence was crucial for identifying instantly the cause of the problem, though they still needed some time to assess the extent of the damage. In an interview on CBS’s programme 60 Minutes (with Katie Couric, February 2009), Sully says that he saw the smoke coming out of both engines, smelled the burned flesh of the birds, and subsequently heard a hushing noise from the engines (i.e., made by the remaining blades). He could also feel the trembling of the broken engines. This multi-modal sensory information contributed to convincing him that the engines were lost (i.e., unable to produce thrust), in addition to the failure to restart them. Sully also utilised, all that time, information from the various instruments and gauges on the cockpit panel in front of him (while Skiles was reading to him from the manuals). The captain was thus attentive to multiple visual stimuli (including and beyond the visual guidance heuristic) in his decision process, from early judgement to acting on his decision to land on the water of the Hudson River.

Computer algorithms can ‘pick up’ and process all the technical information of the aircraft displayed to the pilots in the cockpit. The algorithms may also apply additional measurements in their computations (e.g., weather conditions) and perhaps data from sensors installed in the aircraft. But computer algorithms cannot ‘experience’ the flight event like the pilots. Sully could ‘feel the aircraft’, almost simultaneously and rapidly perceiving the sensory stimuli he received in the cockpit, within and outside the cabin, and responding to them (e.g., making judgements). Information available to him seconds after the impact with the birds gave him indications about the condition of the engines that the algorithms used in the simulations could not receive. That point was made clear in the dispute that emerged between Sully and the investigating committee regarding the condition of one of the engines. The investigators claimed that early tests and simulations suggested one of the engines was still functioning and could have allowed the pilots to land the plane at one of the nearby airports (returning to La Guardia or diverting to Teterboro in New Jersey). Sully (Hanks) disagreed and argued that his indications were clear that the engine referred to was badly damaged and non-functional — both engines had no thrust. Sully was proven right — the committee eventually reported that missing parts of the disputed engine had been found and showed that the engine was indeed non-functional, disproving the early tests.

Timing and the Human Factor: Captain Sullenberger furthermore had a strong argument with the NTSB’s investigating committee about their simulations attempting to reconstruct or replicate the sequence of events during the flight. The committee argued that pilots in a flight simulator ‘virtually’ made successful landings at both La Guardia and Teterboro airports when the simulator computer was given the data of the flight. Sully (Hanks) found a problem with those live but virtual simulations: the flight simulation was flawed because it assumed the pilots could immediately know where it was possible to land, and they were instructed to do so. Sully and Skiles indeed knew immediately the cause of the damage, but still needed time to assess its extent before Sully could decide how to react. Therefore, they could not actually have turned the plane towards one of those airports right after the bird impact, as the simulating pilots did. The committee, Sully argued, ignored the human factor: it had required him up to one minute to realise the extent of the damage and his decision options.

Sully’s conversation with the air traffic controllers demonstrates his step-by-step, real-time assessments that he could not make it to La Guardia or, alternatively, to Teterboro — both were genuinely considered — before concluding that the aircraft might end up in the water of the Hudson. The captain then directed the plane straight above the river in approach to the crash-landing. One may also note how brief his response statements to the air traffic controller were. Sully was confident that landing on the Hudson was “the only viable alternative”; he said so in his interview with CBS. In the film, Sully (Hanks) tells Skiles (Eckhart), during a recuperating break outside the committee hall, that he had no question left in his mind that they had done the right thing.

Given Sully’s strong resistance, the committee ordered additional flight simulations in which the pilots were “held” waiting for 35 seconds to account for the time needed to assess the damage before attempting to land anywhere. Following this minimal delay, the simulating pilots failed to land safely at either La Guardia or Teterboro. It was evident that those missing seconds were critical to arriving in time to land at those airports. Worse than that, the committee had to admit (as shown in the film) that the pilots had made multiple attempts (17) in their simulations before ‘landing’ successfully at those airports. The human factor of evaluation before making a sound decision in this kind of emergency situation must not be ignored.

Delving a little deeper into the event helps one realise how difficult the situation was. The pilots were trying to execute a three-part checklist of instructions. They were not told, however, that those instructions were written for a situation of loss of both engines at a much higher altitude than the one they were at just after completing take-off. The NTSB’s report (AAR-10-03) finds that the dual engine failure at a low altitude was critical — it allowed the pilots too little time to complete the existing three-part checklist. In an interview with Newsweek in 2015, Sullenberger said of that challenge: “We were given a three-page checklist to go through, and we only made it through the first page, so I had to intuitively know what to do.” The NTSB committee further accepts in its report that landing at La Guardia could have succeeded only if started right after the bird strike, but as explained earlier, that was unrealistic; importantly, they note Sullenberger’s realisation that an attempt to land at La Guardia “would have been an irrevocable choice, eliminating all other options”.

The NTSB also commends Sullenberger in its report for operating the Auxiliary Power Unit (APU). The captain asked Skiles to try operating the APU after their failed attempt to restart the engines. Sully decided to take this action before they could reach the item on the APU in the checklist. According to the NTSB, operating the APU was highly beneficial in keeping electrical power available on board.

Notwithstanding Sully’s judgement and decision-making capabilities, his decision to land on the waters of the Hudson River could have ended miserably without his experience and skill as a pilot to execute it properly. He had 30 years of experience as a commercial pilot in civil aviation since 1980 (with US Airways and its predecessors), and before that had served in the US Air Force in the 1970s as a pilot of military jets (F-4 Phantom). The danger in landing on water is that the plane may roll and fail to meet the surface level, parallel to the water, so that one of the wings hits the water first, breaks up and causes the whole plane to capsize and break apart in the water (as happened on a flight in 1996). That Sully succeeded in ‘ditching’ safely on the water surface is far from obvious.

The performance of Sullenberger, from decision-making to execution, seems extraordinary. His judgement and decision capacity in these flight conditions may be exceptional; it is unclear whether other pilots could have performed as well as he did. Human judgement is not infallible; it may be subject to biases and errors and succumb to information overload. It is not too difficult to think of examples of people making bad judgements and decisions (e.g., in finance, health, etc.). Yet Sully demonstrated that a high capacity for human judgement and sound decision-making exists, and we can be optimistic about that.

It is not straightforward to extend conclusions from flying airplanes to other areas of activity. In one respect, however, there are some helpful lessons to learn from this episode: thinking more deeply and critically about the replacement of human judgement and decision-making with computer algorithms, machine learning and robotics. Such algorithms work best in familiar and repeated events or situations. But in new and less familiar situations, and in less ordinary and more dynamic conditions, humans are able to perform more promptly and appropriately. Computer algorithms can often be very helpful, but they are not always and necessarily superior to human thinking.

This kind of discussion is needed, for example, with respect to self-driving cars. It is a very active field in industry these days, connecting automakers with technology companies to install autonomous computer driving systems in cars. Google is planning to create ‘driverless’ cars without a steering wheel or pedals; their logic is that humans should no longer be involved in driving: “Requiring a licensed driver be able to take over from the computer actually increases the likelihood of an accident because people aren’t that reliable” (2). This claim is excessive and questionable. We have to distinguish carefully between computer aid to humans and the replacement of human judgement and decision-making with computer algorithms.

Chesley (‘Sully’) Sullenberger, as the flight’s captain, allowed himself to be guided by his experience, intuition and common sense to land the plane safely and save the lives of all passengers and crew on board. He was wholly focused on “solving this problem”, as he told CBS: the task of landing the plane without casualties. He recruited his best personal resources and skills for this task, and his success may give everyone hope and strengthen belief in human capacity.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Gut Feelings: The Intelligence of the Unconscious”, Gerd Gigerenzer, 2007, Allen Lane (Penguin Books).

(2) “Some Assembly Required”, Erin Griffith, Fortune (Europe Edition), 1 July 2016.

 

Read Full Post »

Consumers form impressions of a product’s beauty or aesthetics from its visual appearance. They may also interpret physical features embedded in the product form (e.g., handles, switches, curvature) as cues for the proper use of the product. But there is an additional, hidden layer of the design that may influence consumers’ judgement: the intention of the product designer(s). The intention could be an idea or a motive behind the design, what the designer wanted to achieve. However, intentions, being only implicit in product appearance, may not be clear or easy to infer.

The intention of a designer may correspond to the artistic creativity of the product’s visual design (i.e., aesthetic appeal), to its purpose and mode of use, and, furthermore, may extend to symbolic meanings (e.g., social values, the self-image of the target users). For a consumer, judgement could be a question of what one infers and understands from the product’s appearance, and how close that understanding is to the intention of the designer. For example, a consumer can make inferences from cues in the product form (e.g., of an espresso machine) about its appropriate function (e.g., how to insert a coffee capsule in order to make a drink) — but the consumer may ask herself: is that the way the designer intended the product to be used? These inferences are interrelated and complementary in determining the ‘correct’ purpose, function or meaning of a product. For original and innovative products, such answers are more difficult to produce from appearance alone than for other products.

  • Note: Colours and signs on the surface of a product may be informative in regard to function as well as symbolic associations of a product.

The researchers da Silva, Crilly and Hekkert (2015) investigated whether and how consumers’ knowledge of designers’ intentions can influence their appreciation of the respective products. Acknowledging that consumers are likely to derive varied (and sometimes mistaken) inferences about intention from visual images of products, the researchers presented verbal statements of intention in addition to the images. Moreover, their studies show that these verbal statements, explicitly informing consumer respondents of the designers’ intentions, contribute significantly to influencing (improving) consumers’ appreciation of products (1).

To begin with, consumers usually have different conceptions and understanding of design than professionals in the field. Thus, most consumers are not familiar with terminology in the domain of design (e.g., typicality/novelty, complexity, unity, harmony) and may use their own vocabulary to describe attributes of appearance; even where the same terms are used, they may not carry the same meaning or interpretation for designers and ordinary consumers (2). Nevertheless, consumers have innate tastes for design (e.g., based on Gestalt principles), and with time they may develop better comprehension, appraisal skills, and refined preferences for the design of artefacts (as well as buildings, paintings, photographs, etc.). The preferences of individuals may progress as they develop greater design acumen and accumulate more experience in reacting to designed objects, while preferences may also be affected by personality traits. Design acumen, in particular, pertains to people’s aptitude for or approach to visual design, which may be characterised by quicker sensory connections, greater sophistication of preferences, and a stronger propensity for processing visual versus verbal information (3). The gaps between consumers and designers in domain knowledge and experience may cause divergences when making inferences directly about a product, as well as when ‘reading’ the designer’s intention from the product’s appearance.

The starting point of da Silva, Crilly and Hekkert posits that “the designer’s intention can intuitively be regarded as the essence of a product and that knowledge of this intention can therefore affect how that product is appreciated” (p. 22). The ‘essence’ describes how a product is supposed to behave or perform as foreseen by the designer; thinking about it can give consumers as much pleasure as perceiving the product’s features.

Appreciation in Study 1 is measured as a composite of five scale items — liking, beauty, attractiveness, pleasingness, and niceness; it is a form of ‘valence judgement’ but with a strong “flavour” of aesthetics, seemingly a remnant of its origin as a scale of aesthetic appreciation that the researchers adapted to represent general product appreciation (a minimal sketch of how such a composite might be computed follows the note below).

  • Note: The degree to which the researchers succeeded in expanding the meaning of ‘appreciation’ may have some bearing on the findings where respondents make judgements beyond aesthetics (e.g., the scale lacks an item on ‘usefulness’).
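For readers who want the measurement made concrete, here is a minimal sketch of how such a composite score might be computed. The 1-7 response range, the equal weighting of items and the example ratings are assumptions for illustration; the article itself only names the five items.

```python
from statistics import mean

# Hypothetical ratings (1-7) given by one respondent to one product
# on the five appreciation items named in the article.
ratings = {
    "liking": 5,
    "beauty": 6,
    "attractiveness": 5,
    "pleasingness": 6,
    "niceness": 4,
}

# Composite appreciation score: an unweighted average of the five items.
appreciation = mean(ratings.values())
print(round(appreciation, 2))  # -> 5.2
```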

At first it is established that knowledge of the designers’ explicit intentions, relating to 15 products in Study 1, influenced the appreciation of the designed products for better or worse (i.e., in absolute change) vis-à-vis the appreciation based on pictures alone. Subsequently, the researchers found support for an overall increase in appreciation (i.e., a positive effect) following exposure to explicit statements of the designers’ intentions.

A deeper examination of the results revealed, however, that for three products there was a more substantial improvement; for ten products a moderate or minor increase was found due to intention knowledge; and two products suffered a decrease in appreciation. Furthermore, the less a product was appreciated based only on its image, the more it could gain in appreciation after consumers were informed of the designer’s intention. Products do not receive higher post-appreciation merely because they were better appreciated in the first place. More conspicuously, for products that were more difficult to interpret and judge based on their visual image alone, knowledge of the designer’s intention could help consumer respondents realise and appreciate much better their purpose and why they were designed in that particular way, considering both their visual appeal and their function (though there is a qualification to this, explained later).

The second study examined the reasons for changes in appreciation after respondents were informed of designers’ intentions. Study 2 aimed to distinguish between appreciation that is due to appraisal of the intention per se and appreciation attributed to how well a product fulfills the designer’s intention, independent of whether or not a consumer approves of the intention itself. This study concentrated on three of the products used in Study 1, described briefly below with their stated intentions (images are included in the article):

  • A cross-cultural memory game (Product B) — The game “was designed with the aim of making the inhabitants of The Netherlands aware of their similarities instead of their differences” (i.e., comparing elements of Dutch and Middle Eastern cultures). [Product B gained the most in post-appreciation in Study 1.]
  • A partially transparent bag (Product C) — Things that are no longer needed, but are still in good condition, can be left in this bag on the street for anyone interested: “It was designed with the aim of enabling people to be generous towards strangers.” [Moderate gain.]
  • A “fitted-form” kitchen cupboard (Product G) — In this cupboard everyday products can be stored in fitted compartments according to their exact shapes. The designer’s intention said the product “was designed with the aim of helping people appreciate the comfortable predictability of daily household task”. [Product G gained the least in post-appreciation in Study 1.]

Consistent with Study 1, these three products were appreciated similarly and to a high degree based on images alone, and their appreciation increased to large, medium and small degrees after respondents were informed of the intentions. It is noted, however, that overall just half of the respondents reported that knowing an intention changed how much they liked the respective product (about two-thirds for B, half for C, and a third for G). Subsequently, respondents were probed about their reasons for changes in appreciation (liking), and specifically about their assessment of the product as a means to achieve the stated intention. Three themes emerged as underlying the influence of intention knowledge on product appreciation: (a) perception of the product; (b) evaluation of the intention; and (c) evaluation of the product as a means to fulfill its intention (as explicitly queried).

Knowledge of the designer’s intention can change the way consumers perceive the product, its form and features. Firstly, it can make the product appear more interesting, such as by adding an element of surprise or an unexpected insight about its form (found especially for product B). In some cases it simply helps to comprehend the product’s form. The insight gained from knowing the designer’s intention may be expressed in revealing a new meaning of the product that improves appreciation (e.g., the more positive social ‘giving’ meaning of product C). But here is a snag — if the intention consumers are told of contradicts the meaning they assigned to the product when initially perceiving its image, it may instead decrease their appreciation. For example, the ‘fitted-form’ cupboard (G) may seem nicely chaotic, but if the way a consumer participant interpreted it does not agree with the intention given by the designer (it ‘steals’ something from its attraction), the consumer becomes disappointed.

Upon being informed of the designer’s intention, a consumer may appreciate an idea or cause expressed in the intention itself (e.g., on the merit of being morally virtuous, as for products B and C). The positive attitude towards the intention is then transferred to the product (e.g., ‘helping people is a very beautiful thing’, in reference to C). On the downside, knowing an intention may push consumers away from a product (e.g., disliking the ‘predictability’ of one’s behaviour underlying product G). A product may thus gain or lose consumers’ favour insofar as the intention reflects on its essence.

But relying on a (declared) intention for the idea, cause or aim it conveys is not a sufficient criterion for driving appreciation higher or lower. Consumers also consider, as expected of them, whether the product is an adequate means to implement the idea or fulfill the aim. It is not just about what the designer intended to achieve but also about how well the product was designed to achieve the designer’s goal. Participants in Study 2 were found to hold a product in favour for its capacity to fulfill its intended aim, even when they did not judge the intention as virtuous or worthy. There were also opposite cases where appreciation decreased but participants pointed out that the fault was not in the intention, but rather in its implementation (e.g., “I think it’s a good idea [intention] but this [product C] won’t really work”). The authors suggest that participants use references in their judgements, including alternative known or imagined products which they believe to be more successful in fulfilling a similar aim, or alternative aims or causes they could think of as appropriate for the same product.

The researchers find evidence in participants’ explanations suggesting they see how efficiency can be beautiful (e.g., how materials are used optimally and aesthetically). They relate this notion to a design principle of obtaining ‘maximum-effect-from-minimum-means’. Participants also endorsed novel or unusual means of realising the intention behind a product. Hekkert defined the principle above as one of the goals to pursue for a pleasing design: conveying more information through fewer and simpler features, creating more meanings through a single construct, and applying metaphors. Hekkert also recommended a sensible balance between typicality and novelty (‘most advanced, yet acceptable’) that will inspire consumers but not intimidate them (4).

  • This research was carried out as part of the Project UMA: “Unified Model of Aesthetics” for designed artefacts at the Department of Industrial Design, Delft University of Technology, The Netherlands. (See how the model depicts a balance in meeting safety needs versus accomplishment needs for aesthetic pleasure: connectedness-autonomy, unity-variety, typicality-novelty).

Knowledge of the intentions of designers can elucidate for consumers why a product was designed to appear and to be used in a particular way. It supplies a motivation or cause (e.g., social solidarity, energy saving) for obtaining and using the designed product. But the intention should be reasonable and agreeable to consumers, and the product design in practice has to convince consumers that it is fit and capable of fulfilling the intention. It remains desirable, nevertheless, that the product is visually pleasing, both as an object of aesthetic appeal and as a communicator of functional and symbolic meanings.

When marketers assess that consumers are likely to have greater difficulty interpreting a product’s visual design and inferring the intention behind it, they may wisely accompany a presentation of the product with a statement by the designer. This would apply, for instance, to innovative products, early products of their type, or original concepts for known products. The designer may introduce the design concept, his or her intention or aim, and perhaps how it was derived; this introduction may be delivered as text as well as video in assorted media as suitable (print, online, mobile). On the part of consumers, exposure to the designer’s viewpoint would enrich their shopping and purchasing experience, helping them to develop better-tuned visual impressions and judgements of products.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) How People’s Appreciation of Products Is Affected by Their Knowledge of the Designers’ Intentions; Odette da Silva, Nathan Crilly, & Paul Hekkert, 2015; International Journal of Design, 9 (2), pp. 21-33.

(2) How Consumers Perceive Product Appearance: The Identification of Three Product Appearance Attributes; Janneke Blijlevens, Marielle E.H. Creusen, & Jan P.L. Schoorman, 2009; International Journal of Design, 3 (3), pp. 27-35.

(3) Seeking the Ideal Form: Product Design and Consumer Response; Peter H. Bloch, 1995; Journal of Marketing, 59 (3), pp. 16-29.

(4) Design Aesthetics: Principles of Pleasure in Design; Paul Hekkert, 2006; Psychology Science, 48 (2), pp. 157-172.

Read Full Post »

There can hardly be a doubt that Internet users would be lost, unable to exploit the riches of information on the World Wide Web (WWW) and the Internet overall, without the aid of search engines (e.g., Google, Yahoo!, Bing). Anytime information is needed on a new concept or an unfamiliar topic, one turns to a search engine for help. Users search for information for various purposes in different spheres of life — formal and informal education, professional work, shopping, entertainment, and others. While on some tasks the relevant piece of information can be quickly retrieved from a single source chosen from the results list, oftentimes a rushed search that relies on the results in immediate sight is simply not enough.

And yet users of Web search engines, as revealed in research on their behaviour, tend to consider only results that appear on the first page (a page usually includes ten results). They may limit their search task even further by focusing on just the first “top” results that can be viewed on the screen, without scrolling down to the bottom of the first page. Users also tend to open only a few webpages by clicking links on the results list (usually up to five results)[1].

  • Research in this field is based mostly on the analysis of query logs, but researchers also conduct lab experiments and in-person observation of users while they perform search tasks.

Internet users refrain from going through results pages and stop short of exploring information sources located on subsequent pages that are nonetheless potentially relevant and helpful. It is important, however, to distinguish between search purposes, because looking beyond the first page is not necessary and beneficial for every type of search. Firstly, our interest is in the class of informational search, whose purpose in general is to learn about a topic (other recognized categories are navigational search and transactional / resource search)[2]. Secondly, we may distinguish between a search for more specific information and a search for learning more broadly about a topic. The goal of a directed search is to obtain information regarding a particular fact or list of facts (e.g., the UK’s prime minister in 1973, the US secretaries of state in the 20th century). Although it is likely we could find answers to such questions from a single source (e.g., Wikipedia) found on the first page of results, it is advisable to verify the information with a couple of additional sources; that usually would be sufficient. An undirected search, on the other hand, aims at learning more broadly about a topic (e.g., the life and work of the architect Frank Lloyd Wright, online shopping behaviour). The latter type of search is our main focus, since in this case ending a search too soon can be the more damaging and harmful to our learning or knowledge acquisition [3]. This may also be true for other types of informational search identified by Rose and Levinson, namely advice seeking and obtaining a list of sources to consult [2].

With respect to Internet users especially in their role as consumers, and to their shopping activities, a special class of topical search is associated with learning about products and services (e.g., features and attributes, goals and uses, limitations and risks, expert reviews and advice). The negative consequences of inadequate learning in this case may be salient economically or experientially to consumers (though perhaps not as serious for our knowledge base compared with other domains of education).

The problem starts even before the stage of screening and evaluating information based on its actual content. That is, the problem is not one of selectively choosing sources that appear reliable or whose information seems relevant and interesting; nor is it one of selectively favouring information that supports our prior beliefs and opinions (i.e., a confirmation bias). The problem has to do with the tendency of people to consider and apply only the portion of information that is put in front of them. Daniel Kahneman pointedly labeled this human propensity WYSIATI — What You See Is All There Is — in his excellent book Thinking, Fast and Slow [4]. Its roots may be traced to the availability heuristic, which deals with the tendency of people to rely on the exemplars of a category that are presented, or on the ease of retrieving the first category instances from memory, in order to make judgements about the frequency or probability of categories and events. The heuristic’s effect extends also to errors in assessing size (e.g., using only the first items of a data series to assess its total size or sum). However, WYSIATI is better viewed in the wider context of a distinction explained and elaborated by Kahneman between what he refers to as System 1 and System 2.

System 1 is intuitive and quick to respond, whereas System 2 is more thoughtful and deliberate. While System 2 is effortful, System 1 puts in as little effort as possible to make a judgement or reach a conclusion. System 1 is essentially associative (i.e., it draws on quick associations that come to mind), but it consequently also tends to jump to conclusions. System 2, on the other hand, is more critical and specialises in asking questions and seeking the additional information required (e.g., for solving a problem). WYSIATI is due to System 1 and can be particularly linked with other possible fallacies related to this system of fast thinking (e.g., representativeness, reliance on ‘low numbers’ or insufficient data). However, the slow-thinking System 2 is lazy — it does not hurry to intervene, and even when it is activated at the call of System 1, often enough it only attempts to follow and justify the latter’s fast conclusions [5]. We need to enforce our will in order to make System 2 think harder and improve, where necessary, on poorly based judgements made by System 1.

Several implications of WYSIATI when using a Web search engine become apparent. It is appealing to follow a directive which says: the search results you see are all there is. It is in the power of System 1 to tell users of a search engine: there is no need to look further — consider the links to search hits immediately accessible on the first page, preferably those seen on the screen from the top of the page, perhaps scrolling down to its bottom. Users should pause to ask whether the information proposed is sufficient or they need to look for more input.

  • Positioning a pagination bar with page numbers and a “Next” button, which searchers can click to proceed to additional pages, only at the bottom of a results page (e.g., in Google) is not helpful in this regard — such a bar should also be placed at the top of the page to encourage or remind users to check subsequent pages, whether or not they view all the results on the current page.

Two major issues in employing sources of information are the relevance and credibility of their content. A user can take advantage of the text snippet quoted from a webpage under the hyperlinked heading of each result in order to assess initially whether it is relevant enough to enter the website. It is more difficult, however, to judge the credibility of websites as information sources, and the operators of search engines may not be doing enough to help their users in this respect. Lewandowski is critical of an over-reliance of search engines on popularity-oriented measures as indicators of quality or credibility for evaluating and ranking websites and their webpages. He mentions: the popularity of the source domain; click and visit behaviour on webpages; links to the page from other, external pages, serving as recommendations; and ratings and “likes” by Internet users [6]. Popularity is not a very reliable, guaranteed indicator of quality (as is known for extrinsic cues of perceived quality of products in general). A user of a search engine could be misguided in relying on the first results suggested by the engine in the confident belief that they must be the most credible. Search engines indeed use other criteria for their ranking, such as text-based relevance tests and freshness, but with respect to credibility or quality, the position of a webpage in the list of results could be misleading.
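To make the concern tangible, here is a toy sketch in Python of a ranking score built only from the popularity-oriented signals Lewandowski lists. It is not any engine's real algorithm; the weights, field names, example URLs and 0-1 signal values are arbitrary assumptions for the illustration.

```python
# Toy popularity-only ranking score (illustrative assumption, not a real engine).
def popularity_score(page: dict) -> float:
    return (0.4 * page["domain_popularity"]   # popularity of the source domain
            + 0.3 * page["click_rate"]        # click/visit behaviour
            + 0.2 * page["inbound_links"]     # links from external pages
            + 0.1 * page["user_ratings"])     # ratings and "likes"

pages = [
    {"url": "https://popular-blog.example", "domain_popularity": 0.9,
     "click_rate": 0.8, "inbound_links": 0.7, "user_ratings": 0.9},
    {"url": "https://expert-source.example", "domain_popularity": 0.3,
     "click_rate": 0.2, "inbound_links": 0.4, "user_ratings": 0.3},
]

# A widely visited but shallow page outranks a more authoritative niche source,
# which is exactly the risk of treating popularity as a proxy for credibility.
for p in sorted(pages, key=popularity_score, reverse=True):
    print(p["url"], round(popularity_score(p), 2))
```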

  • Searchers should consider on their own whether the source (a company, organization or other entity) is familiar and has a good reputation in the relevant field, and then judge the content itself. Yet Lewandowski suggests that search engines should give priority in their ranking and positioning of results to entities that are recognized authorities, appreciated for their knowledge and practice in the domain concerned [7]. (Note: it is unverified to what extent search engines indeed use this kind of appraisal as a criterion.)

Furthermore, organic results are not immune to marketing-driven manipulation. Paid advertised links now normally appear in a sidebar or at the top or bottom of pages, mainly the first one, and they may also be flagged as “ads”; thus searchers can easily distinguish them and choose how to treat them. Yet the position of a webpage in the organic results list may be “assisted” by techniques of search engine optimization (SEO), which increase its frequency of retrieval, for example through popular keywords or tags in the webpage content or promotional (non-ad) links to the page. Users should be wary of satisficing behaviour, relying only on early results, and be willing to look somewhat deeper into the results list on subsequent pages (e.g., at least 3-4 pages, sometimes reaching page 10). Surprisingly instructive and helpful information may be found in webpages that appear on later results pages.

  • A principal rule of information economics may serve users well: keep browsing results pages and considering the proposed links until the additional information seems only marginally relevant and helpful and no longer justifies the extra time spent browsing results (a minimal sketch of such a stopping rule follows below). Following this criterion suggests no rule of thumb for the number of pages to view — in some cases it may be sufficient to consider two results pages, while in others it could be worth considering even twenty.
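Here is a minimal sketch of that marginal-value stopping rule. The function name, the cost-per-page figure and the example values are hypothetical placeholders; in practice the searcher makes this judgement informally rather than with numbers.

```python
def pages_worth_viewing(marginal_values, cost_per_page=0.1):
    """Return how many results pages to view: stop at the first page whose
    expected marginal value no longer exceeds the time cost of viewing it."""
    viewed = 0
    for value in marginal_values:   # expected usefulness of each next page
        if value <= cost_per_page:  # benefit no longer justifies the time
            break
        viewed += 1
    return viewed

# Example: usefulness tends to decline over pages, but not always after page 1.
print(pages_worth_viewing([0.9, 0.5, 0.3, 0.12, 0.05]))  # -> 4
```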

Another aspect of search behaviour concerns the composition of queries and the transition between queries during a session. It is important to balance sensibly and efficiently between the number of queries used and the number of results pages viewed on each search trial. Web searchers tend to compose relatively short queries, about 3-4 keywords on average in English (in German, queries are 1-2 words long, since German includes many compound words). Users make relatively little use of logical operators. However, users update and change queries when they run into difficulty finding the information they seek. It becomes a problem if they grow dissatisfied with a query too quickly because the needed information did not surface immediately. Users also switch between strings of keywords and phrases in natural language. Yet updating the query (e.g., replacing or adding a word) frequently changes the results list only marginally. The answer to a directed search may sometimes be found just around the corner, that is, in a webpage whose link appears on the second or third results page. And, as said earlier, it is worth checking 2-3 answers or sources before moving on. Therefore, it is wise at least to eye-scan the results on 2-4 pages (e.g., based on heading and snippet) before concluding that the query was not accurate or effective enough.

  • First, users of Web search engines may apply logical operators to define and focus their area of interest more precisely (as well as other advanced-search criteria, for example time limits). Additionally, they may try the related query strings suggested by the search engine at the bottom of the first page (e.g., in Google). Users can also refer to specialised domain databases (e.g., news, images) accessible from the top tabs. Yahoo! Search, furthermore, offers on its first page a range of result types from different databases mixed with general Web results, and Google suggests references to academic articles from its Google Scholar database for "academic" queries.

The way Internet users perceive their own experience with search engines can be revealing. In a 2012 survey of the Pew Research Center on Internet & American Life, 56% of respondents (adults) expressed strong confidence in their ability to find the information they need by using a search engine, and an additional 37% said they were somewhat confident. Also, 29% said they are always able to find the information they look for and 62% said they can find it most of the time, together a vast majority of 91%. Additionally, American respondents were mostly satisfied with the information found, saying that it was accurate and trustworthy (73%), and thought that the relevance and quality of results had improved over time (50%).

Internet users appear to set themselves modest information goals and become satisfied with the information they gather, arguably too quickly. They may not appreciate enough the possibilities and scope of information that search engines can lead them to, or may simply be over-confident in their search skills. As suggested above, a WYSIATI approach could drive searchers of the Web to end their search too soon. They need to make the effort, willingly, to overcome this tendency as the task demands, putting System 2 to work.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) As cited by Dirk Lewandowski (2008), Search Engine User Behaviour: How Can Users Be Guided to Quality Content?, Information Services & Use, 28, pp. 261-268, http://eprints.rclis.org/16078/1/ISU2008.pdf ; see also, for example, research by Bernard J. Jansen and Amanda Spink (2006), How Are We Searching the World Wide Web.

(2) Daniel E. Rose & Danny Levinson (2004), Understanding User Goals in Web Search, ACM WWW Conference, http://facweb.cs.depaul.edu/mobasher/classes/csc575/papers/www04-rose.pdf

(3) Dirk Lewandowski (2012), Credibility in Web Search Engines, in Online Credibility and Digital Ethos: Evaluating Computer-Mediated Communication, S. Apostel & M. Folk (Eds.), Hershey, PA: IGI Global (viewed at http://arxiv.org/ftp/arxiv/papers/1208/1208.1011.pdf, 8 July '14)

(4) Daniel Kahneman (2011), Thinking, Fast and Slow, Penguin Books.

(5) Ibid. 4.

(6) Ibid. 3.

(7) Ibid. 1 (Lewandowski 2008).

 

         

Read Full Post »

The Rolling Stones most probably need no introduction. At least those born anytime between 1950 and 1980 should know the band, with Mick Jagger as its lead singer, and some of their widely known hits like (I Can't Get No) Satisfaction, Start Me Up, Jumpin' Jack Flash, and Paint It Black. By continuing to perform after the 1970s the band has given younger generations a better chance to become its fans as well. It is the longest-running active rock band ever (performing since 1962, albeit with some changes to its original line-up).

  • They are now four: Mick Jagger, Keith Richards, Charlie Watts, and Ron Wood (the first two have written most of the band's songs). Wood replaced Mick Taylor in 1975; Taylor has recently returned to perform with the band as a guest.

Therefore, when it was announced this spring that the Rolling Stones would perform in Israel for the first time, in a concert on 4 June 2014, the news was received with great excitement and anticipation. But then came a snag: the ticket prices announced were higher than Israeli rock fans apparently expected. The concert took place in the city park of Tel-Aviv, in an area shaped like an "amphitheatre". There were three types of tickets. A small portion were allocated for standing on the lawn in a close area in front of the stage (the "Golden Ring"), priced at 1600 NIS (US$460) per ticket. The vast majority of tickets allowed standing on the lawn stretching from behind the Golden Ring to the back slopes of the "amphitheatre"; each of these tickets cost 700 NIS (US$200). Additional VIP tickets with extra perks offered seats on a staircase-balcony situated on the right-hand side, facing the stage, for the price of 2700 NIS (US$770). A total of 50,000 tickets were offered.

Rock fans made mainly two types of complaints: (a) that tickets were more expensive than those charged for other concerts of foreign artists performing in Israel this year and in the past few years; (b) that they were more expensive compared with prices charged for concerts of the Rolling Stones in other countries (e.g., the 2012-2013 "50 & Counting" tour and the current 2014 "On Fire" tour). Those price comparisons were used as a basis for consumers to claim that the ticket prices in Israel were unfair. The anger was directed towards both the local organizing agent and the Rolling Stones. Social activists ran a protest campaign in social media to persuade fans not to buy tickets, which most likely explains the sluggish progress of ticket sales until the day of the concert. All that time in the run-up to the concert there was talk that not enough people were buying tickets. Eventually, the amphitheatre was filled up with 48,000 spectators, including the VIP balcony (a sigh of relief is permitted).

Consumers frequently judge the fairness or unfairness of a price in question based on comparisons to prices paid by others (e.g., friends), to prices paid by oneself on previous occasions, and to prices paid in other outlets for the same or similar products or services. Such comparisons are not easy to make, varying in accuracy and level of relevance. A key criterion for the relevance of a comparison is the degree of similarity between the cases whose prices are compared: the more similar the cases are in their non-price aspects while their prices differ, the stronger the judgement of unfairness is expected to be.

  • When comparing with the prices of other rock or pop concerts consumers attended in the past, we should take into account factors such as: (1) the other artists used as reference; (2) when the other concerts took place (e.g., this year, three years ago); (3) the venue of the concert (e.g., a park, a football/basketball stadium, a concert hall). Further attributes extend from a difference in venue: seating or standing tickets, distance from the stage, and flat versus rising ground or balcony. For example, standard tickets for standing in the same park at the concert of Paul McCartney cost 500 NIS, but that was five years ago. Neil Young, however, will be performing at that park later this summer, and standing tickets cost less than 400 NIS. In another case, Cliff Richard performed last year at the Tel-Aviv basketball stadium: tickets for sitting on the flat floor of the basketball court cost about 1000-1500 NIS while tickets in the first rows of the tier balcony facing the stage cost about 650 NIS. Arguing for unfairness is therefore not straightforward.
  • In comparisons to concerts of the Rolling Stones in other countries, differences associated with the venue of the concert are again important. In addition, one may also need to account for differences in standard of living and purchasing-power parity (PPP) between countries. Fans in Israel, for instance, were angered that tickets in countries like the US or UK, where the standard of living is higher than in Israel, actually cost less when translated into shekels. Let us consider a few examples (a small numeric sketch of such a comparison follows this list): (1) ticket prices for concerts in Rome (22 June) and Paris (13 June) range from "standard" €78 (~US$110) to "premium" €150 (~US$210), nominally and relatively less expensive; (2) in the concert at Perth Arena in Australia, scheduled for 29 October this year, tickets for standing in the Tongue Pit adjacent to the stage or for seating in the flat area at the centre of the arena cost A$580 (~US$540), whereas tickets for sitting in the lower rows of the tier balconies, more distant from the stage, cost A$376 (~US$350); while some place arrangements may be more convenient in Perth, overall the tickets are not less expensive than in Israel; (3) in fact, complaints about the relatively high prices the Rolling Stones charge have also been voiced in other countries; for instance, an article in The Telegraph criticised the high prices for the band's concerts in London in November 2012 during their 50 & Counting tour (prices ranged between £95 and £375 [~US$150-600], with VIP Hospitality tickets priced at £950! [~US$1520]), requiring the Rolling Stones to defend the prices they charge (Ron Wood explained they invested millions in arranging the stage). Truly, there are not many active bands today like them.
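
The small numeric sketch below (in Python) works through the nominal currency side of such a comparison, using the approximate 2014 conversions implied by the figures above. A fuller comparison would further multiply each figure by a country-specific purchasing-power (PPP) factor, which is deliberately left out here rather than invented.

# A minimal sketch of the cross-country ticket-price comparison discussed above.
STANDARD_TICKETS = {
    # venue: (local price, approximate US$ per unit of local currency)
    "Tel-Aviv (standing, lawn)": (700, 1 / 3.5),   # 700 NIS ~= US$200
    "Rome / Paris (standard)":   (78, 1.40),       # EUR 78  ~= US$110
    "Perth (lower-tier seats)":  (376, 0.93),      # A$376   ~= US$350
}

for venue, (local_price, usd_per_unit) in STANDARD_TICKETS.items():
    nominal_usd = local_price * usd_per_unit
    print(f"{venue:28s}  ~US${nominal_usd:4.0f}")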

In a cognitive, calculated decision process, according to the theory of mental accounting (Thaler), a consumer would evaluate the value to him or her of attending a rock concert based on certain attributes or benefits of the band performing (e.g., how much the songs are liked, their singing and music-playing, and the show given at live concerts). Expressed in monetary terms, this psychological value corresponds to the highest price the consumer is willing to pay (similar to the concept of a reservation price in economic theory). The difference between this monetary value of equivalence and the (normal) price the consumer is asked to pay denotes the acquisition utility for the consumer.

  • The normal or 'list' price is often not the actual price paid, due to special deals and discounts put forward; the difference between the normal price and the discounted actual price denotes the additional transaction utility a consumer can gain. For instance, customers of an Israeli mobile telecom company could buy their tickets for the Rolling Stones concert at prices 100 NIS lower than the official prices. (Some fans had a chance to buy standard tickets at the half price of 350 NIS in a contest organised by the band.) A minimal sketch combining the two utility components appears below.
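
The following minimal sketch (in Python) puts the two mental-accounting components together. The willingness-to-pay figure is a purely hypothetical illustration for one fan; the prices are those cited in the post (a 700 NIS standard ticket and the 100 NIS discount for the telecom company's customers).

def acquisition_utility(willingness_to_pay, normal_price):
    """Monetary value-equivalent of the concert to the consumer minus the normal price asked."""
    return willingness_to_pay - normal_price

def transaction_utility(normal_price, actual_price):
    """Extra gain from paying a discounted actual price below the normal ('list') price."""
    return normal_price - actual_price

willingness_to_pay = 900   # hypothetical value of the concert to one devoted fan, in NIS
normal_price = 700         # official standard (lawn) ticket price, in NIS
actual_price = 600         # price after the 100 NIS discount, in NIS

print("Acquisition utility:", acquisition_utility(willingness_to_pay, normal_price))  # 200 NIS
print("Transaction utility:", transaction_utility(normal_price, actual_price))        # 100 NIS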

This methodical way of deriving a (perceived) value and reaching a decision may break down when applied to a rock or pop concert. Music as a form of art evokes emotions that are likely to disrupt sensible calculations of value. Moreover, for devoted fans of a singer or a band, adoration and affective attachment are likely to influence the decision process more strongly. Fans of the band may find it difficult and disturbing to analyse their experience of listening to the music or attending a live concert in the way required to derive a well-founded value or utility. When the experience is about enjoyment, excitement, and getting carried away by the music, the monetary value or the price fans are willing to pay can be expected to receive a boost upwards. They could perceive a reasonable acquisition utility even with the premium near-stage or VIP tickets.

But many other fans who feel close to rock and pop music, and who may be greater fans of other artists in these genres, could also be strongly attracted to attend the concert because of the extraordinary opportunity to see and hear the Rolling Stones performing live in Tel-Aviv. Consumers may sense the historical significance of such an event, not to be missed. That could act as an emotional inducement for these fans to raise the price they are willing to pay high enough to buy at least the standard type of ticket. It took until an hour before the concert to ascertain that there were indeed enough of them to fill the amphitheatre in the park (with some help from discounts).

Two important ways of approaching price were considered above: one is directed inwards and focuses on the perceived value of the target service, a rock concert; the other is directed outwards and compares the target price with prices of other cases or episodes that seem similar to consumers, through which they judge the (un)fairness of the target price. Both avenues introduced challenging problems for this rock concert; it probably could not have occurred without the emotional component of the decision process. That, however, does not have to spoil the event itself for those who bought tickets. Price may continue to preoccupy customers' minds in the gap period between the time of buying the ticket and the day of the concert. When the event arrives, customers "close" the mental account; they may either reckon the value obtained from their ticket purchase or shift their attention fully to the event and the benefits it delivers, the latter being the more desirable way for them to avoid conflicts of value.

The concert of the Rolling Stones was wonderful. Mick Jagger was fantastically energetic on stage (admirable at the age of 70+), and Keith Richards looked especially joyful. Jagger also demonstrated nice skills in expressing himself in Hebrew, to the delight of the local audience. The band performed the songs mentioned above among others (19 in total); unfortunately Jagger did not sing their beautiful song Ruby Tuesday, but he performed another ballad from their repertoire that does not appear regularly in their concerts, Angie.

  • Given the enthusiasm of the audience, the spectators did not let price issues spoil the celebration. There were two other factors that threatened to hinder the enjoyment. First, there was a heat wave that evening with high humidity; that could not be anticipated and was beyond human control, so it simply had to be tolerated while drinking lots of water. The second factor was entirely due to human behaviour: spectators lifting smartphones above their heads in an attempt to record video of episodes from the concert. The quality of images captured on the little screen (e.g., from a distance of 200m+) and the enjoyment spectators feel in doing so is left for debate elsewhere. Meanwhile, those screens waved above "stole" pieces from the field of view of the spectators behind, who tried to escape them; what a shame.

The Rolling Stones did everything in their power, and they had the power, to make spectators happy for the money they had paid, to the last shekel. Price did matter in making the purchase decision and it even threatened to spoil the concert. However, that was true only during the run-up period, until the concert started on 4 June at 21:15. As the performance went on, spectators could easily forget about the price. The price effect was mitigated or vanished, leaving the spectators with the pleasure of the music and the performance of the Rolling Stones, particularly Mick Jagger. One may think of other artists who can achieve this outcome, but the Rolling Stones are definitely near the top of the list. It remains an especially good experience to remember.

Ron Ventura, Ph.D. (Marketing)

 

 

Read Full Post »

Consumers often use price information as a cue to infer the quality of products; it is a familiar phenomenon based on the belief that price and quality are positively correlated. Consider for instance laptop computers: consumers may rely on price to predict the quality of a laptop model for which information about the attributes that determine its quality is lacking, or because they find it difficult to understand the technical features and try to infer the laptop's expected quality from its (list) price. Wine is another excellent example of a product whose quality consumers try to assess based on its price. The perceived price-quality relation is not always well-substantiated, which may lead to some costly mistakes. Reliance on price to judge quality is contingent on individual, contextual (e.g., product type) and situational factors.

Consumers may rely on price as an informational cue for different purposes: (a) to reduce the risk of buying a product of unacceptably low quality; (b) to avoid or mitigate the effort of evaluating complex product information; (c) to anticipate differences in quality between product brands and models (and sometimes also their symbolic meanings associated with prestige and luxury). Price-quality judgements involve two essential steps: estimating the strength of the relationship between price and quality in a focal product category, and applying this judgement to predict the quality of a particular product item (e.g., a new product model). Consumers may differ in their proficiency both in assessing the relationship and in applying it in various everyday situations. A minimal numeric sketch of these two steps is given below.
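
The minimal sketch below (in Python 3.10+) illustrates the two steps with invented laptop data: quality ratings are on an arbitrary 1-10 scale, and the prices and ratings are made up for illustration only, not taken from any study.

from statistics import correlation, linear_regression

base_prices    = [250, 400, 600, 900, 1400]   # list prices of familiar laptop models, in US$
base_qualities = [4.0, 5.5, 6.0, 7.5, 8.5]    # perceived quality ratings of those models (1-10)

# Step 1: estimate how strongly price and quality move together in this category.
r = correlation(base_prices, base_qualities)

# Step 2: apply the relationship to predict the quality of a new, unfamiliar model.
slope, intercept = linear_regression(base_prices, base_qualities)
new_price = 1100
predicted_quality = slope * new_price + intercept

print(f"Estimated price-quality correlation: {r:.2f}")
print(f"Predicted quality at US${new_price}: {predicted_quality:.1f} / 10")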

The magnitude of price-quality correlations varies between product categories, and most consumers are aware of this. However, their calibration of the price-quality relationship for particular product types is often flawed, and consumers over-estimate the correlations. Consumers tend to follow a general belief about the price-quality relation without properly testing it as a hypothesis in the product category under consideration for purchase; alternatively, they bias their judgement by considering only evidence consistent with the prior belief (e.g., when the load of information to process is larger and harder to grapple with, or when information is organised in a format that highlights the price-quality correlation [1]). Consumers also differ in the first place in their propensity to hold a price-quality belief (i.e., how strongly price-quality schematic they are). Capturing the actual reliance on price as a quality cue may also prove elusive, because applying such a rule depends on the amount and nature of product information available.

In research published recently (2013), Lalwani and Shavitt study how the consumer propensity to perceive a price-quality relationship is governed or moderated by thinking styles and by modes of self-construal derived from consumers' relations with others in the groups to which they belong. They distinguish between (1) independents (individualists), who prefer to form their opinions and set personal goals on their own, in the hope that these will be accepted by their in-group peers rather than censured by them, and (2) interdependents (collectivists), who are inclined to form opinions and set goals that are subordinated to those of the in-group to which they belong. They refer to cultural self-construal by acknowledging that independence has been associated more closely with Western nations or Caucasian societies and interdependence with South and East Asian nations or societies. The distinction is primarily relevant to the construction of price-quality judgements through its correspondence with analytic vs. holistic styles of thinking, respectively. The authors additionally examine specific conditions that may enhance or inhibit the use of price to infer quality.

Analytic thinking orients towards processing and evaluating a single piece of information at a time, for example examining a product item's value on a specific attribute. The 'analytic' consumer may compare a few models on a specific attribute but ignore any other attributes. In a pictorial image, analytic thinking implies that the individual would look at each object in the image separately rather than inspecting a collection of elements in a scene. Holistic thinking, on the other hand, orients towards observing and evaluating relations between attributes and objects. It is much less focused on single items of information, in favour of considering collections of them and how they relate to each other. In a pictorial image, holistic thinking means that an individual more easily identifies combinations of elements and conceives the inter-relations between them in the whole scene. The argument put forward, and tested, by Lalwani and Shavitt posits that interdependents (collectivists), who rely on their social connections and are more considerate of the needs and goals of others in their in-groups before their own, are more predisposed to apply holistic thinking; independents (individualists), who tend to focus on their own needs and goals before those of others, are more inclined to adopt an analytic style of thinking. Holistic thinking, which endorses relational processing, is clearly essential for making judgements about a price-quality relationship. The authors are particularly concerned with the boundary conditions under which the advantage of holistic thinking in making price-quality judgements has an impact.

Lalwani and Shavitt note that the independent and interdependent modes of self-construal are not mutually exclusive; that is, they may be exhibited simultaneously in the same person or within a particular society. Therefore, following previous research, the authors apply two scales, one measuring independence and the other interdependence, as opposed to treating these modes as polar ends of the same continuum. They find that a stronger tendency to perceive a price-quality relationship (a global belief) is predicted by a greater inclination towards interdependent self-construal. No similar relation is found with independent self-construal. This confirms that only interdependent self-construal may support the consumer tendency to rely on a price-quality relationship. [2]

Asians and Hispanics (in the US), representing interdependent self-construals, were found to utilise price to infer the quality of a "new" target product item (an alarm clock), whereas Caucasians (independents) showed no significant sensitivity to differences in price for the target product. It is emphasised that the Asian/Hispanic participants not only considered price-quality information available on "base" items but also actually used price in their evaluation of the quality of the target item.

The difference in type of self-construal does not by itself clarify sufficiently how it should lead to differences in approach to the perceived price-quality relationship. That is where the difference between holistic and analytic thinking comes into play. If we look only at the distinction between American nationals and Indian nationals, it would be relatively difficult to understand why the Indians have been found to exhibit a stronger tendency to rely on price as a quality cue. This difference is partially explained (mediated) once the researchers account for a difference in the tendency to think holistically: the Indians also have a stronger tendency towards that type of thinking, which better supports processing the relation between price and quality.

Even more convincing are the results from a study in which an exercise with a pictorial image was used to encourage (prime) analytic versus holistic thinking by participants (American Asians/Hispanics vs. Caucasians). As expected, holistic thinking facilitated reliance on price when evaluating the quality of a "new" target product item (a calculator) for both Asians/Hispanics and Caucasians. That is, they evaluated the higher-priced target brand to be of higher quality than a lower-priced brand. Nonetheless, the Asians/Hispanics, who are more likely to be 'interdependent', differentiated the quality of higher- and lower-priced target brands even more strongly, revealing their advantage in relational processing. In contrast, when both Asians/Hispanics and Caucasians were primed to think analytically, neither group seemed to use price as a quality cue. This highlights the power of holistic thinking for making price-quality judgements; conversely, "imposing" analytic thinking on those who have a stronger tendency towards holistic thinking seems to override their advantage in predicting quality based on price.

Lalwani and Shavitt point out that the advantage of relational processing in using price as a quality cue takes effect under intermediate conditions: when there is a logical basis and supportive evidence (e.g., market conditions, available product information) for relying on price to infer quality, but neither when conditions are poor or prohibitive nor when evidence of a price-quality relationship is simply obvious and applying it is fairly easy. This is demonstrated in two cases: (a) an advantage of relational processing with regard to non-symbolic, functional or practical products (e.g., paper towels) vs. symbolic products better able to express one's identity (e.g., watches, a bicycle); the latter product type induces a price-quality tendency in both 'independents' and 'interdependents'; (b) an advantage of relational processing when information is provided on (non-price) attributes of moderate bandwidth (e.g., quality, durability, reliability), but not for broad, generalised evaluations/attitudes (everybody uses price) and not for narrow, specific features (nobody uses price). When conditions are sufficient but not too permissive, only those who have the advantage will discriminate between products on perceived quality according to price.

The distinction between independent and interdependent self-construal is somewhat circumstantial with respect to the utilisation of price as a quality cue; it does not immediately make sense why the two behavioural phenomena should be related. References to national and ethnic origins may also be too liberal as generalisations, contributing little to our understanding beyond exposing the relationship. Underlying the distinction between modes of self-construal with regard to price-quality judgement is the important distinction between holistic and analytic thinking. Lalwani and Shavitt effectively suggest that the extent to which people think in terms of relations between objects or their attributes corresponds with their attitude towards relations with other people, and hence the latter's connection with the relationship between price and perceived quality. The distinction between thinking styles therefore seems to shed more light on the conditions that induce or limit reliance on price as a quality cue.

Yet, establishing a connection between self-construal, particularly as represented by national or ethnic (sociocultural) origins, and reliance on price as a quality cue can be most productive and helpful for segmentation: it facilitates the identification of, and access to, relevant segments for marketing initiatives associated with the price-perceived quality relationship. The implications may lie in devising advertising messages or premium product offerings that target consumers with an expected greater tendency to make price-quality inferences. Consequently, those consumers would likely be more favourable towards, and receptive to, higher-priced products or brands. This research further contributes to previous knowledge in the field by suggesting conditions under which most consumers, or only selective segments, would be evoked to make price-quality judgements. Marketers may consider the breadth of the attributes described (broader dimensions vs. narrow features) in addition to the structure of the information presented to consumers (e.g., products rank-ordered by quality vs. random order [3]).

Source:

You Get What You Pay For? Self-Construal Influences Price-Quality Judgements; Ashok K. Lalwani and Sharon Shavitt, 2013; Journal of Consumer Research, 40 (August), pp. 255-267, DOI: 10.1086/670034

Notes:

[1] A Selective Hypothesis Testing Perspective on Price-Quality Inference and Inference-Based Choice; Maria L. Cronley, Steven S. Posavac, Tracy Meyer, Frank R. Kardes, & James J. Kellaris, 2005; Journal of Consumer Psychology, 15 (2), pp. 159-169

[2] Statistical Note: The validity of the results of the multiple regression analysis performed is contingent on the two scales of individualism-independence and collectivism-interdependence not being negatively correlated. Such evidence is not reported. Turning to the source (Oyserman, 1993) reveals, as logically expected, that some of the statements in the two scales contradict each other. In this case, the version of the scales adopted by the authors suggests less conflict, and the correlation between them is near zero. On the one hand, it is a little surprising that not even a low negative correlation was found to indicate the contrast between these constructs. On the other hand, a strong negative correlation between the scales could have meant that only the stronger predictor, 'interdependence', won over the other, confounded predictor and thus came out as the single significant predictor.

[3] Ibid. 1.

Read Full Post »

Think of this: you step into your car, close the door, and fasten your seat belt. Then you key in your destination on a panel or simply say "Go to XXX", press a button, and your self-driving car gets on its way. During the travel you may read, eat a light meal, or do some work on your laptop computer or tablet device. This scenario is not all that imaginary: the development and testing of autonomous or driverless cars is already in progress, and the time when the first models are marketed and hit the roads may be just a few years ahead. Some may want to see it as a dream come true, when every person can have his or her own private chauffeur installed in the car. For some, the new robotic car may bring to mind KITT, the clever, talking sports car in the popular futuristic TV series Knight Rider from the 1980s (featuring David Hasselhoff). However one relates to the concept of a self-driving car, it is likely to change dramatically the whole experience of travelling in a car, especially in the driver's seat.

The autonomous car of Google appears to be the most publicised venture of this kind, full of ambition, but Google is just one of the players. Projects in this evolving technological field call for collaboration between technology companies or academic research labs, responsible essentially for creating the sensors, computer vision and information technologies required for navigating and operating the automated cars, and automakers (e.g., Toyota, Audi, Renault-Nissan) that provide the vehicles. While the relevant devices and technologies may already exist, they have to be particularly accurate to be self-reliant, and they must communicate properly with the ordinary car's systems to control them safely in real time; the effort to achieve those targets is still in progress.

The elaborate system of Google equips an autonomous car with a radar, laser range finders (lidars), and associated software. The extended capabilities of this system allow the car to join the traffic on freeways/highways smoothly and independently, cross intersections, make right and left turns, and pass slower vehicles. The system's cost is estimated at $70,000 per car (1).

A high-tech company based in Israel, MobilEye Vision Technologies, offers an alternative approach that is based on cameras only to collect all the visual information necessary from the scene of the road. For MobilEye, engaging in the driverless-car challenge seems a natural extension of their existing capabilities in developing and producing Advanced Driver Assistance Systems: camera-driven applications for alerting human drivers to collision risks (e.g., pedestrians starting to cross the street, insufficient distance from the car in front, as well as passing the legal speed limit) (2). The competence of MobilEye's system for a driverless car is more limited at this time than Google's, but that may be attributed partly to the fact that the system currently uses a single camera at the front windshield. Hence, a car equipped with their system is capable only of self-driving in a single lane on freeways; yet it can detect traffic lights, slow down to a complete stop, and then resume the journey at freeway speed. The capabilities and performance of their system in driving a car are expected to improve, as company officials say they plan to enhance it with a wide-angle camera and additional side-mounted and rear-facing cameras. They aim to match the capabilities of Google's autonomous system but with a technological solution that is much more cost-effective to put on the road (1).
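
Returning to the collision warnings mentioned above, here is a generic sketch (in Python) of the kind of rule a camera-based forward-collision warning could apply. This is emphatically not MobilEye's actual algorithm: the distance and speed figures would in practice come from the vision system, and the warning threshold here is an illustrative assumption.

def time_to_collision(distance_m, own_speed_mps, lead_speed_mps):
    """Seconds until reaching the vehicle ahead, assuming constant speeds;
    returns None if the gap is not closing."""
    closing_speed = own_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None
    return distance_m / closing_speed

def should_warn(distance_m, own_speed_mps, lead_speed_mps, warn_threshold_s=2.5):
    ttc = time_to_collision(distance_m, own_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < warn_threshold_s

# 30 m behind a car doing 15 m/s while driving at 25 m/s: TTC = 3.0 s, no warning yet.
print(should_warn(30, 25, 15))   # False
print(should_warn(20, 25, 15))   # True: TTC = 2.0 s, below the 2.5 s threshold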

The most urgent and vital issue to address with respect to driverless cars has to be road safety. It is the motivation most frequently suggested for making the transition from human driving to robotic driving: a computer-based system would behave more reliably on the road than a human driver and would therefore lead to a considerable reduction in road accidents.

Car accidents are often caused by a driver misjudging a situation on the road in a matter of seconds and consequently taking the wrong action. But accidents also occur because drivers make dangerous moves, believing overconfidently that they can pull them off (e.g., "stealing" a red light, passing a slower car without sufficient distance from other cars or a clear view of traffic in the opposite direction, speeding). A robotic system may indeed be able to prevent many accidents in either kind of circumstance: its estimates (e.g., of distance) would be more accurate, and the computer algorithms it utilizes would make more reliable decisions, certainly not subject to human tendencies of risk-seeking and whims. Human judgement is fallible and intuitive quick decisions can be misguided. But intuition is on many occasions very effective in identifying obstacles, irregularities and hazards, and therefore helps avoid personal harm or accidents. It allows drivers to make sufficiently accurate decisions in a short time, which is important especially when time limitations are in force. Gut feelings also play an important guiding role. Yet when more time is available, drivers can plan their path and re-examine their intuitive judgement.

Sadly, drivers get into dangerous situations because they distract themselves, willingly or unintentionally, from whatever happens on the road (e.g., operating and talking on the mobile phone, kids quarrelling in the back seat). Thus, MobilEye's video demonstrations of how their warning system helps to avoid an accident (e.g., a pedestrian ahead) focus on incidents where the driver is distracted, possibly operating his music player or searching in his handbag, something he should not have been doing in the first place. However, the logic that since this kind of behaviour and other human fallacies cannot be completely prevented, and efforts to educate and train people to drive better are ineffective, we should pass control indefinitely to robotic systems, is normatively flawed and even dangerous, because it allows people to feel less responsible. Nevertheless, an autonomous system may be welcome to resolve specific incidents when distraction by other activities cannot be delayed or when fatigue sets in.

An interesting question that follows is: how well will robotic driving systems be able to anticipate human behaviour on the roads? Assuming that the human driver keeps his or her eyes on the road, who will more successfully detect a pedestrian about to step into the road from between parked cars, the driver or the robotic system with its sensors? Will the latter respond in time without human intervention? While there are some fascinating projections about how the new cars will affect urban life (e.g., parking, traffic lights, building construction (3)), there is as yet a lack of convincing evidence that driverless cars are ready for crowded, busy urban areas. Furthermore, replacing the fleet of cars on the roads can be expected to take years (auto experts suggest that the first models will be commercially available as early as 2020 and that most cars will be autonomous between 2040 and 2050). This is not likely to be a smooth transition period; transport and urban policy makers must prepare carefully for it. In particular, they should be addressing how effectively driverless cars can anticipate and respond to errors or misconduct by human drivers, and the risk of accidents caused by human drivers who misunderstand how self-driving cars manoeuvre or even try to outsmart the robotic cars.

It is therefore all the more essential that autonomous cars should in fact operate in mixed modes of human and robotic control during the transition period, and beyond it. David Friedman, Deputy Administrator of the National Highway Traffic Safety Administration in the US, identified in an interview with The Wall Street Journal five levels of automation, from "0" (all-human) to "4" (full automation). The intermediate Level 3 indicates "limited automation", using assisted positioning technologies but requiring the human driver to retake control from time to time (4). Although human judgement is imperfect, human drivers should be given flexibility in relying on automated driving and be allowed to intervene occasionally. John Markoff of The New-York Times/IHT reports that the Toyota-Google car (Level 3 [WSJ]) made him feel more detached from the operation of the "robot", while the Audi-MobilEye car made him better realise what it takes for a "robot" to drive a car (1). Nevertheless, there is no definite answer to the question of what is correct to do in critical moments: should the human driver trust the system to do its job, or interfere and take control? Markoff felt less confident when the car in front slowed ahead of a stoplight (on the road down to the Dead Sea) and it "took all of my willpower", in his words, to trust the car and not intervene. That system probably still has to be improved, but such episodes are likely to continue to be experienced all the time. On the one hand, computer algorithms are likely to deal better, more frequently, with road and traffic conditions, and the driver should sit back and trust the robot. On the other hand, the driver should be advised not to engage too deeply in activities like reading or playing a video game and to remain conscious of the road, prepared to take control in complex and less normal situations.

Introducing driverless cars may, furthermore, have significant implications for the computerised car's connectivity with, and use of, external information resources, and consequently for our privacy versus convenience. Thinking in particular of an information giant like Google, it is difficult to imagine that the company will not make use of the flow of information it may receive from cars for marketing purposes. True, much information can already be gathered and utilised by existing navigation applications and shared through them. Yet employing an autonomous driving system is going to involve even greater volumes and more types of information; collecting the information will be justified by the operational requirements of the system, which will be difficult to argue with (e.g., information from Google's sensors on a car can be matched at any time with cloud-based data sets). That is, the autonomous system, a navigation application in the car, and external information resources will hold continuous "conversations" as the car drives.

Therefore, during a future autonomous car drive, you may not be left so free to read or do your work. It will become more likely that as you pass near a restaurant you receive an alert that you have not visited it lately, and as you approach a DIY store you are notified of its great discount deals, and so on. The system will know much better which business establishments of interest the car is going to pass and when it is expected to reach them; the sensors may also detect brand signage of interest on other vehicles or on the roadsides, consult external information resources, and send a message to the driver. That is not to mention the history of the car's pathways that can be gathered, accumulated and saved in external databases. The opportunities for business enterprises and marketers are enormous, and they are only starting to be revealed. It would be convenient for digital-oriented consumers to receive some of those messages, but it would also come at the growing cost of losing the privacy of their whereabouts.
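
As a minimal sketch of how such a location-triggered message could work, the Python snippet below compares the car's reported position with a list of points of interest and raises an alert when one comes within range. The names, coordinates and radius are made-up illustrations; no real service, database or API is implied.

from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

POINTS_OF_INTEREST = [
    # (name, latitude, longitude) -- purely fictional examples
    ("Riverside Steakhouse", 32.0850, 34.7820),
    ("DIY Megastore",        32.1000, 34.8400),
]

def proximity_alerts(car_lat, car_lon, radius_km=0.5):
    """Return promotional alerts for points of interest within radius_km of the car."""
    return [f"You are passing near {name}"
            for name, lat, lon in POINTS_OF_INTEREST
            if distance_km(car_lat, car_lon, lat, lon) <= radius_km]

print(proximity_alerts(32.0853, 34.7818))   # triggers the steakhouse alert only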

The cars of the future are expected to be increasingly electronically wired, connected to the Internet (wirelessly), and fitted with sensors, processors, and computer applications or applets. Some experts suggest that the dashboard controls of the car will actually become virtual, displayed on the driver's smartphone instead of embedded in the car; overall, analogue instruments are expected to be replaced by digital ones (5). This could change considerably the nature of car repairs, requiring more involvement of electronics and computer experts alongside mechanics and electricians, and it would probably make the care and maintenance of the car by its owner more complex. The car will be more susceptible to sudden shutdown due to software failure or malfunction; the owner will have to take care of updating the various software installed in the car, wirelessly from the Internet or by a USB key (5); it may thereby also become necessary to install anti-virus protection software on the car.

Finally, technological visionaries and proponents of self-driving robotic cars should keep in mind that driving gives pleasure to many car owners, besides the benefit of bringing them from place to place. A law-abiding driver who simply enjoys the experience should not be deprived of it. However, not every journey is enjoyable, and driving enjoyment should be balanced against releasing the human driver from the effects of fatigue and stress. A self-driving system may therefore prove greatly positive and desirable after an extended period of driving on a long journey, on a monotonous straight freeway, in city centres, and in traffic jams. Yet, don't be surprised if your car drives you off the road to a nearby steakhouse restaurant at its own discretion.

Ron Ventura, Ph.D. (Marketing)

Sources:

(1) “Low-Cost System Offers Clues to Fast-Approaching Future of Driverless Car”, John Markoff, The International Herald Tribune (Global Edition of The New-York Times), 29 May 2013 (See the original article in NYT with a short demo video of MobilEye at: http://www.nytimes.com/2013/05/28/science/on-the-road-in-mobileyes-self-driving-car.html?pagewanted=all&_r=0 )

(2) Website of MobilEye Vision Technologies (www.mobileye.com — see Products pages).

(3) “Driverless Cars Could Reshape the City of the Future”, Nick Bilton, The Boston Globe (Online), 8 July 2013. http://www.bostonglobe.com/business/2013/07/07/driverless-cars-could-reshape-cities/SuUfDpWx9qs9Db3mxr7hRN/story.html

(4) "Self-Driving Car Sparks New Guidelines", Joseph B. White, The Wall Street Journal (Online WSJ.com), 30 May 2013. http://online.wsj.com/article/SB10001424127887323728204578515081578077890.html

(5) “Automobiles Ape the iPhone”, Seth Fletcher, Fortune (European Edition), 20 May 2013, Volume 167, No. 7, p. 25.

Read Full Post »

Older Posts »