
Posts Tagged ‘Choice’

Fifteen years have passed since the Nobel Prize in economics was awarded to Daniel Kahneman; now, in Fall 2017, another leading researcher in behavioural economics, Richard Thaler, wins this honourable prize. Thaler and Kahneman are no strangers — they have collaborated on research in this field from its early days in the late 1970s. Moreover, Kahneman, together with the late Amos Tversky, helped Thaler take his first steps in this field, or more generally in bringing economics and psychology together. Key elements of Thaler’s theory of Mental Accounting are based on the value function in Kahneman and Tversky’s Prospect theory.

In recent years Thaler has become better known for the approach of choice architecture and the nudging tools he devised, as co-author with Cass Sunstein of the book “Nudge: Improving Decisions About Health, Wealth and Happiness” (2008-9). However, at the core of Thaler’s contribution is the theory of mental accounting, through which he helped to lay the foundations of behavioural economics. The applied tools of nudging cannot be properly appreciated without understanding the concepts of mental accounting and the other phenomena he studied with colleagues, which describe deviations of judgement and behaviour from the rational economic model.

Thaler, originally an economist, was unhappy with the predictions of consumer choice arising from microeconomics — he did not contest the principles of economic theory as a normative theory (e.g., regarding optimization), but he questioned economists’ claims that the theory can describe and predict actual consumer behaviour. Furthermore, Thaler and others argued early on that deviations from rational judgement and choice behaviour are predictable. In his ‘maverick’ paper “Toward a Positive Theory of Consumer Choice” from 1980, Thaler described and explained deviations and anomalies in consumer choice that stand in disagreement with economic theory. He referred to concepts such as framing of gains and losses, the endowment effect, sunk costs, search for information on prices, regret, and self-control (1).

The theory of mental accounting that Thaler developed thereafter is an integrated framework describing how consumers perform value judgements and make choice decisions about the products and services they purchase, while recognising psychological effects on economic decisions (2). The theory is built around three prominent concepts (described here only briefly):

Dividing a budget into categories of expenses: Consumers metaphorically (and sometimes physically) allocate the money of their budget into buckets or envelopes according to the type or purpose of expenses. This means that they do not transfer money freely between categories (e.g., food, entertainment). The concept contradicts the economic principle of fungibility, suggesting that one dollar is not valued the same in every category. A further implication is that each category has a sub-budget allotted to it, and if expenses in the category surpass its limit during a period, the consumer will prefer to give up the next purchase rather than add money from another category. Hence, for instance, Dan and Edna will not go out for dinner at a trendy restaurant if that requires taking money planned for buying shoes for their child. Managing the budget only against the total limit of monthly income is usually less satisfactory, though some purchases can still be made on credit without hurting other purchases in the same month. On the other hand, it is easy to see how consumers get into trouble when they try to spread too many expenses across future periods with their credit cards and lose track of the category limits for their different expenses.

Segregating gains and integrating losses: In Kahneman and Tversky’s model, value is defined over gains and losses relative to a reference point (a “status quo” state). Thaler explicated in turn how properties of the gain-loss value function play out in practical evaluations of outcomes. The two general “rules”, demonstrated most clearly in “pure” cases, say: (a) when there are two or more gains, consumers prefer to segregate them (e.g., if Chris makes gains on two different shares on a given day, he will prefer to see them separately); (b) when there are two or more losses, consumers prefer to integrate them (e.g., Sarah is informed of a price for an inter-city train trip but then told there is a surcharge for travelling in the morning — she will prefer to consider the total cost of her requested journey). Thaler additionally proposed what consumers would prefer in more complicated “mixed” cases of gains and losses: whether to segregate the gain from the loss (e.g., if the loss is much greater than the gain) or integrate them (e.g., if the gain is larger than the loss, so that one remains with a net gain).
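To make the two rules concrete, here is a minimal sketch in Python. It assumes a standard Kahneman-Tversky-style value function with illustrative parameters; the functional form and numbers are not taken from Thaler’s papers and serve only to show how concavity over gains and the steeper, convex loss branch produce the segregation and integration preferences.

```python
# A minimal sketch, not from Thaler's papers: a Kahneman-Tversky-style value
# function with illustrative parameters, used to compare evaluating two
# outcomes separately (segregation) versus as a single sum (integration).

def value(x, alpha=0.88, lam=2.25):
    """Hypothetical value function: concave over gains, convex and steeper over losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def prefers_segregation(x, y):
    """True if experiencing x and y separately 'feels' better than their combined sum."""
    return value(x) + value(y) > value(x + y)

print(prefers_segregation(50, 80))    # two gains -> True: segregate them
print(prefers_segregation(-50, -80))  # two losses -> False: integrate them
```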

Adding up acquisition value and transaction value to evaluate product offers: A product or service offer generally carries both benefits and costs for the consumer (e.g., the train ticket example above overlooked the benefit of the travel to Sarah). But value may also arise from the offering or deal itself, beyond the product per se. Thaler recognised that consumers may look at two sources of value, and adding them together yields the overall worth of a purchase offer: (1) acquisition utility is the value of the difference between the [monetary] value equivalent of the product to the consumer and its actual price; (2) transaction utility is the value of the difference between the actual price and a reference price. Within this calculus of value hides the play of gains and losses. The value concept was quickly adopted by consumer and marketing researchers in academia and implemented in means-end models that depict the chains of value underlying consumers’ purchase decision processes (mostly in the mid-1980s to mid-1990s). Thaler’s approach to ‘analysing’ value is increasingly acknowledged and applied in practice as well, as such expressions of value in consumer responses to offerings can be found in many domains of marketing and retailing.

A reference price may take different forms, for instance: the price last paid; a price recalled from a previous period; the average or median price in the same product class; a ‘normal’ or list price; a ‘fair’ or ‘just’ price (which is not so easy to specify). Ceteris paribus, the transaction utility may vary considerably depending on which form of reference price the consumer uses, and hence on whether the transaction is represented as a gain or a loss and of what magnitude. This also suggests that marketers may hint at a price for consumers to use as a reference (e.g., an advertised price anchor) and thus influence consumers’ value judgements.
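Continuing in the same illustrative spirit, the following self-contained sketch combines acquisition utility and transaction utility into one evaluation of an offer; the additive composition follows the description above, while the value function, figures and names are assumptions made for demonstration only.

```python
# A minimal, self-contained sketch of combining Thaler's two utilities; the
# value function parameters and all figures are illustrative assumptions.

def value(x, alpha=0.88, lam=2.25):
    """Hypothetical gain-loss value function (concave gains, steeper losses)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def offer_worth(equivalent_value, actual_price, reference_price):
    """Acquisition utility plus transaction utility for a purchase offer."""
    acquisition = value(equivalent_value - actual_price)  # product worth vs. price paid
    transaction = value(reference_price - actual_price)   # price paid vs. reference price
    return round(acquisition + transaction, 1)

# Same product and same price, but a different reference price in the consumer's mind:
print(offer_worth(100, 80, 90))   # reference above the price paid: the deal adds a gain
print(offer_worth(100, 80, 70))   # reference below the price paid: the deal adds a loss
```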

We often observe and think of a discount as the difference between an actual price (‘only this week’) and a higher normal price — in this case we may construe acquisition value and transaction value as two concurrent ways of perceiving a gain on the actual price. Thaler’s model is more general, however, because it recognises a range of prices that consumers may employ as a reference. In addition, a list price may be suspected of being set higher on purpose, to invoke the perception of a gain vis-à-vis the actual discounted price, which in practice is charged more regularly than the list price. A list price or an advertised price may also serve primarily as a cue for the quality of the product (and perhaps also influence the equivalent value of the product for less knowledgeable consumers), while the actual selling price provides the transaction utility. In the era of e-commerce, consumers also appear to use the price quoted on a retailer’s online store as a reference; they may then visit one of its brick-and-mortar stores, where they hope to obtain the desired product faster, and complain if they discover that the in-store price for the same product is much higher. As customers increasingly begrudge delivery fees and waiting times, a viable solution for securing customers is to offer a ‘click-and-collect at a store near you’ scheme. Moreover, as more consumers shop with a smartphone in hand, the use of competitors’ prices, or even the same retailer’s online prices, as references is likely to become even more frequent and ubiquitous.


  • The next example may help to further illustrate the potentially compound task of evaluating offerings: Jonathan arrives at a car dealership intending to buy his next new car, the model he favours, but there he finds out that the price on offer for that model is $1,500 higher than the price he saw in ads two months earlier. The sales representative claims that the carmaker’s prices have risen lately. However, when proposing a digital display system (e.g., entertainment, navigation, technical car info) as an add-on to the car, the seller also proposes to give Jonathan a discount of $150 on its original price tag.
  • Jonathan appreciates this offer and is inclined to segregate the saving from the additional payment for the car itself (i.e., a ‘silver lining’). The transaction value may thus be expanded to include two components (separating the evaluations of the car offer and the add-on offer completely is less sensible because the add-on system is still contingent on the car).

Richard Thaler also contributed to revealing, understanding, and assessing the implications of additional cognitive and behavioural phenomena that do not align with rationality in the economic sense. At least some of those phenomena have direct implications in the context of mental accounting.

One of the most widely acknowledged of these phenomena by now is the endowment effect. It is the recognition that people value an object (product item) already in their possession more than the same object when they merely have the option of acquiring it. In other words, the monetary compensation David would be willing to accept to give up a good he holds is higher than the amount he would agree to pay to acquire it — people essentially find it difficult to give up something they own or are endowed with (no matter how they originally obtained it). The effect has been most famously demonstrated with mugs, but to generalise it, it was also tested with other items such as pens. The effect may well creep into consumers’ considerations when they try to sell much more expensive property, such as a car or an apartment, beyond the aim of making a financial gain. In his latest book on behavioural economics, ‘Misbehaving’, Thaler provides a friendly explanation, with a graphic illustration, of why fewer exchange transactions occur between individuals who receive a mug and those who do not than economic theory predicts, due to the endowment effect (3).

Another important issue of interest to Thaler is fairness, such as when it is fair or acceptable to charge consumers a higher price for an object that is in shortage or hard to obtain (e.g., shovels for clearing snow on the morning after a snowstorm). Notably, the perception of “fairness” may be moderated by whether the rise in price is framed as a reduction in gain (e.g., a $200 discount from the list price being cancelled for a car in short supply) or as an actual loss (e.g., an explicit increase of $200 above the list price) — the change in actual price is more likely to be perceived as acceptable in the former case than in the latter (4). He further investigated fairness games (e.g., Dictator, Punishment and Ultimatum). Additional noteworthy topics he studied are susceptibility to sunk costs and self-control.

  • More topics studied by Thaler can be traced by browsing his long list of papers over the years since the 1970s, and perhaps more leisurely through his illuminating book: “Misbehaving: The Making of Behavioural Economics” (2015-16).

The tactics of nudging, as part of choice architecture, are based on lessons from the anomalies and biases in consumers’ judgement and decision-making procedures studied by Thaler himself and others in behavioural economics. Thaler and Sunstein looked for ways to guide or lead consumers to make better choices for their own good — health, wealth and happiness — without attempting to reform or alter their rooted modes of thinking and behaviour, an attempt that would most probably be doomed to failure. Their clever idea was to work within the boundaries of human behaviour, modifying it just enough, and in a predictable way, to put consumers on a better track towards a choice decision. Nudging could mean diverting a consumer from his or her routine way of making a decision to arrive at a different, expectedly better, choice outcome. It is likely to involve taking a consumer out of his or her ‘comfort zone’. Critically, however, Thaler and Sunstein stipulated in their book ‘Nudge’ that: “To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates”. Accordingly, nudging techniques should not impose on consumers the choice of any designated or recommended option (5).

Six categories of nudging techniques are proposed: (1) defaults; (2) expect errors; (3) give feedback; (4) understanding “mappings”; (5) structure complex choices; and (6) incentives. In all of these techniques, the intention is to allow policy makers to direct consumers to choices that improve consumers’ welfare. Yet the approach they advocate, ‘libertarian paternalism’, is not received without contention — while libertarian, that is, without coercing a choice, a question remains as to what gives an agency or policy maker the wisdom and the right to determine which options would leave consumers better off (e.g., health plans, saving and investment programmes). Thaler and Sunstein discuss the implementation of nudging mostly in the context of public policy (i.e., by government agencies), but these techniques are applicable just as well to plans and policies of private agencies or companies (e.g., banks, telecom service providers, retailers in their physical and online stores). Nevertheless, public agencies, and even more so business companies, should devise and apply nudging measures to help consumers choose the plans that fit them and leave them better off, not to manipulate consumers or take advantage of their human errors and biases in judgement and decision-making.

Richard Thaler reviews and explains in his book “Misbehaving” the phenomena and issues he has studied in behavioural economics through the story of his rich research career — an interesting, lucid and compelling story. He tells candidly about the stages he has gone through in his career. Most conspicuously, this story also reflects the obstacles and resistance that behavioural economists faced for at least 25-30 years.

Congratulations to Professor Richard Thaler, and to the field of behavioural economics to which he has contributed so substantially, in theory and in application.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Toward a Positive Theory of Consumer Choice; Richard H. Thaler, 1980/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 15: pp. 269-287], Cambridge University Press. (Originally published in Journal of Economic Behaviour and Organization.)

(2) Mental Accounting and Consumer Choice; Richard H. Thaler, 1985; Marketing Science, 4 (3), pp. 199-214.

(3) Misbehaving: The Making of Behavioural Economics; Richard H. Thaler, 2016; UK: Penguin Books (paperback).

(4) Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias; Daniel Kahneman, Jack L. Knetsch, & Richard H. Thaler, 1991/2000; in Choices, Values and Frames (eds. Daniel Kahneman and Amos Tversky)[Ch. 8: pp. 159-170], Cambridge University Press. (Originally published in Journal of Economic Perspectives).

(5) Nudge: Improving Decisions About Health, Wealth, and Happiness; Richard H. Thaler and Cass R. Sunstein, 2009; UK: Penguin Books (updated edition).



A new film this year, “Sully”, tells the story of US Airways Flight 1549, which landed safely on the water of the Hudson River on 15 January 2009 following severe damage to both of the plane’s engines. This article is specifically about the decision process of the captain, Chesley (Sully) Sullenberger, with the backing of his co-pilot (first officer) Jeff Skiles; the film helps to highlight some instructive and interesting aspects of human judgement and decision-making in an acute crisis situation. Furthermore, the film shows how those cognitive processes contrast with computer algorithms and simulations, and why the ‘human factor’ must not be ignored.

There were altogether 155 people on board the Airbus A320 aircraft on flight 1549 from New York to North Carolina: 150 passengers and five crew members. The story unfolds while following Sully in the aftermath of the incident, during the investigation by the US National Transportation Safety Board (NTSB), which he faced together with Skiles. The film (directed by Clint Eastwood, featuring Tom Hanks as Sully and Aaron Eckhart as Skiles, 2016) is based on Sullenberger’s autobiographical book “Highest Duty: My Search for What Really Matters” (2009). Additional resources such as interviews and documentaries were also used in preparing this article.

  • The film is excellent, recommended for its way of delivering the drama of the story during and after the flight, and for the acting of the leading actors. A caution to those who have not seen the film: the article includes some ‘spoilers’. On the other hand, facts of this flight and the investigation that followed were essentially known before the film.

This article is not explicitly about consumers, although the passengers, as customers, were obviously directly affected by the conduct of the pilots, which saved their lives. The focus, as presented above, is on the decision process of the captain, Sullenberger. We may expect that such an extraordinarily positive outcome of the flight, rescued from a dangerous circumstance, would have a favourable impact on the image of US Airways, the airline that employs such talented flight crew members. But improving corporate image or customer service and relationships was not the relevant consideration during the flight — only saving lives was.

Incident Schedule: Less than two minutes after take-off (at ~15:27), a flock of birds (Canada geese) struck both engines of the aircraft. It is vital to realise that from that moment, the flight lasted less than four minutes! The captain took control of the plane from his co-pilot immediately after the impact with the birds, and then had between 30 seconds and one minute to decide where to land. Just 151 seconds passed from the impact with the birds until the plane was approaching directly above the Hudson River for landing on the water. Finally, impact with the water occurred 208 seconds after the impact with the birds (at ~15:30).

Using Heuristics: The NTSB investigators told Sully (Hanks) about flight calculations performed in their computer simulations, and argued that according to the simulation results it had not been necessary to land on the Hudson River, a highly risky type of crash-landing. In response, Sully said that it had been impossible for him and Skiles to perform all those detailed calculations during the four minutes of flight after the birds hit the aircraft’s engines; he relied instead on what he saw with his own eyes in front of him — the course of the plane and the terrain below as the plane glided with no engine power.

The visual guidance Sully describes using to navigate the plane resembles a type of ‘gaze heuristic’ identified by Professor Gerd Gigerenzer (1). In the example given by Gigerenzer, a player who tries to catch a ball flying in the air does not have time to calculate the trajectory of the ball from its initial position, speed and angle of projection. Moreover, the player would also have to take into account wind, air resistance and ball spin. The ball would be on the ground by the time the player made the necessary estimations and computations. An alternative intuitive strategy (heuristic) is to ‘fix gaze on the ball, start running, and adjust one’s speed so that the angle of gaze remains constant’. The situation of the aircraft flight is of course different, more complex and perilous, but a similar logic seems to hold: navigating the plane safely towards the terrain surface (land or water) when there is no time for any advanced computation (the pilot’s gaze would have to be fixed on the terrain beneath, towards a prospective landing ‘runway’). Winter winds in New York City on that freezing day probably made the landing task even more complicated. But in the few minutes available to him, Sully found this type of ‘gaze’ or eyesight-guided rule the most practical and helpful.
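The logic of such a gaze heuristic can be sketched as a simple control loop. The snippet below is illustrative only and is not taken from Gigerenzer’s text; the gain constant and the sign convention are assumptions made for demonstration.

```python
# An illustrative sketch of the control logic behind the gaze heuristic: nothing
# about the trajectory is computed; running speed is simply corrected so that the
# angle of gaze to the target stops drifting. Gain and sign are assumptions.

def adjust_speed(current_speed, gaze_angle, previous_gaze_angle, gain=1.0):
    """Return a corrected running speed that counteracts drift in the gaze angle."""
    angle_drift = gaze_angle - previous_gaze_angle
    # A drifting angle signals that the current speed will not intercept the target;
    # correct in proportion to the drift instead of solving the physics.
    return current_speed + gain * angle_drift

# One step of the loop: the gaze angle rose slightly, so the runner speeds up a little.
print(adjust_speed(current_speed=3.0, gaze_angle=32.0, previous_gaze_angle=31.5))
```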

Relying on Senses: Sullenberger made extensive use of his senses (visual, auditory, olfactory) to collect all the information he could from his surrounding environment. To start with, the pilots could see the birds coming at them right before some of them crashed into the engines — this evidence was crucial for identifying the cause of the problem instantly, though they still needed some time to assess the extent of the damage. In an interview for CBS’s programme 60 Minutes (with Katie Couric, February 2009), Sully says that he saw the smoke coming out of both engines, smelled the burned flesh of the birds, and subsequently heard a hushing noise from the engines (i.e., made by the remaining blades). He could also feel the trembling of the broken engines. This multi-modal sensory information helped convince him that the engines were lost (i.e., unable to produce thrust), in addition to the failure to restart them. Sully also utilised, all that time, information from the various gauges and instruments on the cockpit panel in front of him (while Skiles was reading to him from the manuals). The captain was thus attentive to multiple visual stimuli (including and beyond a visual guidance heuristic) throughout his decision process, from early judgement to acting on his decision to land on the water of the Hudson River.

Computer algorithms can ‘pick up’ and process all the technical information about the aircraft displayed to the pilots in the cockpit. The algorithms may also incorporate additional measurements into the computations (e.g., weather conditions) and perhaps data from sensors installed in the aircraft. But computer algorithms cannot ‘experience’ the flight event as the pilots did. Sully could ‘feel the aircraft’, perceiving the sensory stimuli he received in the cockpit, within and outside the cabin, almost simultaneously and rapidly, and responding to them (e.g., forming a judgement). Information available to him seconds after the impact with the birds gave him indications about the condition of the engines that the algorithms used in the simulations could not receive. That point was made clear in the dispute that emerged between Sully and the investigating committee over the condition of one of the engines. The investigators claimed that early tests and simulations suggested one of the engines was still functioning and could have allowed the pilots to bring the plane to land at one of the nearby airports (returning to La Guardia or diverting to Teterboro in New Jersey). Sully (Hanks) disagreed and argued that his indications were clear that this second engine was badly damaged and non-functional — both engines had no thrust. Sully was proven right — the committee eventually reported that missing parts of the disputed engine had been found and showed that the engine was indeed non-functional, disproving the early tests.

Timing and the Human Factor: Captain Sullenberger furthermore had a strong argument with the NTSB’s investigating committee about their simulations attempting to reconstruct or replicate the sequence of events during the flight. The committee argued that pilots in a flight simulator ‘virtually’ made successful landings at both La Guardia and Teterboro airports when the simulator computer was given the data of the flight. Sully (Hanks) found a problem with those live but virtual simulations: the flight simulation was flawed because it assumed the pilots could know immediately where it was possible to land, and they were instructed to do so. Sully and Skiles indeed knew immediately the cause of the damage, but they still needed time to assess its extent before Sully could decide how to react. Therefore, they could not actually have turned the plane towards one of those airports right after the bird strike, as the simulating pilots did. The committee ignored the human factor, as Sully argued: he had needed up to one minute to realise the extent of the damage and his decision options.

Sully’s conversation with air traffic controllers demonstrates his step-by-step, real-time assessments that he could not make it to La Guardia or, alternatively, to Teterboro — both were genuinely considered — before concluding that the aircraft might end up in the water of the Hudson. The captain then directed the plane straight above the river in approach for the crash-landing. One may also note how brief his response statements to the air controller were. Sully was confident that landing on the Hudson was “the only viable alternative”, as he said in his interview with CBS. In the film, Sully (Hanks) tells Skiles (Eckhart), during a recuperating break outside the committee hall, that he had no question left in his mind that they had done the right thing.

Given Sully’s strong resistance, the committee ordered additional flight simulations in which the pilots were “held” waiting for 35 seconds to account for the time needed to assess the damage before attempting to land anywhere. Following this minimal delay, the simulating pilots failed to land safely at either La Guardia or Teterboro. It was evident that those missing seconds were critical for arriving in time to land at those airports. Worse than that, the committee had to admit (as shown in the film) that the pilots had made multiple attempts (17) in their simulations before ‘landing’ successfully at those airports. The human factor of evaluation before making a sound decision in this kind of emergency situation must not be ignored.

Delving a little deeper into the event helps one realise how difficult the situation was. The pilots were trying to execute a three-part checklist of instructions. They were not told, however, that those instructions were designed for a loss of both engines at a much higher altitude than the one they were at just after completing take-off. The NTSB’s report (AAR-10-03) finds that the dual engine failure at low altitude was critical — it allowed the pilots too little time to complete the existing three-part checklist. In an interview with Newsweek in 2015, Sullenberger said of that challenge: “We were given a three-page checklist to go through, and we only made it through the first page, so I had to intuitively know what to do.” The NTSB committee further accepts in its report that landing at La Guardia could have succeeded only if started right after the bird strike, but, as explained earlier, that was unrealistic; importantly, it notes Sullenberger’s realisation that an attempt to land at La Guardia “would have been an irrevocable choice, eliminating all other options”.

The NTSB also commends Sullenberger in its report for operating the Auxiliary Power Unit (APU). The captain asked Skiles to try operating the APU after their failed attempt to restart the engines. Sully decided to take this action before they could reach the item on the APU in the checklist. According to the NTSB, operating the APU was highly beneficial in keeping electrical power available on board.

Notwithstanding Sully’s judgement and decision-making capabilities, his decision to land on the waters of the Hudson River could have ended miserably without his experience and skill as a pilot to execute it properly. He had 30 years of experience as a commercial pilot in civil aviation, starting in 1980 (with US Airways and its predecessors), and before that had served in the US Air Force in the 1970s as a pilot of military jets (the Phantom F-4). The danger in landing on water is that the plane may roll and fail to meet the surface level, parallel to the water, so that one of the wings hits the water first, breaks up, and causes the whole plane to capsize and break up in the water (as happened in a flight in 1996). That Sully succeeded in “ditching” safely on the water surface is far from obvious.

Sullenberger’s performance, from decision-making to execution, seems extraordinary. His judgement and decision capacity in these flight conditions may be exceptional; it is unclear whether other pilots could have performed as well as he did. Human judgement is not infallible; it may be subject to biases and errors and succumb to information overload. It is not too difficult to think of examples of people making bad judgements and decisions (e.g., in finance, health, etc.). Yet Sully demonstrated that a high capacity for human judgement and sound decision-making exists, and we can be optimistic about that.

It is not straightforward to extend conclusions from flying airplanes to other areas of activity. In one respect, however, there can be helpful lessons to learn from this episode: thinking more deeply and critically about the replacement of human judgement and decision-making with computer algorithms, machine learning and robotics. Such algorithms work best in familiar and repeated events or situations. But in new and less familiar situations, and in less ordinary and more dynamic conditions, humans are able to perform more promptly and appropriately. Computer algorithms can often be very helpful, but they are not always and necessarily superior to human thinking.

This kind of discussion is needed, for example, with respect to self-driving cars. It is a very active field in industry these days, connecting automakers with technology companies to install autonomous computer driving systems in cars. Google is planning to create ‘driverless’ cars without a steering wheel or pedals; their logic is that humans should no longer be involved in driving: “Requiring a licensed driver be able to take over from the computer actually increases the likelihood of an accident because people aren’t that reliable” (2). This claim is excessive and questionable. We have to distinguish carefully between computer aid to humans and the replacement of human judgement and decision-making with computer algorithms.

Chesley (Sully) Sullenberger, as the flight’s captain, allowed himself to be guided by his experience, intuition and common sense to land the plane safely and save the lives of all passengers and crew on board. He was wholly focused on “solving this problem”, as he told CBS — the task of landing the plane without casualties. He recruited his best personal resources and skills for this task, and his success may give everyone hope and strengthen belief in human capacity.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Gut Feelings: The Intelligence of the Unconscious”, Gerd Gigerenzer, 2007, Allen Lane (Penguin Books).

(2) “Some Assembly Required”, Erin Griffith, Fortune (Europe Edition), 1 July 2016.

 


From a consumer viewpoint, choice situations should be presented in a clear and comprehensible manner that facilitates consumers’ correct understanding of what is at stake and helps them choose the alternative that most closely fits their needs or preferences. But policy makers may go further and design choices to direct decision-making consumers to an alternative that is desirable or recommended in the policy makers’ judgement.

Humans (unlike economic persons, or Econs) are very likely to be influenced in their decisions by the way a choice problem is presented; even when unintentional, this influence is almost unavoidable. Sometimes, however, an intervention to influence a decision-maker is made intentionally. Choice architecture relates to how choice problems are presented: the way the problem is organised and structured, and how alternatives are described, including the tools or techniques that may be used to guide a decision-maker to a particular choice alternative. Richard Thaler and Cass Sunstein have called such tools ‘nudges’, and the designer of the choice problem is referred to as a ‘choice architect’. In their book, “Nudge: Improving Decisions About Health, Wealth and Happiness” (2009), the researchers were nonetheless very specific about the kinds of nudging they support and advocate (1). A nudge may be likened to a light push of a consumer out of his or her ‘comfort zone’ towards a particular choice alternative (e.g., an action, a product), but it should be harmless, and it should remain optional for consumers to accept or reject it.

Thaler and Sunstein argue that in some cases more action is needed to ‘nudge’ consumers in the right direction. That is because consumers, as Humans, often do not consider the choice situation and alternatives carefully enough; they tend to err and may not do what would actually be in their own best interest. It may be added that consumers’ preferences may not be well established, and when preferences are unstable it is even more difficult for consumers to find an alternative that fits them closely. Hence, the authors recommend acting in a careful, corrective manner that guides consumers towards an alternative that a policy maker assesses will serve them better (e.g., health care, savings). Yet they insist that no nudging intervention should be imposed on the consumer. They call their approach ‘libertarian paternalism’ — a policy maker may tell consumers what alternative would be right for them, but the consumer is ultimately left with the freedom to choose how to act. They state that:

To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not.

Thaler and Sunstein suggest six key principles, or types, of nudges: (a) Defaults; (b) Expect error (i.e., nudges designed to accommodate human error); (c) Give feedback (nudges reliant on social influence may be included here); (d) Understanding ‘mappings’ (i.e., a match between a choice made and its welfare outcome, such as consumption experience); (e) Structure complex choices; (f) Incentives. The authors discuss and propose how to use those tools in dealing with choice issues such as complexity and a status quo bias (inertia) (e.g., applied to student loans, retirement pensions and savings, medication plans).

Let’s look at some examples of how choice architecture may influence consumer choice:

A default may be set up to determine what happens if a consumer makes no active choice (e.g., ‘too difficult to choose’, ‘too many options’) or to induce the consumer to take a certain action. Defaults can change the significance of opt-in and opt-out choice methods. A basic opt-in could ask a consumer to tick a box if she agrees to participate in a given programme. Now consider a slight change: pre-ticking the box as the default — if the consumer does not wish to join, she can uncheck the box (opt out). A more explicit default and opt-out combination could state upfront (e.g., in a heading) that the consumer is automatically enrolled in the programme, and that if she declines she should send an e-mail to the organiser. If inclusion in a programme is the default and consumers have to opt out of it, many more will end up enrolled than if they had to actively approve their participation. Yet the effect may vary depending on the ease of opting out (just unchecking the box vs. sending a separate e-mail). Defaults of this type may be used for benign purposes such as subscription to an e-newsletter, as well as for sensitive purposes like organ donation (2).

  • A default option is particularly attractive when the ‘alternative’ action is actually choosing from a long list of other alternatives (e.g., mutual and equity funds for investment).

Making a sequence of choice decisions is a recurring purchase activity. As a simple example, suppose you have to construct a list of items that you want to purchase (e.g., songs to compile, books to order) by choosing one item from each of a series of choice sets. Presenting the choice sets in increasing order of size is likely to encourage the chooser to enter a maximising mind-set — starting with a small set, it is easier to examine all options in the set closely before choosing, and as the set size increases the chooser will continue trying to examine options exhaustively. When starting with a large choice set and decreasing the size thereafter, the opposite happens: the chooser enters a simplifying or satisficing mind-set. Thus, over the sequence of choice sets, a chooser in the increasing-order condition is likely to search more deeply and examine more options overall. As described by Levav, Reinholtz and Lin, consumers are “sticky adapters” (3). When constructing an investment portfolio, for instance, a financial policy maker may nudge investors to examine more of the available funds, bonds and equities by dividing them into classes presented as choice sets in increasing order of size (up to a reasonable limit).

Multiple aspects of choice design or architecture arise in the context of mass customization. Taking the case of price, a question arises whether to specify the cost of each level of a customized attribute (actually the price premium of upgraded levels over a baseline level) or only the total price of the final product designed. A proponent opinion argues that providing detailed price information for the levels of quality attributes allows consumers to consider the monetary implications of choosing an upgraded level on each attribute; this is less difficult than trying to extract from the total price the marginal cost of the level chosen on each attribute. Including prices for levels of quality attributes leads consumers to choose intermediate attribute levels more frequently (compared with a by-alternative choice set) (4). A counter opinion posits that carefully weighing price information on each attribute is not so easy (consumers report higher subjective difficulty), actually causing consumers to be too cautious and to configure products that are less expensive but also of lower quality. Hence, providing a total price for the resulting product could be sufficient and more useful for customers (5). It is hard to give a conclusive design suggestion in this case.
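To make the two presentation formats concrete, here is a small illustrative sketch; the base price, attributes and premium figures are entirely hypothetical and do not come from the cited studies.

```python
# A minimal sketch of the two price-presentation formats discussed above; the
# base price, attributes and premium figures are entirely hypothetical.

base_price = 900
upgrade_premiums = {"screen": 120, "memory": 80, "warranty": 60}  # upgraded levels chosen

# Attribute-level presentation: the cost of each upgrade is visible at the moment of choice.
for attribute, premium in upgrade_premiums.items():
    print(f"{attribute}: +${premium}")

# Total-price presentation: only the configured product's final price is shown,
# leaving the consumer to extract each upgrade's marginal cost on their own.
print(f"total: ${base_price + sum(upgrade_premiums.values())}")  # total: $1160
```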

In a last example, the form in which calorie information is provided on restaurant menus matters no less than posting it at all. As recent research by Parker and Lehmann shows, it is quite possible to overdo it (6). Consistent with other studies, the researchers find that when calorie figures are posted next to food dishes, consumers choose items with lower calorie content, on average, from the calorie-posted menu than from a similar traditional menu with no calorie figures. However, separating low-calorie items from their original food-type categories (e.g., salads, burgers) into a new group, as some restaurants do, may eliminate the advantage of calorie posting. While the logic of a separate group is that it would be more conspicuous and easier for diners to attend to, it could instead make it easier for them to exclude those items from consideration. Nevertheless, some qualification is needed, as the title given to the group also matters.

Parker and Lehmann show that organising the low-calorie items in a separate group explicitly titled as such (e.g., “Low Calories”, “Under 600 Calories”) attenuates the posting effect, thus eliminating the advantage of inducing consumers to order lower-calorie items. The title is important because it makes it easier for consumers to screen out the category from consideration (e.g., as unappealing on the face of it). It is demonstrated that giving the group a positive name unrelated to calories (e.g., “Eddie’s Favourites”, “Fresh and Fit”) generates less rejection and makes it no more likely to be screened out than other categories. In a menu that is simply calorie-posted, consumers are more likely to trade off the calories against other information on a food item, such as its composition and price. But if consumers are helped to screen out the low-calorie group as a way of simplifying their decision process at an early stage, they will also ignore those items’ calorie details.

  • An additional explanation can be suggested for disregarding the low-calorie items when they are grouped together: if those items are mixed into categories of other items similar to them in type of food, each item stands out as ‘low calorie’ and is perceived as different and more important. If, on the other hand, the low-calorie items are aggregated in a set-aside group, they are more likely to be perceived collectively as of diminished importance or appeal and be ignored together (cf. 7). Therefore, creating a separate group of varied items pulled out of all the other groups sends the wrong message to consumers and may nudge them in the wrong direction.

Both public and private policy makers can use nudging. But there are some limitations that deserve attention, especially with regard to private (business) policy makers. Companies sometimes act out of a belief that in order to recruit customers they should present complex alternative plans (e.g., mobile telecoms, insurance, bank loans), which includes obscuring vital details and making comparisons between alternatives very difficult. They see nudging tools that are meant to reduce the complexity of consumer choice as playing against their interest (e.g., if choice is complex, it will be easier for the company to capture, or trap in, the customer). This runs counter to the intention of Thaler and Sunstein, and they stand against this kind of practice.

For helping customers see more clearly the relation, and match, between their patterns of service usage and the cost they are required to pay, Thaler and Sunstein propose a nudge scheme called RECAP — Record, Evaluate, and Compare Alternative Prices. The scheme entails that providers publish, in readily accessible channels (e.g., websites), full details of their service and price plans, and also provide existing customers with periodic reports showing how their level of usage of each service component contributes to the total cost. These measures increase transparency and would help customers understand what they pay for, monitor and control their costs, and reconsider from time to time their current service plan vis-à-vis alternative plans from the same provider and from competitors. The problem is that service providers are usually reluctant to hand over such detailed information of their own good will. Public regulators may have to require companies to create a RECAP scheme, or perhaps nudge them to do so.
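As an illustration of what such a periodic report might look like, the sketch below prices a customer’s recorded usage under the current plan and under an alternative plan. The plan structures, names and figures are hypothetical; Thaler and Sunstein do not prescribe a specific format.

```python
# A minimal sketch of the kind of periodic report the RECAP scheme envisions:
# the customer's recorded usage priced per component under the current plan and
# under an alternative plan. Plan structures, names and figures are hypothetical.

def recap_report(usage, plan):
    """Cost of each usage component under a plan's unit prices, plus the total bill."""
    lines = {item: round(quantity * plan["unit_prices"][item], 2)
             for item, quantity in usage.items()}
    return {"plan": plan["name"],
            "lines": lines,
            "total": round(plan["monthly_fee"] + sum(lines.values()), 2)}

usage = {"minutes": 320, "sms": 40, "data_gb": 6}
current = {"name": "Current plan", "monthly_fee": 10.0,
           "unit_prices": {"minutes": 0.05, "sms": 0.10, "data_gb": 2.0}}
alternative = {"name": "Alternative plan", "monthly_fee": 18.0,
               "unit_prices": {"minutes": 0.02, "sms": 0.05, "data_gb": 1.5}}

for plan in (current, alternative):
    print(recap_report(usage=usage, plan=plan))
```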

In the lighter scenario, companies simply avoid nudging techniques that work to the benefit of consumers, out of concern that they would hurt their own interests. In the worse scenario, companies misinterpret nudging and use tools that actively manipulate consumers into choices that are not to their benefit (e.g., highlighting a more expensive product the consumer does not really need). Thaler and Sunstein are critical of public or private (business) policy makers who conceive and apply nudges in their own self-interest. They tend to dedicate more effort, however, to countering objections to government intervention in consumers’ affairs and popular suspicions of malpractice by branches of the government (i.e., issues that seem to be of major concern in the United States and may not be fully understood in other countries). Of course, it is important not to turn a blind eye to harmful use of nudges by public as well as private choice architects.

There are many opportunities in cleverly using nudging tools to guide and assist consumers. Yet there can be a thin line between interventions of imposed choice and free choice or between obtrusive and libertarian paternalism. Designing and implementing nudging tools can therefore be a delicate craft, advisably a matter primarily for expert choice architects.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Nudge: Improving Decisions About Health, Wealth and Happiness”; Richard H. Thaler and Cass R. Sunstein, 2009; Penguin Books (updated edition).

(2) Ibid 1, and: “Beyond Nudges: Tools of Choice Architecture”; Eric J. Johnson and others, 2012; Marketing Letters, 23, pp. 487-504.

(3) “The Effect of Ordering Decisions by Choice-Set Size on Consumer Search”; Jonathan Levav, Nicholas Reinholtz, & Claire Lin, 2012; Journal of Consumer Research, 39 (October), pp. 585-599.

(4) “Contingent Response to Self-Customized Procedures: Implications for Decision Satisfaction and Choice”; Ana Valenzuela, Ravi Dahr, & Florian Zettelmeyer, 2009; Journal of Marketing Research, 46 (December), pp. 754-763.

(5) “Marketing Mass-Customized Products: Striking a Balance Between Utility and Complexity”; Benedict G.C. Dellaert and Stefan Stremersch, 2005; Journal of Marketing Research, 42 (May), pp. 219-227.

(6) “How and When Grouping Low-Calorie Options Reduces the Benefits of Providing Dish-Specific Calorie Information”; Jeffrey R. Parker and Donald R. Lehmann, 2014; Journal of Consumer Research, 41 (June), pp. 213-235.

(7) Johnson et al. (see #2).


During a shopping journey in a store where a consumer intends to buy multiple products, he or she is required to make a sequence of choice decisions. Each decision is made in a category with different product attributes, but beyond that there could also be differences in the settings of the choice situations, such as the size of the choice set, the structure of information display for product items, and the information format. Transitions between choice problems that differ in their characteristics require shoppers to make some adjustments in preparing to reach a decision, each time in somewhat different settings. This is true when filling a basket either in a physical store or on the website of an online store — shoppers have to shift between decision problems, and along the way they may need to replace or correct their choice strategy.

Researchers have been studying the paths that shoppers frequently follow as they move between sections of a store during their shopping trip. This type of research usually focuses on identifying and depicting the sequence in which store sections and product categories are visited, and the frequency with which category displays are stopped at. However, the transitions from a choice decision in one category to another may also have consequences for the decision process in any single category visited (e.g., adjusting for every new choice problem). Moreover, the sequence or order in which choice problems are resolved may have an effect on particular decisions.

  • Different techniques are applied for tracking the pathways of shoppers in brick-and-mortar stores (e.g., RFID, mobile-based GPS, video recording through surveillance cameras). Studies in supermarkets have shown which areas of a store shoppers approach first, and how they start by walking to the back of the store and then make incursions into each aisle (not leaving the aisle at the other end but returning to their point of entry). Hui, Bradlow and Fader reveal that as shoppers spend more time in the store, the checkout looms more attractive — shoppers who feel stronger time pressure become more likely to go through an aisle and approach a checkout counter. As perceived time pressure increases, shoppers also tend to cut off exploration and concentrate on visiting the product displays from which they are most likely to purchase. (1)

Consumers have been described as adaptive decision-makers who adjust their decision strategies according to characteristics of the problem structure or context — for example, the amount of information available (given the number of alternatives or attributes), the type of information (e.g., scales, units), or the order in which information elements are displayed. At the outset, consumers may be guided by top-down goals — maximizing accuracy (relative to a maximum-utility ‘rational’ rule) and minimizing cognitive effort; according to Payne, Bettman and Johnson, a decision strategy (i.e., a rule such as Equal Weights or Lexicographic) can be selected in advance based on an accuracy-effort trade-off assessment of the rules in a given choice situation. However, they argue that this approach may not be sufficient on various occasions. When the characteristics of a choice problem are not familiar to the consumer, he or she will construct a strategy step by step as the structure and detail of information on alternatives is observed and learned. Even when the choice situation and context are familiar, the consumer may face unexpected changes or updates in information (e.g., inter-attribute relations) that require her or him to modify the strategy. Hence, a consumer who started with a specific rule may replace it with another on the fly in response to the data encountered, and elements from different rules may often be combined into an adaptive new choice strategy (as opposed to a ‘pure’ strategy) (2).
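For illustration, the two ‘pure’ rules mentioned above can be sketched in a few lines; the attributes and scores below are made up and serve only to show how the rules can lead to different choices from the same set.

```python
# A minimal sketch (attributes and scores are made up) of two of the "pure"
# decision rules mentioned above: Equal Weights and Lexicographic.

def equal_weights(options):
    """Choose the option with the highest unweighted sum of attribute scores."""
    return max(options, key=lambda o: sum(o["scores"].values()))

def lexicographic(options, attribute_order):
    """Compare on the most important attribute first; use the next only to break ties."""
    return max(options, key=lambda o: tuple(o["scores"][a] for a in attribute_order))

options = [
    {"name": "A", "scores": {"price": 7, "quality": 5, "design": 6}},
    {"name": "B", "scores": {"price": 4, "quality": 9, "design": 6}},
]

print(equal_weights(options)["name"])                        # B (total 19 vs. 18)
print(lexicographic(options, ["price", "quality"])["name"])  # A (best on price alone)
```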

The construction of a decision strategy is therefore frequently the product of a delicate balance between top-down (goal-driven) and bottom-up (data-driven) processing. In particular, when the consumer’s preferences are not well established, preferences (e.g., importance weights of attributes) are also formed or constructed as one proceeds in the decision process. In such a case, the preferences formed are more contingent on the particular process followed and the strategy constructed thereby. Bettman, Luce and Payne extended the constructive choice model and added to the goals of maximizing accuracy and minimizing effort two more goals (directed by a perceptual framework): minimizing negative emotions (e.g., perceived losses, difficult trade-offs) and maximizing the ease of justifying decisions (to others or to oneself). (3)

However, consumers’ adaptation may not be complete, and thus a shopper may not fully “reset” or fit his decision strategy to the features of the next choice problem, which may differ from those of the previous choice setting. Levav, Reinholtz and Lin investigated specifically the impact of one characteristic of decision problems on the decision process: the number of alternatives (4). They tested how many alternatives consumers would inspect more closely from each choice set when the total number of alternatives available increases from the first to the last decision problem (e.g., 5, 10, 15 and so on up to 50), versus when it decreases from the first to the last decision (e.g., 50, 45, 40 and so on down to 5); participants were allowed to sample songs to listen to before choosing a song for each track on a disc.

In one of the decision contexts tested, most relevant here, the researchers simulated an online shopping trip: participants in the experiment were asked to choose in sequence from eight different product categories (e.g., body lotions, energy bars, notebooks, shampoo). For some of the participants the number of alternatives increased between categories (i.e., 5, 8, 13, 17, 20, 23, 26, 30) whereas for the others the number of alternatives in a choice set changed in a reverse order (product categories were also presented in two opposite sequences of alphabetical order). Participants could examine more closely each option in a choice set by mouse-hovering on a thumbnail photo of the product item to see its enlarged photo image, its price, and a short product description.

  • Note: In a physical store the equivalent would be picking up a product package from a shelf, inspecting it from different angles, reading the label, etc. Advanced 3-D graphic simulators similarly let a user-shopper virtually “pick” a product item from a shelf display image, rotate it, “zoom in” to read its label more clearly, etc.

Levav and his colleagues found that the direction in which the size of the choice set changes matters, and in particular that a low or high number of options in the first decision problem induces consumers to examine, respectively, more or less information on options throughout the shopping trip. If a shopper starts with a small choice set, he or she is more strongly inclined to inspect every option or acquire more information on each available option. This tendency endures in the subsequent choice problems as the number of options increases, though it may level off at some point.

In the online shopping experiment, for the smaller choice sets, the “shoppers” in the increasing condition examined the description of each option more times, on average, than the “shoppers” in the decreasing condition. The former gradually adjust downward the amount of information acquired on each option, but the amount of information “gathered” overall does not decrease; for relatively small choice sets (up to 13 options) they still examine more information on options than “shoppers” who started their journey with the largest choice set. A “shopper” who starts with a large choice set constrains himself from the beginning to inspect options less closely; even as the choice set becomes more “manageable” in size, the average “shopper” does not intensify the examination of information on single options considerably, clearly not to the level of “shoppers” whose first decision is made from the smallest choice set.

  • For choice sets larger than 17-20 options, where the task for “shoppers” in the increasing condition may become too time-and-effort consuming and “shoppers” in the decreasing condition may still feel too pressed, the level of information acquisition is more similar.

The researchers refer to this form of behaviour as “bounded adaptivity“; they explicate: “Our results indicate that people are actually “sticky adapters” whose strategies are adapted to new contexts — such as the initial choice set — but persist to a significant degree even in the face of changes in the decision environment” (p. 596). The authors suggest, based on results from one of their experiments, that an increasing condition, where consumers’ first choice decision is made from a small choice set, may activate in the consumer a ‘maximizing’ mind-set, prompting a deeper search into information on alternatives (as opposed to the probable ‘satisficing’ mind-set of a consumer in the decreasing choice-set-size condition). Levav et al. note that while ‘maximizing’ has often been regarded in the literature as a chronic personality trait, they see the possibility that this mind-set can be triggered by a decision situation.

If decisions during the shopping trip are not made independently, since adaptation, where necessary, is incomplete or “sticky”, then studying in isolation the decision process a shopper goes through in front of a particular product display could be misleading. For instance, the shopper’s decision strategy may be influenced by a choice strategy used previously. “Imperfect” or “sticky” adaptivity does not have to reflect a deficiency of the consumer-shopper. It may simply reflect the sensible level of adaptivity needed in a given decision situation.

(1) Shoppers may not need to hurry to modify their strategy if the perceived change in the conditions of the choice problem is small enough to allow them to act much as before. Shoppers can often adjust their decision tactics gradually and slowly until they reach a situation in which a more significant modification is required. (“Shoppers” in the decreasing condition above seem to be more “at fault” for remaining “sticky”.)

(2) Shoppers-consumers look for regularities in the environment in which they have to decide and act (i.e., the arrangement of products, the structure and format of information) that can save them time and effort in their decision process. Regularities are exhibited in the ways many stores are organised (e.g., repetitive features in the display of products), and shoppers can exploit them to gain decision efficiencies. Regularities are likely to reduce the level of ongoing adaptivity shoppers need to exercise.

(3) On some shopping trips, ordinary or periodic (e.g., at the supermarket), shoppers frequently do not have the time, patience or motivation to prepare and deliberate on their choice in every category that is a candidate for purchase. They tend to rely more on routine and habit. Prior knowledge of the store (e.g., one’s regular neighbourhood store) is beneficial. Such shoppers will want to adapt more quickly, perhaps less carefully or diligently, and they may be more susceptible to “sticky” adaptivity.

It can be difficult to influence when and how shoppers attend to various sections or displays when making their decisions in differing choice settings. But it is possible to identify which zones shoppers are more likely to visit in the early stages of their shopping trip. If a store owner or manager wants to induce shoppers thereafter to search product selections in greater depth, he or she may arrange in those locations displays with a small number of options for a product type. It should be even easier to track movements and direct shoppers to planned sections on an online store website. Alternatively, the retailer may stage a display with some surprising or unexpected information features to disrupt the ordinary search and induce shoppers to work out their decision strategy more diligently, thus devoting more attention to the products. However, this tactic should be used carefully and sparingly so as not to turn away frustrated or agitated customers.

Displays in the store (physical or virtual) and information conveyed on product packaging (including graphic design) together influence the course of consecutive decision processes shoppers apply or construct.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Testing Behavioral Hypotheses Using an Integrated Model of Grocery Store Shopping Path and Purchase Behavior; Sam K. Hui, Eric T. Bradlow, & Peter S. Fader, 2009; Journal of Consumer Research, 36 (Oct.), pp. 478-493.

(2) The Adaptive Decision Maker; John W. Payne, James R. Bettman, & Eric J. Johnson, 1993; Cambridge University Press.

(3) Constructive Consumer Choice Processes; James R. Bettman, Mary Frances Luce, & John W. Payne, 1998; Journal of Consumer Research, 25 (Dec.), pp. 187-217.

(4) The Effect of Ordering Decisions by Choice Set Size on Consumer Search; Jonathan Levav, Nicholas Reinholtz, & Claire Lin, 2012; Journal of Consumer Research, 39 (Oct.), pp. 585-599.

Read Full Post »

Obama’s administration is taking a bold step in fighting overweight and, moreover, obesity: requiring chain restaurants and similar food establishments to post calorie information for their items or dishes on menus and menu boards. The new directive, published in November 2014 by the United States’ Food and Drug Administration (FDA), is mandated by the Affordable Care Act passed by Congress in 2010. The expectation is that restaurant customers will consider the nutritional values, particularly calories, of food items on the menu if the information appears in front of them, inducing them to make healthier choices. It is estimated that Americans consume a third of their calories dining out. But will consumers who are not voluntarily concerned about a healthy diet change their eating behaviour away from home just because the information is easily and promptly available?

The new requirements of the FDA apply to restaurant chains with 20 or more outlets, including fast-food chains — likely a primary target of the new directive. The total calorie content of food items should appear on print menus (e.g., at full-service restaurants) and on menu boards positioned above ordering counters (e.g., at fast-food restaurants). The rule covers meals served at a table or taken to a table by the customer to be consumed, take-away food like pizzas, and food collected at drive-through windows. Also included are sandwiches made to order at a grocery store or delicatessen, coffee-shops, and even ice-cream parlours. (1)

  •  The FDA directive also refers in a separate section to food sold through vending machines by owners or operators of 20 or more machines.

The calorie content of a food item (actually kilocalories) indicates the amount of energy it provides. Usually the energy intake of consumers from meals, snacks and refreshments is more than the body requires, and the surplus not “burned” accumulates and adds to body weight. The rule maintains that additional information on components such as calories from total and saturated fat, sodium, carbohydrates, protein, and sugars should be made available in writing on request. Critics could argue that while a summary measure of energy is an important nutritional factor, other nutritional values such as those mentioned by the FDA, and more (e.g., fat in grams, Vitamins A and C), also need to be transparent to consumers. Practically, loading menus, and especially menu boards, with too many nutritional details may be problematic for both business owners and their customers. Therefore, there is logic in focusing on an indicator regarded as higher priority. Nonetheless, restaurants should offer a supplementary menu with fuller nutritional details to customers who are interested. Again, the question is how many customers will request and use that extra information.
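To put the energy-balance point in rough numbers (an illustrative back-of-the-envelope calculation, using the commonly cited and admittedly simplified rule of thumb that about 3,500 surplus kilocalories convert into roughly one pound of body fat): a daily surplus of just 250 kilocalories, about the content of one regular sugary soft drink, would add up to roughly one pound (0.45 kg) of extra weight in two weeks.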

The food-service industry overall reacted positively to the new rules. The National Restaurant Association in the US (representing 990,000 restaurant and food-service outlets) is satisfied with the way the FDA has addressed its major concerns. Contention remains over food sold in amusement parks and cinemas, and over fresh sandwiches, salads and ready-to-eat meals made by supermarkets for individual consumers (i.e., single-serving). In fact, several restaurant chains have already been displaying nutritional information on menus voluntarily for several years to cater to more health-conscious customers and improve their retail-brand image (e.g., Starbucks, McDonalds, Subway). Some chains also provide detailed nutrition information and planning tools on their websites for customers to compose their meals. It should further be noted that regulations for posting nutrition information in food-service establishments are already in place at the level of local authorities in various cities and counties across the US. Such business and local regulatory initiatives are not new in the US, nor in Canada and other countries. However, these measures will become obligatory in the US at the national level within the year ahead.

Consumers are likely to hold some general guidelines (a schema of rules) in memory that they can consult on what is more or less healthy to eat and how much to eat of different items (e.g., “high levels of calories, fat and salt in hamburgers and french fries”, “cream cakes are rich in calories and sugar”). When arriving at a restaurant or coffee-shop, the more conscious consumer may apply those guidelines to compose his or her meal with greater care for health. Yet the ability to recall accurate nutrition values for the food items offered on the menu is likely to be rather limited — our memory is not accurate, and retrieving information may also be biased by prior goals or hypotheses. Even considering only total calories, we would recall gross estimates or value ranges for general food categories. Consumers furthermore tend to take into account only the alternatives explicitly presented, and the attribute information available on them, in a choice setting (a “context effect”); information not provided (e.g., that has to be retrieved from memory) is likely to be ignored. Sufficiently concerned customers may pull out a mobile device and look up more accurate nutritional information from an app or a website of the company or a third-party source. But for most consumers there is strong logic, as well as justification, in making the nutrition information on specific food items easily accessible at the food outlet, allowing them to consider it on the spot in their choices.

A probable cause of consumers’ resistance to taking into account the nutritional content of the food they are about to order is that doing so might spoil their pleasure of eating the meal. People commonly prefer to concentrate on ordering the items that will be more enjoyable for them on a given occasion. The negative nutritional consequences of the desired food could be regarded as a ‘cost’, just like monetary price and perhaps even worse, a notion consumers would like to avoid. There is also a prevailing belief that healthier food is less tasty. To make consumers more receptive, they would have to be persuaded beforehand that this belief is false, or that nutritional components carry both positive and negative consequences to consider. Surely consumers have to account for constraints on their preferences; health advocates should help ease any barriers to embracing health constraints, or turn pre-conceived constraints into consumers’ own preferences.

We may gain another insight into consumer food choices by considering the comparisons consumers utilise to make decisions. Simonson, Bettman, Kramer and Payne (2013) offer a new integrative perspective on the selection and effect of comparisons in making judgements and choice decisions — how consumers select the comparisons they rely upon vis-à-vis those they ignore, and what information is used in the process. They propose that the comparisons consumers seek have first to be perceived as relevant and acceptable responses to the task (e.g., compatible with a goal); these comparisons fall within the task’s Latitude of Acceptance (LOA). They also need to be justifiable. Then, consumers will prefer to rely upon comparisons that are cognitively easier to perform (i.e., greater comparison fluency), given the information available on the options. Importantly, even if bottom-up evidence suggests that certain comparisons require less effort to apply, these will be rejected unless they are instrumental for completing the task. Information factors that facilitate comparison between options may, however, affect which comparisons consumers perform among those included in the LOA. The following factors, suggested by the researchers, increase the probability that a comparison will be performed: attribute values that can be applied “as-is” and do not need additional calculation or transformation (the “concreteness effect”); alignable input (i.e., values stated in the same units); perceptually salient information; and also information that can generate immediate, affective responses. (2)

Let us examine possible implications. Suppose that you visit a grill bar-restaurant of a large, well-known chain. You have to choose the food composition of your meal, keeping with one or more of the following personal goals: (a) “not leave hungry” (satiation); (b) pleasure or enjoyment (taste/quality); (c) “eat healthy” (nutrition); (d) “spend as little as possible” (cost). Calorie values are stated on the menu in a column next to price. If the primary goal is to keep a healthy diet, you would most likely use the calorie information to compare options. However, if “eat healthy” is not a valued goal for you, there is a greater chance that calorie information will be ignored — even if the calorie values are very easy to read out, assess and compare. They may be perceived as a distraction from considering and comparing, for instance, the ingredients of items that would determine your enjoyment of the different food options. Consumers often have a combination of goals in mind; thus, if your goals are nutrition and price, there is an advantage to displaying numeric calorie and price values next to each other across items. It would be more difficult to weigh calories against information on ingredients that predicts enjoyment or satiation as your goals. Therefore, it can be important to display nutritional values in a format that facilitates comparison, and not to provide too many values. Yet, if “eat healthy” is not one’s goal, all those measures are unlikely to have much effect on choice.

  • Some would argue that a salient perceptual stimulus can trigger a consumer response in the desired direction even unconsciously. That is a matter for debate — according to the viewpoint above, strong perceptual or affective stimuli will not be influential if the consumer’s goal is driving him or her in another direction.
  • Given the growing awareness of health, decisions justified to others on the basis of calories may be received more favourably. Can this be enough to induce consumers to incorporate a nutrition comparison into their decision when it is not their personal goal?

A research study performed by the Economic Research Service (ERS) of the US Department of Agriculture (USDA) examined consumer response to the display of nutrition information in food-service establishments, comparing fast-food and full-service chain restaurants. The researchers (Gregory, Rahkovsky, & Anekwe, 2014) show that consumers who see nutrition information have a greater tendency to use it during choice-making in full-service restaurants; overall, women are more responsive to such information than men (especially in using it at fast-food restaurants). Furthermore, they provide evidence that consumers who are already more conscious of and caring about a healthful diet are more likely to react positively to nutrition information in restaurants:

  • Consumers who always or most of the time inspect the nutrition labeling on food products purchased in a store (enforced in the US for more than twenty years) are more likely to see and then use the nutrition information presented in full-service restaurants (notably, 76% of those who regularly inspect store-food labeling use the information seen in the restaurant, versus 18% of those who rarely or never use the labeling on store food).
  • Additionally, the researchers find that a Healthy Eating Index score (measuring the habit of using nutrition information and keeping a healthy diet) is positively correlated with the intention to use nutrition information in fast-food or full-service restaurants (those who would often or sometimes use the information in full-service restaurants score 57 and 54, respectively, versus a score of 50 for those who would use it rarely or never, on a scale of 1 to 100).

Gregory and his colleagues at USDA-ERS argue, following these findings, that displaying nutrition information on menus at food-away-from-home establishments may not be enough to motivate consumers who do not already care about a healthful diet to read and use that information — “It may be too optimistic to expect that, after implementation of the nutrition disclosure law, consumers who have not previously used nutrition information or have shown little desire to use it in the future will adopt healthier diets.”

A research study in Canada offered an interesting comparison between two hospital cafeterias: a ‘control’ cafeteria that displayed limited nutrition information on menu boards and an ‘intervention’ cafeteria that operated an enhanced programme displaying nutrition information in different formats plus educational materials (Vanderlee and Hammond, 2014). The research was based on interviews with cafeteria patrons. A significantly higher proportion of participants in the ‘intervention’ cafeteria reported noticing nutrition information (80%) than in the ‘control’ cafeteria (36%). However, among those noticing it, similar proportions (33% vs. 30%, respectively) stated that the information influenced their item choices. Hospital staff were more alert and responsive to the information than visitors to the hospital and patients. This research also indicates that customers who more frequently use nutrition labels on pre-packaged food products are also more likely to perceive themselves as being influenced by such information.

Vanderlee and Hammond subsequently found lower estimated levels of calories, fat and sodium in the food consumed in the ‘intervention’ cafeteria than in the ‘control’ cafeteria (using secondary information on the nutrition content of food items). In particular, customers at the ‘intervention’ cafeteria who specifically reported being influenced by the information consumed less energy (calories). (3)

Actions to consider: Fast-food restaurants may place menus with extended nutrition information, beyond calories, on or next to the counter where customers stand to order. Full-service restaurants may place extended menus on tables, or at least a card inviting customers to request such a menu from the waiter. It may be advisable to add one more nutrition value next to calories as a standard (e.g., sugars, because of the rise in diabetes and the health complications it may cause). Notwithstanding, full-service restaurants could be allowed to implement the rule during the day (e.g., for business lunch) but spare customers in the evening, preserving the pleasure of dining out as entertainment without worries. Nonetheless, menus with nutrition information should always be available on request.

Nutrition information displayed on menus and menu boards can indeed help consumers in restaurants, coffee-shops and so on to make healthier food choices, but it is likely to help mostly those who are already health-conscious and in the habit of caring for a healthful diet. Information clearly displayed has a good chance of being noticed; yet educating and motivating consumers to apply it for a healthier diet should start at home, in school, and in the media. A classic saying applies here: you can lead a horse to water but you cannot make it drink. Nutrition information may be a welcome aid for those who want to eat more healthily, but it is less likely that those who do not already care about a healthful diet will use the information in the expected manner.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Overview of FDA Labeling Requirements for Restaurants, Similar Food Retail Establishments and Vending Machines, The US Food and Drug Administration, November 2014 http://www.fda.gov/Food/IngredientsPackagingLabeling/LabelingNutrition/ucm248732.htm; Also see: “US Introduces Menu Labeling Standards for Chain Restaurants”, Reuters, 24 Nov. 2014. http://www.reuters.com/article/2014/11/25/usa-health-menus-idUSL2N0TE1KP20141125

(2) Comparison Selection: An Approach to the Study of Consumer Judgment and Choice; Itamar Simonson, James R. Bettman, Thomas Kramer, & John W. Payne, 2013; Journal of Consumer Psychology, 23 (1), pp. 137-149.

(3) Does Nutrition Information on Menus Impact Food Choice: Comparisons Across Two Hospital Cafeterias; Lana Vanderlee and David Hammond, 2013; Public Health Nutrition, 10p, DOI: 10.1017/S136898001300164X. http://www.davidhammond.ca/Old%20Website/Publication%20new/2013%20Menu%20Labeling%20(Vanderlee%20&%20Hammond).pdf; Also see: “Nutrition Information Noticed in Restaurants If on Menu”; Roger Collier; Canadian Medical Association Journal, 3 Aug., 2013 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3735740/

 

Read Full Post »

With click rates on online ad banners ranging between 0.5% and 2%, it is not difficult to understand why many in the marketing, advertising and media professions often question the efficacy of click-based models of advertising on the Internet. It is a problem both for advertisers of products and services and for the website owners that publish ad banners on their pages.

For advertisers, exposure of consumers to their ads is not a sufficient or satisfying criterion, but immediate action in response to an ad banner is very difficult to elicit; perhaps clicking through should not be expected just because these objects are “clickable links”. Should the effectiveness of ad banners be doubted because of the low traffic they may generate, or is it that the criteria used are inappropriate?

For the owners of websites used as vehicles for advertising (e.g., newsmedia, portals, social media), it is a question of effectiveness in generating satisfactory revenue from those ads, conditioned on mouse clicks. When webpages receive high volumes of visits, even very low click rates may be sufficient to collect a handsome sum of money, but this cannot be generalised to most websites and pages. On the other hand, if a website is loaded with ads across the pages to generate more revenue, it may end up cluttering its own content and chasing away visitors.

Internet users who browse websites in search of information on a particular subject (e.g., photography, nature), and read or watch related content on webpages, are very likely to see ad banners as no more than a distraction from their main task. Clicking on a banner that sends them to another page means an interruption of the kind many would not welcome. There are exceptions, of course, for example when the ads are for products (e.g., cameras, hiking gear) related to the main topic of the website and thus provide access to additional information of interest on relevant options (i.e., the context in which ads appear matters). Ads may be perceived as less disturbing by surfers who are engaged in exploration with no planned goal, just for fun and entertainment; checking on advertised companies and products may be accepted as part of the exploration, although perhaps not in every condition (e.g., when users are wary of non-trusted solicitations, busy interacting with friends in social networks, engaged in watching music videos and so on).

However, viewing an ad banner for a brand can leave an impression, and a trace in memory, in consumers’ minds that will have its effect at a later time, especially if a choice situation in the same product domain is looming soon after. Consumers may register in memory the exposure episode, with the brand name and additional information contained in the ad, to check up on later, without being required to click through at that moment. Importantly, this “registration” does not have to occur consciously to make an impact.

If a consumer-surfer is interested, he or she may intentionally try to remember the ad and look up the brand’s website when the time becomes available and convenient. When working on a computer or a mobile device, one can easily type a note or set up a reminder, especially if the website address also appears on the banner. But an ad banner can operate without waiting for a voluntary response or overt reaction from the consumer. Much depends on the kind of impression the visual image of the ad banner makes on the consumer-surfer at an initial or quick glance. When an image is easier for the eye and mind to process and feels pleasant to look at, its informational content becomes more readily acceptable and persuasive. Visual processing fluency (1) at the perceptual level implies that the principal elements of the image can be identified with little effort and great accuracy — in a banner’s image, for instance, these may include the brand/company name, logo icon, and a picture of a product. Visual fluency can be facilitated by the use of colours and recognisable shapes that are pleasing to look at, symmetry, clear contrast between figure and ground, etc. Its persuasive effect may not be strong enough to trigger a mouse click, yet increased fluency can make the ad’s content better remembered, as well as better liked by the viewer, for a longer time after exposure.

An ad banner can influence consumer attitude and response also through a process of priming. This type of effect in the particular domain of ad banners on the Internet has been studied by Mitchel and Valenzuela (2). The consumer is initially introduced to the ad in a seemingly casual and incidental way. However, information in the ad stimulus, “planted” as a trace in the consumer’s memory, would prime her or him, unconsciously, to use it during a future task, for example when recalling brands or choosing between alternative brands. Such exposure could work simply by evoking a positive attitude towards the brand in the priming ad. In another procedure, a joint presentation of a brand with a product attribute in the ad banner would prime the consumer to look for and give priority to that same combination when it appears in the information provided on a set of product alternatives to choose from.

According to this research, priming by an ad banner can affect the consideration of brands for purchase (tested with airlines) in three significant ways. First, a brand whose ad had been shown earlier was more likely to be considered for purchase (of air tickets) than if an ad for another brand, or no ad at all (control), had been shown. Second, this effect is stronger for a lower-quality brand than for a higher-quality brand; that is, a stronger brand has less to gain from priming through its ad banner. Third, when consideration is based on recall from memory, priming has a stronger effect in raising the likelihood of consideration of a primed brand than when the brands have to be selected from a constrained list — this may be explained by the added impact of priming, through prior exposure, on memory (note: this difference holds only for the lower-quality brand). Advantages of priming are established also for the final choice of a single brand to purchase (subject again to the second and third qualifications above).

Mitchel and Valenzuela further reveal in their research an interesting effect of priming of established brands on a “new”, unfamiliar brand (a fictional airline). All participants were exposed to an ad banner for the unfamiliar brand before being given any tasks, and therefore the relevant priming effects arise from the lower-quality and higher-quality brands. Results for the unfamiliar brand were more favourable if, at the beginning of the research, the higher-quality brand had been primed rather than the lower-quality brand or neither of them. The more positive image of a higher-quality brand seems to spill over to the unfamiliar brand, lifting its evaluation and increasing its likelihood of being considered and finally chosen — an advantage that earlier priming of a familiar but lower-quality brand cannot provide to the unfamiliar brand.

We may learn from this research that ad banners can be utilised to create an advantage for a brand during consumers’ decision processes without their full awareness of it, but it will not help every brand — it is more suitable for brands that are currently weaker — and not in every situation. The placement of the ad banner for this purpose has to be planned wisely, preferably on websites, and on particular webpages, where consumers are engaged in learning about a product domain or making the first steps of searching for and screening products. Designing an ad banner that is clear, concise and pleasant to look at can only help to maximise impact.

Measuring the effectiveness of ad banners undoubtedly faces difficulties and barriers. There is a greater tendency to refer to statistics of page views to assess the potential exposure to ads placed on a page (“impressions”). However, overall “page impressions” are not detailed enough, as they refer to the whole webpage; they cannot tell us to which sections or objects, particularly ad banners, a consumer-surfer attends, nor at what level information is processed. Capturing fixations on particular objects by Internet users requires applying the methodology of eye-tracking. The latency of eye fixations can already provide an indirect indicator of the extent of information processing. However, that methodology cannot be applied practically and economically on a large scale, nor on a regular basis.

A third-way approach, based on tracking mouse movements over a webpage and able to detect objects on which the mouse hovers even without clicking on them, provides a sort of middle-ground solution. It is not as complete and accurate as eye-tracking, but it can provide substantive, even if partial, information on the objects to which a consumer-surfer attends; it rests on the premise that our hand often follows our eyes (i.e., a visuo-motor correlation) and that we tend to point the mouse at the place or item we are concentrating on at a given moment. And, not least, it is a more feasible solution, technically and economically, to operate at a large data scale. At this time, it seems a viable platform for developing extensions and improved measures of consumer attention, browsing behaviour, and response to stimuli (a small sketch of such hover-based measurement appears after the note below).

  • The Internet company ClickTale, for example, offers a range of methods for analysis and visualisation of users’ behaviour with a mouse (e.g., “heat maps” based on frequency of mouse “landings” in different locations over a webpage and tracking the movements of a mouse on a webpage).
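To make the hover-based idea concrete, here is a minimal Python sketch, not tied to ClickTale or any other vendor’s actual API, that aggregates a hypothetical log of mouse-hover events (assumed to be captured client-side as element enter/leave timestamps) into dwell time and hover counts per ad banner:

```python
from collections import defaultdict

# Hypothetical hover log captured client-side: each event records when the mouse
# pointer entered an element and when it left it. Element ids and timestamps
# (in seconds) are illustrative assumptions, not data from any real tool.
hover_events = [
    {"element": "banner_cameras", "enter": 12.4, "leave": 14.1},
    {"element": "article_body",   "enter": 14.1, "leave": 55.0},
    {"element": "banner_hiking",  "enter": 55.0, "leave": 55.6},
    {"element": "banner_cameras", "enter": 80.2, "leave": 83.9},
]

def summarise_hovers(events, prefix="banner_"):
    """Aggregate dwell time (seconds) and number of hovers per ad banner."""
    dwell = defaultdict(float)
    counts = defaultdict(int)
    for e in events:
        if e["element"].startswith(prefix):
            dwell[e["element"]] += e["leave"] - e["enter"]
            counts[e["element"]] += 1
    return {el: {"dwell_sec": round(dwell[el], 1), "hovers": counts[el]}
            for el in dwell}

print(summarise_hovers(hover_events))
# e.g. {'banner_cameras': {'dwell_sec': 5.4, 'hovers': 2},
#       'banner_hiking': {'dwell_sec': 0.6, 'hovers': 1}}
```

Aggregated over many visitors, such per-banner dwell figures can feed “heat map” style summaries of where attention lands on a page, bearing in mind that hovering remains only a proxy for looking.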

There remain limitations to behavioural data that do not allow us to assess more fully the extent to which ad banners are processed and how they may affect our attitudes, thoughts and feelings. Difficulties can be foreseen, for example, in measuring the implicit effects of visual fluency or priming on consumers in a “live” environment in real time. The way to test and measure these effects is by conducting experiments that combine cognitive, attitudinal and behavioural data. The new age of touch screens presents yet another set of challenges in measuring covert and overt responses.

To conclude, here are a few points that may be worth considering:

  1. The relatively small area of a standard ad banner can make it challenging to construct and design effective ads. First, it is recommended to design an image that is visually fluent for consumers-surfers, as much as this is in the designer’s control — the rest is in the eye and mind of the viewer. Second, include sufficient information in the banner, like a key claim or description of strengths, that the consumer can relate to and keep in mind, consciously or unconsciously, without having to click through anywhere else. Third, include a web address the consumer can save and use anytime later.
  2. Think a few steps ahead about what consumer-viewers may do next, that is, how they may be influenced by the information and utilise it in a subsequent activity (e.g., shopping online). Then plan the content, placement and timing of the ad banner with respect to the events or types of behaviour it is intended to affect.
  3. Animated ad banners quickly capture the attention of viewers by their motion. However, such ad banners, appearing especially on sidebars, attract attention involuntarily at the periphery of the visual field, that is, even if the reader tries to avoid them. Limit the period of time the animation runs, or let the user stop it; otherwise she is likely to abandon the page altogether.
  4. Beyond the advantages of the motion and sound of ad video clips, they can be activated on-site and viewed without requiring the consumer-surfer to go anywhere else, an important benefit in time-saving and convenience. They should display a visually appealing opening screen and be kept to lengths of 30 seconds to two minutes to attract and engage viewers for a reasonable period of suspension from other tasks on the website.

References:

1. Cognitive and Affective Consequences of Visual Fluency: When Seeing Is Easy on the Mind; Piotr Winkielman, Norbert Schwarz, Rolf Reber, & Tedra Fazendeiro, 2003; in Persuasive Imagery: A Consumer Response Perspective, L. M. Scott and R. Batra (eds.)(pp. 75-91), Lawrence Erlbaum Associates.

2. How Banner Ads Affect Brand-Choice Without Click-Through; Andrew Mitchel and Ana Valenzuela, 2005; in Online Consumer Psychology: Understanding and Influencing Consumer Behavior in the Virtual World, C. P. Haugtvedt, K. A. Machleit, & R. F. Yalch (eds.)(pp. 125-142), Lawrence Erlbaum Associates.

Read Full Post »

Competition in health-related industries (e.g., health-care services, pharmaceuticals, biotechnology) has increased continuously over the past two to three decades. The health business has also become more complex and multilayered, with public and private institutions, individual doctors and patients as players. Consequently, decision processes on medical treatment may become more complicated or variable, making it more difficult to predict which treatment or medication will be administered to patients. For example:

  • For many medical conditions there are likely to exist a few alternative brands or versions of the same type of prescribed medication. Depending on the health systems in different countries, and on additional situational factors, it may be decided by a physician, a health care provider and/or insurer, or a pharmacist what particular brand of medication a patient would use. In some cases the patient may be allowed to choose between a more expensive brand and an economic brand (e.g., original and generic brands, subsidised and non-subsidised brands).
  •  There are plenty of over-the-counter (OTC) medications, formulae and devices that patients can buy at their own discretion, possibly with a recommendation of a physician or pharmacist.
  • Public and private medical centers and clinics offer various clinical tests and treatments (e.g., prostate screening, MRI scanning, [virtual] colonoscopy), often bypassing the general/family physicians of the patients concerned.
  • In more complex or serious conditions, a patient may choose between having a surgery at a public hospital or at a private hospital, depending on the coverage of his or her health insurance.

In the late 1990s, professionals, executives and researchers in health-related areas developed an interest in methods for measuring preferences that would allow them to better understand how decisions are made by their prospective customers, especially doctors and patients (“end-consumers”). This knowledge serves (a) to address more closely the preferences of patients or the requirements of physicians, and (b) to channel planning, product development or marketing efforts more effectively. In particular, they have become interested in methods of conjoint analysis and choice-based conjoint that were already prevalent in marketing research for measuring and analysing preferences. Conjoint methods are based on two key principles: (a) making trade-offs between decision criteria, and (b) decomposition of stated preferences with respect to whole product concepts (e.g., a medication), by means of statistical techniques, into utility values for the levels of each attribute or criterion describing the product (e.g., administering 2 vs. 4 times in 24 hours). The methods differ, some argue quite distinctly, in the form in which preferences are expressed (i.e., ranking or rating versus choice) and in the statistical models applied (e.g., choice-based conjoint is often identified by its application of discrete choice modelling). An important benefit for pharmaceutical companies, for example, is learning which characteristics of a medication (e.g., an anti-depressant) contribute more to convincing physicians to prescribe it, versus factors like risks or side-effects that lead them to avoid a medication.
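As a rough illustration of the decomposition principle, the following Python sketch, a simplified ratings-based example with invented profiles and ratings rather than the choice-based models most health studies use, recovers part-worth utilities for the levels of two attributes of a hypothetical medication via dummy-coded least squares:

```python
import numpy as np

# Hypothetical profiles of a medication described by two attributes:
# dosing frequency (2 vs. 4 times in 24 hours) and risk of side-effects (low vs. high).
# Each row: [dose_4_times, risk_high]; the reference levels (2 times, low risk)
# are captured by the intercept.
profiles = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1],
])
# Invented average preference ratings (0-10) given by physicians to each profile.
ratings = np.array([8.5, 6.0, 7.0, 4.0])

# Add an intercept column and decompose the ratings into part-worth utilities.
X = np.column_stack([np.ones(len(profiles)), profiles])
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)

base, dose_4, risk_high = partworths
print(f"baseline utility (2x/day, low risk): {base:.2f}")
print(f"part-worth of dosing 4x/day:         {dose_4:.2f}")    # negative: less convenient
print(f"part-worth of high side-effect risk: {risk_high:.2f}")  # negative: a deterrent
```

In a real study the design would include more attributes and many respondents, and choice-based data would instead be estimated with a discrete choice model such as multinomial logit.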

The product concepts presented are hypothetical in the sense that they are specified using controlled experimental techniques and do not necessarily match existing products at the time of the study. This property is essential for deriving utility values for the various levels of the product attributes studied, and for allowing prediction, by simulation, of shares of preference (“market shares”) for future products. The forecasting power of conjoint models is considered their major appeal from a managerial perspective. In addition, conjoint data can be used for segmenting patients and designing refined targeted marketing strategies.
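The simulation step can be sketched in the same spirit: given part-worth utilities (here simply invented for three respondents), shares of preference for hypothetical competing profiles can be computed under the common first-choice and logit rules. This is a schematic of the idea, not the procedure of any specific study:

```python
import numpy as np

# Invented individual-level part-worth utilities for three respondents
# (columns: intercept, dosing 4x/day, high risk of side-effects).
partworths = np.array([
    [8.0, -1.0, -3.0],
    [7.5, -2.0, -1.5],
    [8.5, -0.5, -2.5],
])

# Hypothetical competing profiles coded on the same attributes (with intercept).
profiles = {
    "Brand A (2x/day, high risk)": np.array([1, 0, 1]),
    "Brand B (4x/day, low risk)":  np.array([1, 1, 0]),
    "Brand C (4x/day, high risk)": np.array([1, 1, 1]),
}

names = list(profiles)
U = partworths @ np.column_stack([profiles[n] for n in names])  # respondents x profiles

# First-choice rule: each respondent "chooses" the highest-utility profile.
first_choice = np.bincount(U.argmax(axis=1), minlength=len(names)) / len(U)

# Logit rule: utilities converted to choice probabilities, then averaged.
expU = np.exp(U)
logit_shares = (expU / expU.sum(axis=1, keepdims=True)).mean(axis=0)

for i, name in enumerate(names):
    print(f"{name}: first-choice {first_choice[i]:.0%}, logit {logit_shares[i]:.1%}")
```

Managers can then re-run such a simulation with modified profiles (e.g., a new dosing schedule) to see how the simulated shares shift.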

Interest in applying conjoint methods in a health context has grown in the past decade. According to a review of conjoint studies reported in 79 articles published between 2005 and 2008, the number of studies nearly doubled from 16 in 2005 to 29 in 2007. The researchers estimated that by the end of 2008 the number of published studies would reach 40. The most frequent areas of application have been cancer (15%) and respiratory disorders (12%) (1). However, applications of conjoint techniques can also be found in guiding policy making and the design of health plans in the broader context of health-care services provided to patients (e.g., by HMOs).

Most conjoint studies in health (71%) apply choice experiments and modelling, which became the dominant approach (close to 80%) in 2008 in particular. A typical study includes 5 or 6 attributes with 2 or 3 levels per attribute. Most studies with a choice-based approach involve 7 to 8 scenarios (choice sets), but studies with 10-11 or 14-15 scenarios are also frequent (2). A choice scenario normally includes 3 to 5 concepts, from which a respondent has to choose the single most preferred concept. (The sketch below assembles an illustrative design of this typical size.)
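For a sense of scale, this Python sketch assembles a randomised choice-based design with the typical dimensions described above; it is purely illustrative (the attributes and levels are made up, and real studies use statistically efficient, balanced designs rather than naive random draws):

```python
import itertools
import random

random.seed(7)

# Five illustrative attributes with 2-3 levels each, as in a typical health conjoint study.
attributes = {
    "efficacy":     ["moderate", "high"],
    "dosing":       ["2x/day", "4x/day"],
    "side_effects": ["rare", "occasional", "frequent"],
    "onset":        ["1 week", "4 weeks"],
    "monthly_cost": ["$10", "$30", "$60"],
}

# Full factorial of candidate profiles (2 * 2 * 3 * 2 * 3 = 72 concepts).
full_factorial = [dict(zip(attributes, combo))
                  for combo in itertools.product(*attributes.values())]

# Eight choice sets ("scenarios") of four concepts each, drawn at random.
choice_sets = [random.sample(full_factorial, 4) for _ in range(8)]

for i, cs in enumerate(choice_sets[:2], start=1):  # show the first two sets
    print(f"Choice set {i}:")
    for concept in cs:
        print("  ", concept)
```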

Interpreting conjoint studies among medical doctors requires a special qualification that distinguishes them from studies of patients or consumers: physicians make professional judgements about the most appropriate treatment option for their patients. It is therefore less appropriate to speak of personal preferences in this context. It is more sensible and suitable to talk about the decision criteria that physicians apply, their priorities (i.e., represented by importance weights), and physicians’ requirements from pharmaceutical or other treatment alternatives available in the market.

Including monetary cost in conjoint studies on products and services in health care may be subject to several complications and limitations. That may be the reason for the relatively low proportion of articles on conjoint studies in health found to include prices (40%) (3). For instance, doctors do not pay out of their own pockets for the medications they prescribe, so it is generally less relevant to include price in studies of physicians. It may be sensible, however, to include cost in cases where doctors are allowed to purchase and hold a readily available inventory of medications for their visiting patients in their private clinics (e.g., in Switzerland). It may still be useful to examine, when doctors prescribe a medication, how sensitive they are to the cost their patients will have to incur. However, this is additionally complicated because the actual price patients pay for a specific medication is likely to vary with the coverage of their health plan or insurance. It is appropriate and recommended to include price in studies on OTC medications or health-related devices (e.g., for measuring blood pressure). Aspects of cost can be included in studies on health plans, such as the percentage of discount provided on medications and on the other types of clinical tests and treatments in the plan’s coverage.

An Example of a Conjoint Study on Health-Care Plans:

A choice-based conjoint study was conducted to help a health-care coverage provider assess the potential for a new, modified health plan it was considering launching. Researchers Gates, McDaniel and Braunsberger (4) designed a study with 11 attributes, including provider names (the client and two competitors), the network of physicians accessible, payment per doctor visit, prescription coverage, doctor quality, hospital choice, monthly premium, and additional attributes. Each respondent was presented with 10 choice sets, in each of which he or she had to choose one of four plans. This setting was chosen so that, in subsequent simulations, the researchers could more accurately test scenarios with the existing plans of the three providers plus a new plan by the client-provider. The study was conducted by mail among residents of a specific US region. Beforehand, a qualitative study (focus group discussions) and a telephone survey had been carried out to define, screen and refine the set of attributes to be included in the conjoint study. In total, 506 health-care patients returned the mail questionnaire (a 71% response rate among those in the phone survey who agreed to participate in the next phase).

The estimated (aggregate) utility function suggested to the researchers that the attributes could be divided into two classes of importance: primary criteria for choosing a health plan and secondary considerations. The primary criteria centred on the access allowed to doctors in the region of residence and the costs associated with the plan, representing the more immediate concerns of target consumers in the market when choosing a health-care plan from an HMO. The study largely confirmed that consumers are less concerned about a narrowing of the network of doctors they may visit, as long as they can keep their current family physician and are not forced to replace him or her with another from the list. Respondents appeared to rely less on reported quality ratings of doctors and hospitals. Vision tests and dental coverage were among the secondary considerations. Managers could thereby examine candidate modifications to their health plan and estimate their impact on market shares.

Conjoint methods offer professionals and managers in health-related organisations research tools for gaining valuable insights into patient preferences, or into the criteria governing doctors’ clinical decisions on medications and other treatments. These methods can be particularly helpful in guiding the development of pharmaceutical products or instruments for performing clinical tests and treatments, when issues of marketing and promoting them to decision makers come into play. As illustrated in the example, findings from conjoint studies can also be useful in policy making on health-care services and in designing health plans attractive to patients. This kind of research-based knowledge is increasingly acknowledged as a key to success in the highly competitive environment of health care.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1)  Conjoint Analysis Applications in Health – How Are Studies Being Designed and Reported? An Update of Current Practice in the Published Literature Between 2005 and 2008, D. Marshall, J.F.P. Bridges, B. Hauber, R. Cameron, L. Donnalley, K. Fyie, and F.R. Johnson, 2010, The Patient: Patient-Centered Outcomes Research, 3 (4), 249-256

(2) Ibid. 1.

(3) Ibid. 1.

(4) Modeling Consumer Health Plan Choice Behavior to Improve Customer Value and Health Plan Market Share, Roger Gates, Carl McDaniel, and Karin Braunsberger, 2000, Journal of Business Research, 48, pp. 247-257 (The research was executed by DSS Research to which Gates is affiliated).

Additional sources:

A special report on conducting conjoint studies in health was prepared in 2011 by a task force of the International Society for Pharmacoeconomics and Outcomes Research. The authors provide methodological recommendations for guiding the planning, design, analysis, and reporting of conjoint studies in health-related domains.

Conjoint Analysis Applications in Health – A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force, John F.P. Bridges, and A. Brett Hauber et al., 2011, Value in Health, 14, pp. 403-413

http://www.ispor.org/taskforces/documents/ISPOR-CA-in-Health-TF-Report-Checklist.pdf

Read Full Post »
