
Posts Tagged ‘Preferences’

Surge pricing is a variant of dynamic pricing (also known as variable pricing). Dynamic pricing means that prices can change much more frequently and vary across customers, time and place at ever higher resolution; a price surge or hike at peak moments of demand can be described as an outcome of dynamic pricing. Surge pricing received great attention due to Uber’s application of this strategy, not least because of the controversial way Uber implemented it. But dynamic pricing, and surge pricing within it, is a growing field with various forms of application in different domains.

A price surge is generally attributed to a surge in demand. In the case of Uber, when the number of customer requests for rides (‘hailing’) critically exceeds the number of drivers available in a given geographic area, Uber applies a ‘surge multiplier’ to the normal (relatively low) price or tariff (e.g., two times the normal price). The multiplier remains in effect for a period of time until demand can be reasonably met. The advantages, as explained by Uber, are that through this price treatment (1) drivers can be encouraged to join the pool of active drivers (i.e., ready to receive requests on the Uber app), and drivers can be pulled in from adjacent areas; and (2) priority can be given to a smaller group of those customers who are in greater need of prompt service and are willing to pay the higher price. Consequently, waiting times for customers willing to pay the price premium will be shorter. (Note: Lyft applies a similar approach.)
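To make the multiplier mechanism concrete, here is a minimal illustrative sketch in Python (not Uber’s actual algorithm; the demand/supply thresholds, step values and cap are all assumptions) of how a surge multiplier could be derived from the ratio of open ride requests to available drivers in a zone:

```python
def surge_multiplier(open_requests: int, available_drivers: int) -> float:
    """Illustrative only: map the demand/supply ratio in a zone to a price multiplier.
    The thresholds, steps and cap below are assumed for the example."""
    if available_drivers == 0:
        return 3.0  # assumed cap when no drivers are available
    ratio = open_requests / available_drivers
    if ratio <= 1.0:
        return 1.0   # demand is met: normal tariff
    if ratio <= 1.5:
        return 1.3
    if ratio <= 2.0:
        return 1.8
    if ratio <= 3.0:
        return 2.5
    return 3.0       # assumed cap (cf. the idea of a maximum multiplier discussed below)

# Example: 120 open requests against 50 active drivers in the zone
print(surge_multiplier(120, 50))  # ratio 2.4 -> 2.5x under these assumed steps
```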

There are some noteworthy aspects to modern surge pricing. A basic tenet of economic theory says that when demand surpasses the supply of a good or service, its price will rise until demand and supply match so as to ‘clear the market’. Yet neoclassical economic theory also assumes that the equilibrium price applies to all consumers (and suppliers) in the market for as long as the stable equilibrium prevails; it does not account well for temporary ‘shocks’. Proponents of surge pricing argue that this pricing strategy is an appropriate correction to a market failure caused by short-term ‘shocks’ due to unusual events in particular places. There is room in economic theory for more complex situations that allow for price differentials, such as seasonality effects or gaps between geographic regions (e.g., urban versus rural, central versus peripheral). Still, seasonal prices are the same ‘across the board’ for all, and different geographic markets are usually well separated. In surge pricing, on the other hand, and in dynamic pricing more broadly, advanced technology makes it possible to isolate and fit a price to a very specific group of consumers in a given time and place.

One of the concerns with surge pricing in ride e-hailing is that the method could take advantage of consumers-riders when they have little choice, cannot afford to wait too long (e.g., in a hurry to get to a meeting or to the airport), or cannot afford a price several times higher than normal (e.g., multipliers of more than 5x). The problem becomes more acute as surge pricing seems to ‘kick in’ at the worst times for riders, when they are in distress [1a] (e.g., in heavy rain, late at night after a party). The method seems to screen potential riders not by how badly they need the service but by how much they are willing to pay; it may fix a problem for the service platform provider more than for its customers. Suppose hundreds of people come out at the same time from a hall after a live music concert. If the surge multiplier shown in the app at the time the prospective rider wants to be driven home is too high because of the emerging peak in demand, he or she is advised to wait somewhat longer until it slides down again. How long should riders wait for the multiplier to come down? Often enough, it is reported, it takes just a few minutes (e.g., after minor traffic fluctuations). But in more stubborn situations the rider may be able to catch a standard taxi by the time the multiplier declines, or, if the weather permits, walk some distance to where one can hail a taxi or board another mode of public transport.

Another pitfall is the reduced predictability of when surge pricing will occur. Consumers know when seasons start and end and can learn when to expect lower and higher prices accordingly (though it used to be easier thirty years ago). In public transport, peak hours (e.g., morning, afternoon) are usually declared in advance, and travel tariffs may be elevated during those periods. Since surge pricing is based on real-time information available to the service platform provider, it is harder to predict the occasions when surge pricing will be activated, and furthermore the extent of the price increase. Relatedly, drastic price changes (e.g., due to a high frequency of updates or strong fluctuations) tend to increase the uncertainty for service users [1b].

The extent of a price surge or hike is a particular source of confusion. Users are notified before hailing an Uber driver if a surge is on, and the surge multiplier in effect at that time should appear on the screen. The multiplier keeps being updated on the platform. It is sensible, however, for the multiplier to stay fixed for an individual rider after the service is ordered. The rider can then make a decision based on a known price level for the duration of the ride (or an estimate of the cost to expect). Otherwise, the rider may be exposed to a rising price rate while being driven to the destination, though the rider should also benefit if the multiplier starts to slide down (or upon entering another area where the surge is off). The first scenario resembles a situation of bidding whereas the latter looks more like gambling. Stories and complaints from Uber users reveal recurring surprises and unclarity about the cost of rides (e.g., claims that the multiplier was 9x, a ride of 20 minutes that cost several hundred dollars, a claim that the multiplier dropped but the total price did not go down accordingly). Users may not pay sufficient attention to the multiplier before hailing a ride, may not comprehend how the pricing method works, or may simply lose track of the cost of the ride (i.e., the charge is automatic and appears later on the user’s account).
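A small sketch can illustrate the point about fixing the multiplier once the ride is ordered; the fare components and rates here are hypothetical, not Uber’s published tariff:

```python
from dataclasses import dataclass

@dataclass
class FareQuote:
    """A hypothetical fare quote that locks the surge multiplier at order time."""
    base_fare: float
    per_minute: float
    per_km: float
    multiplier: float  # frozen when the ride is ordered

    def estimate(self, minutes: float, km: float) -> float:
        # The multiplier applies to the whole metered amount and does not change
        # mid-ride in this sketch, so the rider can assess the cost upfront.
        return (self.base_fare + self.per_minute * minutes + self.per_km * km) * self.multiplier

quote = FareQuote(base_fare=2.5, per_minute=0.35, per_km=1.1, multiplier=2.0)
print(round(quote.estimate(minutes=20, km=12), 2))  # estimated cost for a 20-minute, 12-km ride
```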

Customer discontent may also arise from the sharp contrast experienced between the relatively low normal price rate (e.g., compared with a standard taxi) and the high prices produced by surge multipliers [1c]. A counter-argument contends that the price hikes or surges allow for low rates at normal times by subsidising them [2]. More confusion about Uber’s pricing algorithm could stem from reports on additional factors that the company might use as input (e.g., people are more receptive to surge prices when the battery of their mobile phone is low, and customers are more willing to accept a rounded multiplier number than a close non-rounded figure just below or above it) (MarketWatch.com, 28 December 2017).

  • Not even surge pricing appears to be completely immune to attempts at manipulation. It was revealed in 2019 that drivers for Uber (and also Lyft) have tried to game the surge mechanism. The ‘trick’ is to turn off the app at a given time in a coordinated manner among drivers, let the surge multiplier rise, and then turn the app on again to gain quickly enough from the higher rate while it prevails. The method seems to have been used especially at airports in anticipation of incoming passengers, based on drivers’ knowledge of several flights scheduled to land within a short interval. The motivation for taking this action: the drivers claim they are not paid enough at normal times by the platform operators (Business Insider, 14 June 2019).

Utpal Dholakia, a professor of marketing at Rice University (also see [1]), suggested four remedies to the kinds of problems described above. First, he advised setting a cap (maximum) on surge multipliers and notifying customers more clearly about them (greater transparency). In addition, he recommended curbing the volatility of price fluctuations and communicating better the benefits of the method (e.g., reduced waiting times). Dholakia also raised an issue about the negative connotation of the term ‘surge’, which perhaps should be replaced in customer communications [3].

Various forms of dynamic pricing, including surge pricing, are already utilised in multiple domains. It should be noted, for instance, that Uber’s strategy was not initiated to resolve problems of traffic congestion; a ‘surge’ may be activated as a result of congestion, but the purpose is to resolve the interruptions that congestion may cause to the service. For dealing with traffic congestion and overloaded roads, other types of surge pricing are being used by public authorities. First, a fast lane on a highway or autoroute (e.g., entering a large city) may be dedicated for a fee, where the amount of the ‘surge’ fee is determined by the density of traffic on the other, regular lanes. Drivers who wish to arrive faster pay this fee, which is displayed on a signboard as one approaches the entry to the lane (a few moments are allowed to decide whether to stay or abort). Second, a congestion fee, which could in practice be a variable surge fee, may be imposed on non-residents who seek to enter the municipal area of a city at certain hours of the day.
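As an illustration of the first type, the sketch below maps traffic density on the regular lanes to a fast-lane fee; the density bands and amounts are hypothetical, not any authority’s actual schedule:

```python
def fast_lane_fee(vehicles_per_km_regular_lanes: float) -> float:
    """Illustrative dynamic toll for a managed fast lane.
    The fee rises with the density of traffic on the regular lanes;
    the density bands and amounts below are hypothetical."""
    bands = [
        (20.0, 1.0),   # light traffic: minimal fee
        (40.0, 3.0),
        (60.0, 6.0),
        (80.0, 9.0),
    ]
    for max_density, fee in bands:
        if vehicles_per_km_regular_lanes <= max_density:
            return fee
    return 12.0  # assumed ceiling shown on the signboard at heavy congestion

print(fast_lane_fee(55.0))  # -> 6.0 in this illustrative scheme
```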

As indicated earlier, public transportation systems in large cities may charge a higher tariff during peak or rush hours. The time periods in which a raised tariff applies are usually declared in advance (i.e., they are fixed). Peak and off-peak rates may apply to different types of travel fares. The scheme is employed to encourage passengers who do not really need to travel at those hours to change their schedule and not further load the mass transportation system. There is of course a downside to this approach for passengers who must travel at those hours, such as for getting to work (employers who cover travel expenses should set the amount according to the cost of the more expensive rate). Using surge pricing in this case would mean that passengers cannot tell for certain and in advance when a higher tariff applies, but the scale of ‘surge prices’ can be pre-set with a limited number of ‘steps’, thus reducing resentment and opposition.

Other types of dynamic (variable) pricing rely on strong technological and data capabilities, drawing on demand at the aggregate level and on customer preferences and behaviour (search, purchase) at the individual level. A company like Amazon.com keeps updating its prices around the clock based on data about demand for products sold on its e-commerce platform. A more specific type of dynamic pricing entails the customisation of prices quoted to individual users-customers (i.e., different prices for the same book title offered to different customers). Under this approach, a higher price could be set, for instance, for books in a category in which the customer purchases more frequently, or even based on searches for titles in categories of interest. This form of price customisation is debatable because it aims to absorb a greater portion of the consumer’s value surplus (i.e., how much value a consumer assigns to a product above the monetary price requested by the seller), raising concerns of unfairness and discrimination. The risk to sellers is that products become less worthwhile for consumers to buy at the higher customised prices. (Note: Amazon was publicly accused of using some form of price customisation in the early 2000s after customers discovered they had paid different prices from their friends; however, the practice has not been banned and it is suspected to be in use by companies in different domains.)
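Purely to illustrate the customisation logic just described (and not any retailer’s actual algorithm), the following sketch adds a capped premium for customers who buy or search frequently in a category; the weights and the cap are assumptions:

```python
def customised_price(list_price: float, purchases_in_category: int, searches_in_category: int) -> float:
    """Illustration of the price-customisation logic described above (not any
    retailer's actual algorithm). A small premium is added for customers who
    buy or search often in the category; the weights and cap are assumptions."""
    interest_score = 0.02 * purchases_in_category + 0.005 * searches_in_category
    premium = min(interest_score, 0.10)  # cap the premium at 10% of the list price
    return round(list_price * (1 + premium), 2)

# A frequent buyer in the category may be quoted a slightly higher price:
print(customised_price(20.00, purchases_in_category=4, searches_in_category=10))  # -> 22.0
```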

  • Take for example the air travel sector: airlines may use any of these methods of variable pricing: (a) offering the same seat on the aircraft at different price levels (‘sub-classes’) depending on the timing of the reservation before the scheduled flight (the earlier a reservation is made, the lower the price); (b) changing fares for flights to different destinations based on fluctuations in demand for each destination and time of flight; (c) there are also claims that airlines adjust upwards the fares on flights to destinations that prospective travellers check more frequently in the online reservation system.

More companies in additional sectors are expected to join in applying varied forms of dynamic pricing. Retailers with physical stores are expected foremost to use dynamic pricing more extensively to tackle the growing challenges they face, particularly from Amazon.com in the Western world (e.g., supermarkets will employ digital price displays that allow them to change prices more continuously during the day and week according to visitor traffic levels). Restaurants may set higher prices during busier hours at their premises, and hotels are likely to vary their room rates more intensively, taking into consideration not only seasonal fluctuations but also special events like conferences, festivals and fairs (e.g., see “The Death of Prices”, Axios, 30 April 2019).

Dynamic pricing, and surge pricing in particular, is the new reality in pricing policy, with applications becoming increasingly pervasive. As technological and analytical capabilities continue to improve, the pricing models and techniques are likely to be enhanced and become even more sophisticated. Moreover, methods of artificial intelligence will improve at learning patterns of market and consumer behaviour, which is expected to enable companies to set prices with greater specificity and accuracy. At the same time, businesses need to take greater caution not to deter their customers by causing excessive confusion and aggravation. The question then becomes: what bases of discrimination — among consumers, at different times, and in different locations — would be considered fair and legitimate? This promises to be a major challenge both for the enterprises that set prices and for the consumers who have to judge and respond to the dynamic prices.

Ron Ventura, Ph.D. (Marketing)

Notes:

[1a-c] “Uber’s Surge Pricing: Why Everyone Hates It?”, Utpal M. Dholakia, Government Technology (magazine’s online portal), 27 January 2016

[2] “Frustrated by Surge Pricing? Here’s How It Benefits You in the Long Run”, Knowledge @Wharton (Management), 5 January 2016. A talk with Ruben Lobel and Kaitlin Daniels at Wharton Management School at the University of Pennsylvania.

[3] “Everyone Hates Uber’s Surge Pricing — Here’s How to Fix It”, Utpal M. Dholakia, Harvard Business Review (Online), 21 December 2015


From a consumer viewpoint, choice situations should be presented in a clear and comprehensible manner that facilitates consumers’ correct understanding of what is at stake and helps them to choose an alternative that most closely fits their needs or preferences. But policy makers may go further and design choices so as to direct decision-making consumers to an alternative that is, in their judgement, desirable or recommended.

It is very likely for Humans (unlike economic persons, or Econs) to be influenced in their decisions by the way a choice problem is presented; even if unintentional, this is almost unavoidable. Sometimes, however, an intervention to influence a decision-maker is done intentionally. Choice architecture relates to how choice problems are presented: the way the problem is organised and structured, and how alternatives are described, including tools or techniques that may be used to guide a decision-maker to a particular choice alternative. Richard Thaler and Cass Sunstein have called such tools ‘nudges’, and the designer of the choice problem is referred to as a ‘choice architect’. In their book, “Nudge: Improving Decisions About Health, Wealth and Happiness” (2009), the researchers were nonetheless very specific about the kinds of nudging they support and advocate (1). A nudge may be likened to a light push of a consumer out of his or her ‘comfort zone’ towards a particular choice alternative (e.g., an action, a product), but it should be harmless, and it should remain optional for consumers to accept or reject it.

Thaler and Sunstein argue that in some cases more action is needed to ‘nudge’ consumers in the right direction. That is because consumers, as Humans, often do not consider the choice situation and alternatives carefully enough, tend to err, and may not do what would actually be in their own best interest. It may be added that consumers’ preferences may not be well established, and when these are unstable it becomes even more difficult for consumers to find an alternative that fits their preferences closely. Hence, the authors recommend acting in a careful, corrective manner that guides consumers towards an alternative that a policy maker assesses will serve them better (e.g., health care, savings). Yet they insist that any nudging intervention should not be imposed on the consumer. They call their approach ‘libertarian paternalism’: a policy maker may tell consumers what alternative would be right for them, but the consumer is ultimately left with the freedom to choose how to act. They state that:

To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not.

Thaler and Sunstein suggest six key principles, or types, of nudges: (a) Defaults; (b) Expect error (i.e., nudges designed to accommodate human error); (c) Give feedback (nudges reliant on social influence may be included here); (d) Understanding ‘mappings’ (i.e., a match between a choice made and its welfare outcome, such as consumption experience); (e) Structure complex choices; (f) Incentives. The authors discuss and propose how to use those tools in dealing with choice issues such as complexity and a status quo bias (inertia) (e.g., applied to student loans, retirement pensions and savings, medication plans).

Let’s look at some examples of how choice architecture may influence consumer choice:

A default may be set up to determine what happens if a consumer makes no active choice (e.g., ‘too difficult to choose’, ‘too many options’) or to induce the consumer to take a certain action. Defaults can change the significance of opt-in and opt-out choice methods. A basic opt-in could ask a consumer to tick a box if she agrees to participate in a given programme. Now consider a slight change: pre-ticking the box as the default, so that if the consumer does not wish to join, she can uncheck the box (opt-out). A more explicit default-and-opt-out combination could state up front (e.g., in a heading) that the consumer is automatically enrolled in the programme and that if she declines she should send an e-mail to the organiser. If inclusion in a programme is the default, and consumers have to opt out of it, many more will end up enrolled than if they had to actively approve their participation. Yet the effect may vary depending on the ease of opting out (just unchecking the box vs. sending a separate e-mail). Defaults of this type may be used for benign purposes, such as subscription to an e-newsletter, as well as for sensitive purposes, like organ donation (2).
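A toy simulation can show how the default and the required action interact; the probability that a consumer takes any action at all is an assumed parameter here, not an empirical figure:

```python
import random

def simulate_enrollment(n: int, default_enrolled: bool, p_act: float, seed: int = 1) -> float:
    """Toy simulation of the default effect: each consumer acts (ticks/unticks the
    box, or sends the opt-out e-mail) with probability p_act; otherwise the default
    stands. The probabilities are assumptions for illustration only."""
    random.seed(seed)
    enrolled = 0
    for _ in range(n):
        acts = random.random() < p_act
        if default_enrolled:
            enrolled += 0 if acts else 1   # opting out requires action
        else:
            enrolled += 1 if acts else 0   # opting in requires action
    return enrolled / n

print(simulate_enrollment(10_000, default_enrolled=False, p_act=0.3))  # opt-in: ~30% enrolled
print(simulate_enrollment(10_000, default_enrolled=True,  p_act=0.3))  # opt-out: ~70% enrolled
```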

  • A default option is particularly attractive when the ‘alternative’ action is actually choosing from a long list of other alternatives (e.g., mutual and equity funds for investment).

Making a sequence of choice decisions is a recurring purchase activity. As a simple example, suppose you have to construct a list of items that you want to purchase (e.g., songs to compile, books to order) by choosing one item from each of a series of choice sets. Presenting choice sets in increasing order of size is likely to encourage the chooser to enter a maximising mind-set: starting with a small set, it is easier to examine all options in the set closely before choosing, and as the set size increases the chooser will keep trying to examine options exhaustively. When starting with a large choice set and decreasing the size thereafter, the opposite happens: the chooser enters a simplifying or satisficing mind-set. Thus, across choice sets, a chooser in the increasing-order condition is likely to perform a deeper search and examine more options overall. As described by Levav, Reinholtz and Lin, consumers are “sticky adapters” (3). When constructing an investment portfolio, for instance, a financial policy maker may nudge investors to examine more of the available funds, bonds and equities by dividing them into classes presented as choice sets in increasing order of size (up to a reasonable limit).
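A minimal sketch of this ordering principle, with a hypothetical set of fund classes, might look as follows:

```python
def order_choice_sets(choice_sets: list[list[str]], increasing: bool = True) -> list[list[str]]:
    """Order a sequence of choice sets by size. Per the finding described above,
    an increasing order is more likely to keep the chooser in a maximising
    mind-set across the sequence. Sketch for illustration only."""
    return sorted(choice_sets, key=len, reverse=not increasing)

fund_classes = [
    ["Fund A", "Fund B", "Fund C", "Fund D", "Fund E", "Fund F"],  # equity funds
    ["Bond X", "Bond Y"],                                          # government bonds
    ["Mix 1", "Mix 2", "Mix 3", "Mix 4"],                          # mixed funds
]
for s in order_choice_sets(fund_classes):
    print(len(s), s)  # presented smallest set first, largest last
```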

Multiple aspects of choice design or architecture arise in the context of mass customization. Taking the case of price, a question arises whether to specify the cost of each level of a customized attribute (actually the price premium for upgraded levels vs. a baseline level) or only the total price of the final product designed. One opinion argues that providing detailed price information for levels of quality attributes allows consumers to consider the monetary implications of choosing an upgraded level on each attribute; it is less difficult than trying to extract the marginal cost of each chosen level from the total price. Including prices for levels of quality attributes leads consumers to choose intermediate attribute levels more frequently (compared with a by-alternative choice set) (4). A counter-opinion posits that carefully weighing price information on each attribute is not so easy (consumers report higher subjective difficulty), actually causing consumers to be too cautious and configure products that are less expensive but also of lower quality. Hence, providing a total price for the outcome product could be sufficient and more useful for customers (5). It is hard to give a conclusive design suggestion in this case.
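The two presentation formats can be sketched as follows; the product, baseline price and attribute premiums are hypothetical, and the same total can either be decomposed per attribute or shown as a single figure:

```python
# Illustrative only: pricing a self-customised product either by attribute-level
# premiums (detailed presentation) or as a single total (summary presentation).
# The attribute names, baseline price and premiums are hypothetical.
BASELINE_PRICE = 80.0
PREMIUMS = {                       # premium over the baseline level, per attribute
    "fabric":   {"standard": 0.0, "premium cotton": 12.0},
    "collar":   {"classic": 0.0,  "cutaway": 4.0},
    "monogram": {"none": 0.0,     "initials": 6.0},
}

def total_price(configuration: dict) -> float:
    return BASELINE_PRICE + sum(PREMIUMS[attr][level] for attr, level in configuration.items())

config = {"fabric": "premium cotton", "collar": "classic", "monogram": "initials"}
print(total_price(config))  # 98.0: the shopper can be shown each premium or only this total
```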

In a last example, the form in which calorie information is provided on restaurant menus matters no less than posting it. As recent research by Parker and Lehmann shows, it is practically possible to overdo it (6). Consistent with other studies, the researchers find that when calorie figures are posted next to food dishes, consumers choose items with lower calorie content, on average, than they do from a similar traditional menu without calorie figures. Separating low-calorie items from their original categories of food type (e.g., salads, burgers) into a new group, as some restaurants do, may however eliminate the advantage of calorie posting. While the logic of a separate group is that it would be more conspicuous and easier for diners to attend to, it could instead make it easier for them to exclude those items from consideration. Nevertheless, some qualification is needed, as the title given to the group also matters.

Parker and Lehmann show that organising the low-calorie items in a separate group explicitly titled as such (e.g., “Low Calories”, “Under 600 Calories”) attenuates the posting effect, thus eliminating the advantage of inducing consumers to order lower-calorie items. The title is important because it makes it easier for consumers to screen out this category from consideration (e.g., as unappealing on the face of it). The researchers demonstrate that giving the group a positive name unrelated to calories (e.g., “Eddie’s Favourites”, “Fresh and Fit”) generates less rejection and makes it no more likely to be screened out as a group than other categories. In a menu that is simply calorie-posted, consumers are more likely to trade off the calories against other information on a food item, such as its composition and price. But if consumers are helped to screen out the low-calorie group as a way of simplifying their decision process at an early stage, it means they will also ignore its calorie details.

  • An additional explanation can be suggested for disregarding the low-calorie items when they are grouped together: if those items are mixed into categories of other items similar to them in type of food, each item stands out as ‘low calorie’ and is perceived as different and more important. If, on the other hand, the low-calorie items are aggregated in a set-aside group, they are more likely to be perceived collectively as of diminished importance or appeal and be ignored together (cf. 7). Therefore, creating a separate group of varied items pulled out from all the other groups sends a wrong message to consumers and may nudge them in the wrong direction.

Both public and private policy makers can use nudging, but there are some limitations deserving attention, especially with regard to private (business) policy makers. Companies sometimes act in the belief that in order to recruit customers they should present complex alternative plans (e.g., mobile telecoms, insurance, bank loans), which can include obscuring vital details and making comparisons between alternatives very difficult. They see nudging tools that are meant to reduce the complexity of consumer choice as playing against their interest (e.g., if choice is complex, it will be easier for the company to capture or lock in the customer). That runs counter to the intention of Thaler and Sunstein, and they stand against this kind of practice.

For the case of helping customers see more clearly the relation, and match, between their patterns of service usage and the cost they are required to pay, Thaler and Sunstein propose a nudge scheme called RECAP — Record, Evaluate, and Compare Alternative Prices. The scheme entails publishing, in readily accessible channels (e.g., websites), full details of service and price plans, as well as providing existing customers with periodic reports that show how their level of usage on each component of the service contributes to the total cost. These measures of increased transparency would help customers understand what they pay for, monitor and control their costs, and reconsider from time to time their current service plan vis-à-vis alternative plans from the same provider and from competitors. The problem is that service providers are usually reluctant to hand over such detailed information of their own good will. Public regulators may have to require companies to create a RECAP scheme, or perhaps nudge them to do so.
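A minimal sketch in the spirit of the RECAP idea follows; the service components and rates are hypothetical, not any provider’s actual tariff:

```python
# Show a customer how each service component contributes to the period's total cost.
usage = {"voice_minutes": 340, "sms": 55, "data_gb": 6.2}
rates = {"voice_minutes": 0.04, "sms": 0.10, "data_gb": 2.50}

def recap_report(usage: dict, rates: dict) -> None:
    """Print a periodic usage-cost breakdown (illustration only)."""
    total = 0.0
    for component, quantity in usage.items():
        cost = quantity * rates[component]
        total += cost
        print(f"{component:15s} {quantity:>8} x {rates[component]:>5.2f} = {cost:7.2f}")
    print(f"{'total':15s} {'':>18} {total:7.2f}")

recap_report(usage, rates)
```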

In the milder scenario, companies avoid nudging techniques that work to the benefit of consumers out of concern that they would hurt their own interests. In the worse scenario, companies misinterpret nudging and use tools that actively manipulate consumers into choices not in their benefit (e.g., highlighting a more expensive product the consumer does not really need). Thaler and Sunstein are critical of either public or private (business) policy makers who conceive and apply nudges in their own self-interest. They tend to dedicate more effort, however, to countering objections to government intervention in consumers’ affairs and popular suspicions of malpractice by branches of the government (issues that seem to be of major concern in the United States and may not be fully understood in other countries). It is of course important not to turn a blind eye to harmful usage of nudges by public as well as private choice architects.

There are many opportunities in cleverly using nudging tools to guide and assist consumers. Yet there can be a thin line between interventions of imposed choice and free choice or between obtrusive and libertarian paternalism. Designing and implementing nudging tools can therefore be a delicate craft, advisably a matter primarily for expert choice architects.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Nudge: Improving Decisions About Health, Wealth and Happiness”; Richard H. Thaler and Cass R. Sunstein, 2009; Penguin Books (updated edition).

(2) Ibid 1, and: “Beyond Nudges: Tools of Choice Architecture”; Eric J. Johnson and others, 2012; Marketing Letters, 23, pp. 487-504.

(3) “The Effect of Ordering Decisions by Choice-Set Size on Consumer Search”; Jonathan Levav, Nicholas Reinholtz, & Claire Lin, 2012; Journal of Consumer Research, 39 (October), pp. 585-599.

(4) “Contingent Response to Self-Customized Procedures: Implications for Decision Satisfaction and Choice”; Ana Valenzuela, Ravi Dhar, & Florian Zettelmeyer, 2009; Journal of Marketing Research, 46 (December), pp. 754-763.

(5) “Marketing Mass-Customized Products: Striking a Balance Between Utility and Complexity”; Benedict G.C. Dellaert and Stefan Stremersch, 2005; Journal of Marketing Research, 42 (May), pp. 219-227.

(6) “How and When Grouping Low-Calorie Options Reduces the Benefits of Providing Dish-Specific Calorie Information”; Jeffrey R. Parker and Donald R. Lehmann, 2014; Journal of Consumer Research, 41 (June), pp. 213-235.

(7) Johnson et al. (see #2).


Mass customization allows companies to provide every customer with a product made according to his or her preferred specifications, delivered for a mass of customers. Building on advanced information management technology and highly flexible computer-aided manufacturing (CAM) capacity, this approach enables a company to create a large variety (scope) of “ad-hoc” customized products. The interactive capabilities of the Internet, particularly Web 2.0, make configuring and ordering a self-designed product much more accessible to the public. Different methods for customization and (personalised) recommendation of products have been developed and implemented in recent years, but only the approach known as mass customization (MC) actually allows a consumer to order a self-designed product item. Yet MC has not been adopted by companies in many consumer markets so far, and programmes initiated often survive for just a few years. The main impediment has been in lowering costs to levels compatible with mass production, which raises doubts about whether MC can become a viable business practice.

An online MC programme provides consumers with an interactive Web-based configurator, or MC toolkit application, for choosing their preferred attribute specifications, guiding them through the self-design process step by step. Graphic-rich and user-friendly interfaces help to enhance the experience for consumers. The Internet offers two important capabilities that can smooth the whole MC process: (a) gathering preference data from customers in real time, and (b) transferring the information to a company’s facility from anywhere a consumer operates the toolkit on a personal computer or a mobile device connected online.
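A minimal sketch of point (b), under an assumed attribute catalogue, shows how a configuration assembled in the toolkit could be validated and serialised before being transferred to the company’s facility:

```python
import json

# Hypothetical attribute catalogue offered by the configurator
CATALOGUE = {
    "colour": ["white", "navy", "red"],
    "size":   ["S", "M", "L", "XL"],
    "text":   None,  # free-text personalisation, no fixed option list
}

def build_order_payload(customer_id: str, configuration: dict) -> str:
    """Validate a self-designed configuration against the catalogue and
    serialise it for transfer to the company's facility (sketch only)."""
    for attribute, value in configuration.items():
        if attribute not in CATALOGUE:
            raise ValueError(f"Unknown attribute: {attribute}")
        options = CATALOGUE[attribute]
        if options is not None and value not in options:
            raise ValueError(f"Invalid option {value!r} for {attribute}")
    return json.dumps({"customer": customer_id, "configuration": configuration})

print(build_order_payload("C-1024", {"colour": "navy", "size": "M", "text": "Dana"}))
```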

The best early example of MC implementation is probably that of the Japanese National Bicycle Industrial Company (NBIC — owned by Panasonic), which allowed consumers to order ‘tailored’ bicycles. That was already available before the age of the Internet: measurements to fit a bicycle to a rider were taken on a specially built physical model. Among MC applications available to consumers through the Internet in the past and present we may mention, for example:

  • NikeID for designing sports footwear (running for over ten years),
  • Levi’s Original Spin jeans for women (terminated),
  • Chocri chocolate bars and pralines from Germany (a UK service is currently suspended),
  • Reflect.com customized cosmetics (suspended),
  • Blank Label self-designed and made-to-measure dress shirts for men (based in Boston & Shanghai and operating for four years),
  • Lego’s Create & Share programme incorporated an MC service called byMe (terminated in Jan. 2012) that allowed users to order a box with the parts-bricks for the model they personally designed with LEGO Digital Designer — the toolkit is still available,
  • Dell’s customized personal computers (changed customization approach).

In order to derive practical utility from configuring a product, consumers should arrive at the task with adequate knowledge of the product category: understanding the attributes and their consequences for quality or performance, and knowing which ones are more important. This is particularly relevant for attributes for which there is a shared convention as to which options or levels predict higher quality, as opposed to attributes of a more aesthetic nature, where preferences rely on personal taste. Consequently, consumers are expected to have well-defined preferences on the former attributes. However, many, even most, consumers have only low to moderate levels of knowledge in any given product category (e.g., food, home appliances, technologically advanced digital products). Furthermore, it is now recognised that consumers often do not have clear and well-established preferences, and they resort to constructing their preferences as they advance towards a purchase decision. This means, for instance, that low-knowledge consumers who use an MC toolkit but do not clearly know what they are looking for are more likely to be influenced by the set of attributes offered for customisation by the product configurator and by its overall structure.

But there is additional complexity to consumer response in the context of customization, because the condition stated above on preferences may not be sufficient. Itamar Simonson, professor of marketing at Stanford University, expands the discussion by proposing that, in addition to (a) having stable and well-developed preferences, consumer response to customised offers also depends on (b) the level of ‘self-insight’ consumers have into their own preferences and their own judgement of those preferences’ clarity and stability. When using the aid of a recommendation agent, this has implications such as the ability of consumers to articulate their preferences to others accurately and clearly, to acknowledge correctly the real drivers of their choices (e.g., rational vs. aesthetic or affective), and to identify properly a product recommendation that fits their preferences well (1). Consumers whose state of preferences is low on both factors are especially likely to be swayed by the attributes a recommending agent chooses to emphasise. In the case of using a product-designer toolkit in MC, the burden on the consumer seems even greater, more explicitly requiring him or her to articulate preferences accurately and subsequently confirm that the outcome product one designed indeed matches what one wanted; a major cause for consumers abandoning before ordering is their evaluation that the outcome product’s utility is less than planned. Another important cause is frustrating and ungratifying experiences while using the configurator to self-design the product.

Consumers differ in the type of attributes they would want to customize, the number of attributes desirable for customization, and the number of options or levels to choose from — factors that influence the purchase likelihood of a customised product. Interestingly, more knowledgeable consumers have not been found to be more inclined to purchase a customized product. Some differences in preference for the layout of configurators have been found to relate to variation in knowledge. For example, it is the less knowledgeable consumers who actually desire a larger number of options to choose from on attributes of personal, subjective taste, because they tend to learn their preference as they look through options; high-knowledge consumers need that less. But we also have to take into account what consumers believe they know, and consumers are often wrong in that assessment (‘knowledge miscalibration’). Thus, overconfident novices are those who particularly want a higher number of levels, compared with experts who are unsure of themselves (2).

Companies that have engaged in mass customization have frequently chosen a rather simple solution to these concerns: the attributes they offer for customization are primarily aesthetic, related to the visual appearance of the product and much less to its actual performance. There is an over-emphasis on personalised features (e.g., printing a label with the customer’s name or an image created by him or her). Companies also tend to constrain the set of customisable attributes and offer very few of them — this is done not just to avoid too much complication for users but for the companies themselves, to keep more control over technical aspects of product design and the cost of making the customized products. While this may serve the less knowledgeable consumers well, it gives the impression that this is not a serious enterprise, more like a game or a ‘marketing gimmick’, which seems to lead the more knowledgeable consumers to dismiss this option for purchasing products. Even less knowledgeable customers may be disenchanted by constraints imposed in the wrong places. Configurators should combine different types of attributes for customization that allow customers to influence both the functional utility and the hedonic benefits (pleasure) of their product.

Companies have turned to other techniques, such as recommendation agents and search assistants, that help customers find the product model most appropriate for them. An online recommendation system first probes the consumer about her or his preferences through a series of questions and then offers a set of product recommendations, rank-ordered according to their match with the consumer’s preferences. This method is distinguished from MC in that it selects product versions from the company’s existing assortment and does not create a product specifically for the customer. This kind of aid satisfies, for some consumers, the preferred balance between the level of perceived control they get and the perceived assortment available, but it also depends on their belief that the system is more capable than they are of finding a product that matches their preferences. This may further depend on the amount of information asked for and on the type of procedure used to collect preference information. A search assistant, common in shopping websites, helps to drill through the assortment of product versions in a category and narrow it down according to attribute criteria chosen by the shopper, thus screening a smaller set of plausible alternatives. However, such an assistant, which does not make recommendations, cannot truly be said to offer customization if it does not use preference information from the shopper to organise the resulting set in a more efficient way.
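The ranking step of such a recommendation agent can be sketched as follows; the products, attribute levels and elicited weights are hypothetical:

```python
# Sketch of the ranking step of a recommendation agent as described above:
# the consumer's stated attribute weights are used to score each product in
# the existing assortment. Products, attributes and weights are hypothetical.
products = {
    "Camera A": {"zoom": 0.9, "battery": 0.4, "weight": 0.7},
    "Camera B": {"zoom": 0.5, "battery": 0.9, "weight": 0.6},
    "Camera C": {"zoom": 0.7, "battery": 0.7, "weight": 0.9},
}
stated_preferences = {"zoom": 0.5, "battery": 0.3, "weight": 0.2}  # elicited via questions

def rank_by_match(products: dict, weights: dict) -> list:
    scores = {
        name: sum(weights[a] * level for a, level in attrs.items())
        for name, attrs in products.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_by_match(products, stated_preferences):
    print(f"{name}: {score:.2f}")  # best-matching product first
```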

Obtaining a product personally designed by the consumer may endow him or her with special positive feelings, providing an important drive to participate in such an activity. The benefits from MC pertain to the experience of designing or configuring the ‘private’ product as well as the subsequent value of the outcome product to the owner. However, researchers Franke, Schreier and Kaiser identified an extra effect they called “I designed it myself” that describes the subjective value, and elevating feeling, that arises from the consumer’s notion that she or he took part in creating the product. They suggest that this effect signifies that consumers would be willing to pay a higher price for the self-designed product compared with a similar kind of product picked off-the-shelf. The effect is contingent on an underlying sense or feeling of accomplishment of the consumer in his or her contribution to the product (e.g., that the effort invested was worthwhile, proven competency, pride). The researchers corroborate this effect in a series of experiments in terms of increased willingness-to-pay for a self-designed product and further show that it depends on the sense of accomplishment but does not exclude the role that perceived value of the outcome product has when making the purchase decision (3).

Companies that develop and implement mass customization programmes should take special care of a number of aspects of the interface consumers have with the Web-based design toolkit to improve their experience and enhance their satisfaction through the process.

  • A first measure that may be taken is to create at least two versions of a configurator: one more suitable for proficient, higher-knowledge customers and another for amateur, lower-knowledge customers. More generally, it is advisable to give users a greater degree of flexibility in choosing the complexity of configuring the product that matches the level of difficulty they think they can handle. In other words, a firm may allow users some control in choosing whether they wish to set only aesthetic properties (e.g., visual appearance) of the product or also selected functional attributes, how many attributes to configure, and so on. Additional measures can be to invite users to show their creativity in features of visual design (which enhances the sense of contribution) and to recommend options on functional features of the product.
  • Second, a company may target customers who are already more inclined to participate in other types of collaborative activities of product design and development, seeking the feelings of accomplishment, challenge and also enjoyment from this type of engagement (e.g., tie them together as LEGO used to do in its Create & Share programme). These customers may be valuable advocates that bring more followers to MC.
  • Third, a variety of aids should be applied to provide users with explanations, examples or illustrations of the options for configurations, warnings about attribute combinations that would not work well, and a graphic demonstration that helps the user to realise how the product builds up.

In spite of discouraging hurdles in the past decade, it would be wrong to conclude that mass customization cannot grow and expand. Some changes may yet have to occur to make it more advantageous for both companies and consumers to trade the benefits of assortment for personal customization. It may also take more time to find out for which product types consumer preferences can be more usefully served through MC. Nonetheless, 3D-printing and MC may complement and push forward the utilisation of each other, depending on the level of autonomy consumers wish to have in co-creating their products. Technology is most likely to keep advancing, making the self-design experience easier and more gratifying, but technology will not solve all the issues at stake, and it is vital to continue studying and experimenting in order to better understand the human side of consumer expectations of, processing capacity for, and response to MC programmes, as well as the ensuing 3D-printing.

Ron Ventura, Ph.D. (Marketing)

References:

(1) “Determinants of Customers’ Responses to Customized Offers: Conceptual Framework and Research Propositions”, Itamar Simonson, 2005, Journal of Marketing, 69 (Jan.), pp. 32-45.

(2) “The Role of Idiosyncratic Attribute Evaluation in Mass Customization”, Sanjay Puligadda, Rajdeep Grewal, Arvind Rangaswamy, and Frank R. Kardes, 2010, Journal of Consumer Psychology, 20 (3), pp. 369-380.

(3) The “I Designed It Myself” Effect in Mass Customization, Nikolaus Franke, Martin Schreier, and Ulrike Kaiser, 2010, Management Science, 56 (1), pp. 125-140.


It is increasingly evident that consumers no longer care to wait for companies before having their say on new products. Consumers want to be heard earlier in the process of developing products and to exert more influence on the products they are going to use. The Internet, particularly Web 2.0 and its interactive methods and tools, is clearly playing a key role in facilitating and enhancing this mode of consumer behaviour.

The engagement of consumers in the process of new product development (NPD) can be viewed as a facet of the broader phenomenon in which consumers mix production and consumption activities, known as ‘prosumption’. Tapscott and Williams contend in their book “Wikinomics” (1) that many consumers seek to turn from passive product users into active users who also participate in the creation of the products they use and influence their design and function. But the type of involvement referred to here goes beyond the personal design of selected features of product items by consumers for their own use, as applied in mass customization; the contribution made by consumers (‘prosumers’) collaborating with companies in NPD is meant to positively affect many consumers other than themselves. Tapscott and Williams suggest that companies should encourage their customers to contribute in more profound and significant ways to the design of products that may thereafter be marketed to many more users.

Admittedly, consumers differ in the extent and quality of contribution they are capable of making, as a function of their knowledge and skills in the domain of each product, and therefore consumers should be invited to collaborate in forums and with methods appropriate for them. The forms of collaboration may vary from consumer participation in NPD research, to generating ideas in social media forums, and up to more extensive proposals of technical designs of product prototypes. As collaboration becomes more advanced and significant it can greatly help — in addition to co-creating improved products — to produce closer and more valuable relationships between a company and its consumers or customers. More advanced collaboration has the power to elevate relationships to a form of “partnership” and to increase their strength and intimacy between a company and its more loyal customers.

In an instructive and interesting paper on Internet-based collaborative innovation, Sawhney, Verona, and Prandelli present methods which they classify by the nature of collaboration (breadth and richness) and the stage of NPD in which the given level of consumer involvement is applicable (e.g., front-end idea generation and concept development, back-end product design and testing)(2):

  • Deep-rich information at the Front-End stages: Discussions in virtual communities of social media that encourage exchange of ideas allow companies to capitalise on social or shared knowledge of consumers. Another method that relies on consumer-to-consumer communication is Information Pump, a type of “game” through which a company can reveal and better understand the vocabulary of consumers in describing product concepts vis-à-vis expressions of needs;
  • Reach a broad audience at the Front-End stages: Web-based conjoint analysis and choice techniques can be applied among consumer samples to gather and analyse relatively less rich but well-structured information about consumer preferences (see the sketch after this list);
  • Deep-rich information at the Back-End stages: Web-based toolkits for exercising users’ innovation let the more expert consumers configure or design original product models of their own creation, working in a specially built environment and with computer-aided design tools — this approach relies on knowledge of individuals;
  • Reach a broad audience at the Back-End stages: Particularly applicable to digital products (e.g., software, web-based or mobile applications, video games) where prototype or experimental beta versions can be tested online; however, visual-simulated depictions of alternative virtual configurations of advanced prototypes can be applied to test and evaluate the acceptance of a wider range of tangible products.
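As a stylised illustration of the Web-based conjoint idea in the second bullet above, the sketch below recovers part-worth utilities from one respondent’s ratings of dummy-coded product profiles; the attributes and ratings are made up:

```python
import numpy as np

# Respondents rate product profiles, and ordinary least squares recovers
# part-worth utilities for the attribute levels (illustration only).
# Profiles coded as dummy variables: [intercept, large_screen, long_battery, premium_brand]
X = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
])
ratings = np.array([3.0, 4.2, 3.8, 3.5, 5.1, 4.6, 4.4, 5.8])  # one respondent's made-up ratings

part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
for name, w in zip(["baseline", "large screen", "long battery", "premium brand"], part_worths):
    print(f"{name:14s} {w:+.2f}")
```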

In the virtual world of the Internet, unlike the physical world, there is a less rigid trade-off between breadth of access to consumers and richness of information (e.g., small focus groups versus surveys of large samples); as Sawhney et al. put it: “…Internet-based virtual environments allow the firm to engage a much larger number of customers without significant compromises on the richness of the interaction.” This advantage is particularly demonstrated in social media forums.

It should be emphasised, nevertheless, that new methods of collaboration should not come as a replacement for NPD research methods; research-based methods and non-research methods of consumer-company interaction can complement each other wonderfully and should continue to be applied in parallel to answer different requirements of the NPD process for consumer informational input and aid. In a leading paper for the new age of NPD research, “The Virtual Customer” (3), Dahan and Hauser describe state-of-the-art research methods and techniques for different stages of the NPD process. They distinguish, for example, between (a) conjoint types of measurement techniques and models that are most suitable for guiding product design at an early stage (feature-based), and (b) a method applicable for testing the appeal and purchase potential of candidate prototypes (integrated concepts) at a more advanced stage of product development. The latter method in particular takes advantage of displaying images of virtual prototypes (e.g., SUV car models) to consumers, supplemented by additional product and price information, in an online survey for testing reaction (choice) before going to production. They also explain in great detail unorthodox methods such as the Information Pump and Securities Trading of Concepts.

  • It is noteworthy that most research methods concentrate on learning from consumers about their preferences without engaging them in proposing product designs; the User Design method, however, already gives more leeway to consumers-respondents to construct their desired products using a self-design tool similar to mass customisation.

Forums or personal pages in public social media networks are widely accepted these days as an excellent arena for companies to receive ideas from consumers for new products and to gather information about their product preferences and expectations. However, it is likely to prove a formidable task to comb through these sources, as well as user-generated content in blogs, and pick up ideas of real value and practical potential for implementation. Some good ideas may also get lost in the river of postings or comments customers upload to a company’s page on service issues, billing, etc. Dedicating a special, separate page for interaction with consumers on new products, goods or services can help to raise the level of ideas formulated and to allow peer discussions of those ideas that can lead to their further progression. But even then, the ideas proposed in such a venue may be mostly initial concepts, vague or unfocused. Such a venue is a good place to start, allowing any customer interested to contribute. Thereafter, owners of more mature or promising ideas may be referred to a company-owned virtual forum on its own website, where a more advanced collaboration with the consumers-contributors may be developed.

Managing collaborative activities for NPD in a company-owned website division can offer some valuable possibilities. First, it provides better control and capabilities for moderating discussions among users or interacting directly, one-to-one, with the originators of product-concept proposals; it is an environment dedicated and designed by the company specifically for interacting with users and for users to interact among themselves. Second, performing collaborative activities in this environment is likely to attract users with higher levels of knowledge, competence and interest in the domains of the company’s products; the greater proficiency of users demonstrated in their discussions frequently leads to a natural screening-out of novice and less serious users.

Third comes the sensitive issue of security and protecting intellectual property. Companies do not tend to guarantee any protection for initial ideas brought up by consumers, not even on their own websites. Particularly in forums that are founded on sharing knowledge and discussing ideas between users, information has to remain transparent and accessible to participants and to the company. Tapscott and Williams noted that consumers get excited by the creation of their own products and enjoy it even more when they can do it together (4). However, companies can offer better measures to secure information, such as limiting access to discussions and materials (e.g., by password permission) and preventing unauthorised extraction of content. Where proposed designs of product models are meant to be shared, originators should get the option to credit their models with their IDs. Confidentiality and rights are offered for the most advanced technical designs that are planned to be adopted by a company for manufacturing and marketing.

Fourth, a company can provide an interactive toolkit for innovation on its website for consumers-collaborators who wish to take their ideas and concepts one step or more further. With the toolkit, users can apply relevant design tools to sketch plans and construct virtual 3D product models. Depending on the type of collaboration programme and context, users can allow their proposals to be available to other users or to the company alone. Thomke and von Hippel proposed a complete process for customer innovation that includes several iterations of developing a design with a ‘toolkit for innovation’, building a prototype, receiving feedback from the company (‘test’), and returning for revisions (5). Through the early iterations the prototypes built by the system would be virtual, until the design is sufficiently advanced to manufacture a physical prototype of the product. The authors suggest that the customer-led process is likely to require fewer iterations than a ‘standard’ NPD process, save time and money, and free the company to invest more effort in improving manufacturing capabilities.

Different schemes have been devised for collaboration programs with customers:

  • The Open Innovation Collaborative Programme of Unilever, for example, is intended for highly skilled contributors with extensive knowledge in the domains of products for which proposals are invited (a list of ‘Wants’, e.g., solutions for detergents). Collaborators are referred to a special portal for submission (in co-operation with yet2.com, a consulting firm that manages the review process).
  • Other programmes are more popular in nature and appear suitable for a wider audience of consumers with varied levels of expertise. Take for instance the Create & Share collaborative suite by Lego on its website. More than a decade ago Lego cleverly recognised, with appreciation, the creativity of its leading hobbyists and enthusiasts (adults included!) who invented original models based on existing parts and suggested new forms of Lego blocks; Lego started to accept such designs and to offer sets of new models and less conventional building parts. Today the online suite includes a gallery of models built by fans, message boards, and especially the Lego Digital Designer toolkit application for constructing virtual plans of fans’ own models (unfortunately, last year Lego terminated its ByME customization programme that allowed users to order their own physical models).

Consumers who collaborate with companies should be rewarded for their more significant contributions of ideas and product designs. On the one hand, the reward does not have to be monetary, cash-in-hand (some may not even want to be perceived as paid contributors/employees). On the other hand, companies should not be satisfied with relying on contributors’ enjoyment and their feelings of self-fulfilment and accomplishment. Furthermore, a company should not appear to be relinquishing to its customers its duties of generating genuine ideas and developing new products. First, many customers will be happy to receive credit by name, in recognition of their contribution, in the company’s publications and websites. Second, contributors can be rewarded with special gifts or privileges in obtaining and using their own-designed products and other products of the company. Monetary prizes will probably continue to be awarded to winners of competitions.

Collaboration for innovation changes the relations between a company and its consumers or customers because it gets them to work together, co-creating new products that thereby better fit consumer needs and wants. Activities that engage consumers in developing concepts and designing products have, in particular, the greater potential of narrowing gaps between companies and customers. Research, collaboration in other ways, and internal development by professional teams within the company should be used together, in integration, in NPD activities. Collaboration shifts the balance of control more towards consumers, but companies that learn how to share knowledge and competencies with them can gain by improving innovation practices, increasing value, and, not least, enjoying stronger customer relationships.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Wikinomics: How Mass Collaboration Changes Everything”, Don Tapscott and Anthony D. Williams, 2006, Portfolio.

(2) “Collaborating to Create: The Internet as a Platform for Customer Engagement in Product Innovation”, Mohanbir Sawhney, Gianmario Verona, & Emanuela Prandelli, 2005, Journal of Interactive Marketing, 19 (4), pp. 1-14 (DOI: 10.1002/dir.20046).

(3) “The Virtual Customer”, Ely Dahan and John R. Hauser, 2002, The Journal of Product Innovation Management, 19, pp. 332-353.

(4) Ibid. 1.

(5) “Customers as Innovators: A New Way to Create Value”, Stefan Thomke and Eric von Hippel, 2002, Harvard Business Review, 80 (April), pp. 74-82.

Read Full Post »

It is an everlasting quest of advertisers to find the content, format and style that will draw more consumer attention to their ads, and subsequently elicit a positive response to the ads and their target brands. Consumers have to focus on an ad long enough to capture some critical elements (e.g., visual or textual, informational and affective) so as to grasp its key message. With a print ad, a few seconds are often enough, but for some ads it may take a minute or two to comprehend the ad properly and make sensible inferences. For video-clip ads, on TV or the Internet, the consumer may ponder the ad for no longer than its duration (e.g., 20-40 seconds), yet sometimes he or she may elaborate on or relate to the ad for a few more minutes afterwards (particularly for humorous ads with a punch). The puzzle is never really fully solved, among other reasons because there is no single “secret solution”, and even the best solution for a given brand and audience can change over time and across situations.

There is a growing propensity among advertising professionals to claim that marketers should not expect consumers to think too much about an ad, that an ad should include minimal product information and instead concentrate on gaining a pleasant emotional reaction. The problem of low involvement when consumers encounter ads, particularly during commercial breaks on TV, has been widely and extensively researched. Yet advertisers should not use this challenge as an excuse to produce simplistic ads of little informative value. There are enough occasions where it is suitable, or even desirable, to create more intriguing and thought-provoking ads. Ads that emphasise graphic elements in their design can be either gross and superficial or imaginative and clever. Advertisers should not shy away from directing consumers to the central route of processing product-relevant information contained in their ads (1). Ads may then induce consumers to think a little further, beyond the typical “central”, analytical processing of an ad to decode its message; in these cases thinking may be accompanied by positive emotions like enjoyment and amusement. When catching the clever punch in a humorous ad, the consumer is entertained both by the feeling of fun and by the gratification of “I got it”.

On one hand, a print ad may include an impressive photographic image, complete with detail and colours at high resolution (e.g., visualise photo-scenery of National Geographic quality), that makes viewers imagine themselves “jumping into” the scene. This approach may be suitable, for example, in travel and tourism when advertising a vacation resort. Perception of highly vivid images is likely to interfere with voluntary mental imagery by consumers-viewers based on their own ideas and experiences; but the picture-image can inspire the viewer to “experience” the scene-imagery as proposed by the advertiser (2). On the other hand, an ad may mask or omit certain visual elements in its composition, letting the consumer-viewer complete the image (e.g., following Gestalt rules) and thereby arrive at the ad’s main message more independently through this additional contemplation. Such ads engage consumers by stimulating them to work out the whole ad scene; this carries some risk, but when the viewer makes the extra effort to get the message, the experience is rewarding.

More sophisticated and artful methods for creating intriguing ads use visual rhetorical figures such as rhymes (schemes) and metaphors (tropes). Visual figures, however, are still less frequent than verbal figures. Meaningful visual metaphors are particularly difficult to construct (e.g., a package of anti-nausea tablets placed where the buckle of a car seatbelt should be). McQuarrie and Mick have shown that ads with visual figures are perceived as more artful and clever than corresponding control “regular” ads, evoking more elaboration by being more vivid, interesting and provoking to viewers. They also induce greater pleasure in seeing the ad, implying a more positive attitude towards it. Moreover, these effects are stronger for ads that include a metaphor or pun than for those with a scheme. The problem is that such ads are generally more difficult to comprehend, hence the risk in using this creative approach. The balance between pleasure and difficulty is very important: a visual metaphor, for instance, can create pleasure when it is intriguing at first sight and interesting to resolve, yet it should not be too difficult to comprehend, confusing or ambiguous, lest it cause frustration and fail to persuade (3). The visual figure intrigues viewers to “think into it” to infer its meaning (“implicature”); when the figure is too difficult to interpret, viewers are likely to infer more original but irrelevant meanings (4). Hence, the designer should keep in mind that while a visual rhetorical figure like a metaphor has to present a challenge, it must not be so sophisticated that viewers cannot resolve it successfully.

Another perspective on the effort consumers have to invest in processing advertising information concerns the difference between presenting product information as a list of attributes and conveying it in a “story”. Nielsen and Escalas suggest that making the information in an ad more difficult to process can have opposite effects on brand preferences or attitudes depending on how the information is conveyed: a negative effect when consumers process a list of attributes in an analytic mode, versus a positive effect when consumers read a “brand story” in a narrative mode. Preference fluency is the ease with which consumers are able to construct their preference for a brand. When consumers encounter difficulty in reading or interpreting information relating to a brand, thus lowering preference fluency, they are more likely to conclude that something is wrong with that option and reject it. The researchers argue and demonstrate that while this consequence holds in the case of analytic processing, a different process occurs in a narrative mode: the decreased fluency induces consumers-viewers to become more immersed in the story, possibly by developing their own imagery around the base story in search of meaning (a phenomenon known as “narrative transportation”), leading to a stronger preference or a more positive brand evaluation (5).

In a series of three experiments, Nielsen and Escalas reveal some interesting differences between the two modes of processing information in ads. They show that making the information more difficult to perceive (e.g., using a small vs. large font) in a list of attributes results in lower brand evaluation (consistent with previous research), but in a storyboard the result is higher brand evaluation, as hypothesised. However, when participants are instructed to be critical and sceptical about the ad, directing them to analytic processing of a storyboard that would otherwise have invited narrative processing, a small font indeed produces a negative effect on their brand evaluations. The researchers also substantiate, in two experiments (in two different product categories), the role of narrative transportation: when displaying a story, greater processing (reading) difficulty has a positive effect on brand evaluation, but this effect operates by first evoking narrative transportation, which in turn positively affects the brand evaluation. The research thereby demonstrates how driving consumers to invest more cognitive effort in comprehending a story can benefit the target brand of the advertising.

There is also a basis for criticism of the research of Nielsen and Escalas. I wish to point out two weaknesses.

  • First, the authors focus on factors that influence the ease or difficulty of perceiving the ad (i.e., its perceptual fluency) when viewing the ad image and reading the text. Their experiments do not treat semantic aspects of the ad, that is, how well attributes are described or how clearly a story is told, its meaningfulness and the associations it elicits in consumers (i.e., conceptual fluency). Is presenting text in a small font truly what motivates the extra effort of narrative transportation? The research is lacking in that respect.
  • Second, a storyboard composed of a sequence of image frames with captions and a single-image ad with a list of product attributes do not match as parallel versions of the same ad format (video vs. print ad, respectively). A storyboard is not the natural way in which consumers view video-audio ads and process their “story”. Alternatively, the attribute-based style should have been contrasted with other configurations that convey a story but are compatible with the print format; for example, providing the same attribute information in a rich paragraph told in the frame of a story, or in a combination of image and text paragraph.

Different predictions prevail with regard to when mental imagery occurs and the type of processing it follows. Nielsen and Escalas explain that their display of product attributes should give rise to analytic processing. However, it has been argued that a single product profile described by concrete words is more likely to be conceived in a holistic manner, possibly in the form of a mental image; a comparative ad with two adjacent product profiles, on the other hand, encourages an analytic, attribute-by-attribute type of processing. Rich verbal descriptions with concrete words, pictures, and explicit instructions to imagine or visualise are recognized as effective techniques for eliciting mental imagery. In many cases a combination of them is the most productive strategy (e.g., joining a picture with concrete words, or instructions accompanied by concrete words) (6). It may be noted that techniques applied in ad design that are capable of eliciting imagery fit well with the expectation of imagery during narrative transportation.

The research in this field is interesting and offers many insights into the possibilities and opportunities for creating more clever, intriguing and imaginative advertising. Such advertising has to appeal not only to advertising professionals in its creativity and sophistication but also to consumers, capturing them and driving them to invest the extra cognitive effort willingly. Yet, because of the importance of striking the right balance between difficulty of comprehension and pleasure, and the greater effort required to design successful ads, advertisers and advertising professionals often remain unconvinced that pursuing this course is cost-effective. They need more convincing empirical evidence that producing advertising that makes consumers think harder, but not too hard, can deliver the desired reactions and rewards.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) In reference to the Elaboration Likelihood Model: “Central and Peripheral Routes to Advertising Effectiveness: The Moderating Role of Involvement”, Petty, R.E., Cacioppo, J.T., & Schumann, D., 1983, Journal of Consumer Research, 10 (Sept.), pp. 135-146.

(2) “Brain Areas Underlying Visual Mental Imagery and Visual Perception: an fMRI Study”, Ganis, G., Thompson, W.L., & Kosslyn, S.M., 2004, Cognitive Brain Research, 20, pp. 226-241; “The Role of Imagery Instructions in Facilitating Persuasion in a Consumer Context”, Mani, G. & MacInnis, D.J., 2003, in Persuasive Imagery: A Consumer Response Perspective, Scott, L.M. & Batra, R. (eds.)(pp. 175-187), NJ: Lawrence Erlbaum Associates.

(3) “Visual Rhetoric in Advertising: Text-Interpretive, Experimental, and Reader-Response Analyses”, McQuarrie, E.F. & Mick, D.G., 1999, Journal of Consumer Research, 26 (June), pp. 37-54; also see their other article “The Contribution of Semiotic and Rhetorical Perspectives to the Explanation of Visual Persuasion in Advertising” in Persuasive Imagery: A Consumer Response Perspective (ibid. 2)(pp. 192-221).

(4) “Thinking Into It: Consumer Interpretation of Complex Advertising Images”, Phillips, B.J., 1997, Journal of Advertising, 26 (2), pp. 77-87.

(5) “Easier Is Not Always Better: The Moderating Role of Processing Type on Preference Fluency”, Nielsen, J.P. & Escalas, J.E., 2010, Journal of Consumer Psychology, 20, pp. 295-305. (Available on the website of eLab at Vanderbilt University: http://elab.vanderbilt.edu/research_papers.htm)

(6) “The Role of Imagery in Information Processing: Review and Extensions”, MacInnis, D.J. & Price, L.L., 1987, Journal of Consumer Research, 15 (March), pp. 473-491; “The Role of Imagery Instructions in Facilitating Persuasion in a Consumer Context” (ibid. 2); “The Effects of Information Processing Mode on Consumers’ Response to Comparative Advertising”, Thompson, D.V. & Hamilton, R.W., 2006, Journal of Consumer Research, 32 (March), pp. 530-540. (For more background on decision processes, consult also the work of Payne, Bettman and Johnson on the constructive approach.)

Read Full Post »

Social media networks are flourishing with activity. Most attention is given to Facebook, which reached one billion members in the summer of this year. The lively arena of Facebook, humming with human interaction, and its potential to provide easy access to millions of consumers, soon attracted the interest of marketers. A particular area of interest is the opportunity to study consumer perceptions, attitudes, preferences and behaviour through research activity in online social media networks, primarily Facebook.

We may distinguish two tracks of research:

  • One track entails the collation and analysis of personal content created by network members with minimal or no intervention by companies. This track falls mainly within the domain of Big Data analytics, a field that has evolved dramatically in the past few years and keeps growing. Analytic processes may include text mining in search of keywords and key phrases in discussions, counting the frequencies of “likes”, and tracking movement between pages (a minimal illustration of such keyword counting appears after this list).
  • The other track, which is the focus of this post, includes interaction between a company and consumers, usually within a community or forum set up by the company in its corporate name or in the name of one of its brands (e.g., its “page” on Facebook). This activity may take the form of regular discussions initiated by the company (e.g., introducing an idea or a question on a topic of enquiry on which members are invited to comment), but also invitations to participate in surveys and moderated online focus-group discussions.
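As a purely illustrative sketch of the first track, the short Python snippet below counts keyword frequencies in a handful of invented community comments; the comments, stop-word list and tokenisation are all hypothetical simplifications, and production analytics would of course involve far larger corpora, proper phrase extraction and sentiment analysis.

```python
import re
from collections import Counter

# Hypothetical comments gathered (with permission) from a brand's community page.
comments = [
    "Love the new flavour, but the packaging is hard to open.",
    "Packaging looks great. Will buy again!",
    "The flavour is too sweet for me.",
]

# A tiny stop-word list, for illustration only.
stop_words = {"the", "is", "a", "to", "for", "but", "will", "me", "too"}

# Tokenise, drop stop words, and count keyword frequencies across comments.
tokens = [w for c in comments
          for w in re.findall(r"[a-z']+", c.lower())
          if w not in stop_words]
print(Counter(tokens).most_common(5))
```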

Online marketing research has been prevalent for at least ten years now, and the methods associated with this field, including surveys, experiments and focus-group discussions, continue to improve. However, the belief taking hold among marketers that they can reliably and transparently shift their research studies to the environment of social media is illusory and misleading (see articles in The New York Times and TheMarker [Israel]).

Advantages in speed and cost may tempt marketers to replace established methods with new techniques adapted to social media, or to attempt launching the former from within social media networks. But social media has distinctive features, particularly in the structure of its information and the coverage of its audiences, that do not allow an easy and simple transition into the new environment, and certainly not one that renders traditional marketing research methods redundant.

The problem starts with the “rules of the game” typical of a social media network. The codes or norms of discourse between members of the network do not generally fit well with the requirements of rigorous research tools for data collection. Questions in surveys usually have specially designed structures and formats and are specific in defining what the respondent is asked about; they are formulated to achieve satisfactory levels of validity and reliability. The social network, on the other hand, gives utmost freedom of expression in writing entries or comments and tries to avoid constraining members to particular modes of reply. Questions prompted to members are usually written in everyday friendly language, as informal as possible, and one may normally post one to three questions at most in such a mode of discussion. This lacks the discipline that robust research usually demands. The mode of questioning normally feasible within the pages of a social media website may be acceptable for some forms of qualitative research, but, reasonably, it takes more than a few questions to properly investigate any topic.

A marketer may get some idea of the direction consumers or customers are driving at in their thoughts and feelings by scrutinising their answers subjectively and individually. But it would be presumptuous to derive quantitative estimates at any reasonable level of accuracy (e.g., purchase intentions and willingness-to-pay).

  • Critics of surveys argue that the reliability of responses is often compromised when respondents attempt to second-guess what the client of the survey wants to hear, or when they are subject to “social desirability”, that is, they try to give the answer they believe others would approve of. However, this problem is no less likely to surface in comments in the setting of social media. When writing in their own words in the less formal setting of a social media community, members may feel freer to express their opinions, preferences, thoughts and feelings; yet they are still expressing only what they are ready to share. Furthermore, social media is a great venue for people to promote the way they wish to be perceived by others, that is, their “other-image”, so we should not assume that they are not “fixing” or “improving” some of their answers about their preferences, attitudes, the brands they use, and so on.

One may use a web application to embed a short survey questionnaire in one’s own page or as a pop-up window. The functionality of such surveys is rather limited, with only a few questions, and they are usually more of a gadget than a research tool. The appropriate alternative for launching a more substantive study is to invite and refer participants to a separate specialised website where an online survey is conducted with a self-administered questionnaire, or where a remote focus-group session can be carried out. Here we should become concerned: who answers the survey questions or takes part in a study? Whom do the participants represent?

This concern is more critical in the case of surveys for quantitative research than in forms of qualitative research. Firms are normally allowed and able to address members of their own pages or communities who are “brand advocates” or “brand supporters”. These members-followers are most likely to be customers, but in addition to buying customers they may also include consumers who are simply favourable towards the brand (e.g., for luxury brands). If the target population the marketer wishes to study matches this audience, then it is acceptable to use the social media network as a source, and at least for a qualitative study it can be sufficient and satisfactory. However, for a quantitative study it is vital to meet additional requirements in the selection or sampling of participants in order to allow valid inferences. Unfortunately, the match is in many cases inadequate or very poor (e.g., the pool of accessible members covers only a fraction of the customer base, with particular demographic and lifestyle characteristics). For quantitative research the problem is likely to be more severe because the ability to draw probabilistic samples is limited or non-existent, and recruitment relies mostly on self-selection by volunteering members.

The field of online research is still developing, and issues such as sampling from panels are still debated. There are also misconceptions about the speed of online surveys: in practice one may need to wait as long as a week for late respondents in order to obtain a more representative sample. Yet advocates of marketing research through social media networks like Facebook try, quite prematurely, to pave the way into this special territory, facing even more difficult methodological challenges.

There are certainly advantages to focusing research initiatives on the company’s customers, particularly in matters of new product development. Customers, and possibly even more broadly “brand supporters”, are likely to be more ready and motivated to help their favourite company, contributing their opinions and sharing information about their preferences. They are also likely to have closer familiarity with the company or brand and better knowledge of its products and services than consumers in general. Hearing first what its own customers think of an early idea or a product concept in development makes much sense to help put the company on the right track. However, as the configuration of a product concept becomes more advanced and specific, more specialised research techniques are required to adequately measure preferences or purchase intentions, and wider consumer segments also need to be studied. Even at an early stage of an idea there is a risk of missing out on real opportunities (or vice versa) if an inappropriate audience is consulted or if insufficient and superficial measurement techniques are used. Using the responses from “brand supporters” in a social media network can be productive for an exploratory examination to “test the water” before plunging in with greater financial investment, but such evidence should be evaluated with care; relying on evidence from social media for making final decisions can be reckless and damaging.

Nevertheless, marketers should distinguish between interaction and collaboration between a company and its customers, on the one hand, and research activity, on the other. Not every input should be quickly regarded as data for research and analysis. First of all, mutual communication between customers or advocates and a company/brand is essential to maintaining and enhancing the relationship between them, and the company should therefore encourage customers to interact and, furthermore, to contribute to its function and performance. Hence, when product users offer their own genuine ideas for new products or product improvements (e.g., hobbyists and enthusiasts who develop and build new Lego models), their contributions are welcome and the better ones are implemented. And when a company (the Strauss food company, Israel) gives feedback on its Facebook page about ideas from its followers, indicating which ideas are inapplicable, which may be applied “maybe another time”, and which are in initial review, this activity is to be commended. But these interactions belong in the domain of collaboration, not research. Survey-like initiatives on Facebook may help reinforce a feeling of partnership between a company and its customers (as commented to TheMarker by the Osem food company). An extended debate on this issue of “partnership” questions whether the reward to originators of successful ideas should be only a sense of achievement and contribution, or whether they should also receive material rewards from the benefiting companies.

Social media networks seem appropriate foremost as a source for qualitative research. If those who advocate performing marketing research on Facebook refer primarily to qualitative types of research, then the approach seems reasonable and may often be admissible. It is also generally appropriate for exploratory and preliminary examinations of marketing initiatives, provided it is done with caution in view of the limitations of social media forums. It is much less appropriate as a venue and source for quantitative studies.

While interesting and valid studies can be conducted on how consumers behave in social media websites (e.g., on what subjects they talk, with whom, and the narrative of discourse they use), using a social media network as a source of research on other topics is a different matter. When done for marketing purposes, there are ethical issues regarding analytics of personal content in social media that cannot be fully discussed in the current post. Primarily at stake is the concern whether companies are entitled to analyse the content of conversations between consumers-members, a practice that suggests they are spying on and eavesdropping on network members. Even in discussions on the company’s page, the use of analytic techniques may not be appropriate or effective. Access to background information on members who activate web apps on the company’s page (with their permission) is another contentious issue. For most users, this is the kind of privacy they have to give up in order to participate in a network free of charge, but to what extent will consumers agree to go on like this?

The use of social media networks for marketing research, as well as for analytics, is therefore more complex and less straightforward than many marketers appear to perceive. Above all, explorations in social media should not be viewed as a head-on substitute for the more traditional methods of marketing research.

Ron Ventura, Ph.D. (Marketing)

Read Full Post »

Competition in health-related industries (i.e., health-care services, pharmaceuticals, biotechnology) has been increasing continuously over the past two to three decades. The health business has also become more complex and multilayered, with public and private institutions, individual doctors and patients as players. Consequently, decision processes on medical treatment may become more complicated or variable, making it more difficult to predict which treatment or medication will be administered to patients. For example:

  • For many medical conditions there are likely to be a few alternative brands or versions of the same type of prescribed medication. Depending on the health system in different countries, and on additional situational factors, which particular brand of medication a patient will use may be decided by a physician, a health-care provider and/or insurer, or a pharmacist. In some cases the patient may be allowed to choose between a more expensive brand and an economy brand (e.g., original and generic brands, subsidised and non-subsidised brands).
  •  There are plenty of over-the-counter (OTC) medications, formulae and devices that patients can buy at their own discretion, possibly with a recommendation of a physician or pharmacist.
  • Public and private medical centers and clinics offer various clinical tests and treatments (e.g., prostate screening, MRI scanning, [virtual] colonoscopy), often bypassing the general/family physicians of the patients concerned.
  • In more complex or serious conditions, a patient may choose between having a surgery at a public hospital or at a private hospital, depending on the coverage of his or her health insurance.

In the late 1990s, professionals, executives and researchers in health-related areas developed an interest in methods for measuring preferences that would allow them to better understand how decisions are made by their prospective customers, especially doctors and patients (“end-consumers”). This knowledge serves (a) to address more closely the preferences of patients or the requirements of physicians, and (b) to channel planning, product development or marketing efforts more effectively. In particular, they became interested in methods of conjoint analysis and choice-based conjoint that were already prevalent in marketing research for measuring and analysing preferences. Conjoint methods are based on two key principles: (a) making trade-offs between decision criteria, and (b) decomposing stated preferences with respect to whole product concepts (e.g., a medication) by means of statistical techniques into utility values for the levels of each attribute or criterion describing the product (e.g., administered 2 vs. 4 times in 24 hours). The methods differ, some argue quite distinctly, in the form in which preferences are expressed (i.e., ranking or rating versus choice) and in the statistical models applied (e.g., choice-based conjoint is often identified by its application of discrete choice modelling). An important benefit for pharmaceutical companies, for example, is learning which characteristics of a medication (e.g., an anti-depressant) contribute most to convincing physicians to prescribe it, versus factors like risks or side effects that lead them to avoid it.
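To make the decomposition principle concrete, here is a minimal sketch in Python: it regresses invented preference ratings for four hypothetical medication profiles on dummy-coded attribute levels to obtain part-worth utilities. The attributes, levels and ratings are purely illustrative and are not taken from any study cited in this post.

```python
import numpy as np

# Hypothetical rating-based conjoint data: each profile is a medication concept
# described by two attributes -- dosing frequency (2x vs. 4x per 24h) and
# mild side-effect risk (low vs. moderate). Ratings (0-10) are invented.
profiles = [
    {"dosing": "2x", "risk": "low",      "rating": 9},
    {"dosing": "2x", "risk": "moderate", "rating": 6},
    {"dosing": "4x", "risk": "low",      "rating": 7},
    {"dosing": "4x", "risk": "moderate", "rating": 3},
]

# Dummy-code the attribute levels (reference levels: dosing = 4x, risk = moderate).
X = np.array([[1,
               1 if p["dosing"] == "2x" else 0,
               1 if p["risk"] == "low" else 0] for p in profiles], dtype=float)
y = np.array([p["rating"] for p in profiles], dtype=float)

# Least-squares decomposition of the stated ratings into part-worth utilities.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, pw_dosing_2x, pw_risk_low = coef
print(f"baseline utility: {intercept:.2f}")
print(f"part-worth of dosing 2x (vs. 4x): {pw_dosing_2x:.2f}")
print(f"part-worth of low risk (vs. moderate): {pw_risk_low:.2f}")
```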

The product concepts presented are hypothetical in the sense that they are specified by using controlled experimental techniques and do not necessarily match existing products at the time of study. This property is essential for deriving utility values for the various levels of product attributes studied, and to allow prediction by simulation of shares of preference (“market shares”) for future products. The forecasting power of conjoint models is considered their major appeal from a managerial perspective. In addition, conjoint data can be used for segmenting patients and designing refined targeted marketing strategies.
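As a rough illustration of how such a simulation might work, the sketch below takes illustrative part-worths (of the kind estimated in the previous sketch) and computes shares of preference for a few hypothetical concepts under a simple logit rule; actual conjoint simulators, and all values used here, may of course differ.

```python
import numpy as np

# Illustrative part-worth utilities under an additive model (invented values).
part_worths = {"dosing": {"2x": 2.5, "4x": 0.0},
               "risk":   {"low": 3.0, "moderate": 0.0}}
baseline = 3.0

def utility(concept):
    """Total utility of a product concept under the additive part-worth model."""
    return baseline + sum(part_worths[attr][level] for attr, level in concept.items())

# Hypothetical future products to compare in a simulated market.
concepts = {
    "New medication A": {"dosing": "2x", "risk": "moderate"},
    "New medication B": {"dosing": "4x", "risk": "low"},
    "Current product":  {"dosing": "4x", "risk": "moderate"},
}

# Logit rule: share of preference is proportional to exp(utility).
u = np.array([utility(c) for c in concepts.values()])
shares = np.exp(u) / np.exp(u).sum()
for name, share in zip(concepts, shares):
    print(f"{name}: {share:.1%}")
```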

Interest in the application of conjoint methods in a health context has grown in the past decade. According to a review of conjoint studies reported in 79 articles published between 2005 and 2008, the number of studies nearly doubled, from 16 in 2005 to 29 in 2007, and the researchers estimated that by the end of 2008 the number of published studies would reach 40. The most frequent areas of application have been cancer (15%) and respiratory disorders (12%) (1). However, applications of conjoint techniques can also be found in guiding policy making and the design of health plans in the broader context of health-care services provided to patients (e.g., by HMOs).

Most conjoint studies in health (71%) apply choice experiments and modelling, which became the dominant approach (close to 80%), particularly in 2008. A typical study includes 5 or 6 attributes with 2 or 3 levels per attribute. Most studies taking a choice-based approach involve 7 to 8 scenarios (choice sets), but studies with 10-11 or 14-15 scenarios are also frequent (2). A choice scenario normally includes 3 to 5 concepts, from which a respondent has to choose the single most preferred concept.
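To visualise the structure of such scenarios, the following sketch assembles hypothetical choice sets from a small full-factorial design; a real study would draw scenarios from a statistically efficient (e.g., D-optimal) experimental design rather than a naive random shuffle, so this is only an illustration of the format, with invented attributes and levels.

```python
import itertools
import random

random.seed(42)

# Hypothetical attributes and levels for a small choice-based conjoint design.
attributes = {
    "dosing":       ["2x per day", "4x per day"],
    "side_effects": ["low", "moderate", "high"],
    "onset":        ["fast", "slow"],
}

# Full factorial of product profiles (2 x 3 x 2 = 12 concepts).
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

# Naive assignment into choice sets of 3 concepts each (4 scenarios here);
# practical studies typically present 7-8 or more scenarios drawn from an
# efficient design, not a simple random partition.
random.shuffle(profiles)
choice_sets = [profiles[i:i + 3] for i in range(0, len(profiles), 3)]

for n, cs in enumerate(choice_sets, start=1):
    print(f"Choice set {n}:")
    for concept in cs:
        print("  ", concept)
```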

Interpreting conjoint studies among medical doctors requires a qualification that distinguishes them from studies of patients or consumers: physicians make professional judgements about the most appropriate treatment option for their patients. It is therefore less appropriate to speak of personal preferences in this context; it is more sensible and suitable to talk about the decision criteria that physicians apply, their priorities (i.e., represented by importance weights), and physicians’ requirements from pharmaceutical or other treatment alternatives available in the market.

Including monetary cost in conjoint studies on products and services in health-care may be subject to several complications and limitations, which may be the reason for the relatively low proportion of articles on conjoint studies in health that were found to include prices (40%) (3). For instance, doctors do not take money out of their own pockets to pay for the medications they prescribe, so it is generally less relevant to include price in studies of doctors. It may be sensible, however, to include cost in cases where doctors are allowed to purchase and hold a readily available inventory of medications for their visiting patients in their private clinics (e.g., Switzerland). It may still be useful to examine, when doctors prescribe a medication, how sensitive they are to the cost their patients will have to incur; yet this practice is further complicated because the actual price patients pay for a specific medication is likely to vary according to the coverage of their health plan or insurance. It is appropriate and recommended to include price in studies on OTC medications or health-related devices (e.g., for measuring blood pressure). Aspects of cost can also be included in studies on health plans, such as the percentage discount provided on medications and on other clinical tests and treatments in the plan’s coverage.

An Example of a Conjoint Study on Health-Care Plans:

A choice-based conjoint study was conducted to help a health-care coverage provider assess the potential of a new, modified health plan it was considering launching. Researchers Gates, McDaniel and Braunsberger (4) designed a study with 11 attributes, including provider name (the client and two competitors), the network of physicians accessible, payment per doctor visit, prescription coverage, doctor quality, hospital choice, monthly premium, and additional attributes. Each respondent was presented with 10 choice sets; in each set he or she had to choose one of four plans. This setting was chosen so that in subsequent simulations the researchers could more accurately test scenarios with the existing plans of the three providers plus a new plan by the client-provider. The study was conducted by mail among residents of a specific US region; beforehand, a qualitative study (focus-group discussions) and a telephone survey were carried out to define, screen and refine the set of attributes included in the conjoint study. A total of 506 health-care patients returned the mail questionnaire (a 71% response rate among those in the phone survey who agreed to participate in the next phase).

The estimated (aggregate) utility function suggested to the researchers that the attributes could be divided into two classes of importance: primary criteria for choosing a health plan and secondary considerations. The primary criteria focused on the access allowed to doctors in the region of residence and the costs associated with the plan, representing the more immediate concerns of target consumers in choosing a health-care plan from an HMO. The study largely confirmed that consumers are less concerned by a narrower network of doctors they may visit, as long as they can keep their current family physician and are not forced to replace him or her with another on the list. Respondents appeared to rely less on reported quality ratings of doctors and hospitals, and vision tests and dental coverage were among the secondary considerations. Managers could thereby examine candidate modifications to their health plan and estimate their impact on market shares.

Conjoint methods offer professionals and managers in health-related organizations research tools for gaining valuable insights into patient preferences or into the criteria governing doctors’ clinical decisions on medications and other treatments. These methods can be particularly helpful in guiding the development of pharmaceutical products or of instruments for performing clinical tests and treatments, when issues of marketing and promoting them to decision makers come into play. As illustrated in the example, findings from conjoint studies can also be useful in policy making on health-care services and in designing health plans attractive to patients. This kind of research-based knowledge is acknowledged ever more widely as a key to success in the highly competitive environment of health-care.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Conjoint Analysis Applications in Health – How Are Studies Being Designed and Reported? An Update of Current Practice in the Published Literature Between 2005 and 2008”, D. Marshall, J.F.P. Bridges, B. Hauber, R. Cameron, L. Donnalley, K. Fyie, and F.R. Johnson, 2010, The Patient: Patient-Centered Outcomes Research, 3 (4), pp. 249-256.

(2) Ibid. 1.

(3) Ibid. 1.

(4) “Modeling Consumer Health Plan Choice Behavior to Improve Customer Value and Health Plan Market Share”, Roger Gates, Carl McDaniel, and Karin Braunsberger, 2000, Journal of Business Research, 48, pp. 247-257 (the research was executed by DSS Research, with which Gates is affiliated).

Additional sources:

A special report on conducting conjoint studies in health was prepared in 2011 by a task force of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR). The authors provide methodological recommendations for guiding the planning, design, analysis and reporting of conjoint studies in health-related domains.

“Conjoint Analysis Applications in Health – A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force”, John F.P. Bridges, A. Brett Hauber, et al., 2011, Value in Health, 14, pp. 403-413.

http://www.ispor.org/taskforces/documents/ISPOR-CA-in-Health-TF-Report-Checklist.pdf

Read Full Post »