
Posts Tagged ‘Research’

One of the more difficult and troublesome decisions in brand management arises when entering a product category that is new to the company: whether to launch a new brand for the product or to endow it with the identity of an existing brand — that is, extending a company’s established brand from its original product category to a product category of a different type. The first question that will probably pop up is “how different is the new product?”, which acts as a prime criterion for judging whether the parent-brand fits the new product.

Nevertheless, the choice is not completely ‘black or white’, since intermediate solutions are possible through the intricate hierarchy of brand (naming) architecture. But focusing on the two more distinct strategic branding options above helps to see more clearly the different risk and cost implications of launching a new product brand versus using the name of an existing brand from an original product category. Notably, manufacturers, retailers and consumers all perceive risks, albeit each party from a different perspective given its role.

  • Note: Brand extensions represent the transfer of a brand from one type of product to a different type, to be distinguished from line extensions that pertain to the introduction of variants within the same product category (e.g., flavours, colours).

This is a puzzling marketing and branding problem from an academic perspective as well. Multiple studies have attempted, in different ways, to identify the factors that best explain or account for successful brand extensions. While the stream of research on this topic helpfully points to major factors, some more commonly agreed upon than others, a gap remains between the sorts of extensions the studies predict will succeed and the extensions companies actually perform that succeed or fail in the markets. A plausible reason for missing the outcomes of actual extensions, as argued by the researchers Milberg, Sinn, and Goodstein (2010), is neglecting the competitive settings in the categories that are the target of brand extension (1).

Perhaps one of the most famous examples of a presumptuous brand extension is the case of Virgin (UK), which moved from music to cola (drink), airline, train transport, and mobile communication (ironically, the brand’s origin in Virgin Music has since been discontinued). The success of Virgin’s distant extensions is commonly attributed to the personal character of Richard Branson, the entrepreneur behind the brand: his boldness, initiative, willingness to take risks, and sense of adventure. These traits seem to have transferred to his business activities and helped to make the extensions more credible and acceptable to consumers.

Another good example is Philips (which originated in the Netherlands). Starting from lighting (bulbs, now mostly LED), the brand extended over the years to personal care (e.g., shavers for men, hair-removal devices for women), sound and vision (e.g., televisions, DVD and Blu-ray players, originally radio sets), PC products, tablets and phones, and more. Still, when looking overall at the different products, systems and devices sharing the Philips brand, most can be linked as members of a broad category of ‘electrics and electronics’, a primary competence of the company. As the company grew, launched more types of products while advancing technologically, and its Philips brand came to be perceived as having greater experience and a good record in brand extensions, this could facilitate market acceptance of further extensions to additional products.

  • In the early days, from the 1930s to the 1950s, radio and TV sets relied on vacuum tubes for their operation, later moving to electronic circuits with transistors or digital components. Hence, historically there was an apparent physical-technological connection between those products and the brand’s origin in light bulbs, a connection much harder to find now between category extensions, except for the broad category linkage suggested above.

Academic research has examined a range of ‘success factors’ of brand extensions, such as: perceived quality of the parent-brand; fit between the parent-brand and the extension category; degree of difficulty in making an extension (challenge undertaken); parent-brand conviction; parent-brand experience; marketing support; retailer acceptance; perceived risk (for consumers) in adopting the brand extension; consumer innovativeness; consumer knowledge of the parent-brand and category extension; the stage of entry into another category (i.e., as an early or a late entrant). The degree of fit of the parent-brand (and original product) with the extension category is revealed as the most prominent factor contributing to better acceptance and evaluation (e.g., favourability) of the extension in consumer studies.

Aaker and Keller specified in a pioneering article (1990) two requirements for fit: (a) the extension product category is a direct complement to or a substitute for the original category; (b) the company, with its people and facilities, is perceived as having the knowledge and capability to manufacture the product in the extension category. These requirements reflect a similarity between the original and extension product categories that is necessary for a successful transfer of a favourable attitude towards the brand to the extension product type (2). A successful transfer of attitude may also occur, however, if the parent-brand has values, a purpose or an image that seem relevant to the extension product category, even when the technological linkage is less tight or apparent (as the case of Virgin suggests).

  • Aaker and Keller found that fit, based especially on competence, stands out as a contributing factor to higher consumer evaluation (level of difficulty is a secondary factor while perceived quality plays more of a ‘mediating’ role).

Volckner and Sattler (2006) worked to sort out the contributions to brand extension success of ten factors retrieved from the academic literature; the relations were refined with the aid of expert advice from brand managers and researchers (3). Contribution was assessed in their model in terms of (statistical) significance and relative importance. The researchers found fit to be the most important factor driving (perceived) brand extension success in their study, followed by marketing support, parent-brand conviction, retailer acceptance, and parent-brand experience. The complete model tested for more complex structural relationships represented through mediating and moderating (interacting) factors (e.g., the effect of marketing support on extension success ‘passes’ through fit and retailer acceptance).

For brand extensions to be accepted by consumers and garner a positive attitude, consumers should recognise a connectedness or linkage between the parent-brand and the category extension. The fit between them can be based on attributes of the original and extension product types or on a symbolic association. Keller and Lehmann (2006) conclude in this respect that “consumers need to see the proposed extension as making sense” (emphasis added). They identify product development, applied via brand (and line) extensions, as a primary driver of brand growth, thereby adding to parent-brand equity. Parent-brands do not tend to be damaged by unsuccessful brand extensions, yet the authors point to circumstances where greater fit may result in a negative effect on the parent-brand, and, inversely, where joining a new brand name with the parent-brand (as its endorser) may protect the parent-brand from adverse outcomes of an extension failure (4).

When assessing the chances of success of a brand extension, it is nevertheless important to consider which brands are already present in the extension category the company is about to enter. Milberg, Sinn, and Goodstein claim that this factor has not received enough attention in research on brand extensions. In particular, one has to take into account the strength of the parent-brand relative to competing brands incumbent in the target category. As a starting point, they chose to focus on how familiar consumers are with the competitor brands vis-à-vis the extending brand. Milberg and her colleagues proposed that a brand extension can succeed despite a worse fit with the category extension thanks to an advantage in brand familiarity, and vice versa. Consumer response to brand extensions was tested on two aspects: evaluation (attitude) and perceived risk (5).

First, it should be noted, the researchers confirm the positive effect of better fit on consumer evaluation of the brand extension when no competitors are considered. The better fitting extension is also perceived as significantly less risky than a worse fitting extension. However, Milberg et al. obtain supportive evidence that in a competitive setting, facing less familiar brands can improve the fortunes of a worse fitting extension compared with being introduced in a noncompetitive setting: when the incumbent brands are less familiar relative to the parent-brand, the evaluation of the brand extension is significantly higher (more favourable) and purchasing its product is perceived as less risky than when no competition is referred to.

  • A reverse outcome is found in the case of better fit where the competitor brands are highly familiar: a disadvantage in brand familiarity can dampen the brand extension evaluation and increase the sense of risk in purchasing from the extended brand, compared with a noncompetitive setting.

Two studies show how considering differences in brand familiarity can change the picture of the effect of brand extension fit from that often found without accounting for competing brands in the extension category.

When comparing different competitive settings, the research findings provide more qualified support, but in the direction expected by Milberg and colleagues. The conditions tested entailed a trade-off between (a) a worse fitting brand extension competing with less familiar brands; and (b) a better fitting brand extension competing with more familiar brands. In regard to these competitive settings:

The first study showed that the evaluation of a worse fitting extension competing with relatively unfamiliar brands is significantly more favourable than that of a better fitting extension facing more familiar brands. Furthermore, the product of the worse fitting brand extension is preferred more frequently over its competition than the better fitting extension product is (chosen by 72% vs. 6% of respondents, respectively). Also, purchasing a product from the worse fitting brand extension is perceived as significantly less risky than purchasing from the better fitting brand. These results indicate that the relative familiarity of the incumbent brands an extension faces can matter more to its odds of success than how well it fits.

The second study aimed to generalise the findings to different parent-brands and product extensions. It challenged the brand extensions with somewhat more difficult conditions: it included categories that are all relevant to the respondents (students), so the competitor brands in the extension categories were also relatively more familiar to them than in the first study. The researchers acknowledge that the findings are less robust with respect to comparisons of the contrasting competitive settings. Evaluation and perceived risk related to the worse fitting brand competing with less familiar brands are equivalent to those of the better fitting brand extension facing more familiar brands. The gap in choice shares narrows, though in this case it is still statistically significant (45% vs. 15%, respectively). Facing less familiar brands may not improve consumers’ response to the worse fitting brand extension (i.e., it does not overcome the effect of fit), but at least it puts the extension in a position as good as that of the better fitting brand extension competing in a more demanding setting.

  • Perceived risk intervenes in a more complicated relationship as a mediator of the effect of fit on brand extension evaluation, and also as a mediator of the effect of relative familiarity in competitive settings. Mediation implies, for example, that a worse fitting extension evokes greater risk, which in turn lowers the brand extension evaluation; consumers may seek more familiar brands to alleviate that risk.
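
To make the mediation logic concrete, here is a minimal sketch in Python with synthetic data (not the authors' dataset or analysis) of how the fit, perceived risk and evaluation chain could be probed with simple regressions in the spirit of Baron and Kenny; the variable names and effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic respondent-level data: worse fit raises perceived risk,
# which in turn lowers the evaluation of the extension.
rng = np.random.default_rng(0)
n = 200
fit = rng.normal(size=n)
perceived_risk = -0.6 * fit + rng.normal(size=n)
evaluation = 0.3 * fit - 0.5 * perceived_risk + rng.normal(size=n)
df = pd.DataFrame({"fit": fit, "perceived_risk": perceived_risk, "evaluation": evaluation})

total  = smf.ols("evaluation ~ fit", data=df).fit()                    # total effect of fit
a_path = smf.ols("perceived_risk ~ fit", data=df).fit()                # fit -> risk
b_path = smf.ols("evaluation ~ fit + perceived_risk", data=df).fit()   # risk -> evaluation, controlling for fit

# Mediation is suggested if fit predicts risk, risk predicts evaluation,
# and the direct effect of fit shrinks once risk is controlled for.
indirect = a_path.params["fit"] * b_path.params["perceived_risk"]
print("total:", round(total.params["fit"], 2),
      "direct:", round(b_path.params["fit"], 2),
      "indirect via risk:", round(indirect, 2))
```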

A parent-brand can have an advantage in an extension category even when it encounters brands that are familiar within that category and may even be considered experts in the field: if the extending brand leads its original category and is well known beyond it, this can give it leverage over incumbents that are more ‘local’ or specific to the extension category. For example, it would be easier for Nikon, a leading camera brand, to extend to binoculars (better fit), where it meets brands like Bushnell and Tasco, than to scanners (also better fit), where it has to face brands like HP and Epson. In the case of worse fitting extensions, it could matter for Nikon whether it extends to CD players and competes with Sony and Pioneer or extends to laser pointers and faces Acme and Apollo — in the latter case it may enjoy the kind of leverage that can overcome a worse fit. (Product and brand examples are borrowed from Study 1.) Further research may enquire whether this works better for novice consumers than for experts. Milberg, Sinn and Goodstein recommend considering additional characteristics on which brands may differ (e.g., attitude, image, country of origin), suggesting more potential bases of strength.

Entering a new product category is often a difficult challenge for a company, and choosing the more appropriate branding strategy for launching the product can be even more delicate and consequential. If management chooses to make a brand extension, it should consider the relative strength of its parent-brand (such as familiarity) against the incumbent brands of the category it plans to enter, in addition to a variety of other characteristics of the product types and its brand identity. However, managers can also take advantage of intermediate solutions in brand architecture that combine a new brand name with an endorsement by an established brand (e.g., a higher-level brand for a product range). Choosing the better branding strategy may be aided by a better understanding of the differences and relations (e.g., hierarchy) between product categories as perceived by consumers.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. Consumer Reactions to Brand Extensions in a Competitive Context: Does Fit Still Matter?; Sandra J. Milberg, Francisca Sinn, & Ronald C. Goodstein, 2010; Journal of Consumer Research, 37 (October), pp. 543-553.

2. Consumer Evaluations of Brand Extensions; David A. Aaker and Kevin L. Keller, 1990; Journal of Marketing, 54 (January), pp. 27-41.

3. Drivers of Brand Extension Success; Franziska Volckner and Henrik Sattler, 2006; Journal of Marketing, 70 (April), pp. 18-34.

4. Brands and Branding: Research Findings and Future Priorities; Kevin L. Keller and Donald R. Lehmann, 2006; Marketing Science, 25 (6), pp. 740-759.

5. Ibid. 1.


It is usually not a pleasant feeling to be alone in a scary place or event — think of being stuck in a dark elevator or being involved in a car accident. People commonly seek to be with someone for comfort and company. But the companion does not always have to be another person. Research by Dunn and Hoegg (2014) corroborates that the need to share fear matters to humans, while the identity of the companion, whether a person or an object, is less critical. More specifically, sharing fear with a product from an unfamiliar brand may facilitate a quick emotional attachment with that brand, without requiring a relationship to be built over a lengthy period of time (1).

Fear is evoked by the presence or anticipation of a danger or threat. Fear may be triggered by an unfamiliar event to which one is unsure how to respond (uncertainty) or by an unexpected event at a specific moment (surprise); experiencing fear is all the more likely when the event encountered is both unfamiliar and unexpected. It is important to note, nonetheless, that not every encounter with an unfamiliar or unexpected event necessarily leads to fear. The amygdala, in the temporal lobe of the brain, is the “centre” where fear arises. However, the amygdala, like other brain structures, is responsible for multiple functions. It is activated in response to unfamiliarity, unpredictability or ambiguity, but not every instance necessarily means the evocation of fear. For example, tension from facing an unfamiliar problem that one is at a loss how to solve may not result in fear. Additionally, fear, like other emotional states, is the outcome of an appraisal of physical feelings (e.g., faster heartbeat, startle, warmth) in view of the conditions in which they were triggered; it is a cognitive interpretation of their meaning (“why do I feel this way?”). Activation of other brain structures together with the amygdala may influence whether similar feelings triggered by an unexpected event are interpreted, for instance, as fear, anger, or surprise. The context in which an event occurs can matter a great deal for the appraisal of emotions (2).

Dunn and Hoegg emphasise the emotional charge of consumer attachment to a brand over its cognitive underpinnings. Brand attachment has often been conceptualised as the product of a relationship between consumers and the target brand built over time. Achieving a more solid brand attachment should take longer because of the cognitive processes needed to establish brand connections in memory and stronger favourable brand attitudes. However, this explanation is open to the criticism that it misses the important role of emotions in bonding between consumers and a brand, which does not necessarily require a long time. By focusing their studies on unfamiliar brands, Dunn and Hoegg intended to show that emotional attachment can emerge much more quickly when consumers are distressed and looking for a partner to share their fear with, and that partner or companion can be the brand of a given product.

On the same grounds, the researchers chose a scale of emotional attachment (Thomson, MacInnis and Park, 2005 [3]) as more appropriate than a scale that combines emotional and cognitive aspects of attachment and gives greater weight to cognitive constructs (Park, MacInnis et al., 2010 [4]). The emotional scale comprises three dimensions: (a) Affection (affectionate, friendly, loved, peaceful); (b) Passion (passionate, delighted, captivated); (c) Connection (connected, bonded, attached); a simple scoring sketch appears further below. Nevertheless, in their later research Park, MacInnis and colleagues offer a broader perspective that accounts for two bases of brand attachment: (i) a connection between the self-concept and a brand; and (ii) brand prominence in memory.

While ‘brand prominence’ can be regarded as more cognitive-oriented (accessibility of thoughts and feelings in memory), a ‘brand-self connection’ entails the expansion of one’s concept of self to incorporate others, such as brands, within it — and that involves an emotional element. Park, MacInnis et al. emphasise the brand-self connection as the emotional core of their definition of brand attachment, while brand prominence is a facilitator in actualising the attachment (their analyses substantiate that brand attachment is a better predictor than attitudes of intentions to perform more difficult types of behaviour reflecting commitment, and that the brand-self connection is the more essential driver of this behaviour). The three-dimension scale of emotional brand attachment seems highly relevant for the research goals of Dunn and Hoegg, even though it is more restricted from the standpoint of the theoretical roots of brand attachment.
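
For illustration, here is a minimal sketch of how the three-dimension emotional attachment scale listed above might be scored; the respondent's item ratings and the simple item-averaging are assumptions for the example, not the authors' procedure.

```python
# Items per dimension, following the list in the text above.
DIMENSIONS = {
    "affection":  ["affectionate", "friendly", "loved", "peaceful"],
    "passion":    ["passionate", "delighted", "captivated"],
    "connection": ["connected", "bonded", "attached"],
}

def score_attachment(ratings: dict) -> dict:
    """Average the item ratings within each dimension (hypothetical 1-7 scale)."""
    return {dim: sum(ratings[item] for item in items) / len(items)
            for dim, items in DIMENSIONS.items()}

# Hypothetical ratings for one respondent.
respondent = {"affectionate": 5, "friendly": 6, "loved": 4, "peaceful": 5,
              "passionate": 3, "delighted": 4, "captivated": 4,
              "connected": 5, "bonded": 4, "attached": 5}
print(score_attachment(respondent))
```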

The desire to affiliate with others in frightening and upsetting situations is recognised as a mechanism for coping with negative emotions in those situations. Episodes of armed conflict, terrorist attacks, and natural disasters make people get closer to each other, unite and show solidarity. However, the researchers note that the act of affiliation is essential for coping rather than the affiliation target. That is, the literature on affiliation or attachment relates to interpersonal connections as well as attachment to objects (although objects are viewed as substitutes in the absence of other persons [pets should also be considered]). Support for possible attachment to products and their brands can be found in the human tendency to animate or anthropomorphise objects by assigning them traits of living beings, whether animals or humans. Brands may be animated to help consumers relate to them more comfortably, making the brands appear more vivid. It is one of the processes that facilitate the development of consumers’ relationships with the brands they use; consumers also connect with brands through the role brands fulfil in their personal history, heritage and family traditions, and through how brands integrate into their preferred lifestyles (5).

Dunn and Hoegg investigate how consumers connect with a brand on occasions of incidental fear. They make a clear distinction between events that may trigger fear (or other emotions) and fear appeals strategically planned in advertising (e.g., in order to induce a particular desired behaviour). Events that incidentally cause fear are independent and uncontrolled. Additionally, the intensity and range of emotions felt are expected to differ when consumers actively participate in an event, and hence experience it directly, in contrast to watching TV ads — in direct consumer experiences, emotional feelings are likely to be more intense and specific. In a model for measuring consumption emotions developed and tested by Richins, fear is characterised as a negative and more active (as opposed to receptive) emotion, next to other emotions such as anger, worry, discontent, sadness and shame (6).

  • In their experiments, the researchers try to emulate incidental fear by showing participants clips from films or TV series episodes, and they present evidence that the manipulations successfully elicited the intended emotions as dominant in response to each video clip. Yet, it remains somewhat ambiguous how real and direct the experience of watching scenes in a film or a TV programme is perceived and felt with regard to the emotions evoked.

The following are more concrete findings from the studies and their insights:

Emotional brand attachment is generated through the perception that the brand shares the fear with the consumer — Study 1 confirms that emotional attachment with an unfamiliar brand is generated when a product (juice) by that brand is present and can be consumed during the fear-inducing experience (more so than for the emotions of sadness, excitement and happiness). Moreover, it is shown that the emotional attachment is mediated by the consumer’s perception that the brand shared the fear with him or her.

Humans precede product brands — Sharing fear with a brand contributes to stronger emotional brand attachment, but only if consumers still have a fear-generated desire to affiliate with others. If, conversely, that desire is satiated because consumers perceive themselves as already socially affiliated with other people, the effect on brand attachment is muted.

  • Note: Participants in Study 2 were asked to perform a word-search task with words related to feelings of affiliation and social connectedness (e.g., included, accepted, involved) to prime affiliation. Given the statements used to measure (non-)affiliation (e.g., “I feel disconnected from the world around me”), it is a little questionable how effective such a priming condition could be (though the authors show it was sufficient). It might have been more tangible to ask participants to think of people dear to them, family and close friends, and write about them.

Balancing negative and positive emotional effects on attitudes — Based on analyses in Study 2 the researchers also suggest that increased positive effect of emotional brand attachment may counterbalance and override a negative influence of ‘affect transfer’ on attitudes due to fear.

Presence of the brand and attention to it are both necessary and sufficient — Study 3 demonstrates that neither consumption of the product (juice) nor even touching it (the bottle), both forms of physical interaction, is really needed for feeling affiliated and forming emotional attachment — forced consumption, in particular, does not contribute to stronger perceived sharing or emotional attachment than merely seeing the product when feeling fear, that is, making eye contact and visually attending to the product in search of a companion. (Unexpectedly, in the case of action and excitement, consuming the drink does increase emotional attachment.) Study 4 stresses, nevertheless, that the brand must be present during the emotional event to generate increased emotional attachment — having the brand nearby while experiencing the fear is essential for consumers to feel connected with the brand as their sharing partner (tested with a different product, potato chips).

The research paper suffers from a deficit on the practical side. Marketing managers and professionals might be disappointed to discover how difficult it could be to have any control over situations of incidental fear and to act on them to their advantage. In order to have any influence on the consumer, a company would need to anticipate an individual event in advance and find a way to intervene (i.e., make its product present) without being perceived as too intrusive or self-interested — two non-negligible challenges. An additional restriction is that the ‘fear effect’ applies to brands not previously familiar to the consumers.

Let us consider some potential scenarios where brands might benefit and the difficulties likely to arise in implementing them:

Undertaking medical treatments or tests — Some treatments can be alarming and frightening on occasion to different patients. A sense of fear is likely to set in already, and perhaps especially, while waiting. This is an opportunity for introducing the brand-companion in the waiting hall, even more so given that patients are usually not allowed to use personal items such as food and drinks during the treatment itself. First, a company may have difficulty obtaining access to places where patients wait for treatment. Second, consumer-patients are likely to bring products with them from home to keep themselves occupied (of brands they know). Third, patients often arrive with a family member or friend as a companion, thus satisfying their need for affiliation with another person, which dominates affiliation with an object. Still, there is room for ingenuity in how to place the brand close enough to the treatment episode (e.g., shops offering books or toys, especially for children, on the premises of a clinic or hospital).

Trekking or hiking in nature — Some routes, particularly in mountainous areas, can be quite adventurous, not to say dangerous. If a brand could find a way to introduce its product just before the consumer starts the hiking trip, it may benefit from being with him or her if fear arises. One problem is that hikers are advised, and sometimes required, not to embark alone on the more dangerous routes. Another problem is that trekking or hiking sites often offer local brands that, while unfamiliar to the consumers, are also unlikely to be available to them at home, so the opportunity to develop a relationship based on the early emotional attachment is lost.

Offering legal, financial, insurance, and technical services in events of crisis — On various occasions of accidents, malfunctions, and disasters, people need help to cope with the crisis and the negative emotions it may evoke, particularly fear. A service provider would be expected to counsel the customer in his or her distress, and of course propose a solution (e.g., how to fix one’s home after a fire or an earthquake). Unfortunately, one cannot make eye contact with an intangible service. The company has to find creative and practical ways to make itself readily visible and accessible to the consumer when needed, by offering instruments and cues for making contact (e.g., an alarm and communication device for the elderly and people with riskier medical conditions).

  • Dunn and Hoegg are aware of the limitation of the findings to unfamiliar brands. They reasonably propose that “because fear leads to a general motivation to affiliate, emotional brand attachment would be enhanced regardless of the familiarity with the brand” (p. 165). It should take further research, however, to substantiate this proposition.

Despite the possible difficulties companies will likely need to deal with, the doors are not completely shut to them to benefit from this phenomenon. But they must come up with creative and non-intrusive solutions to make their brands and products present in the right place at the right time. At the very least, marketers should be aware of the potential effect of sharing fear with the consumer and understand how it can work to the brand’s benefit. It is worth remembering, after all, the saying “a friend in need is a friend indeed”, and in some incidents the friend can be a brand.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “The Effect of Fear on Emotional Brand Attachment”; Lea Dunn and JoAndrea Hoegg, 2014; Journal of Consumer Research, 41 (June), pp. 152-168.

(2) “What Is Emotion?: History, Measures and Meanings”; Jerome Kagan, 2007; New Haven and London: Yale University Press. Also see: “The Experience of Emotion”; Lisa Feldman Barrett, Batja Mesquita, Kevin N. Ochsner, & James J. Gross, 2007; Annual Review of Psychology, 58, pp. 373-403.

(3) “The Ties That Bind: Measuring the Strength of Consumers’ Emotional Attachments to Brands”; Matthew Thomson, Deborah J. MacInnis, & C. Whan Park, 2005; Journal of Consumer Psychology, 15 (1), pp. 77-91.

(4) “Brand Attachment and Brand Attitude Strength: Conceptual and Empirical Differentiation of Two Critical Brand Equity Drivers”; C. Whan Park, Deborah J. MacInnis, Joseph Priester, Andreas B. Eisingerich, & Dawn Iacobucci, 2010; Journal of Marketing, 74 (November), pp. 1-17.

(5) “Consumers and Their Brands: Developing Relationship Theory in Consumer Research”; Susan Fournier, 1998; Journal of Consumer Research, 24 (March), pp. 343-373.

(6) “Measuring Emotions in the Consumption Experience”; Marsha L. Richins, 1997; Journal of Consumer Research, 24 (September), pp. 127-146.


Human thinking processes are rich and variable, whether in search, problem solving, learning, perceiving and recognising stimuli, or decision-making. But people are subject to limitations on the complexity of their computations and especially on the capacity of their ‘working’ (short-term) memory. As consumers, they frequently have to struggle with large amounts of information on numerous brands, products or services with varying characteristics, available from a variety of retailers and e-tailers, stretching the consumers’ cognitive abilities and patience. Wait no longer: a new class of increasingly intelligent decision aids is being put forward to consumers by the evolving field of Cognitive Computing. Computer-based ‘smart agents’ will get smarter, yet most importantly, they will be more human-like in their thinking.

Cognitive computing is set to upgrade human decision-making, consumers’ in particular. According to IBM, a leader in this field, cognitive computing is built on methods of Artificial Intelligence (AI) yet intends to take the field a leap forward by making it “feel” less artificial and more similar to human cognition. That is, a human-computer interaction will feel more natural and fluent if the thinking processes of the computer more closely resemble those of its human users (e.g., manager, service representative, consumer). Dr. John E. Kelly, SVP at IBM Research, provides the following definition in his white paper introducing the topic (“Computing, Cognition, and the Future of Knowing”): “Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans. Rather than being explicitly programmed, they learn and reason from interactions with us and from their experiences with their environment.” The paper seeks to rebut claims of any intention behind cognitive computing to replace human thinking and decisions. The motivation, as suggested by Kelly, is to augment human ability to understand and act upon the complex systems of our society.

Understanding natural language was long a human cognitive competence that computers could not imitate. However, comprehension of natural language, in text or speech, is now considered one of the important abilities of cognitive computing systems. Another important ability concerns the recognition of visual images and of objects embedded in them (e.g., face recognition receives particular attention). Furthermore, cognitive computing systems are able to process and analyse unstructured data, which constitutes 80% of the world’s data according to IBM. They can extract contextual meaning so as to make sense of the unstructured data (verbal and visual). This is a marked difference between the new cognitive computing systems and traditional information systems.

  • The Cognitive Computing Forum, which organises conferences in this area, lists a dozen characteristics integral to those systems. In addition to (a) natural language processing; and (b) vision-based sensing and image recognition, they are likely to include machine learning, neural networks, algorithms that learn and adapt, semantic understanding, reasoning and decision automation, sophisticated pattern recognition, and more (note that there is an overlap between some of the methodologies on this list). They also need to exhibit common sense.

The power of cognitive computing is derived from combining cognitive processes attributed to the human brain (e.g., learning, reasoning) with the enhanced computation (complexity, speed) and memory capabilities of advanced computer technologies. In terms of intelligence, it is acknowledged that the cognitive processes of the human brain are superior to what computers have been able to achieve through conventional programming. Yet, the actual performance of human cognition (‘rationality’) is bounded by memory and computation limitations. Hence, we can employ cognitive computing systems that are capable of handling much larger amounts of information than humans can, while using cognitive (‘neural’) processes similar to humans’. Kelly posits in IBM’s paper: “The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense and ethical values.” It is not yet sufficiently understood how cognitive processes physically occur in the human central nervous system. But, it is argued, knowledge and understanding of their operation or neural function have grown sufficiently for at least some of them to be emulated by computers. (This argument refers to the concept of different levels of analysis that may, and should, prevail simultaneously.)

The distinguished scholar Herbert A. Simon studied thinking processes from the perspective of information processing theory, which he championed. In the research he and his colleagues conducted, he traced and described in a formalised manner strategies and rules that people utilise to perform different cognitive tasks, especially solving problems (e.g., his comprehensive work with Allen Newell on Human Problem Solving, 1972). In his theory, any strategy or rule specified — from more elaborate optimizing algorithms to short-cut rules (heuristics) — is composed of elementary information processes (e.g., add, subtract, compare, substitute). On the other hand, strategies may be joined in higher-level compound information processes. Strategy specifications were subsequently translated into computer programmes for simulation and testing.

Simon’s main objective was to gain a better understanding of human thinking and the cognitive processes involved in it. He proclaimed that computer thinking is programmed in order to simulate human thinking, as part of an investigation aimed at understanding the latter (1). Thus, Simon did not explicitly aim to overcome the limitations of the human brain, but rather to simulate how the brain may work around those limitations to perform various tasks. His approach, followed by other researchers, was based on recording how people perform given tasks and testing the efficacy of the process models through computer simulations. This course of research differs from the goals of the new cognitive computing.

  • We may identify multiple levels in research on cognition: an information processing level (‘mental’), a neural-functional level, and a neurophysiological level (i.e., how elements of thought emerge and take form in the brain). Moreover, researchers aim to obtain a comprehensive picture of brain structures and areas responsible for sensory, cognitive, emotional and motor phenomena, and how they inter-relate. Progress is made by incorporating methods and approaches of the neurosciences side-by-side with those of cognitive psychology and experimental psychology to establish coherent and valid links between those levels.

Simon created explicit programmes of the steps required to solve particular types of problems, though he also aimed to develop more generalised programmes that would be able to handle broader categories of problems (e.g., the General Problem Solver embodying the Means-End heuristic) and other cognitive tasks (e.g., pattern detection, rule induction) that may also be applied in problem solving. Yet cognitive computing seeks to reach beyond explicit programming and to construct guidelines for far more generalised processes that can learn and adapt to data, and handle broader families of tasks and contexts. If necessary, computers would generate their own instructions or rules for performing a task. In problem solving, computers are taught not merely how to solve a problem but how to look for a solution.
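
To give a flavour of the kind of heuristic Simon formalised, here is a minimal sketch of a means-end loop on a toy "get to the airport" problem; the states, differences and operators are my own illustrative choices, not taken from the original General Problem Solver.

```python
def means_end(state, goal, rules):
    """Repeatedly detect a difference between the current state and the goal
    and apply an operator declared to reduce that difference."""
    plan = []
    while state != goal:
        for detect, apply_op, name in rules:
            if detect(state, goal):            # this difference is present
                state = apply_op(state, goal)  # apply the difference-reducing operator
                plan.append(name)
                break
        else:
            return None                        # no operator reduces any remaining difference
    return plan

# Toy problem: get from home to the airport (hypothetical states and operators).
rules = [
    (lambda s, g: s["location"] != g["location"] and not s["has_ticket"],
     lambda s, g: {**s, "has_ticket": True}, "buy ticket"),
    (lambda s, g: s["location"] != g["location"] and s["has_ticket"],
     lambda s, g: {**s, "location": g["location"]}, "travel"),
]

start = {"location": "home", "has_ticket": False}
goal = {"location": "airport", "has_ticket": True}
print(means_end(start, goal, rules))  # ['buy ticket', 'travel']
```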

While cognitive computing can employ greater memory and computation resources than are naturally available to humans, there is no real attempt to create a fully rational system. The computer cognitive system should retain some properties of bounded rationality, if only to maintain resemblance to the original human cognitive system. First, forming and selecting heuristics is an integral property of human intelligence. Second, cognitive computing systems try to exhibit common sense, which may not be entirely rational (i.e., it is based on instinct and experience), and to introduce effects of emotions and ethical or moral values that may alter or interfere with rational cognitive processes. Third, cognitive computing systems are allowed to err:

  • As Kelly explains in IBM’s paper, cognitive systems are probabilistic, meaning that they have the power to adapt and interpret the complexity and unpredictability of unstructured data, yet they do not “know” the answer and therefore may make mistakes in assigning the correct meaning to data and queries (e.g., IBM’s Watson misjudged a clue in the quiz game Jeopardy against two human contestants — nonetheless “he” won the competition). To reflect this characteristic, “the cognitive system assigns a confidence level to each potential insight or answer”.
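
A minimal sketch of that idea: candidate answers are returned with confidence levels rather than a single "known" answer, and the system only commits when confidence is high enough. The evidence scores, softmax conversion and threshold are illustrative assumptions (the city names are merely a nod to the Jeopardy example above).

```python
import math

def with_confidence(candidates, threshold=0.5):
    """Convert raw evidence scores into softmax confidences, rank the
    candidate answers, and commit to the top answer only if it clears
    the confidence threshold."""
    total = sum(math.exp(score) for score in candidates.values())
    ranked = sorted(((answer, math.exp(score) / total)
                     for answer, score in candidates.items()),
                    key=lambda pair: pair[1], reverse=True)
    best, confidence = ranked[0]
    return (best if confidence >= threshold else None), ranked

# Made-up evidence scores for three candidate answers.
print(with_confidence({"Chicago": 2.1, "Toronto": 1.4, "Helsinki": 0.2}))
```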

Applications of cognitive computing are gradually growing in number (e.g., experimental projects built on Watson with the cooperation and support of IBM). They may not be targeted directly at consumers at this stage, but consumers are seen as the end-beneficiaries. The first users could be professionals and service agents who help consumers in different areas. For example, applied systems in development and trial would:

  1. help medical doctors in identifying (cancer) diagnoses and advising their patients on treatment options (it is projected that such a system will “take part” in doctor-patient consultations);
  2. perform sophisticated analyses of financial markets and their instruments in real-time to guide financial advisers with investment recommendations to their clients;
  3. assist account managers or service representatives to locate and extract relevant information from a company’s knowledge base to advise a customer in a short time (CRM/customer support).

The health-advisory platform CaféWell by Welltok provides an example of an application aimed at consumers: the platform guides consumers on healthy behaviours recommended for them, and its new assistant, Concierge, lets them converse in natural language to get help on resources and programmes personally relevant to them as well as on various health-related topics (e.g., dining options). (2)

Consider domains such as cars, tourism (vacation resorts), or real estate (second-hand apartments and houses). Consumers may encounter a tremendous amount of information in these domains on numerous options and many attributes to consider (for cars there may also be technical detail that is more difficult to digest). A cognitive system can help the consumer study the market environment (e.g., organising the information from sources such as company websites and professional and peer reviews [social media], detecting patterns in structured and unstructured data, screening and sorting) and learn the consumer’s preferences and habits in order to prioritise and construct personally fitting recommendations. Additionally, it is noteworthy that in any of these domains visual information (e.g., photographs) could be most relevant and valuable to consumers in their decision process — the visual appeal of car models, mountain or seaside holiday resorts, and apartments cannot be discarded. Cognitive computing assistants may raise very high consumer expectations.
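
As a rough illustration of the "prioritise and construct personally fitting recommendations" step, here is a minimal sketch of weighted multi-attribute scoring; the attributes, weights and listings are hypothetical, and a real cognitive assistant would learn such preferences rather than take them as fixed inputs.

```python
def rank_options(options, weights):
    """Score each option as a weighted sum of its normalised attribute values
    and return the options in descending order of score."""
    def score(attrs):
        return sum(weights.get(name, 0) * value for name, value in attrs.items())
    return sorted(options, key=lambda o: score(o["attrs"]), reverse=True)

# Hypothetical preference weights and apartment listings (values already in 0-1).
weights = {"price_value": 0.4, "location": 0.3, "size": 0.2, "visual_appeal": 0.1}
listings = [
    {"id": "apt-101", "attrs": {"price_value": 0.7, "location": 0.9, "size": 0.5, "visual_appeal": 0.6}},
    {"id": "apt-102", "attrs": {"price_value": 0.9, "location": 0.4, "size": 0.8, "visual_appeal": 0.8}},
]
for option in rank_options(listings, weights):
    print(option["id"])
```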

Cognitive computing aims to mimic human cognitive processes, performed by intelligent computers with enhanced resources on behalf of humans. The capabilities of such systems would assist consumers, or the professionals and agents who help them, with decisions and other tasks — saving them time and effort (and sometimes frustration), and providing well-organised information with customised recommendations for action that users would feel they have reached themselves. Time and experience will tell how comfortably people interact and engage with these human-like intelligent assistants, how productive they indeed find them, and whether using a cognitive assistant comes to feel like the most natural thing to do.

Ron Ventura, Ph.D. (Marketing)

Notes:

1.  “Thinking by Computers”, Herbert A. Simon, 1966/2008, reprinted in Economics, Bounded Rationality and the Cognitive Revolution, Massimo Egidi and Robin Marris (eds.)[pp. 55-75], Edward Elgar.

2. The examples given above are described in IBM’s white paper by Kelly and in: “Cognitive Computing: Real-World Applications for an Emerging Technology”, Judith Lamont (Ph.D.), 1 Sept. 2015, KMWorld.com


The location-based technology of beacons is a relatively recent newcomer on the retail scene (since 2013). Beacons provide an additional route for interacting with shoppers in real-time via their smartphones as they move around stores and malls. Foremost, this technology is about marrying the physical and the digital (virtual) spaces to create better integrated and more encompassing shopping experiences.

It is already widely acknowledged that in-store and online shopping are not independent and do not happen completely separately from each other; instead, experience and information from one scene can feed and drive a shopping experience, and a purchase, in the other. In particular, mobile devices enable shoppers to apply digital resources while shopping in a physical shop or store. Beacons may take retailers and shoppers another step forward in that direction, with the expectation of generating more purchases in-store. The beacon technology was received at first with enthusiasm and a promising willingness-to-accept by retailers, but these have subsided in the past year and adoption has stalled. A salient obstacle is that consumers remain hesitant and cautious about letting retailers communicate with their smartphones through beacons, and about the implications this may have for their privacy.

In essence, beacons are small, battery-powered, low-energy Bluetooth devices that function as transmitters of information — primarily unique location signals — to nearby smartphones with an app authorised to receive the information. The availability of an authorised app (e.g., the retailer’s or the mall operator’s) installed on the consumer’s smartphone (or tablet) is critical for the communication technology to function properly. Upon receiving a location signal, the app is triggered to display location-relevant content for the shopper in-store (e.g., product information, digital coupons, as well as store activities and services), as the sketch below illustrates.
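
A minimal sketch of the app-side flow just described: the beacon's identifier triplet (UUID, major, minor) is looked up in a store map and the matching location-relevant content is returned. The identifiers, zones and content are hypothetical.

```python
# Hypothetical mapping from beacon identifiers to in-store zones.
BEACON_ZONES = {
    ("11111111-2222-3333-4444-555555555555", 1, 12): "dairy_aisle",
    ("11111111-2222-3333-4444-555555555555", 1, 27): "coffee_display",
}

# Hypothetical location-relevant content per zone.
ZONE_CONTENT = {
    "dairy_aisle":    {"tile": "Yogurt ideas", "coupon": "10% off Greek yogurt"},
    "coffee_display": {"tile": "How to brew",  "coupon": "Buy 2 get 1 free"},
}

def on_beacon_signal(uuid: str, major: int, minor: int):
    """Called when the authorised app receives a beacon's location signal."""
    zone = BEACON_ZONES.get((uuid.lower(), major, minor))
    if zone is None:
        return None                      # unknown beacon: show nothing
    return ZONE_CONTENT.get(zone)        # location-relevant tile / coupon

print(on_beacon_signal("11111111-2222-3333-4444-555555555555", 1, 12))
```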

Additional requirements may be in force, such as the retailer’s app being open during the shopping trip or the shopper consenting (opting in) to allow the app to receive information from beacons, but these do not seem to be necessary or mandatory conditions for the technology to work (e.g., an app may be set with ‘approval’ as the default). The ambiguity that seems to prevail about these extra requirements could be one of the sore points in the technology’s implementation. On the one hand, the application of beacons is more ethical when at least one of these requirements is set up, which should give it greater credibility among consumers. On the other hand, any additional criterion for beacon access to smartphones — assuming the app is already installed — could further limit the number of participating shoppers and reduce the marketing impact.

  • Only smartphones (and tablets) support apps, not every mobile phone. It should not be taken for granted that everyone carries a supporting smartphone, which raises another possible limit on access for beacons (though one in decline in developed countries). Yet another problem concerns the distinction between Apple iPhones running iOS and smartphones of other brands running Google’s Android — beacons have to work with either type of operating system and compatible apps, but they do not necessarily do so (e.g., iBeacons are exclusive to Apple’s own mobile devices).

There are some further variations in the application of beacon technology in retail. Beacon devices may be attached to shelves next to specific product displays, or to fixtures and building columns in positions aimed at capturing the smartphones of shoppers moving within a close area (e.g., an aisle). If the beacon is associated with a particular product, the shopper may engage with it through the app by actively bringing the phone close to the beacon. Otherwise, the app communicates with the beacons without shoppers taking any voluntary action. Furthermore, some applications of beacon technology involve sending information other than location signals from the beacon, such as product-related information, and the beacon receiving customer-related information from the smartphone.

Reasonably, retailers would be interested first in applications of the technology for practical marketing purposes in their stores. However, beacon technology may also be utilised in research on shopper behaviour, a purpose now appreciated by many large retailers.

Marketing Practice in Retail

The immediate, sales-driven application of beacon technology that comes to retailers’ minds is to introduce special offers, discount deals and digital coupons for selected products as shoppers get near their displays. Notwithstanding this type of application, location-based features and services enabled via beacons can be even more creative and useful for shoppers, and beneficial for retailers.

Relevance is key to an effective application of the technology. Any message or content must be relevant in time and place to the shopper. That is, the content must relate to available products when the shopper is getting close enough to them. The content should not refer too generally to any product in the store but to products in the section of the store the shopper passes by. Triggering an offer for a product just after the shopper has entered the store is less likely to be effective, unless, for example, there is a special promotional activity for it in a main area of the floor. The retailer should also not err by introducing an offer for a product item that is not available in the specific store at that time. Furthermore, if the app can link product information with customer information, it may be able to generate better content that is both location-relevant and personalised. The app could make use of accessible information on personal purchase history, interests and demographic characteristics. This higher-level application surely requires greater resources and effort from the retailer to implement. A minimal sketch of such selection rules follows below.
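
A minimal sketch of those relevance rules: show an offer only if it concerns a product stocked in this store, located in the zone the shopper is passing, and (optionally) matching what is known about the shopper. The offer records, stock list and profile fields are hypothetical.

```python
def select_offers(offers, zone, in_stock, profile=None):
    """Keep only offers that are location-relevant, in stock, and (if a
    profile is available) personally relevant to the shopper."""
    selected = []
    for offer in offers:
        if offer["zone"] != zone:                 # not location-relevant
            continue
        if offer["sku"] not in in_stock:          # not available in this store now
            continue
        if profile and offer.get("segment") and offer["segment"] not in profile["segments"]:
            continue                              # not personally relevant
        selected.append(offer)
    return selected

# Hypothetical offers, stock and shopper profile.
offers = [
    {"sku": "SKU-12", "zone": "coffee_display", "text": "2-for-1 espresso pods", "segment": "coffee_buyer"},
    {"sku": "SKU-98", "zone": "coffee_display", "text": "New herbal tea", "segment": None},
]
print(select_offers(offers, "coffee_display", in_stock={"SKU-12"},
                    profile={"segments": {"coffee_buyer"}}))
```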

The beacons’ greatest enemy could be their use for bombarding shoppers with push or pop-up messages of offers, deals, discounts and so on. This practice, suspected as a major fault in the early days of the technology, may be responsible for the recent slowdown in adoption. There could be nothing more irritating for a shopper than being interrupted every few metres walked in the store by a buzz and a “just today, offer on X” message appearing on the smartphone’s screen. Retailers have to be selective lest customers avoid using their apps. It is much more important to produce adaptive, relevant and customer-specific messages and content overall (Adobe, Digital Marketing Blog, 4 February 2016).

  • The retail chain Target, which launched a trial with beacons in 50 US stores in the second half of 2015, committed, for instance, to showing no more than two promotional (push) messages during a store visit (TechCrunch.com, 5 Aug. ’15).
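
In the same spirit, here is a minimal sketch of capping promotional pushes per store visit; the two-message cap follows the Target example above, while the session handling is an illustrative assumption.

```python
MAX_PUSHES_PER_VISIT = 2  # cap taken from the Target example above

class VisitSession:
    """Tracks how many promotional pushes have been sent during one store visit."""
    def __init__(self):
        self.pushes_sent = 0

    def maybe_push(self, message: str) -> bool:
        """Send a promotional push only while under the per-visit cap."""
        if self.pushes_sent >= MAX_PUSHES_PER_VISIT:
            return False
        self.pushes_sent += 1
        print("push:", message)
        return True

visit = VisitSession()
for offer in ["20% off jackets", "Buy one get one socks", "Clearance in aisle 7"]:
    visit.maybe_push(offer)   # only the first two offers are delivered
```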

More intelligent and helpful ways exist to apply the beacon technology in interaction with the app than promotional push messages. First, the content of the app’s “front page” can change as the shopper progresses through the store to reflect information of interest in that area of the store (e.g., showing hyper-linked ’tiles’ for nearby product types). Second, beyond ‘technical’ information on product characteristics and price, a retailer can facilitate shopper access to reviews and recommendations for location-relevant products via the app. Third, if the shopper fills in a shopping list on a retailer’s app (e.g., a supermarket’s), and the app has a built-in plan of the store, it can help the shopper navigate the store to find the requested products, and it may even re-order the list and propose a more ‘efficient’ path to the shopper, as in the sketch below.
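
A minimal sketch of the list re-ordering idea: sort the shopping list by the aisle sequence in the store plan. The plan and the list are hypothetical, and a real app could use a finer-grained store layout.

```python
# Hypothetical store plan: which aisle each item sits in.
STORE_PLAN = {"bread": 1, "pasta": 2, "milk": 4, "coffee": 6, "detergent": 8}

def order_shopping_list(items):
    """Return the list sorted by the aisle of each item (unknown items last)."""
    return sorted(items, key=lambda item: STORE_PLAN.get(item, float("inf")))

print(order_shopping_list(["coffee", "bread", "detergent", "milk", "pasta"]))
# ['bread', 'pasta', 'milk', 'coffee', 'detergent']
```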

Beacons are associated mostly with stores (e.g., department stores, chain stores, supermarkets). However, beacons may also be utilised by mall operators, where the ‘targets’ are stores rather than specific products. An application programme in a mall may require collaboration with the retailers (e.g., store profiles and notifications, special promotional messages [for extra pay], content contributions).

In another interesting form of collaboration, the fashion magazine Elle initiated a programme with ShopAdvisor, a mobile app and facilitator that assists retailers in connecting with their shoppers through beacons. As an enhancement to its special 30th anniversary issue, Elle launched a trial project in partnership with some of its advertisers (e.g., Guess, Levi’s, Vince Camuto) to introduce their customers to location-based content with the help of ShopAdvisor (focused on promotional alerts)(1).

Consumers are concerned about tactics of location-based technologies like beacons that become intrusive and even creepy; they grow averse to the way such apps sometimes surprise them (e.g., in dressing rooms). Indeed, only shoppers who installed an authorised app can be affected, but even customers who installed a retailer’s app with other benefits in mind can find it disturbing at times. The hard issue at stake is how the app alerts or approaches its shopper-users with location-based messages. Shoppers do not like to feel that someone is watching where they go.

The shopper may believe that if the app remains closed on the smartphone he or she cannot be approached. But if, as reported by CNBC News, a dormant app can be awakened by a beacon signal, this measure is not enough. This may happen because the shopper previously allowed the app to receive the Bluetooth signal, or because the app “assumed” so by default. The shopper must take an extra step to disable the function at the app level or the device level (Bluetooth connectivity). Retailers should let their customers opt out and be careful with any attempt to remotely open their apps on smartphones (so-called “welcome reminders”), because imposing on and interfering with customer choices may backfire and lead customers to remove the app.

The app may display ‘digital’ coupons for the shopper to “pick up” and show later at the cashier (or self-service check-out). The reasoning is that if coupons are shown at the right time, shoppers will welcome the offer rather than resent it. The manner in which shoppers are alerted also matters; it should not be too obtrusive (e.g., “Click here for coupons for products in this aisle”). Shoppers told CNBC News that if digital coupons were offered to them by the app just when relevant, they would be glad to use this option, it being more convenient than going around with paper coupons, but they would want the ability to opt out.

Shopper Behaviour Research

The beacon technology may further contribute to research on shopper behaviour in stores or malls. Specifically, it may be suitable for collecting data on shopper traffic to be used in path analysis of shopping journeys. The information may cover which areas of the store shoppers visit more frequently, how long a shopper stays in a given area, and the sequences of passes between areas.

Nonetheless, there are methodological, technological and ethical factors that retailers and researchers have to consider. At this time, there are distinct limitations to be recognised that may impinge on the validity and reliability of the research application of beacons. The ethical issues discussed above regarding beacon access to mobile apps also apply in the research context.

This methodology involves tracking the movements of shoppers. Beacon technology may record the frequency of visits to each area of the store separately, or it may track the presence of a particular shopper across different beacons in the store. A beacon may also be able to send repeated signals at fixed intervals to a smartphone to measure how long a shopper remains in a given area. However, this type of research is not informative about what a shopper does in a specific location, such as in front of product shelves, and thus it cannot provide valuable details on his or her decision processes. Hence, retailers cannot rely on this methodology as a substitute for other methods capable of studying shopper behaviour more deeply, especially with respect to decision-making. A range of methods may be used to supplement path analysis, such as an interviewer’s walk-along with a shopper, passive observations, video filming, and possibly also in-store eye-tracking.
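
A minimal sketch of how beacon ping logs could be summarised into dwell times per zone and zone-to-zone transition counts for path analysis; the log format (anonymous device id, zone, timestamp in seconds) is an assumption.

```python
from collections import Counter, defaultdict

def summarise_paths(pings):
    """pings: list of (device_id, zone, timestamp_seconds), time-sorted per device."""
    dwell = defaultdict(float)       # total seconds logged in each zone
    transitions = Counter()          # counts of zone-to-zone moves
    by_device = defaultdict(list)
    for device, zone, ts in pings:
        by_device[device].append((zone, ts))
    for visits in by_device.values():
        for (zone, t0), (next_zone, t1) in zip(visits, visits[1:]):
            dwell[zone] += t1 - t0   # time until the next ping counts towards this zone
            if next_zone != zone:
                transitions[(zone, next_zone)] += 1
    return dict(dwell), transitions

# Hypothetical anonymised log entries.
pings = [("dev-1", "entrance", 0), ("dev-1", "produce", 40), ("dev-1", "produce", 95),
         ("dev-1", "bakery", 150), ("dev-2", "entrance", 10), ("dev-2", "bakery", 70)]
dwell, transitions = summarise_paths(pings)
print(dwell)         # seconds spent in each zone before the next ping
print(transitions)   # e.g. ('entrance', 'produce'): 1, ('produce', 'bakery'): 1
```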

An implementation of the technology for research would require comprehensive coverage of the premises with beacons, perhaps greater than needed for marketing practice. It should be compared with alternative location-based technologies (e.g., Radio Frequency Identification [RFID], Wi-Fi) on criteria of access, range and accuracy, and of course cost-effectiveness. For example, RFID technology employs tags (transmitters) typically attached to shopping carts — if a shopper leaves the cart at the end of an aisle and goes in to pick up a couple of products, the system will miss that; smartphones, however, are carried by shoppers all the time. Beacon technology may have an important advantage over RFID if location data is linked with customer characteristics, but this is a sensitive ethical issue, and at the least it is imperative to ensure no personal identifiers are included in the dataset. All the alternative technologies may also have to deal with different types of environmental interference with their signals. Access has both technical and ethical aspects.

According to RetailDive (an online news and trends magazine), a mixture of problems is responsible for impairing the utilisation of beacon technology: mainly, consumers do not perceive beacon-triggered features as useful enough to them, and retailers are troubled by technical or operational difficulties. Among the suggestions made: encourage shoppers to pull helpful information from beacons rather than push messages at them, and speed up calling staff for assistance via beacons (RetailDive, 17 December 2015). A recent research report by Adobe and Econsultancy on Digital Trends for 2016 indicates that retailers are becoming more reluctant to implement a geo-targeting technology like beacons this year compared with 2015 (a decrease in the proportion of retailers who have this technology in plan or are exploring it, against an increase in the proportion of those who are not exploring it or do not know). Conspicuously, there seems to be much more optimism about the high effectiveness of geo-targeting technology at technology and consultancy agencies than among retailers, who are more of the opinion that it is too early (2). Agencies may have a better understanding of the field, yet this gap signals an alarming disconnect between agencies and their clients.

Beacon technology has potential, with clearly identifiable benefits it can deliver to retailers and consumers. It is still a young technology and requires more development and progress on various technical, applied and ethical aspects. Promotional messages are important tools but must be used in sensible measure. A retailer cannot settle for a small set of fixed messages; it has to develop a dynamic ‘bank’ of messages, large enough to be versatile across products, (chain) stores and consumer groups, and maintain regular updates. Moreover, retailers have to develop and provide a richer suite of clever content and practical tools based on location. Consumers will have to be convinced of the benefits enabled by beacons, yet feel free to decide when and how to enjoy them.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “App Helps Target Shoppers’ Location and Spontaneity”, Glenn Rifkin, International New York Times, 31 December 2015 – 1 January 2016.

(2) “Quarterly Digital Intelligence Briefing: 2016 Digital Trends”, Adobe and Econsultancy, January 2016 (pp. 24-25). The findings are considered with caution because of relatively small sub-samples of respondents on this topic (N < 200).

Read Full Post »

Surveys, a major part of marketing research, seem to be in perpetual movement of change and development. Many of the changes in recent years are tied to technological advancement. About fifteen years ago online surveys — delivered over the Internet — began to rise as a dominant mode of survey administration; now, researchers are pushed to perform more of their surveys via mobile devices, namely smartphones and tablets, in addition to, or as a replacement for, administration on desktop and laptop computers.

Yet some important distinctions between those two modes can make the transfer of surveys between them flawed. Just as much as it was wrong to suggest in the past that survey questionnaires administered in face-to-face interviews could be seamlessly transferred to phone interviews, it would be wrong today to suggest a seamless transfer of surveys from web browsers on desktops/laptops to mobile browsers (or apps).

In the latest Greenbook Research Industry Trends (GRIT) Report of Q3-Q4 2015, the authors suggest that there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices as well. They find that 45% of their respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, “that tells us that over 50% of all surveys are NOT mobile optimized” (p. 14, capitalisation in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimized. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate for answering on smartphones or tablets. There could be multiple reasons for a mismatch between these modes for administering a particular survey: the topic, the types of constructs measured and instruments used, the length of the questionnaire, and the target population relevant for the research. Consumers use mobile devices and personal computers differently (e.g., in purpose, depth and time), which is likely to extend also to how they approach surveys on these devices.

  • The GRIT survey of marketing researchers was conducted in a sample of 1,497 respondents recruited by e-mail and social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.

Concerns about coverage and reach of a research population have followed online surveys from the beginning. Of the different approaches for constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (i.e., website pop-up survey invitations), the panel methodology has become the most prevalent. But this approach is not free of limitations or weaknesses. Panels have a ‘peculiar’ property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may pose similar problems again, and perhaps even more severely, because users of smartphones (not every mobile phone can load surveys), and even more so of tablets, constitute a sub-population that is not yet broad enough, and these users also have rather specific demographic and lifestyle characteristics.

  • Different sources of contact data and channels are being used to approach consumers to participate in surveys. Companies conduct surveys among their customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may be asked as well to take part in surveys (commissioned by the administrator).

Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; these difficulties have only worsened in recent years, so that online surveys remain the viable alternative, and in some situations are even superior. Online self-administered questionnaires (SAQ) of course have their own genuine advantages, such as the ability to present images and videos, interactive response tools, and greater freedom to choose when to fill in the questionnaire. However, as with former modes of data collection for surveys, response behaviour may differ between online surveys answered on personal computers and on mobile devices (one should also consider the difficulty of controlling what respondents do when filling in SAQs on their own).

The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box 34%). Marketing researchers are more satisfied with timeliness of fielding, the purchase process, ease of accessing a panel, and customer service (49%-54%). [Note: the 33% figure compares with ~20% for ‘quantity’ and ‘cost’ and ~12% on other aspects.]

The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the ‘Weaknesses’ quadrant as they generate less satisfaction while being of higher importance. Customer service and purchase process form ‘Key Strengths’, being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a ‘Vulnerability’ whereas access and timeliness are ‘Assets’. The ‘Weaknesses’ quadrant is troubling especially because it includes properties that define the essence of the panel as a framework for repeatedly extracting samples, its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
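For readers less familiar with these measures, the sketch below shows how a top-2-box score and a satisfaction-versus-importance quadrant split might be computed on simulated ratings; the data are invented and the correlation with an overall measure is only a common proxy for ‘derived importance’, not necessarily the derivation used in the GRIT report.

```python
# A minimal sketch of top-2-box satisfaction per panel aspect and a simple
# importance-vs-satisfaction quadrant split. Ratings are simulated; the
# correlation-with-overall proxy for "derived importance" is an assumption.
import numpy as np

rng = np.random.default_rng(1)
aspects = ["respondent quality", "respondent quantity", "cost",
           "timeliness", "purchase process", "customer service", "access"]
ratings = rng.integers(1, 6, size=(300, len(aspects)))     # 1-5 satisfaction scale
overall = rng.integers(1, 6, size=300)                     # overall satisfaction measure

top2box = (ratings >= 4).mean(axis=0)                      # share rating 4 or 5
importance = np.array([np.corrcoef(ratings[:, j], overall)[0, 1]
                       for j in range(len(aspects))])      # derived-importance proxy

for name, sat, imp in zip(aspects, top2box, importance):
    quadrant = ("Key Strength" if imp >= np.median(importance) else "Asset") \
               if sat >= np.median(top2box) else \
               ("Weakness" if imp >= np.median(importance) else "Vulnerability")
    print(f"{name:20s} top-2-box={sat:.0%} importance={imp:+.2f} -> {quadrant}")
```

With real survey data the split thresholds and the importance derivation would of course be set more carefully, but the quadrant logic itself is as simple as shown.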

Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; they may look broadly into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations established by consumers. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. Advisably, the findings and lessons on decision processes may be validated and expanded by using other types of methods, such as verbal protocols, eye tracking and mouse tracking (web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.

Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with the consumers. Data from recordings of behaviour or measures of unconscious responses may lack important context from the consumer viewpoint, which can render those findings difficult to interpret correctly. Conscious statements of consumers about their thoughts, feelings, experiences and actions may not be fully accurate or complete, but they do represent what consumers have in mind and often enough guide their behaviour — we just need to ask them in an appropriate and methodical way.


The examples below demonstrate why different approaches should be used in combination to complement each other, and how surveys can make their own contribution to the whole story:

  • Volumes of data on actions or operations performed by consumers, as collected in the framework of Big Data, provide ‘snapshots’ or ‘slices’ of behaviour, but seem to lack the context of consumer goals or mindsets needed to connect them meaningfully. One has to infer or guess indirectly what made the behaviour occur as it did.
  • Big Data also refers to volumes of verbatim comments in social media networks, where the sheer amount of data gives the illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
  • Methods of neuroscience inform researchers about neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we can reduce thoughts, feelings and emotions to neural activity in the brain, we would miss the subjective experience of the consumers.

 

Marketing researchers are not expected to shift all their online surveys to mobile devices, at least not as long as these co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able to respond to a survey on, any of the computer-type devices they own, at a time and place convenient to them. That is indeed a commendably liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.

Mobile surveys could be very limiting in terms of the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most) with questions that permit quick responses is not likely to be suitable for adequately studying many of the constructs discussed previously so as to build a coherent picture of consumers’ mindsets and related behaviours. These surveys may be suitable for collecting particular types of information, and perhaps even have an advantage in this respect, as suggested below.

According to the GRIT report, 36% of researchers-respondents estimate that online surveys their companies carry out take on average up to 10 minutes (short); 29% estimate their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.

These duration estimates correspond to online surveys in general and the authors note that particularly longer surveys would be unsuitable for mobile surveys. For example, 16% of respondents state their online surveys take more than 20 minutes which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.

There are some noteworthy differences between research suppliers and clients. The main finding to notice is that clients are pressing for shorter surveys, such as may also be answerable on mobile devices:

  • Whereas close to 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
  • Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
  • Suppliers also have a little stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).

Researchers on the supplier side appear to be more aware of and sensitive to the time durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys clients push for.

  • Interestingly, the report shows that the average estimated length in practice is similar to the maximal length respondents think an online survey should take. The authors propose that these results can be summed up as “whatever we answered previously as the average length, is the maximal length”. They acknowledge not asking specifically about mobile surveys, for which the accepted maximum is 10 minutes. This limit is more in accordance with clients’ stated maximum for online surveys (52%), whereas only 36% of suppliers report such a goal (32% of suppliers choose 11-15 minutes as the maximum, above the expected maximum for mobile).

Online surveys designed for personal computers are subject to time limits, in view of respondents’ expected attention spans, yet the limits are expected to be less strict than for mobile devices. Furthermore, the PC mode allows more flexibility in the variety and sophistication of questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase ‘market research’ on their personal computers and conduct quick last-minute queries during the shopping trip on their smartphones) — this also works to the benefit of online surveys on personal computers. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)

Online surveys for mobile devices and for desktops/laptops do not have to be designed with the same questionnaire content (adapting appearance to device and screen is just part of the matter). First, there is justification for designing surveys specifically for mobile devices. These surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. The subjects proposed here draw in common on System 1 (automatic) processing — intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.

Second, it would be better to compose and design separate survey questionnaires, of different lengths, for personal computers and for mobile devices. Trying to impose an online survey of fifteen minutes on respondents using mobile devices carries a considerable risk of early break-off or, worse, of diminishing response quality as the survey goes on. At the least, a short version of the questionnaire should be channelled to the mobile device — though this still would not resolve issues of unsuitable question types. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and miss much of their business value.

Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys just to obtain greater cooperation from consumers will only achieve the opposite — it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capabilities of surveys to provide intelligent and valuable insights, particularly by designing surveys that are best compatible with the mode in which the survey is administered.

Ron Ventura, Ph.D. (Marketing)

Read Full Post »

The World Health Organization (WHO) created a storm of confusion and panic when it published on 26 October (2015) its warning on cancer risks from processed meat as well as red meat. The warning arose as the outcome of a year-long effort by a committee of 22 experts, led by WHO’s International Agency for Research on Cancer (IARC), who reviewed and analysed findings from 800 studies carried out in past years across the globe. The warning itself, alarming enough, is not disputed; the problem concerned here is with the way the IARC made its warning announcement to the public.

Before referring to the content of the cancer warning, it should be emphasised that this research project did not bring any new data as evidence but analysed collectively results from previous studies at various research institutions (i.e., it was a meta-analysis type of research, originally published in the medical journal The Lancet Oncology). Thus, warnings about the risks of cancer from consuming larger amounts of processed meat and red meat, and the findings that support them, are not new. The IARC added important authoritative backing with the intention that its voice would receive greater public attention and better succeed in persuading consumers to modify their behaviour. However, the announcement was not made cleverly, and without corrective measures it may end in failure of the IARC’s initiative.

The warning of the IARC is actually composed of two warnings, at two different levels of risk. The IARC distinguished in its press release (no. 240) between two categories of risk to which it assigned processed meat and red meat as follows:

Category 1: “Carcinogenic to humans”. Processed meat is classified in this category together with asbestos, tobacco (smoking), alcohol and arsenic. It is causally linked to bowel cancer, particularly colorectal cancer — IARC states that the classification relies on sufficient evidence in humans that consumption of processed meat causes colorectal cancer (i.e., colon and rectal). Processed meat relates to meat products that have gone through processes of curing, salting, smoking and fermentation to improve their preservation. They include popular products like sausage, hot dog, bacon, ham, and salami.

Category 2A: “Probably carcinogenic”. The classification of red meat in this category is based on limited evidence of its causal link to cancer in humans but strong mechanistic evidence of a carcinogenic effect. Red meat includes beef & veal, lamb & sheep, and pork (e.g., in fresh cuts or mixes). It has been identified as a probable cause of colorectal cancer but also associated with pancreatic and prostate cancers.

  • Left unexplained in the press release, a mechanistic effect relates to the effect of chemical substances or processes at the level of the individual organism. It is enough to note here that the phrase ‘strong mechanistic evidence’ is ambiguous to most people, who cannot grasp its significance (even after a definition).

There can be little wonder that the IARC’s announcement alarmed and puzzled consumers, plausibly holding their heads in their hands and asking: “What should we do now about those meat products that we eat?” Because so many meat products and food items seem to be covered by the warnings, consumers are justified in feeling lost about the drastic reduction in menu that looms, especially for the more carnivorous among them, vis-à-vis a fear of cancer. The news media have tried to fill some of the void with the help of health, food and diet experts, but with little direct help from the WHO, in answering questions such as which foods made of meat one can continue to consume, and how much. Some experts, nonetheless, contributed positive recommendations that go beyond meat consumption.

With regard to the level of risk, the IARC did indicate its experts’ estimate that eating 50 grams (1.8 ounces) more of processed meat daily increases the risk of contracting colorectal cancer by 18%. In a separate comment to Reuters, Dr. Kurt Straif of the IARC clarified that an individual’s risk of developing colorectal (bowel) cancer from eating processed meat remains low, but this risk increases as a greater amount of meat is consumed. The issue of quantity consumed is material, as reflected also in recommendations from other sources. However, the IARC apparently did not see it as its responsibility to explain and recommend to the public how to act following its warning. In the official announcement to the press, Dr. Christopher Wild, director of the IARC, called on governments and international regulatory agencies to “conduct risk assessments, in order to balance the risks and benefits of red meat and processed meat and to provide the best dietary recommendations”. The call on other agencies to act is commendable but the self-exemption by the IARC is flawed.
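To illustrate the clarification that the individual risk remains low, here is a minimal back-of-the-envelope sketch of the difference between the relative increase the IARC reported and the absolute change in risk; the baseline lifetime risk used is purely hypothetical and does not come from the IARC release.

```python
# A back-of-the-envelope sketch: an 18% increase is relative, so the absolute
# change depends on the baseline risk. The baseline below is hypothetical.
baseline_lifetime_risk = 0.05      # hypothetical 5% lifetime risk of colorectal cancer
relative_increase = 0.18           # IARC estimate per extra 50 g of processed meat daily

elevated_risk = baseline_lifetime_risk * (1 + relative_increase)
absolute_increase = elevated_risk - baseline_lifetime_risk

print(f"Baseline risk:     {baseline_lifetime_risk:.1%}")   # 5.0%
print(f"Elevated risk:     {elevated_risk:.1%}")            # 5.9%
print(f"Absolute increase: {absolute_increase:.1%}")        # ~0.9 percentage points
```

This distinction between relative and absolute risk is exactly the kind of guidance a public-facing announcement could have spelled out.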

The IARC’s choice to couple its warning on processed meat as a cause of cancer with the warning on red meat as a probable cause raises another problem. First, it produced an excessive warning with an overwhelming effect, asking the public to face a health limitation on a broad range of meat products at once. Consumers were likely to confront the joint heading “processed meat and red meat” before they could grasp the difference in level of risk; only then might they assess more closely the specific classes of meat and meat products included. Second, adding the warning on red meat at this time could distract consumers from attending to and heeding the more serious cancer warning on processed meat, which is based on more conclusive evidence. It may be most acceptable from academic and clinical perspectives to publish the two warnings together, and it is understandable with regard to public health that the IARC would not want the risk associated with red meat to be neglected. Yet, when it comes to informing the public through the general media, the joint warning could be superfluous and less effective in persuading consumers of the need to change their diet in view of the cancer risks of either processed meat or red meat.

In advertising, brands are often cautioned that over-reaching product claims or promises might be received by consumers with disbelief and suspicion and thereafter be discarded. Conversely, excessive or too harsh warnings might induce disbelief and paralysing fear followed by resentment and rejection. On either side, messages that are perceived as excessive do not invoke trust in consumers, and in this case, not gaining their trust could be detrimental.

Another flaw in the press release, one that raised particular rage, is the insinuated equivalence between eating processed meat and smoking tobacco. Associations of the meat industry attacked this equivalence and the research as a whole (e.g., in the UK, US, Canada, Australia), and government officials and other experts expressed their respective reservations in the media. It has been noted that there are grades of risk among causes of cancer within the first category, and that smoking remains the most dangerous single cause of cancer, much riskier than eating processed meat; excessive drinking of alcohol also bears a higher risk than the latter. But the mere listing of all those causes of cancer together, flatly as members of the same category, makes them appear equal and indistinguishable to consumers. The IARC managed to grab attention alarmingly, but probably not in the way it desired.

Different interpretations were suggested in the media, mostly in an attempt to explain the meaning and implications of the warnings and to calm some of the public scare that was showing its signs. Special attention was dedicated to differentiating between the cases of processed meat and red meat. The British Guardian told its readers that it was not advised to stop eating any processed meat or red meat. However, it clarified, consumption of processed meat should be cut considerably, particularly by those in the habit of eating these food items daily (e.g., at breakfast). In addition, consumers are recommended to moderate their consumption of red meat, eating it more sparingly (“Processed Meats are Ranked Alongside Smoking as Cancer Causes – WHO”, The Guardian Online, 26 October 2015).

  • It is noted that in its press release the IARC stressed that its findings support previous recommendations to limit the intake of those types of meat, and in a later clarification to the media it reiterated that the IARC did not recommend to stop eating those meat products. In the press release Dr. Wild also acknowledges the nutritional value of red meat, confirming that there are benefits to consuming it.

The Guardian brings specific recommendations from the World Cancer Research Fund that people should not eat more than 500 grams of red meat (beef, sheep and pork) a week, and should reduce as much as possible their consumption of processed products (e.g., ham, bacon, salami). Dr. Elizabeth Lund, an independent consultant in nutrition and gastrointestinal health, offers a more balanced approach in the face of the IARC’s warnings, with helpful practical recommendations to consumers: “A much bigger risk factor is obesity and lack of exercise. Overall, I feel that eating meat once a day combined with plenty of fruit, vegetables and cereal fibre, plus exercise and weight control, will allow for a low risk of colorectal cancer and a more balanced diet.”

Beef products have attracted great attention in their defence. Advocates emphasise the importance of how beef items are prepared and the method of heating. The problem is argued to be mostly with products prepared and packaged in advance by mass food manufacturers, but that is only a partial factor in the generation of cancer risk. Beef is often recommended for its content of iron [as well as proteins and other nutritional components]. However, scientists have suggested that iron may lead to the release of nitrates that act as a carcinogenic agent. This process may happen during preparation, grilling or frying, but also during digestion. According to this assertion, the main cause for alarm is attributed not to the ingredients added to meat but to compounds created during the heating of meat (e.g., quick, at high temperature) or its digestion. Beef items like hamburgers and kebab prepared at home or in small private-business premises from fresh mixes could be safer, but that does not eliminate the risk completely. This issue appears to be a sensitive subject of controversy and friction between large manufacturers, small butcher enterprises and restaurants (competing among themselves) on one side, and health agencies and experts on the other.

Raising fear in consumers can move them to take necessary action to reduce a risk (e.g., not driving after drinking alcohol) — research has provided support for a positive effect of fear inducement. Scaring people, such as by an excessive demonstration of a threat (e.g., car accidents) or of its scope, may cause a paralysing effect, but even that may not be the main problem. Goldstein, Martin and Cialdini suggest that a greater problem occurs when inducing fear without guiding people in how they can reduce the danger. If the producer of the risk warning does not accompany it with recommendations for action to reduce the risk, the consumer is left with the fear and no way out. He or she is more likely in this situation to deal with the fear by “blocking out” the message, dissociating oneself from the threat, and indeed being paralysed into taking no action (1). This is where the IARC failed — it introduced the fear on its own. It was the IARC’s responsibility, as issuer of the warning, to recommend actions to consumers, such as how to change their diet and what other supportive measures to take.

Another viewpoint concerns the way consumers approach a risk and respond to it. Pennings, Wansink and Meulenberg propose decoupling risk perception (i.e., how consumers assess the level of uncertainty) from risk attitude (e.g., the extent to which consumers are risk-averse) in anticipating consumer response to a risk (e.g., decreased food safety) and confronting it. What counts first is the chance a consumer perceives that he or she will be personally affected, and then how to deal with it (e.g., stop or reduce consuming the risky food). Furthermore, the researchers suggest distinguishing segments that differ in their risk perception and attitude, and in how they weigh them; each segment may require a different treatment (2).

The case here is different from the crisis case studied by Pennings et al. (‘mad cow disease’) because it did not arise from an epidemic outbreak or a company’s malpractice (e.g., the crisis of Remedia’s milk formulae for babies in Israel) — it is not a particular event but an ongoing condition. Yet, at this point in time it is a crisis for consumers, evoked by a new warning about a health threat. Health authorities and agencies will have to decide, for example, whether the more appropriate strategy in a given market or segment is to provide clearer information about the level of risk (reducing uncertainty) or to tighten controls and supervision of the production of meat foods (i.e., because consumers do not tolerate cancer risk at almost any level of probability).

  • Special consideration may also be needed to persuade segments like young consumers in their 20s, who do not care how their behaviour will impact their health thirty years from now, partly because they simply cannot imagine what bad impact it could have — they are focused on enjoying their lives today; or consumers in lower socio-economic deciles who eat those types of meat products (e.g., hamburgers, hot dogs) out of necessity, because these are cheaper food items for their meals.

The researchers and officials at the IARC and WHO are clearly concerned about the possibility that consumers will become ill with cancer due to the amounts of processed meat and red meat that they eat, and they aim to cause consumers to change their dietary habits and so reduce the threat and suffering. But they left a void by launching an incomplete persuasion effort — on the one hand it was taken as over-threatening, and on the other it lacked guidance in the form of practical recommendations to consumers on how to act to improve their health prospects. In order to increase the chance that consumers heed the risk and act as desired, the IARC will be required to provide guidance and support to the public, on its own and in collaboration with other agencies, for a quicker response to consumer confusion and fear.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Yes! 50 Secrets from the Science of Persuasion; Noah J. Goldstein, Steve J. Martin, & Robert B. Cialdini, 2013; Profile Books.

(2) A Note on Modeling a Consumer Reaction to a Crisis: The Case of the Mad Cow Disease; Joost M.E. Pennings, Brian Wansink, & Matthew T.G. Meulenberg, 2002; International Journal of Research in Marketing, 19, pp. 91-100.

Read Full Post »

Psychographic-oriented research of consumer lifestyles, based on surveys for collecting the data, is losing favour among marketers and researchers. Descriptors of consumer lifestyles are applied especially for segmentation by means of statistical clustering methods and other approaches (e.g., latent class modelling). Identifying lifestyle segments has been recognized as a strategic instrument for marketing planning because this kind of segmentation has been helpful and insightful in explaining variation in consumer behaviour where “dry” demographic descriptors cannot reach the deeper intricacies. But with the drop in response rates to surveys over the years, even on the Internet, and further problematic issues in consumer responses to survey questionnaires (by interview or self-administered), lifestyle research using psychographic measures is becoming less feasible, and that is regrettable.
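As an illustration of the clustering route mentioned above, the sketch below runs k-means on a simulated battery of psychographic ratings; the data, the number of segments and the profiling step are all hypothetical, and in practice the number of clusters would be chosen with fit criteria or through a model-based alternative such as latent class analysis.

```python
# A minimal sketch of survey-based lifestyle segmentation via k-means clustering.
# Hypothetical data: rows are respondents, columns are psychographic statement
# ratings on a 1-5 disagree-agree scale.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 30
ratings = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# Standardize items so that no single statement dominates the distance metric.
X = StandardScaler().fit_transform(ratings)

# Partition respondents into a chosen number of lifestyle segments.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
segments = kmeans.fit_predict(X)

# Profile each segment by its mean (standardized) rating on every item;
# in practice these profiles are interpreted and labelled by the researcher.
for seg in range(5):
    profile = X[segments == seg].mean(axis=0)
    print(f"Segment {seg}: n={np.sum(segments == seg)}, "
          f"highest-scoring items={np.argsort(profile)[-3:][::-1]}")
```

The mechanics are simple; the hard part, as the rest of this post argues, is obtaining a battery of items that consumers will actually answer in full.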

The questionnaires required for building lifestyle segmentation models are typically long, using multi-item “batteries” of statements (e.g., responses on a disagree-agree scale) and other types of questions. Initially (in the 1970s) psychographics were represented mainly by Activities, Interests and Opinions (AIO). The measures cover a wide span of topics, from home and work, shopping and leisure, to politics, religion and social affairs. But this approach was criticised for lacking a sound theoretical ground to direct the selection of the lifestyle aspects that are most important, relevant to and explanatory of consumer behaviour. Researchers have been seeking ever since the 1980s better-founded, psychology-driven bases for lifestyle segmentation, particularly social relations among people and the sets of values people hold. The Values and Lifestyles (VALS) model released by the Stanford Research Institute (SRI) in 1992 incorporated motivation and additional areas of psychological traits (VALS is now licensed to Strategic Business Insights). The current version of the American model is based on that same eight-segment typology with some updated modifications necessary to keep up with the times (e.g., the rise of advanced digital technology) — the conceptual model is structured around two prime axes, (a) resources (economic, mental) and (b) motivation or orientation. Scale items corresponding to the AIOs continue to be used, but they would be chosen to represent constructs in broader or better-specified contexts.

Yet the challenge holds even for the better-established models: how to choose the most essential aspects and obtain a set of question items small enough that consumers are likely to complete answering it. Techniques are available for constructing a reduced set of items (e.g., a couple of dozen) for subsequent segmentation studies relying on a common base model, but a relatively large set (e.g., several dozen to a few hundred items) would still be needed for building the original model of lifestyle segments. It is a hard challenge, considering in particular the functions and limitations of the more popular modes of surveys nowadays, online and mobile.

Lifestyles reflect in general the patterns or ways in which people run their ordinary lives, while uncovering something of the underlying motives or goals. However, ‘lifestyles’ have been given various meanings, and researchers follow different interpretations in constructing their questionnaires. The problem may lie in the difficulty of constructing a coherent and consensual theory of ‘lifestyles’ that would conform to almost any area (i.e., product or service domain) where consumer behaviour is studied. This may well explain why lifestyle segmentation research more frequently concentrates on answering marketing questions with respect to a particular type of product or service (e.g., banking, mobile telecom, fashion, food). Such a focus can help to select more effectively the aspects the model should concentrate on and thereby also reduce the length of the questionnaire. The following are some of the concepts lifestyle models may incorporate and emphasise:

  • Values that are guiding and driving consumers (e.g., collectivism vs. individualism, modernism vs. traditionalism, liberalism vs. conservatism);
  • In the age of Internet and social media consumers develop new customs of handling social relations in the virtual world versus the physical world;
  • In view of the proliferation of digital, Internet and mobile communication technologies and products, it is necessary to address differences in consumer orientation and propensity to adopt and use those products (e.g., ‘smart’ products of various sorts);
  • How consumers balance differently between work and home or family and career is a prevailing issue at all times;
  • Lifestyles may be approached through the allocation of time between duties and other activities — for example, how consumers allocate their leisure time between spending it with family, friends or alone (e.g., hobbies, sports, in front of a screen);
  • Explore possible avenues for developing consumer relationships with brands as they integrate them into their everyday way of living (e.g., in reference to a seminal paper by Susan Fournier, 1998)(1);
  • Taking account of aspects of decision-making processes as they may reflect overall on the styles of shopping and purchasing behaviour of consumers (e.g., need for cognition, tendency to process information analytically or holistically, the extent to which consumers search for information before their decision).

Two more issues deserve special attention: 

  1. Lifestyle is often discussed adjacent with personality. On one hand, a personality trait induces a consistent form of response to some shared stimulating conditions in a variety of situations or occasions (e.g., responding logically or angrily in any situation that creates stress or conflict, offering help whenever seeing someone else in distress). Therefore, personality traits can contribute to the model by adding generalisation and stability to segment profiles. On the other hand, since lifestyle aspects describe particular situations and contexts whereas personality traits generalize across them, it is argued that these should not be mixed as clustering variables but may be applied in complementary modules of a segmentation model.
  2. Products that consumers own and use, or services they utilize, can figuratively illustrate their type of lifestyle. But including a specific product in the model framework may hamper the researcher’s ability to make later inferences and predictions on consumer behaviour for the same product or a similar one. Therefore, it is advisable to refer carefully to more general types of products, strictly for the purpose of implying or reflecting a pattern of lifestyle (e.g., smartphones and technology literacy). Likewise, particular brand names should be mentioned only where they carry an important symbolic meaning (e.g., luxury fashion brands, luxury cars).

Alternative approaches attempt to portray lifestyles without relying on information elicited from consumers in which they describe themselves; information is collated mostly from secondary databases. Geodemographic models segment and profile neighbourhoods and their households (e.g., PRIZM by Claritas-Nielsen and MOSAIC by Experian). In addition to demographics they also include information on housing, products owned (e.g., home appliances), media used, as well as activities in which consumers may participate. However, marketers are expected to infer, by implication, the lifestyle of a household based, for instance, on the appliances or digital products in the house, on newspaper or magazine subscriptions, on clubs (e.g., sports), and on associations that members of the household belong to. Another behavioural approach is based on clustering and “basket” (associative) analyses of the sets of products purchased by consumers. These models were not originally developed to measure lifestyles. Their descriptors may vicariously indicate the lifestyle of a household (usually not of an individual), but they lack depth in describing and classifying how consumers manage their lives, nor do they enquire why they live them that way.
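For the ‘basket’ route, the sketch below shows the kind of pairwise co-occurrence (support and lift) computation such analyses build on; the baskets are invented product-category sets, whereas real analyses run on transaction databases.

```python
# A minimal sketch of "basket" (co-occurrence) analysis: pairwise support and lift
# for product categories across purchase histories. Baskets are hypothetical.
from collections import Counter
from itertools import combinations

baskets = [
    {"organic food", "sports gear", "magazines"},
    {"organic food", "sports gear"},
    {"ready meals", "gaming console", "magazines"},
    {"ready meals", "gaming console"},
    {"organic food", "magazines"},
]
n = len(baskets)

item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(frozenset(pair)
                      for basket in baskets
                      for pair in combinations(sorted(basket), 2))

for pair, count in pair_counts.most_common(5):
    a, b = tuple(pair)
    support = count / n                                            # share of baskets with both
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n)) # >1 means bought together more than chance
    print(f"{a} & {b}: support={support:.2f}, lift={lift:.2f}")
```

High-lift pairs hint at a shared pattern of living, but, as noted above, they describe what households buy, not how or why they live as they do.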

The evolving difficulties in carrying out surveys are undeniable. Recruiting consumers as respondents and keeping them interested throughout the questionnaire is becoming more effortful, demanding more financial and operational resources and greater ingenuity. Data from surveys may be complemented by data originating from internal and external databases available to marketing researchers to resolve at least part of the problem. A lifestyle questionnaire is usually extended beyond the items related to segmentation variables by further questions for model validation, and for studying how consumers’ attitudes and behaviour in a product domain of interest are linked with their lifestyles. Some of the information collected thus far from respondents through the survey may instead be obtained from databases, sometimes even more reliably than respondents’ self-reports. One of the more welcome applications of geodemographic segmentation models in this regard is using information on segment membership as a sampling variable for a survey, whereby characteristics from the former model can also be combined with psychographic characteristics from the survey questionnaire in subsequent analyses. There are, furthermore, better opportunities now to integrate survey-based data with behavioural data from companies’ internal customer databases (e.g., CRM) for constructing lifestyle segments of their customers.
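A minimal sketch of that survey-CRM integration step is given below; the column names, join key and measures are hypothetical, and in practice such linkage requires respondent consent and careful handling of identifiers.

```python
# A minimal sketch of combining survey-based psychographic scores with behavioural
# data from an internal CRM database before segmentation. All names are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "novelty_seeking": [4.2, 2.1, 3.6],      # psychographic scale scores from the survey
    "price_consciousness": [2.5, 4.8, 3.0],
})
crm = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "purchases_last_year": [14, 3, 8, 22],   # behavioural measures from the CRM system
    "avg_basket_value": [56.0, 120.5, 74.3, 33.1],
})

# Inner join keeps only surveyed customers who also appear in the CRM database.
combined = survey.merge(crm, on="customer_id", how="inner")
print(combined)
# The combined table can then feed a clustering or latent class model, as sketched earlier.
```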

Long lifestyle questionnaires are particularly subject to concerns about the risk of respondent drop-out and decreased quality of response data as respondents progress through the questionnaire. The research firm SSI (Survey Sampling International) recently presented in a webinar (February 2015, via Quirk’s) their findings and insights from a continued study on the effects of questionnaire length and fatigue on response quality (see a POV brief here). A main concern, according to the researchers, is that respondents, rather than dropping out in the middle of an online questionnaire, actually continue but pay less attention to questions and devote less effort to answering them, hence decreasing the quality of response data.

Interestingly, SSI finds that respondents who lose interest drop out mostly by the half-way point of a questionnaire, irrespective of its length, whether it should take ten minutes or thirty minutes to complete. For those who stay, problems may yet arise if fatigue kicks in and the respondent goes on answering questions anyway. As explained by SSI, many respondents like to answer online questionnaires; they get into the flow but may not notice when they become tired, or they do not feel comfortable leaving before completing the mission, so they simply go on. They may become less accurate, succumb to automatic routines, and give shorter answers to open-ended questions. A questionnaire may take forty minutes to answer, but in the estimation of SSI’s researchers respondents are likely to become less attentive after twenty minutes. The researchers refer to both online and mobile modes of survey. They also show, for example, the effect of presenting a particular group of questions at different stages of the questionnaire.

SSI suggests in its presentation some techniques for mitigating those data-quality problems. Two of the suggestions are highlighted here: (1) dividing the full questionnaire into a few modules (e.g., 2-3) so that respondents are invited to answer each module in a separate session (e.g., a weekly module-session); (2) inserting breaks in the questionnaire that let respondents loosen attention from the task and rest their minds for a few moments — such an intermezzo may serve for a message of appreciation and encouragement to respondents or a short gaming activity.

A different approach, mentioned earlier, aims to facilitate the conduct of many more lifestyle-application studies by (a) building a core segmentation model once in a comprehensive study, then (b) performing future application studies for particular products or services using a reduced set of question items for segmentation according to the original core model. This approach is indeed not new. It lowers the burden on the core modelling study from questions on product categories and releases space for such questions in future studies dedicated to specific products and brands. One type of technique is to derive a fixed subset of questions from the original study that are statistically the best predictors of segment membership (a sketch of this static variant follows the note below). However, a more sophisticated technique that implements tailored (adaptive) interviewing was developed back in the 1990s by the researchers Kamakura and Wedel (2).

  • The original model was built as a latent class model; the tailored “real-time” process selected items for each respondent given his or her previous responses. In a simulated test, the majority of respondents were “presented” with less than ten items; the average was 16 items (22% of the original calibration set).
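The sketch below illustrates only the simpler, static variant mentioned in the preceding paragraph — ranking the original battery items by how well they predict segment membership from the core study and keeping a short list for later application studies; it is not the adaptive, model-based procedure of Kamakura and Wedel, and the data are simulated.

```python
# A minimal sketch of the static reduced-item approach: rank battery items by how
# well they predict segment membership from the core study, then keep a short list.
# (The Kamakura & Wedel technique is adaptive and model-based; this is not it.)
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
n_respondents, n_items = 800, 120
items = rng.integers(1, 6, size=(n_respondents, n_items))    # original battery (1-5 scale)
segments = rng.integers(0, 6, size=n_respondents)            # segment labels from the core model

# Score each item by its (nonparametric) association with segment membership.
scores = mutual_info_classif(items, segments, random_state=0)
best_items = np.argsort(scores)[::-1][:24]                    # keep, say, two dozen items

print("Items retained for the reduced questionnaire:", sorted(best_items.tolist()))
```

In a real application the retained items would also be checked for coverage of the model’s conceptual domains, not chosen on predictive strength alone.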

Lifestyle segmentation studies are likely to require paying greater rewards to participants. But that may not be enough to keep them in the survey. Computer-based “gamification” tools and techniques (e.g., conditioning rewards on progress in the questionnaire, embedding animation on response scales) may help to some extent but they may also raise greater concerns for quality of responses (e.g., answering less seriously, rushing through to collect “prizes”).

The contemporary challenges of conducting lifestyle segmentation research are clear. Nonetheless, so should be the advantages and benefits of applying information on consumer lifestyle patterns in marketing and retailing. Lifestyle segmentation is a strategic tool, and effort should persist to resolve the methodological problems that surface, combining, where necessary and possible, psychographic measures with information from other sources.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Consumers and Their Brands: Developing Relationship Theory in Consumer Research; Susan Fournier, 1998; Journal of Consumer Research, 24 (March), pp. 343-373.

(2) Lifestyle Segmentation With Tailored Interviewing; Wagner A. Kamakura and Michel Wedel, 1995; Journal of Marketing Research, 32 (Aug.), pp. 308-317.

Read Full Post »
