
Posts Tagged ‘Methodology’

A shopper may well know what types of products he or she plans to buy in a store, but which products the shopper will come out with is much less certain. Frequently there will be some additional, unplanned products in the shopper’s basket. This observation is most often demonstrated in the case of grocery shopping in supermarkets, but it is likely to hold true also in other types of stores, especially large ones such as department stores, fashion stores, and DIY or home improvement stores.

There can be a number of reasons or triggers for shoppers to consider additional products during the shopping trip itself — products forgotten and then recalled by cues that arise while shopping, the visual attractiveness of a product display (‘visual lift’), promotions posted on tags at the product display (‘point-of-purchase’) or in hand-out flyers, and more. The phenomenon of unplanned purchases is very familiar, and its study is not new. However, the behaviour of shoppers during the store visit that leads to this outcome, especially the consideration of product categories in an unplanned manner, is not understood well enough. The relatively new methodology of video tracking with a small head-mounted camera shows promise for gaining a better understanding of shopper behaviour during the shopping trip; a research article by Hui, Huang, Suher and Inman (2013) paves the way with a valuable contribution, particularly in shedding light on the relations between planned and unplanned considerations in a supermarket and the factors that may drive conversion of the latter into purchases (1).

Shopper marketing is an evolving specialisation which is gaining increasing attention in marketing and retailing. It concerns activities of consumers performed in a ‘shopper mode’ and is strongly connected with, or contained within, consumer marketing. Innovations in this sub-field by retailers and manufacturers span digital activities, multichannel marketing, store atmospherics and design, in-store merchandising, shopper marketing metrics and organisation. However, carrying out more effective and successful shopper marketing programmes requires closer collaboration between manufacturers and retailers — more openness to each party’s perspective and priorities (e.g., in interpretation of shopper insights), sharing information and coordination (2).

In-store video tracking allows researchers to observe the shopping trip, quite literally, from the viewpoint of the shopper as it proceeds. The strength of this methodology is in capturing the dynamics of shopping (e.g., with regard to in-store drivers of unplanned purchases). Unlike other approaches (e.g., RFID, product scanners), the video tracking method enables tracking acts of consideration, whether or not they are followed by purchase (i.e., putting a product item in the shopping cart).

For video tracking, a shopper is asked to wear, with the help of an experimenter, a headset belt that contains the portable video equipment: a small video camera, a view/record unit, and a battery pack. It is worn like a Bluetooth headset. In addition, the equipment used by Hui et al. included an RFID transmitter that allows the shopper’s location to be traced throughout his or her shopping path in the supermarket.

Like any research methodology, video tracking has its strengths and advantages as well as its weaknesses and limitations. With the camera it is possible to capture the shopper’s field of vision during a shopping trip; the resulting video is stored in the view/record unit. However, without an eye-tracking (infrared) device, the camera may not point accurately to the positions of products considered (by eye fixation) in the field of vision. Yet the video supports at least approximate inferences when a product is touched or moved, or when the head-body posture and gesture suggest from which display a shopper considers products (i.e., the ‘frame’ closes in on a section of the display). It is further noted that difficulties in calibrating an eye-tracking device in motion may impair the accuracy of locating fixations. The video camera seems sufficient and effective for identifying product categories as targets of consideration and purchase.

Furthermore, unlike video filmed by cameras hanging from the ceiling of a store, the head-mounted camera records the scene at eye level rather than from high above, making it easier to notice what the shopper is doing (e.g., in aisles), and it follows the shopper all the way, not just in selected sections of the store. Additionally, using a head-mounted camera is more ethical than relying on surrounding cameras (often CCTV security cameras). On the other hand, head-mounted devices (e.g., camera, eye-tracking), which are not the most natural things to wear whilst shopping, raise concerns of sampling bias (self-selection) and of possibly changing the shopper’s behaviour; proponents argue that shoppers quickly forget about the device (devices are now made lighter) as they engage in shopping, but the issue is still under debate.

Video tracking is advantageous over RFID and product scanners for the study of unplanned purchase behaviour because it captures acts of consideration: the RFID method alone (3) enables researchers to trace the path of the shopper but not what the shopper does in front of the shelf or stand display, and a scanner method records which products are purchased but not which are considered. The advantage of the combined video + RFID approach, according to Hui and his colleagues, is in providing them “not only the shopping path but also the changes in the shoppers’ visual field as he or she walks around the store” (p. 449).

The complete research design included two interviews conducted with each shopper-participant — before the shopping trip, as a shopper enters the store, and after, on the way out. In the initial interview, shoppers were asked in which product categories they were planning to buy (aided by a list to choose from), as well as other shopping aspects (e.g., total budget, whether they brought their own shopping list). At the exit the shoppers were asked about personal characteristics, and the experimenters collected a copy of the receipt from the retailer’s transaction log. The information collected was essential for two aspects in particular: (a) distinguishing between planned and unplanned considerations; and (b) estimating the amount of money remaining for the shopper to make unplanned purchases out of the total budget (‘in-store slack’ metric).
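
A minimal sketch (in Python, not the authors’ code) of how these two pieces of information could be combined: categories coded from the shopping trip are classified as planned or unplanned against the entry interview, and the ‘in-store slack’ is the part of the stated budget not earmarked for planned purchases. All names and figures below are hypothetical.

```python
# Hypothetical entry-interview data for one shopper
planned_categories = {"milk", "bread", "coffee"}    # categories the shopper planned to buy
total_budget = 80.0                                 # stated total budget for the trip
expected_planned_spend = 55.0                       # rough spend expected on planned items

# Categories the shopper was observed considering (coded from the video)
considered = ["milk", "snacks", "coffee", "magazines"]

# (a) planned vs unplanned classification
classified = {c: ("planned" if c in planned_categories else "unplanned")
              for c in considered}

# (b) 'in-store slack': budget left over for unplanned purchases
in_store_slack = total_budget - expected_planned_spend

print(classified)       # {'milk': 'planned', 'snacks': 'unplanned', ...}
print(in_store_slack)   # 25.0
```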

The analyses included 237 participants. Overall, shopper-participants planned to purchase from approximately 5.5 categories; on average they considered 13 categories in total, of which fewer than 5 were planned considerations (median 5.6). 37% of the participants carried a list prepared in advance.

Characteristics influencing unplanned consideration: The researchers first sought to identify personal and product characteristics that significantly influence the probability of making an unplanned consideration in a given product category (a latent utility likelihood model was constructed). From this they could infer which characteristics contribute to considering more categories in an unplanned manner. The model showed, for instance, that older shoppers and female shoppers are likely to engage in unplanned consideration in a greater number of product categories. Conversely, shoppers who are more familiar with a store (layout and location of products) and those carrying a shopping list tend to consider fewer product categories in an unplanned manner.
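
The published model is a latent utility model; as a rough, hedged stand-in, a plain logistic regression on the same kinds of shopper and category characteristics conveys the idea. The variable names and the synthetic data below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
age         = rng.integers(18, 75, n)
female      = rng.integers(0, 2, n)
familiarity = rng.uniform(0, 1, n)     # familiarity with the store layout
has_list    = rng.integers(0, 2, n)
hedonic     = rng.uniform(0, 1, n)     # hedonic score of the category
in_flyer    = rng.integers(0, 2, n)    # category promoted in the weekly flyer

X = np.column_stack([age, female, familiarity, has_list, hedonic, in_flyer])
# Synthetic outcome (unplanned consideration yes/no), only to make the sketch runnable
y = (rng.uniform(size=n) < 0.3 + 0.2 * hedonic + 0.1 * in_flyer - 0.1 * has_list).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(["age", "female", "familiarity", "list", "hedonic", "flyer"],
               model.coef_[0].round(2))))
```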

At the product level, a higher hedonic score for a product category is positively associated with a greater incidence of unplanned consideration of it. Products that are promoted in the store’s weekly flyer at the time of a shopper’s visit are also more likely to receive an unplanned consideration from the shopper. Hui et al. further revealed effects of complementarity: products not planned beforehand for purchase (B) that are closer complements of products in a shopper’s ‘planned basket’ (A) are more likely to be considered in an unplanned manner (‘A –> B lift’). [The researchers present a two-dimensional map detailing which products are more proximate and thus more likely to be paired together, not yet conditional on their purchase.]
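
One simple way to quantify such an ‘A –> B lift’ (my sketch, not the paper’s map-based model) is to compare how often B receives an unplanned consideration when A is in the planned basket with B’s overall unplanned-consideration rate. The trip records below are hypothetical.

```python
# Each record: categories planned at entry, and categories considered in an unplanned manner
trips = [
    {"planned": {"pasta"},         "unplanned_considered": {"pasta_sauce", "candy"}},
    {"planned": {"pasta", "milk"}, "unplanned_considered": {"pasta_sauce"}},
    {"planned": {"milk"},          "unplanned_considered": {"cookies"}},
    {"planned": {"pasta"},         "unplanned_considered": set()},
]

def lift(a, b, trips):
    base = sum(b in t["unplanned_considered"] for t in trips) / len(trips)
    with_a = [t for t in trips if a in t["planned"]]
    cond = sum(b in t["unplanned_considered"] for t in with_a) / len(with_a)
    return cond / base if base else float("nan")

print(lift("pasta", "pasta_sauce", trips))   # a value above 1 suggests complementarity
```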

Differences in behaviour between planned and unplanned considerations: Unplanned considerations tend to be made more haphazardly — while standing farther from display shelves and involving fewer product touches; conversely, planned considerations entail greater ‘depth’. Unplanned considerations tend to occur a little later in the shopping trip (the gap in timing is not very convincing). An unplanned consideration is less likely to entail reference to a shopping list — the list serves in “keeping the shopper on task”, making the shopper less prone to divert to unplanned consideration. During an unplanned consideration shoppers are also less likely to refer to discount coupons or to in-store flyers/circulars. Interestingly, however, some of the patterns found in this analysis change as an unplanned consideration turns into a purchase.

Importantly, unplanned considerations are less likely to conclude with a purchase (63%) than planned considerations (83%). This raises the question: what can make an unplanned consideration convert into a purchase?

Drivers of purchase conversion of unplanned considerations: First, unplanned considerations that result in a purchase take longer (40 seconds on average) than those that do not (24 seconds). Second, shoppers get closer to the shelves and touch more product items before concluding with a purchase; the greater ‘depth’ of the process towards an unplanned purchase is characterised by viewing fewer product displays (‘facings’) within the category — the shopper concentrates on fewer alternatives yet examines the selected ones more carefully (e.g., by picking them up for a closer read). Another conspicuous finding is that shoppers are more likely to refer to a shopping list during an unplanned consideration that is going to result in a purchase — a plausible explanation is that the shopping list may help the shopper check whether an unplanned product complements a product on the list.

The researchers employed another (latent utility) model to investigate more systematically the drivers likely to lead unplanned considerations to result in a purchase. The model supported, for example, that purchase conversion is more likely in categories of higher hedonic products. It corroborated the notions about ‘depth’ of consideration as a driver of purchase and the role of a shopping list in realising complementary unplanned products as supplements to the ‘planned basket’. It is also shown that interacting with service staff for assistance increases the likelihood of concluding with a purchase.

  • Location in the store matters: An aisle is a relatively more likely place for an unplanned consideration to occur, and an unplanned consideration occurring there also has a better chance of resulting in a purchase. The authors recommend assigning service staff to be present near aisles.
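
As an illustration of the kind of behavioural comparison behind these findings (not the authors’ latent utility model), the sketch below contrasts unplanned considerations that converted to a purchase with those that did not, on variables of the kind coded from the video. The records and figures are made up.

```python
# Hypothetical consideration-level records coded from the video
considerations = [
    {"duration_s": 42, "touches": 3, "facings_viewed": 4, "used_list": True,  "bought": True},
    {"duration_s": 18, "touches": 0, "facings_viewed": 9, "used_list": False, "bought": False},
    {"duration_s": 37, "touches": 2, "facings_viewed": 5, "used_list": True,  "bought": True},
    {"duration_s": 25, "touches": 1, "facings_viewed": 8, "used_list": False, "bought": False},
]

def mean(key, bought):
    vals = [c[key] for c in considerations if c["bought"] == bought]
    return sum(vals) / len(vals)

for key in ("duration_s", "touches", "facings_viewed"):
    print(key, "| bought:", mean(key, True), "| not bought:", mean(key, False))
```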

Complementarity relations were analysed once again, this time in the context of unplanned purchases. The analysis, as visualised in a new map, indicates that proximity between planned and unplanned categories enhances the likelihood of an unplanned purchase: if a shopper plans to purchase in category A, then the closer category B is to A, the more likely the shopper is to purchase in category B given that it is considered. Hui et al. note that distances in the maps for considerations and for purchase conversion of unplanned considerations are not correlated, implying that unplanned consideration and the purchase decision are two different dimensions of the decision process. This is a salient result because it distinguishes between engaging in consideration and the decision itself. The researchers caution, however, that in some cases the distinction between consideration and a choice decision may be false and inappropriate because they may happen rapidly in a single step.

  • The latent distances in the maps are also uncorrelated with physical distances between products in the supermarket (i.e., the complementarity relations are mental).
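
A small sketch of the kind of check behind the ‘uncorrelated distances’ claim: take the pairwise distances from two maps and correlate their upper-triangle entries. The matrices here are invented; in the paper the latent maps are estimated within the model rather than computed this way.

```python
import numpy as np

# Hypothetical pairwise distances between three categories in the two latent maps
consideration_dist = np.array([[0.0, 1.2, 3.1],
                               [1.2, 0.0, 2.4],
                               [3.1, 2.4, 0.0]])
conversion_dist    = np.array([[0.0, 2.9, 1.0],
                               [2.9, 0.0, 3.3],
                               [1.0, 3.3, 0.0]])

# Correlate only the unique (upper-triangle) distance pairs
iu = np.triu_indices_from(consideration_dist, k=1)
r = np.corrcoef(consideration_dist[iu], conversion_dist[iu])[0, 1]
print(round(r, 2))   # a value near zero would indicate the two maps are unrelated
```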

The research shows that while a promotion (coupons or in-store flyers) for an unplanned product has a significant effect in increasing the probability of its consideration, it does not contribute to the probability of its purchase. This evidence further points to a separation between consideration and a decision. The authors suggest that a promotion may attract shoppers to consider a product, but most are uninterested in buying it, and hence the promotion has no further effect on their point-of-purchase behaviour. The researchers suggest that retailers can apply their model of complementarity to proactively invoke consideration by triggering a real-time promotion on a mobile shopping app for products associated with those on a shopper’s digital list, “so a small coupon can nudge this consideration into a purchase”.

But some reservations should be made about the findings regarding promotions. An available promotion can increase the probability of a product being considered in an unplanned manner, yet shoppers are less likely to look at their coupons or flyers at the relevant moment. Conversely, the existence of a promotion does not contribute to purchase conversion of an unplanned consideration, yet shoppers are more likely to refer to their coupons or flyers during unplanned considerations that result in a purchase. A plausible explanation that resolves this apparent inconsistency is that reference to a promotional coupon or flyer is more concrete from the shopper’s viewpoint than the mere availability of a promotion; shoppers may not be aware of some of the promotions the researchers account for. In the article, the researchers do not directly address promotional information that appears on tags at the product display — such promotions may affect shoppers differently from flyers or distributed coupons (paper or digital via mobile app), because tags are more readily visible at the point of purchase.

One of the dynamic factors examined by Hui et al. is the ‘in-store slack’, the mental budget reserved for unplanned purchases. Reserving a larger slack increases the likelihood of unplanned considerations. Furthermore, at the moment of truth, the larger the in-store slack that remains at the time of an unplanned consideration, the more likely the shopper is to take a product from the display to purchase. However, the computations used in the analyses of dynamic changes in each shopper’s in-store slack appear to assume that shoppers keep track of how much they have already spent on planned products at various moments of the trip and are aware of their budget — an assumption that is not very realistic. The approach in the research is very clever, and yet consumers may not be so sophisticated: they may exceed their in-store slack, possibly because they are not very good at keeping to their budget (e.g., exacerbated by the use of credit cards) or at making arithmetic computations fluently.

Finally, shoppers could be subject to a dynamic trade-off between their self-control and the in-store slack. As the shopping trip progresses and the remaining in-store slack is expected to shrink, the shopper should become less likely to allow an unplanned purchase, but he or she may become more susceptible to the temptation to consider and buy in an unplanned manner, because the strength of one’s self-control is depleted by active decision-making. In addition, a shopper who avoided making a purchase on the last occasion of unplanned consideration is more likely to purchase a product on the next unplanned occasion — this negative “momentum” effect means that following an initial effort at self-control, subsequent attempts are more likely to fail as a result of depletion of the strength of self-control.

The research by Hui, Huang, Suher and Inman offers multiple insights for retailers as well as manufacturers to take notice of, and much material for further thought, study and planning. The video tracking approach reveals patterns and drivers of shopper behaviour in unplanned considerations and how they relate to planned considerations. The methodology is not without limitations; viewing and coding the video clips is notably time-consuming. Nevertheless, this research brings us a step forward towards better understanding and knowledge to act upon.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Deconstructing the “First Moment of Truth”: Understanding Unplanned Consideration and Purchase Conversion Using In-Store Video Tracking; Sam K. Hui, Yanliu Huang, Jacob Suher, & J. Jeffrey Inman, 2013; Journal of Marketing Research, 50 (August), pp. 445-462.

(2) Innovations in Shopper Marketing: Current Insights and Future Research Issues; Venkatesh Shankar, J. Jeffrey Inman, Murali Mantrala, & Eileen Kelley, 2011; Journal of Retailing, 87S (1), pp. S29-S42.

(3) See other research on path data modelling and analysis in marketing and retailing by Hui with Peter Fader and Eric Bradlow (2009).


Surveys, a major part of marketing research, seem to be in perpetual change and development. Many of the changes in recent years are tied to technological advancement. About fifteen years ago online surveys — delivered over the Internet — began to rise as a dominant mode of survey administration; now, researchers are pushed to run more of their surveys on mobile devices, namely smartphones and tablets, in addition to, or as a replacement for, administering them on desktop and laptop computers.

Yet some important distinctions between those two modes can make the transfer of surveys between them flawed. Just as it was wrong to suggest in the past that survey questionnaires administered in face-to-face interviews could be seamlessly transferred to phone interviews, it would be wrong today to suggest a seamless transfer of surveys from web browsers on desktops/laptops to mobile browsers (or apps).

In the latest Greenbook Research Industry Trends (GRIT) Report, for Q3-Q4 2015, the authors suggest there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices as well. They find that 45% of respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, “that tells us that over 50% of all surveys are NOT mobile optimized” (p. 14, capitalisation in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimised. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate and applicable for answering on smartphones or tablets. There can be multiple reasons for a lack of fit between these modes for administering a particular survey: the topic, the types of constructs measured and instruments used, the length of the questionnaire, and the target population relevant for the research. Consumers use mobile devices and personal computers differently (e.g., purpose, depth and time of use), and this is likely to extend to how they approach surveys on these devices.

  • The GRIT survey of marketing researchers was conducted in a sample of 1,497 respondents recruited by e-mail and social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.

Concerns about coverage and reach of a research population have accompanied online surveys from the beginning. Of the different approaches to constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (i.e., website pop-up survey invitations), the panel methodology has become the most prevalent. But this approach is not free of limitations or weaknesses. Panels have a ‘peculiar’ property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may pose similar problems again, perhaps even more severely, because users of smartphones (not every mobile phone can load surveys), and even more so of tablets, constitute a sub-population that is not yet broad enough, and these users also have rather specific demographic and lifestyle characteristics.

  • Different sources of contact data and channels are used to approach consumers to participate in surveys. Companies conduct surveys among their customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may be asked as well to take part in surveys (commissioned by the administrator).

Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; these difficulties have only got worse in recent years, so that online surveys remain the viable alternative, and in some situations are even superior. Online self-administered questionnaires (SAQs) of course have their own genuine advantages, such as the ability to present images and videos, interactive response tools, and greater freedom for respondents to choose when to fill in the questionnaire. However, as with earlier modes of data collection for surveys, response behaviour may differ between online surveys answered on personal computers and on mobile devices (one should also consider the difficulty of controlling what respondents do when filling in SAQs on their own).

The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box 34%). Marketing researchers are more satisfied with timeliness of fielding, the purchase process, ease of accessing a panel, and customer service (49%-54%). [Note: the 33% compares with ~20% for ‘quantity’ and ‘cost’ and ~12% for other aspects.]
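
For readers unfamiliar with the metric, top-2-box is simply the share of respondents choosing the two highest points of the satisfaction scale (and bottom-2-box the two lowest). A minimal sketch with made-up ratings on a 5-point scale:

```python
ratings = [5, 2, 4, 3, 1, 4, 2, 5, 3, 2]    # hypothetical panel-quality satisfaction ratings

top2box = sum(r >= 4 for r in ratings) / len(ratings)       # 'satisfied' / 'very satisfied'
bottom2box = sum(r <= 2 for r in ratings) / len(ratings)    # 'not at all' / 'slightly satisfied'

print(f"top-2-box: {top2box:.0%}, bottom-2-box: {bottom2box:.0%}")
```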

The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the ‘Weaknesses’ quadrant, as they generate less satisfaction while being of higher importance. Customer service and purchase process form ‘Key Strengths’, being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a ‘Vulnerability’ whereas access and timeliness are ‘Assets’. The ‘Weaknesses’ quadrant is especially troubling because it includes properties that define the essence of the panel as a framework for repeatedly drawing samples — its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
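
A hedged sketch of the quadrant logic: each panel aspect is placed according to its top-2-box satisfaction and derived importance relative to the medians across aspects. The satisfaction figures echo those cited above, while the importance figures are invented for illustration (the report derives importance statistically).

```python
import statistics

aspects = {
    # name: (top-2-box satisfaction, derived importance — importance values are hypothetical)
    "respondent quality":  (0.26, 0.9),
    "respondent quantity": (0.36, 0.8),
    "cost":                (0.34, 0.4),
    "timeliness":          (0.52, 0.3),
    "purchase process":    (0.50, 0.7),
    "customer service":    (0.54, 0.7),
    "access":              (0.49, 0.3),
}

sat_med = statistics.median(s for s, _ in aspects.values())
imp_med = statistics.median(i for _, i in aspects.values())

def quadrant(sat, imp):
    if imp >= imp_med:
        return "Key Strength" if sat >= sat_med else "Weakness"
    return "Asset" if sat >= sat_med else "Vulnerability"

for name, (sat, imp) in aspects.items():
    print(f"{name:20s} -> {quadrant(sat, imp)}")
```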

Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; more broadly, they may look into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations established by consumers. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. Advisably, the findings and lessons on decision processes may be validated and expanded by using other types of methods, such as verbal protocols, eye tracking and mouse tracking (web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.

Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with the consumers. Data from recordings of behaviour or measures of unconscious responses may lack important context from the consumer viewpoint, which may render those findings difficult to interpret correctly. Conscious statements of consumers about their thoughts, feelings, experiences and actions may not be fully accurate or complete, but they do represent what consumers have in mind and often enough guide their behaviour — we just need to ask them in an appropriate and methodical way.


The examples below demonstrate why different approaches should be used in combination, complementing each other, and how surveys can make their own contribution to the whole story:

  • Volumes of data on actions or operations performed by consumers, as entailed in the framework of Big Data, provide ‘snapshots’ or ‘slices’ of behaviour, but seem to lack the context of consumer goals or mindsets needed to connect them meaningfully. One has to infer or guess indirectly what made the behaviour occur as it did.
  • Big Data also refers to volumes of verbatims in social media networks, where the amount of data gives an illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
  • Methods of neuroscience inform researchers about neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we could reduce thoughts, feelings and emotions to neural activity in the brain, we would miss the subjective experience of the consumers.

 

Marketing researchers are not expected to move all their online surveys to mobile devices, at least not as long as these co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able, to respond to a survey on any of the computing devices they own, at a time and place convenient to them. That is indeed a commendably liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.

Mobile surveys can be very limiting in terms of the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most) with questions that permit quick responses is unlikely to be suitable for adequately studying many of the constructs discussed above in order to build a coherent picture of consumers’ mindsets and related behaviours. These surveys may be suitable for collecting particular types of information, and perhaps even have an advantage there, as suggested shortly.

According to the GRIT report, 36% of researcher-respondents estimate that the online surveys their companies carry out take up to 10 minutes on average (short); 29% estimate their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.

These duration estimates refer to online surveys in general, and the authors note that the longer surveys in particular would be unsuitable as mobile surveys. For example, 16% of respondents state that their online surveys take more than 20 minutes, which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.
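
As a back-of-the-envelope check, the reported shares are broadly consistent with the 15-minute overall average if one assumes plausible midpoints for each duration band (the midpoints below are my assumption, not figures from the report):

```python
# Shares of surveys by length band (from the report) and assumed band midpoints in minutes
shares    = {"short (<=10 min)": 0.36, "medium (11-15 min)": 0.29, "long (16+ min)": 0.35}
midpoints = {"short (<=10 min)": 7,    "medium (11-15 min)": 13,   "long (16+ min)": 25}

avg = sum(shares[k] * midpoints[k] for k in shares)
print(round(avg, 1))   # ~15.0 minutes, matching the reported overall average
```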

There are some noteworthy differences between research suppliers and clients. The main finding to notice is that clients are pressing for shorter surveys, ones that may also be suitable for responding to on mobile devices:

  • Whereas close to 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
  • Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
  • Suppliers also have a little stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).

Researchers on the supplier side appear to be more aware of, and sensitive to, the durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys clients push for.

  • Interestingly, the report shows that the average estimated length in practice is similar to the maximal length respondents think an online survey should take. The authors propose that these results can be summed up as “whatever we answered previously as the average length, is the maximal length”. They acknowledge not asking specifically about mobile surveys — the accepted maximum there is 10 minutes. This limit is more in accordance with clients’ stated maximum for online surveys (52%), whereas only 36% of suppliers report such a goal (32% of suppliers choose 11-15 minutes as the maximum, above the expected maximum for mobile).

Online surveys designed for personal computers are also subject to time limits, in view of respondents’ expected attention spans, yet the limits are expected to be less strict than for mobile devices. Furthermore, the PC mode allows more flexibility in the variety and sophistication of questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase ‘market research’ on their personal computers and conduct quick last-minute queries during the shopping trip on their smartphones) — this also works to the benefit of online surveys on personal computers. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)

Online surveys for mobile devices and for desktops/laptops do not have to share the same questionnaire content (adapting appearance to device and screen is only part of the matter). First, there is justification for designing surveys specifically for mobile devices. These surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. The subjects proposed here all draw on System 1 (Automatic) processing — intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.

Second, it would be better to compose and design separate survey questionnaires of different lengths for personal computers and for mobile devices. Trying to impose an online survey of fifteen minutes on respondents using mobile devices carries considerable risk of early break-off or, worse, of diminishing quality of responses as the survey goes on. At the least, a short version of the questionnaire should be channelled to the mobile device — though this still would not resolve the issue of unfitting types of questions. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and miss much of their business value.
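
Operationally, this implies routing respondents to a device-appropriate questionnaire version at the invitation or landing stage. The sketch below uses a crude user-agent check; the version names are hypothetical, and any real survey platform would have its own mechanism for this.

```python
def pick_questionnaire(user_agent: str) -> str:
    """Route a respondent to a device-appropriate questionnaire version (illustrative only)."""
    ua = user_agent.lower()
    is_mobile = any(token in ua for token in ("iphone", "android", "mobile"))
    if is_mobile:
        return "short_mobile_version"      # ~5-10 minutes, quick-response question types
    return "full_desktop_version"          # ~15 minutes, richer scales and tasks

print(pick_questionnaire("Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X)"))
print(pick_questionnaire("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
```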

Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys just to obtain greater cooperation from consumers will only achieve the opposite — it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capabilities of surveys to provide intelligent and valuable insights, particularly by designing surveys that are best compatible with the mode in which the survey is administered.

Ron Ventura, Ph.D. (Marketing)


Psychographic research of consumer lifestyles, based on surveys for data collection, is losing favour among marketers and researchers. Descriptors of consumer lifestyles are applied especially for segmentation by means of statistical clustering methods and other approaches (e.g., latent class modelling). Identifying lifestyle segments has been recognised as a strategic instrument for marketing planning because this kind of segmentation has been helpful and insightful in explaining variation in consumer behaviour where “dry” demographic descriptors cannot reach the deeper intricacies. But with the drop in response rates to surveys over the years, even on the Internet, and further problematic issues in consumer responses to survey questionnaires (by interview or self-administered), lifestyle research using psychographic measures is becoming less practicable, and that is regrettable.

The questionnaires required for building lifestyle segmentation models are typically long, using multi-item “batteries” of statements (e.g., response on a disagree-agree scale) and other types of questions. Initially (1970s) psychographics were represented mainly by Activities, Interests and Opinions (AIO). The measures cover a wide span of topics or aspects, from home and work, shopping and leisure, to politics, religion and social affairs. But this approach was criticised for lacking a sound theoretical ground to direct the selection of the lifestyle aspects that are more important, relevant to and explanatory of consumer behaviour. Since the 1980s researchers have been seeking better-founded, psychology-driven bases for lifestyle segmentation, particularly social relations among people and the sets of values people hold. The Values and Lifestyles (VALS) model released by the Stanford Research Institute (SRI) in 1992 incorporated motivation and additional areas of psychological traits (VALS is now licensed to Strategic Business Insights). The current version of the American model is based on that same eight-segment typology, with some updated modifications necessary to keep up with the times (e.g., the rise of advanced digital technology) — the conceptual model is structured around two prime axes, (a) resources (economic, mental) and (b) motivation or orientation. Scale items corresponding to the AIOs continue to be used, but they would be chosen to represent constructs in broader or better-specified contexts.
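
The usual workflow behind such survey-based segmentation (a generic sketch, not VALS or any proprietary model) is to standardise the battery of agree/disagree items and cluster respondents; the item battery, its size and the choice of five segments below are illustrative only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_respondents, n_items = 400, 40          # e.g., a 40-item AIO-style battery
responses = rng.integers(1, 6, size=(n_respondents, n_items))   # 1-5 agreement scale (synthetic)

# Standardise items, then cluster respondents into lifestyle segments
X = StandardScaler().fit_transform(responses)
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

print(np.bincount(segments))              # segment sizes
```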

Yet the challenge holds even for the better-established models: how to choose the most essential aspects and obtain a sufficiently compact set of question items that consumers are likely to complete. Techniques are available for constructing a reduced set of items (e.g., a couple of dozen) for subsequent segmentation studies relying on a common base model, but a relatively large set (e.g., several dozen to a few hundred items) would still be needed for building the original model of lifestyle segments. It is a hard challenge, considering in particular the functions and limitations of the more popular survey modes nowadays, online and mobile.

In general, lifestyles reflect the patterns or ways in which people run their ordinary lives, while uncovering something of the underlying motives or goals. However, ‘lifestyles’ have been given various meanings, and researchers follow different interpretations in constructing their questionnaires. The problem may lie in the difficulty of constructing a coherent and consensual theory of ‘lifestyles’ that would conform to almost any area (i.e., product and service domain) in which consumer behaviour is studied. This may well explain why lifestyle segmentation research concentrates more frequently on answering marketing questions with respect to a particular type of product or service (e.g., banking, mobile telecom, fashion, food). Doing so helps to select more effectively the aspects the model should focus on and thereby also reduce the length of the questionnaire. The following are some of the concepts lifestyle models may incorporate and emphasise:

  • Values that are guiding and driving consumers (e.g., collectivism vs. individualism, modernism vs. traditionalism, liberalism vs. conservatism);
  • In the age of the Internet and social media, consumers develop new customs of handling social relations in the virtual versus the physical world;
  • In view of the proliferation of digital, Internet and mobile communication technologies and products, it is necessary to address differences in consumer orientation and propensity to adopt and use those products (e.g., ‘smart’ products of various sorts);
  • How consumers balance differently between work and home or family and career is a prevailing issue at all times;
  • Lifestyles may be approached through the allocation of time between duties and other activities — for example, how consumers allocate their leisure time between spending it with family, friends or alone (e.g., hobbies, sports, in front of a screen);
  • Explore possible avenues for developing consumer relationships with brands as they integrate them into their everyday way of living (e.g., in reference to a seminal paper by Susan Fournier, 1998)(1);
  • Taking account of aspects of decision-making processes as they may reflect overall on the styles of shopping and purchasing behaviour of consumers (e.g., need for cognition, tendency to process information analytically or holistically, the extent to which consumers search for information before their decision).

Two more issues deserve special attention: 

  1. Lifestyle is often discussed adjacent to personality. On one hand, a personality trait induces a consistent form of response to some shared stimulating conditions in a variety of situations or occasions (e.g., responding logically or angrily in any situation that creates stress or conflict, offering help whenever seeing someone else in distress). Therefore, personality traits can contribute to the model by adding generalisation and stability to segment profiles. On the other hand, since lifestyle aspects describe particular situations and contexts whereas personality traits generalise across them, it is argued that these should not be mixed as clustering variables but may be applied in complementary modules of a segmentation model.
  2. Products that consumers own and use, or services they utilise, can figuratively illustrate their type of lifestyle. But including a specific product in the model framework may hamper the researcher’s ability to make later inferences and predictions about consumer behaviour for the same product or a similar one. Therefore, it is advisable to refer carefully to more general types of products only for the purpose of implying or reflecting a pattern of lifestyle (e.g., smartphones and technology literacy). Likewise, particular brand names should be mentioned only when they carry important symbolic meaning (e.g., luxury fashion brands, luxury cars).

Alternative approaches attempt to portray lifestyles without relying on information elicited from consumers in which they describe themselves; information is collated mostly from secondary databases. Geodemographic models segment and profile neighbourhoods and their households (e.g., PRIZM by Claritas-Nielsen and MOSAIC by Experian). In addition to demographics, they also include information on housing, products owned (e.g., home appliances), media used, as well as activities in which consumers may participate. However, marketers are expected, by implication, to infer the lifestyle of a household based, for instance, on the appliances or digital products in the house, on newspaper or magazine subscriptions, on clubs (e.g., sports), and on associations that members of the household belong to. Another behavioural approach is based on clustering and “basket” (associative) analyses of the sets of products purchased by consumers. These models were not originally developed to measure lifestyles. Their descriptors may vicariously indicate the lifestyle of a household (usually not of an individual), but they lack depth in describing and classifying how consumers manage their lives, and they do not enquire why consumers live them that way.

The evolving difficulties in carrying out surveys are undeniable. Recruiting consumers as respondents and keeping them interested throughout the questionnaire is becoming more effortful, demanding more financial and operational resources and greater ingenuity. Data from surveys may be complemented by data originating from internal and external databases available to marketing researchers, which can resolve at least part of the problem. A lifestyle questionnaire usually extends beyond the items related to segmentation variables, with further questions for model validation and for studying how consumers’ attitudes and behaviour in a product domain of interest are linked with their lifestyles. Some of the information collected until now through the survey may be obtained from databases, sometimes even more reliably than from respondents’ self-reports. One of the more welcome applications of geodemographic segmentation models in this regard is using information on segment membership as a sampling variable for a survey, whereby characteristics from the geodemographic model can also be combined with psychographic characteristics from the survey questionnaire in subsequent analyses. There are, furthermore, better opportunities now to integrate survey-based data with behavioural data from companies’ internal customer databases (e.g., CRM) for constructing lifestyle segments of their customers.

Long lifestyle questionnaires are particularly subject to concerns about the risk of respondent drop-out and decreased quality of response data as respondents progress through the questionnaire. The research firm SSI (Survey Sampling International) recently presented in a webinar (February 2015, via Quirk’s) its findings and insights from a continuing study on the effects of questionnaire length and fatigue on response quality (see a POV brief here). A main concern, according to the researchers, is that respondents, rather than dropping out in the middle of an online questionnaire, actually continue but pay less attention to questions and devote less effort to answering them, hence decreasing the quality of response data.

Interestingly, SSI finds that respondents who lose interest drop out mostly by the half-way point of a questionnaire, irrespective of its length, whether it should take ten minutes or thirty minutes to complete. For those who stay, problems may yet arise if fatigue kicks in and the respondent goes on answering questions anyway. As explained by SSI, many respondents like answering online questionnaires; they get absorbed in the task, but they may not notice when they become tired, or they do not feel comfortable leaving before completing the mission, so they simply go on. They may become less accurate, succumb to automatic routines, and give shorter answers to open-ended questions. A questionnaire may take forty minutes to answer, but in the estimation of SSI’s researchers respondents are likely to become less attentive after twenty minutes. The researchers refer to both online and mobile modes of survey. They also show, for example, the effect of presenting a particular group of questions at different stages of the questionnaire.

SSI suggests in its presentation some techniques for mitigating these data-quality problems. Two of the suggestions are highlighted here: (1) dividing the full questionnaire into a few modules (e.g., 2-3), so that respondents are invited to answer each module in a separate session (e.g., a weekly module-session); (2) inserting breaks in the questionnaire that let respondents loosen attention from the task and rest their minds for a few moments — such an intermezzo may serve for a message of appreciation and encouragement to respondents or a short gaming activity.

A different approach, mentioned earlier, aims to facilitate many more lifestyle-application studies by (a) building a core segmentation model once, in a comprehensive study, and (b) performing future application studies for particular products or services using a reduced set of question items for segmentation according to the original core model. This approach is not new. It lowers the burden on the core modelling study from questions on product categories and frees space for such questions in future studies dedicated to specific products and brands. One type of technique is to derive a fixed subset of questions from the original study that are statistically the best predictors of segment membership. However, a more sophisticated technique that implements tailored (adaptive) interviewing was developed back in the 1990s by the researchers Kamakura and Wedel (2).

  • The original model was built as a latent class model; the tailored “real-time” process selected items for each respondent given his or her previous responses. In a simulated test, the majority of respondents were “presented” with fewer than ten items; the average was 16 items (22% of the original calibration set).
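
A hedged sketch of the simpler, fixed-subset idea mentioned above (not Kamakura and Wedel’s adaptive procedure): rank the original battery items by how well they predict segment membership in the core model, and check how accurately a reduced subset reproduces the segments. The data here are synthetic, so the accuracy printed is meaningless except as a demonstration of the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, n_items = 600, 60
items = rng.integers(1, 6, size=(n, n_items))     # 1-5 scale battery (synthetic)
segments = rng.integers(0, 5, size=n)             # segment labels from the core model (synthetic)

# Rank items by their contribution to predicting segment membership
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(items, segments)
top_items = np.argsort(rf.feature_importances_)[::-1][:15]   # keep the 15 'best' items

# How well does the reduced item set reproduce the segments?
score = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                        items[:, top_items], segments, cv=3).mean()
print(sorted(top_items.tolist()), round(score, 2))
```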

Lifestyle segmentation studies are likely to require paying greater rewards to participants. But that may not be enough to keep them in the survey. Computer-based “gamification” tools and techniques (e.g., conditioning rewards on progress in the questionnaire, embedding animation in response scales) may help to some extent, but they may also raise greater concerns about the quality of responses (e.g., answering less seriously, rushing through to collect “prizes”).

The contemporary challenges of conducting lifestyle segmentation research are clear. Nonetheless, so are the advantages and benefits of applying information on consumer lifestyle patterns in marketing and retailing. Lifestyle segmentation is a strategic tool, and the effort to resolve the methodological problems that surface should persist, combining psychographic measures with information from other sources where necessary and possible.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Consumers and Their Brands: Developing Relationship Theory in Consumer Research; Susan Fournier, 1998; Journal of Consumer Research, 24 (March), pp. 343-373.

(2) Lifestyle Segmentation With Tailored Interviewing; Wagner A. Kamakura and Michel Wedel, 1995; Journal of Marketing Research, 32 (Aug.), pp. 308-317.
