
Posts Tagged ‘Questionnaire’

Surveys, a major component of marketing research, seem to be in perpetual change and development. Many of the changes in recent years are tied to technological advancement. About fifteen years ago online surveys, delivered over the Internet, began to rise as a dominant mode of survey administration; now researchers are pushed to run more of their surveys on mobile devices, namely smartphones and tablets, in addition to, or instead of, administering them on desktop and laptop computers.

Yet some important distinctions between those two modes can make the transfer of surveys between them flawed. Just as much as it was wrong to suggest in the past that survey questionnaires administered in face-to-face interviews could be seamlessly transferred to phone interviews, it would be wrong today to suggest a seamless transfer of surveys from web browsers on desktops/laptops to mobile browsers (or apps).

In the latest Greenbook Research Industry Trends (GRIT) Report, for Q3-Q4 2015, the authors suggest that there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices. They find that 45% of their respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, “that tells us that over 50% of all surveys are NOT mobile optimized” (p. 14, capitals in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimized. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate for answering on smartphones or tablets. There could be multiple reasons for a mismatch between these modes when administering a particular survey: the topic, the types of constructs measured and instruments used, the length of the questionnaire, and the target population relevant to the research. Consumers use mobile devices and personal computers differently (e.g., purpose, depth and time), and these differences are likely to extend to how they approach surveys on each device.

  • The GRIT survey of marketing researchers was conducted in a sample of 1,497 respondents recruited by e-mail and social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.

Concerns about coverage and reach of a research population have followed online surveys from the beginning. Of the different approaches for constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (i.e., website pop-up survey invitations), the panel methodology has become the most prevalent. But this approach is not free of limitations or weaknesses. Panels have a ‘peculiar’ property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may again pose similar problems, perhaps even more severely, because users of smartphones (not every mobile phone can load surveys), and even more so of tablets, constitute a sub-population that is not yet broad enough and that has rather specific demographic and lifestyle characteristics.

  • Different sources of contact data and channels are used to approach consumers to participate in surveys. Companies conduct surveys among customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may likewise be asked to take part in surveys (commissioned by the administrator).

Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; these difficulties have only worsened in recent years, so that online surveys remain the viable alternative, and in some situations are even superior. Online self-administered questionnaires (SAQ) of course have their own genuine advantages, such as the ability to present images and videos, interactive response tools, and greater freedom for respondents to choose when to fill in the questionnaire. However, as with former modes of data collection for surveys, response behaviour may differ between online surveys answered on personal computers and on mobile devices (one should consider the difficulty of controlling what respondents do when filling in SAQs on their own).

The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box 34%). Marketing researchers are more satisfied with the timeliness of fielding, the purchase process, ease of accessing a panel, and customer service (49%-54%). [Note: the 33% figure compares with ~20% for ‘quantity’ and ‘cost’ and ~12% for other aspects.]

The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the ‘Weaknesses’ quadrant, as they generate less satisfaction while being of higher importance. Customer service and the purchase process form ‘Key Strengths’, being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a ‘Vulnerability’ whereas access and timeliness are ‘Assets’. The ‘Weaknesses’ quadrant is especially troubling because it includes properties that define the essence of the panel as a framework for repeatedly extracting samples, its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
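The quadrant logic described above is a plain 2×2 classification and can be sketched in a few lines of Python. The satisfaction figures echo those quoted earlier, but the importance scores and the cut-off thresholds are illustrative assumptions, not values from the GRIT report.

```python
# Hypothetical satisfaction (top-2-box %) and derived-importance scores for
# panel aspects; importance values and thresholds are illustrative only.
aspects = {
    "quality of respondents":  (26, 0.8),
    "quantity of respondents": (36, 0.7),
    "customer service":        (54, 0.7),
    "purchase process":        (52, 0.6),
    "cost":                    (34, 0.4),
    "access":                  (50, 0.3),
    "timeliness":              (49, 0.3),
}

SAT_CUT, IMP_CUT = 45, 0.5  # illustrative thresholds splitting the map

def quadrant(satisfaction, importance):
    """Classify an aspect into one of the four GRIT-style quadrants."""
    if importance >= IMP_CUT:
        return "Key Strength" if satisfaction >= SAT_CUT else "Weakness"
    return "Asset" if satisfaction >= SAT_CUT else "Vulnerability"

for name, (sat, imp) in aspects.items():
    print(f"{name}: {quadrant(sat, imp)}")
```

Under these assumed thresholds the two respondent-related aspects land in ‘Weaknesses’ and cost in ‘Vulnerability’, matching the report’s map.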

Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; they may broadly look into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations established by consumers. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. Prudently, the findings and lessons on decision processes may be validated and expanded by using other types of methods, such as verbal protocols, eye tracking and mouse tracking (web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.

Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with the consumers. Data from recording of behaviour or measures of unconscious responses may lack important context from the consumer viewpoint that may render those findings difficult to interpret correctly. Conscious statements of consumers on their thoughts, feelings, experiences and actions may not be fully accurate or complete but they do represent what they have in mind and often enough guide their behaviour — we just need to ask them in an appropriate and methodic way.


The examples below demonstrate why different approaches should be used collaboratively to complement each other, and how surveys can make their own contribution to the whole story:

  • Volumes of data on actions or operations performed by consumers, as collected in the framework of Big Data, provide ‘snapshots’ or ‘slices’ of behaviour, but seem to lack the context of consumer goals or mindsets needed to meaningfully connect them. One has to infer or guess indirectly what made the behaviour occur as it did.
  • Big Data also refers to volumes of verbatim comments in social media networks, where the sheer amount of data gives the illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
  • Methods of neuroscience inform researchers about neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we can reduce thoughts, feelings and emotions to neural activity in the brain, we would miss the subjective experience of the consumers.

 

Marketing researchers are not expected to move all their online surveys to mobile devices, at least not as long as these co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able to respond to, a survey on any of the computing devices they hold, at a time and place convenient to them. That is indeed a commendable, liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.

Mobile surveys could be very limiting in terms of the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most) with questions that permit quick responses is unlikely to be suitable for adequately studying many of the constructs previously discussed, or for building a coherent picture of consumers’ mindsets and related behaviours. These surveys may be suitable for collecting particular types of information, and perhaps even have an advantage at this, as suggested shortly.

According to the GRIT report, 36% of researcher-respondents estimate that the online surveys their companies carry out take up to 10 minutes on average (short); 29% estimate their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.
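A quick sanity check shows these figures hang together: weighting assumed midpoints for the three duration bins by the reported shares lands in the mid-teens, consistent with the 15-minute overall average. The midpoints are guesses for illustration, not values from the report.

```python
# Rough consistency check of the reported ~15-minute overall average.
shares    = [0.36, 0.29, 0.35]   # short (<=10 min), medium (11-15), long (16+)
midpoints = [8.0, 13.0, 22.0]    # assumed bin midpoints in minutes (guesses)

avg = sum(s * m for s, m in zip(shares, midpoints))
print(round(avg, 1))  # lands in the mid-teens
```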

These duration estimates correspond to online surveys in general, and the authors note that longer surveys in particular would be unsuitable as mobile surveys. For example, 16% of respondents state that their online surveys take more than 20 minutes, which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.

There are some noteworthy differences between research suppliers and clients. The main finding to notice is that clients are pressing for shorter surveys, of lengths that may also be workable on mobile devices:

  • Whereas just under 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
  • Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
  • Suppliers also have a little stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).

Researchers on the supplier side appear to be more aware of, and sensitive to, the time durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys that clients push for.

  • Interestingly, the report shows that the average estimated time length in practice is similar to the maximal length respondents think an online survey should take. The authors propose these results can be summed up as “whatever we answered previously as the average length is the maximal length”. They acknowledge not asking specifically about mobile surveys, for which the accepted maximum is 10 minutes. This limit is more in accordance with clients’ stated maximum for online surveys (52%), whereas only 36% of suppliers report such a goal (32% of suppliers choose 11-15 minutes as the maximum, above the expected maximum for mobile).

Online surveys designed for personal computers are subject to time limits, in view of respondents’ expected attention spans, yet those limits are expected to be less strict than for mobile devices. Furthermore, the PC mode allows more flexibility in the variability and sophistication of the questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase ‘market research’ on their personal computers and conduct last-minute quick queries during the shopping trip on their smartphones) — this works also to the benefit of online surveys on personal computers. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)

Online surveys for mobile devices and for desktops/laptops do not have to share the same questionnaire content (adapting appearance to device and screen is just part of the matter). First, there is justification for designing surveys specifically for mobile devices. These surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. The subjects proposed here all draw on System 1 (Automatic) processing — intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.

Second, it would be better to compose and design separate survey questionnaires, of different lengths, for personal computers and for mobile devices. Trying to impose an online survey of fifteen minutes on respondents using mobile devices runs a considerable risk of early break-off or, worse, of diminishing quality of responses as the survey goes on. At the least, a short version of the questionnaire should be channeled to the mobile device — though this still would not resolve issues of unsuitable question types. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and forfeit much of their business value.

Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys just to obtain greater cooperation from consumers will only achieve the opposite — it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capabilities of surveys to provide intelligent and valuable insights, particularly by designing surveys that are best compatible with the mode in which each survey is administered.

Ron Ventura, Ph.D. (Marketing)


Psychographic-oriented research of consumer lifestyles, based on surveys for collecting the data, is losing favour among marketers and researchers. Descriptors of consumer lifestyles are applied especially for segmentation, by means of statistical clustering methods and other approaches (e.g., latent class modelling). Identifying lifestyle segments has been recognized as a strategic instrument for marketing planning, because this kind of segmentation has been helpful and insightful in explaining variation in consumer behaviour where “dry” demographic descriptors cannot reach the deeper intricacies. But with the drop in response rates to surveys over the years, even on the Internet, and further problematic issues in consumer responses to survey questionnaires (by interview or self-administered), lifestyle research using psychographic measures is becoming less feasible, and that is regrettable.

The questionnaires required for building lifestyle segmentation models are typically long, using multi-item “batteries” of statements (e.g., responses on a disagree-agree scale) and other types of questions. Initially (1970s), psychographics were represented mainly by Activities, Interests and Opinions (AIO). The measures cover a wide span of topics, from home and work, shopping and leisure, to politics, religion and social affairs. But this approach was criticised for lacking a sound theoretical ground to direct the selection of the lifestyle aspects that are most important, relevant to and explanatory of consumer behaviour. Since the 1980s researchers have been seeking better-founded, psychology-driven bases for lifestyle segmentation, particularly social relations among people and the sets of values people hold. The Values and Lifestyles (VALS) model released by the Stanford Research Institute (SRI) in 1992 incorporated motivation and additional areas of psychological traits (VALS is now licensed to Strategic Business Insights). The current version of the American model is based on that same eight-segment typology, with some modifications necessary to keep up with the times (e.g., the rise of advanced digital technology) — the conceptual model is structured around two prime axes, (a) resources (economic, mental) and (b) motivation or orientation. Scale items corresponding to the AIOs continue to be used, but they are chosen to represent constructs in broader or better-specified contexts.
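To make the clustering idea behind such segmentation concrete, here is a minimal sketch of deriving two lifestyle segments from battery responses with a plain k-means routine. The item profiles, segment names, cluster count and simulated data are all assumptions for illustration, not part of any published model; real AIO batteries run to dozens or hundreds of items.

```python
import random

# Simulate respondents answering a 6-item agree/disagree battery (1-5 scale)
# around two hypothetical segment profiles, then recover the segments.
random.seed(7)

def simulate_respondent(profile):
    # Answers scatter by at most one scale point around the segment profile
    return [max(1, min(5, p + random.choice([-1, 0, 0, 1]))) for p in profile]

home_oriented = [5, 4, 2, 1, 2, 4]   # hypothetical answer profile
tech_oriented = [1, 2, 5, 5, 4, 2]   # hypothetical answer profile
data = ([simulate_respondent(home_oriented) for _ in range(30)] +
        [simulate_respondent(tech_oriented) for _ in range(30)])

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans2(points, iters=20):
    # k=2; farthest-point initialisation keeps the toy example deterministic
    c0 = points[0]
    c1 = max(points, key=lambda p: dist2(p, c0))
    centroids = [c0, c1]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            nearer = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
            clusters[nearer].append(p)
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids

centers = kmeans2(data)
```

In practice latent class models, as mentioned above, are often preferred over plain k-means because they give probabilistic segment memberships, but the mechanics of grouping similar answer patterns are the same in spirit.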

Yet the challenge holds even for the better-established models: how to choose the most essential aspects and obtain a set of question items consumers are likely to answer in full. Techniques are available for constructing a reduced set of items (e.g., a couple of dozen) for subsequent segmentation studies relying on a common base model, but a relatively large set (e.g., several dozen to a few hundred items) would still be needed for building the original model of lifestyle segments. It is a hard challenge considering, in particular, the functions and limitations of the more popular survey modes nowadays, online and mobile.

Lifestyles reflect, in general, the patterns or ways in which people run their ordinary lives, while uncovering something of the underlying motives or goals. However, ‘lifestyles’ have been given various meanings, and researchers follow different interpretations in constructing their questionnaires. The problem may lie in the difficulty of constructing a coherent and consensual theory of ‘lifestyles’ that would conform to almost any area (i.e., product and service domain) where consumer behaviour is studied. This may well explain why lifestyle segmentation research more frequently concentrates on answering marketing questions with respect to a particular type of product or service (e.g., banking, mobile telecom, fashion, food). Doing so can help to select more effectively the aspects the model should focus on, and thereby also reduce the length of the questionnaire. The following are some of the concepts lifestyle models may incorporate and emphasise:

  • Values that are guiding and driving consumers (e.g., collectivism vs. individualism, modernism vs. traditionalism, liberalism vs. conservatism);
  • In the age of Internet and social media consumers develop new customs of handling social relations in the virtual world versus the physical world;
  • In view of the proliferation of digital, Internet and mobile communication technologies and products, it is necessary to address differences in consumer orientation and propensity to adopt and use those products (e.g., ‘smart’ products of various sorts);
  • How consumers balance differently between work and home or family and career is a prevailing issue at all times;
  • Lifestyles may be approached through the allocation of time between duties and other activities — for example, how consumers allocate their leisure time between spending it with family, friends or alone (e.g., hobbies, sports, in front of a screen);
  • Possible avenues for developing consumer relationships with brands as consumers integrate them into their everyday way of living (e.g., in reference to a seminal paper by Susan Fournier, 1998)(1);
  • Taking account of aspects of decision-making processes as they may reflect overall on the styles of shopping and purchasing behaviour of consumers (e.g., need for cognition, tendency to process information analytically or holistically, the extent to which consumers search for information before their decision).

Two more issues deserve special attention: 

  1. Lifestyle is often discussed alongside personality. On one hand, a personality trait induces a consistent form of response to some shared stimulating conditions across a variety of situations or occasions (e.g., responding logically or angrily in any situation that creates stress or conflict, offering help whenever seeing someone else in distress). Therefore, personality traits can contribute to the model by adding generalisation and stability to segment profiles. On the other hand, since lifestyle aspects describe particular situations and contexts whereas personality traits generalize across them, it is argued that the two should not be mixed as clustering variables, but they may be applied in complementary modules of a segmentation model.
  2. Products that consumers own and use, or services they utilize, can figuratively illustrate their type of lifestyle. But including a specific product in the model framework may hamper the researcher’s ability to make later inferences and predictions on consumer behaviour for the same product or a similar one. Therefore, it is advisable to refer carefully to more general types of products for the purpose of implying or reflecting a pattern of lifestyle (e.g., smartphones and technology literacy). Likewise, particular brand names should be mentioned only for an important symbolic meaning (e.g., luxury fashion brands, luxury cars).

Alternative approaches attempt to portray lifestyles without relying on information elicited from consumers describing themselves; the information is collated mostly from secondary databases. Geodemographic models segment and profile neighbourhoods and their households (e.g., PRIZM by Claritas-Nielsen and MOSAIC by Experian). In addition to demographics, they also include information on housing, products owned (e.g., home appliances), media used, as well as activities in which consumers may participate. Marketers are expected, by implication, to infer the lifestyle of a household based, for instance, on the appliances or digital products in the house, on newspaper or magazine subscriptions, on clubs (e.g., sports), and on associations that members of the household belong to. Or consider another behavioural approach, based on clustering and “basket” (associative) analyses of the sets of products purchased by consumers. These models were not originally developed to measure lifestyles. Their descriptors may vicariously indicate the lifestyle of a household (usually not of an individual), but they lack any depth in describing and classifying how consumers manage their lives, nor do they enquire why consumers live that way.
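The “basket” analysis mentioned above can be illustrated with a minimal sketch: counting how often pairs of products are purchased together. The baskets and product names below are invented for the example; real implementations use association-rule measures (support, confidence, lift) over millions of transactions.

```python
from collections import Counter
from itertools import combinations

# Count co-purchases of product pairs across (invented) shopping baskets.
baskets = [
    {"organic vegetables", "herbal tea", "yoga mat"},
    {"organic vegetables", "herbal tea"},
    {"energy drink", "frozen pizza", "game controller"},
    {"energy drink", "game controller"},
]

pair_counts = Counter()
for basket in baskets:
    # sorted() gives each pair a canonical order so (a, b) == (b, a)
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Frequently co-purchased pairs may vicariously hint at a lifestyle pattern
for pair, n in pair_counts.most_common(2):
    print(pair, n)
```

The co-occurrence counts only hint at a pattern; as the text notes, they say nothing about how or why the household lives that way.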

The evolving difficulties in carrying out surveys are undeniable. Recruiting consumers as respondents and keeping them interested throughout the questionnaire is becoming more effortful, demanding more financial and operational resources and greater ingenuity. Data from surveys may be complemented by data originating from internal and external databases available to marketing researchers, resolving at least part of the problem. A lifestyle questionnaire usually extends beyond the items related to segmentation variables, with further questions for model validation and for studying how consumers’ attitudes and behaviour in a product domain of interest are linked with their lifestyles. Some of the information collected through the survey may instead be obtained from databases, sometimes even more reliably than from respondents’ self-reports. One of the more welcome applications of geodemographic segmentation models in this regard is using information on segment membership as a sampling variable for a survey, whereby characteristics from the geodemographic model can also be combined with psychographic characteristics from the survey questionnaire in subsequent analyses. Furthermore, there are now better opportunities to integrate survey-based data with behavioural data from companies’ internal customer databases (e.g., CRM) for constructing lifestyle segments of their customers.

Long lifestyle questionnaires are particularly subject to concerns about the risk of respondent drop-out and decreased quality of response data as respondents progress through the questionnaire. The research firm SSI (Survey Sampling International) recently presented in a webinar (February 2015, via Quirk’s) findings and insights from a continuing study on the effects of questionnaire length and fatigue on response quality (see a POV brief here). A main concern, according to the researchers, is that respondents, rather than dropping out in the middle of an online questionnaire, actually continue but pay less attention to questions and devote less effort to answering them, hence decreasing the quality of response data.

Interestingly, SSI finds that respondents who lose interest drop out mostly by the halfway point of a questionnaire, irrespective of its length, whether it should take ten minutes or thirty minutes to complete. For those who stay, problems may yet arise if fatigue kicks in and the respondent goes on answering questions anyway. As explained by SSI, many respondents like answering online questionnaires; they get into the flow, but they may not notice when they become tired, or do not feel comfortable leaving before completing the mission, so they simply go on. They may become less accurate, succumb to automatic routines, and give shorter answers to open-ended questions. A questionnaire may take forty minutes to answer, but in the estimation of SSI’s researchers respondents are likely to become less attentive after twenty minutes. The researchers refer to both online and mobile survey modes. They also show, for example, the effect of presenting a particular group of questions at different stages of the questionnaire.

SSI suggests in its presentation some techniques for mitigating those data-quality problems. Two of the suggestions are highlighted here: (1) dividing the full questionnaire into a few modules (e.g., 2-3), so that respondents are invited to answer each module in a separate session (e.g., a weekly module-session); (2) inserting breaks in the questionnaire that let respondents loosen their attention from the task and rest their minds for a few moments — an intermezzo may serve for a message of appreciation and encouragement to respondents, or for a short gaming activity.

A different approach, mentioned earlier, aims to facilitate the conduct of many more lifestyle-application studies by (a) building a core segmentation model once, in a comprehensive study; and (b) performing future application studies for particular products or services using a reduced set of question items for segmentation according to the original core model. This approach is indeed not new. It lowers the burden on the core modelling study from questions on product categories and frees space for such questions in future studies dedicated to specific products and brands. One type of technique is to derive a fixed subset of questions from the original study that are statistically the best predictors of segment membership. However, a more sophisticated technique that implements tailored (adaptive) interviewing was developed back in the 1990s by the researchers Kamakura and Wedel (2).

  • The original model was built as a latent class model; the tailored “real-time” process selected items for each respondent given his or her previous responses. In a simulated test, the majority of respondents were “presented” with fewer than ten items; the average was 16 items (22% of the original calibration set).
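The simpler, fixed-subset idea described above — keeping the items that best predict segment membership — can be sketched as follows. The scoring rule (a between/within variance ratio), the toy items and the segment labels are all assumptions for illustration; this is not the Kamakura-Wedel adaptive procedure itself.

```python
# Score each battery item by how well it separates known segments, then keep
# only the top-scoring items for reduced future questionnaires (toy data).
def item_score(values, labels):
    """Between-group / within-group sum of squares; higher = more discriminating."""
    groups = {}
    for v, g in zip(values, labels):
        groups.setdefault(g, []).append(v)
    grand_mean = sum(values) / len(values)
    between = sum(len(vs) * (sum(vs) / len(vs) - grand_mean) ** 2
                  for vs in groups.values())
    within = sum(sum((v - sum(vs) / len(vs)) ** 2 for v in vs)
                 for vs in groups.values())
    return between / within if within else float("inf")

segment = [0, 0, 0, 0, 1, 1, 1, 1]                  # known segment membership
battery = {
    "likes_new_gadgets": [5, 4, 5, 4, 1, 2, 1, 2],  # separates the segments
    "drinks_coffee":     [3, 2, 4, 3, 3, 2, 4, 3],  # uninformative
}

ranked = sorted(battery, key=lambda item: item_score(battery[item], segment),
                reverse=True)
reduced_set = ranked[:1]  # keep only the most predictive item(s)
```

The tailored approach goes further by choosing the next item per respondent, conditional on answers so far, rather than fixing one subset for everyone.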

Lifestyle segmentation studies are likely to require offering greater rewards to participants. But that may not be enough to keep them in the survey. Computer-based “gamification” tools and techniques (e.g., conditioning rewards on progress in the questionnaire, embedding animation in response scales) may help to some extent, but they may also raise greater concerns about response quality (e.g., answering less seriously, rushing through to collect “prizes”).

The contemporary challenges of conducting lifestyle segmentation research are clear. Nonetheless, so are the advantages and benefits of applying information on consumer lifestyle patterns in marketing and retailing. Lifestyle segmentation is a strategic tool, and effort should persist to resolve the methodological problems that surface, combining psychographic measures with information from other sources where necessary and possible.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Consumers and Their Brands: Developing Relationship Theory in Consumer Research; Susan Fournier, 1998; Journal of Consumer Research, 24 (March), pp. 343-373.

(2) Lifestyle Segmentation With Tailored Interviewing; Wagner A. Kamakura and Michel Wedel, 1995; Journal of Marketing Research, 32 (Aug.), pp. 308-317.
