Surveys, a major component of marketing research, are in a perpetual state of change and development, and many of the changes in recent years are tied to technological advancement. About fifteen years ago, online surveys delivered over the Internet began to rise as a dominant mode of survey administration; now researchers are pushed to run more of their surveys on mobile devices, namely smartphones and tablets, in addition to, or instead of, administering them on desktop and laptop computers.
Yet important distinctions between these two modes can make the transfer of surveys between them flawed. Just as it was wrong in the past to suggest that questionnaires administered in face-to-face interviews could be transferred seamlessly to telephone interviews, it would be wrong today to assume a seamless transfer of surveys from web browsers on desktops and laptops to mobile browsers (or apps).
In the latest GreenBook Research Industry Trends (GRIT) Report, for Q3-Q4 2015, the authors suggest there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices as well. They find that 45% of respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, "that tells us that over 50% of all surveys are NOT mobile optimized" (p. 14, capitals in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimized. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate and applicable for answering on smartphones or tablets. There can be multiple reasons for a mismatch between these modes when administering a particular survey: the topic, the types of constructs measured and instruments used, the length of the questionnaire, and the target population relevant to the research. Consumers use mobile devices and personal computers differently (e.g., in purpose, depth and time of use), and this is likely to extend to how they approach surveys on these devices.
- The GRIT survey of marketing researchers was conducted among a sample of 1,497 respondents recruited by e-mail and through social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.
Concerns about coverage and reach of a research population have followed online surveys from the beginning. Of the different approaches for constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (e.g., website pop-up survey invitations), the panel methodology has become most prevalent. But this approach is not free of limitations or weaknesses. Panels have a 'peculiar' property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may pose similar problems again, perhaps even more severely: users of smartphones (not every mobile phone can load surveys), and all the more so of tablets, still constitute a sub-population that is not broad enough, and these users have rather specific demographic and lifestyle characteristics.
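To make the coverage concern concrete, here is a minimal simulation sketch in Python, with made-up population parameters: when the tendency to join a panel correlates with the attribute being measured, and non-members have zero selection probability, the panel-based estimate drifts away from the population value.

```python
import random

random.seed(7)

# Hypothetical population of 100,000 consumers. Each has a true attitude
# score, and a probability of having joined an online access panel that
# rises with tech affinity; non-joiners have zero chance of selection.
population = []
for _ in range(100_000):
    tech_affinity = random.random()                      # 0..1
    score = 5 + 3 * tech_affinity + random.gauss(0, 1)   # attitude correlates with affinity
    in_panel = random.random() < 0.1 * tech_affinity     # heavier tech users join panels
    population.append((score, in_panel))

true_mean = sum(s for s, _ in population) / len(population)

# A survey drawn only from panel members reproduces the 'peculiar'
# property noted above: everyone outside the panel is invisible to it.
panel = [s for s, joined in population if joined]
sample = random.sample(panel, 1_000)
sample_mean = sum(sample) / len(sample)

print(f"true population mean: {true_mean:.2f}")    # about 6.5
print(f"panel-based estimate: {sample_mean:.2f}")  # noticeably higher, about 7.0
```

No weighting of the panel sample can recover respondents who were never reachable in the first place, which is the essence of the coverage problem.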
- Different sources of contact data and channels are used to approach consumers to participate in surveys. Companies conduct surveys among their own customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may likewise be asked to take part in surveys (commissioned by the administrator).
Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; those difficulties have only worsened in recent years, so that online surveys remain the viable alternative and in some situations are even superior. Online self-administered questionnaires (SAQs) have, of course, genuine advantages of their own, such as the ability to present images and videos, interactive response tools, and greater freedom for respondents to choose when to fill in the questionnaire. However, as with earlier modes of data collection for surveys, response behaviour may differ between online surveys answered on personal computers and on mobile devices (consider the difficulty of controlling what respondents do when filling in SAQs on their own).
The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box: 34%). Marketing researchers are more satisfied with the timeliness of fielding, the purchase process, ease of accessing a panel, and customer service (49%-54%). [Note: the 33% compares with about 20% for 'quantity' and 'cost' and about 12% for the other aspects.]
The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the 'Weaknesses' quadrant, generating less satisfaction while being of higher importance. Customer service and the purchase process form 'Key Strengths', being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a 'Vulnerability' whereas access and timeliness are 'Assets'. The 'Weaknesses' quadrant is especially troubling because it includes the properties that define the essence of a panel as a framework for repeatedly drawing samples, its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
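To illustrate the quadrant logic, here is a minimal sketch in Python. The satisfaction figures for quality, quantity and cost are those quoted above; the individual figures for the four service aspects (given only as a 49%-54% range) and all importance scores are assumptions for illustration, since the report derives importance statistically.

```python
# Top-2-box satisfaction (%): quality, quantity and cost are from the report;
# the four service-aspect values (49%-54% as a group) and every importance
# score are assumed here purely for illustration.
aspects = {
    # name: (satisfaction %, derived importance)
    "quality of respondents":  (26, 0.9),
    "quantity of respondents": (36, 0.8),
    "cost":                    (34, 0.4),
    "customer service":        (52, 0.7),
    "purchase process":        (50, 0.7),
    "ease of access":          (54, 0.3),
    "timeliness of fielding":  (49, 0.3),
}

# Split each dimension at its mean to form the four quadrants.
sat_mid = sum(s for s, _ in aspects.values()) / len(aspects)
imp_mid = sum(i for _, i in aspects.values()) / len(aspects)

def quadrant(sat: float, imp: float) -> str:
    if imp >= imp_mid:
        return "Key Strength" if sat >= sat_mid else "Weakness"
    return "Asset" if sat >= sat_mid else "Vulnerability"

for name, (sat, imp) in aspects.items():
    print(f"{name:<24} -> {quadrant(sat, imp)}")
```

With these inputs the classification reproduces the report's reading: quality and quantity fall into 'Weaknesses', customer service and purchase process into 'Key Strengths', cost is a 'Vulnerability', and access and timeliness are 'Assets'.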
Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; they may look broadly into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations consumers establish. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. Advisedly, the findings and lessons on decision processes may be validated and expanded by using other types of methods, such as verbal protocols, eye tracking and mouse tracking (on web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.
Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with the consumers. Data from recordings of behaviour or measures of unconscious responses may lack important context from the consumer's viewpoint, which may render those findings difficult to interpret correctly. Conscious statements by consumers about their thoughts, feelings, experiences and actions may not be fully accurate or complete, but they do represent what consumers have in mind, which often enough guides their behaviour; we just need to ask them in an appropriate and methodical way.
The examples below demonstrate why different approaches should be used collaboratively to complement each other, and how surveys can make their own contribution to the whole story:
- Volumes of data on actions or operations performed by consumers, as entailed in the framework of Big Data, provide 'snapshots' or 'slices' of behaviour, but seem to lack the context of consumer goals or mindsets needed to connect them meaningfully. One has to infer or guess indirectly what made the behaviour occur as it did.
- Big Data also refers to volumes of verbatim content in social media networks, where the sheer amount of data gives the illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
- Methods of neuroscience inform researchers about neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we can reduce thoughts, feelings and emotions to neural activity in the brain, we would still miss the subjective experience of the consumers.
Marketing researchers should not be expected to move all their online surveys to mobile devices, at least not as long as these co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able, to respond to a survey on any of the computing devices they own, at a time and place convenient to them. That is indeed a commendably liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.
Mobile surveys can be very limiting in the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most) with questions that permit quick responses is unlikely to be suitable for adequately studying many of the constructs discussed above in order to build a coherent picture of consumers' mindsets and related behaviours. Such surveys may be suitable for collecting particular types of information, and perhaps even have an advantage at this, as suggested shortly.
According to the GRIT report, 36% of researcher respondents estimate that the online surveys their companies carry out take up to 10 minutes on average (short); 29% estimate their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.
These duration estimates refer to online surveys in general, and the authors note that the longer surveys in particular would be unsuitable as mobile surveys. For example, 16% of respondents state that their online surveys take more than 20 minutes, which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.
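A quick back-of-envelope check, a sketch under assumed bracket midpoints (the report gives only the category shares and the overall average), shows how heavily that long tail weighs on the 15-minute average: for the shares to reconcile with the overall mean, the 'long' surveys must average roughly 24 minutes.

```python
# Category shares and the 15-minute overall average are from the report;
# the bracket midpoints assumed for 'short' and 'medium' are guesses.
shares    = {"short": 0.36, "medium": 0.29, "long": 0.35}
midpoints = {"short": 7.5, "medium": 13.0}   # minutes (assumed)

overall_avg = 15.0

# Mean duration the 'long' (16+ minutes) category must have for the
# shares to reconcile with the overall average:
known = sum(shares[k] * midpoints[k] for k in midpoints)
implied_long_mean = (overall_avg - known) / shares["long"]
print(f"implied mean of long surveys: {implied_long_mean:.1f} minutes")  # about 24.4
```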
There are some noteworthy differences between research suppliers and clients. The main finding to note is that clients are pressing for shorter surveys, of a kind that may also be answerable on mobile devices:
- Whereas just about 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
- Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
- Suppliers also have a somewhat stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).
Researchers on the supplier side appear to be more aware of, and sensitive to, the durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys that clients push for.
- Interestingly, the report shows that the average estimated length in practice is similar to the maximal length respondents think an online survey should take. The authors propose these results can be summed up as "whatever we answered previously as the average length, is the maximal length". They acknowledge not asking specifically about mobile surveys, for which the accepted maximum is 10 minutes. That 10-minute limit accords more with clients, 52% of whom state a maximum of up to 10 minutes for online surveys, whereas only 36% of suppliers do (32% of suppliers choose 11-15 minutes as the maximum, above the accepted maximum for mobile).
Online surveys designed for personal computers are also subject to time limits, in view of respondents' expected attention spans, yet the limits are less strict than on mobile devices. Furthermore, the PC mode allows more flexibility in the variety and sophistication of questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase 'market research' on their personal computers and conduct quick last-minute queries during the shopping trip on their smartphones); this works to the benefit of online surveys on personal computers as well. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)
Online surveys for mobile devices and for desktops/laptops do not have to be the same in questionnaire content (adapting appearance to device and screen is just part of the matter). First, there is justification for designing surveys specifically for mobile devices. Such surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. The subjects proposed here are all afforded by System 1 (automatic) processing: intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.
Second, it would be better to compose and design separate survey questionnaires, of different lengths, for personal computers and for mobile devices. Trying to impose a fifteen-minute online survey on respondents using mobile devices carries a considerable risk of early break-off or, worse, of diminishing quality of responses as the survey goes on. At the least, a short version of the questionnaire should be channelled to the mobile device, though that alone would not resolve issues of question types that do not fit the device. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and forfeit much of their business value.
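In practice, channelling the short version to mobile devices can be as simple as routing by detected device type at the survey's entry point. The sketch below assumes hypothetical questionnaire identifiers and a plain user-agent heuristic; real survey platforms expose their own routing hooks.

```python
import re

# Hypothetical questionnaire identifiers; a real survey platform would
# supply its own IDs and routing mechanism.
FULL_QUESTIONNAIRE = "brand_study_full"    # ~15 minutes, for desktops/laptops
SHORT_QUESTIONNAIRE = "brand_study_short"  # ~5-7 minutes, for smartphones

# 'Mobi' appears in most mobile-browser user agents (a common heuristic).
# User-agent sniffing is approximate, and tablets are ambiguous: modern
# iPadOS, for instance, presents a desktop-style user agent.
MOBILE_PATTERN = re.compile(r"Mobi")

def route_questionnaire(user_agent: str) -> str:
    """Return the questionnaire version suited to the respondent's device."""
    if MOBILE_PATTERN.search(user_agent):
        return SHORT_QUESTIONNAIRE
    return FULL_QUESTIONNAIRE

# Example: a smartphone respondent is channelled to the short version.
ua = "Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 Mobile Safari/537.36"
print(route_questionnaire(ua))  # -> brand_study_short
```

A production implementation might also consider screen size or simply ask respondents which device they are using; the point is that the routing decision, not just the visual rendering, should be device-aware.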
Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys merely to obtain greater cooperation from consumers will only achieve the opposite: it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capabilities of surveys to provide intelligent and valuable insights, particularly by designing surveys that are most compatible with the mode in which each survey is administered.
Ron Ventura, Ph.D. (Marketing)