Posts Tagged ‘Behaviour’

Revelations about the Facebook–Cambridge Analytica affair last month (March 2018) provoked a heated public discussion about data privacy and users’ control over their personal information in social media networks, particularly on Facebook. The central allegation in this affair is that personal data from social media was misused to aid the winning presidential campaign of Donald Trump. That offers ‘juicy’ material for anyone interested in American politics, but the importance of the affair goes well beyond it: the concerns it has raised radiate to the daily lives of millions of users-consumers socially active on Facebook’s platform, and could potentially touch a multitude of commercial marketing contexts (i.e., products and services) in addition to political marketing.

Having a user account as a member of Facebook’s social media network is free of charge, a boon hard to resist. In Q2 2017 Facebook surpassed the mark of two billion monthly active users, double the former record of one billion reached five years earlier (Statista). No monetary price is explicitly demanded of users. Yet users are subject to alternative prices, embedded in their activity on Facebook, implicit and less noticeable as a cost to bear.

Some users may realise that the advertisements they receive and see are the ‘price’ they have to tolerate for not having to pay ‘in cash’ for socialising on Facebook. It is less of a burden if the content is informative and relevant to the user. What users are much less likely to realise is how personally related data (e.g., profile, posts and photos, other activity) is used to produce personally targeted advertising, and possibly other forms of direct offerings or persuasive appeals to take action (e.g., a user receives an invitation from a brand, based on a post by his or her friend about a product purchased or photographed). The recent affair exposed — in news reports and in the testimony of CEO Mark Zuckerberg before Congress — not only the direct involvement of Facebook in advertising on its platform but also how permissive it has been in allowing third-party apps to ‘borrow’ users’ information from Facebook.

According to reports on the affair, psychologist Aleksandr Kogan and colleagues developed, as part of academic research, a model to deduce personality traits from the behaviour of users on Facebook. Alongside his position at Cambridge University, Kogan started a company named Global Science Research (GSR) to advance commercial and political applications of the model. In 2013 he launched an app on Facebook, ‘this-is-your-digital-life’, in which Facebook users would answer a self-administered questionnaire on personality traits and some personal background. In addition, the GSR app prompted respondents to give consent to pull personal and behavioural data related to them from Facebook. Furthermore, at that time the app could get access to limited information on friends of respondents — a capability Facebook removed no later than 2015 (The Guardian [1], BBC News: Technology, 17 March 2018).

Cambridge Analytica (CA) contracted with GSR to use its model and the data it collected. According to initial estimates, the app was able to harvest data on as many as 50 million Facebook users; by April 2018 Facebook updated the estimate to 87 million. It is unclear how many of these users were involved in the project for Trump’s campaign, because for that project CA was specifically interested in eligible voters in the US; CA is said to have applied the model and data in other projects (e.g., the pro-Brexit campaign in the UK), and GSR made its own commercial applications of the app and model.

In simple terms, as can be learned from a more technical article in The Guardian [2], the model is constructed around three linkages:

(1) Personality traits (collected with the app) → data on user behaviour on the Facebook platform, mainly ‘likes’ given by each user (possibly additional background information was collected via the app and from users’ profiles);

(2) Personality traits → behaviour in the target area of interest — in the case of Trump’s campaign, past voting behaviour (CA associated geographical data on users with statistics from the US electoral registry).

Since model calibration was based on data from a subset of users who responded to the personality questionnaire, the final stage of prediction applied a linkage:

(3) Data on Facebook user behaviour (→ predicted personality) → predicted voting intention or inclination (applied to the greater dataset of Facebook users-voters).
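To make the three linkages concrete, below is a minimal sketch of how such a two-stage prediction chain could work. It is illustrative only: the data is synthetic, and the choice of ridge regression and logistic regression (via scikit-learn) is my own assumption, not a description of Kogan’s actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)

# Stage 1 (linkage 1): learn personality traits from 'likes' on the
# questionnaire subset. X_likes_s: binary matrix (respondents x pages liked);
# Y_traits: trait scores from the self-administered questionnaire.
X_likes_s = rng.integers(0, 2, size=(300, 50))   # hypothetical sample data
Y_traits = rng.normal(size=(300, 5))             # hypothetical trait scores
trait_model = Ridge(alpha=1.0).fit(X_likes_s, Y_traits)

# Stage 2 (linkage 2): learn voting inclination from traits, using the
# subset whose past voting behaviour could be cross-referenced.
y_vote = rng.integers(0, 2, size=300)            # hypothetical voting labels
vote_model = LogisticRegression().fit(Y_traits, y_vote)

# Prediction (linkage 3): chain both stages over the full pool of users
# for whom only 'likes' are available.
X_likes_all = rng.integers(0, 2, size=(10000, 50))
predicted_traits = trait_model.predict(X_likes_all)
vote_inclination = vote_model.predict_proba(predicted_traits)[:, 1]
```

The point of the chain is that once the two calibrated models exist, only the ‘likes’ matrix is needed for the millions of users who never answered the questionnaire.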

The Guardian [2] suggests that ‘just’ 32,000 American users responded to the personality-political questionnaire for Trump’s campaign (while at least two million users from 11 states were initially cross-referenced with voting behaviour). The BBC gives an estimate of as many as 265,000 users who responded to the questionnaire in the app, corresponding to the larger pool of 87 million users and friends whose data was harvested.

A key advantage credited to the model is that it requires only data on ‘likes’ by users and does not have to use other detailed data from posts, personal messages, status updates, photos, etc. (The Guardian [2]). However, the modelling concept raises some critical questions: (1) How many repeated ‘likes’ of a particular theme are required to infer a personality trait? (The inference should rest on a stable pattern of behaviour in response to a theme or condition across different situations or contexts.) (2) ‘Liking’ is frequently spurious and casual — ‘likes’ do not necessarily reflect thought-out agreement or strong identification with content or with another person or group (e.g., ‘liking’ content on a page may not imply it personally applies to the user who likes it). (3) Since the app was allowed to collect only limited information on a user’s ‘friends’, how much of it could be truly relevant and sufficient for inferring personality traits? On the other hand, for whatever traits could be deduced, data analyst and whistleblower Christopher Wylie, who brought the affair to the public, suggested that the project for Trump had picked up on various sensitivities and weaknesses (‘demons’ in his words). Personalised messages were devised accordingly to persuade or lure voters-users likely to favour Trump to vote for him. This is probably not the way users would want sensitive and private information about them to be utilised.

  • Consider users in need of help who follow and ‘like’ content on pages of support groups for bereaved families (e.g., of soldiers killed in service), for combatting illnesses, or for facing other types of hardship (e.g., economic or social distress): making use of such behaviour for commercial or political gain would be unethical and disrespectful.

Although the GSR app may have properly received the consent of users to draw information about them from Facebook, it is argued that deception was committed on three counts: (a) the consent was awarded for academic use of the data — users were not giving consent to participate in a political or commercial advertising campaign; (b) data on associated ‘friends’, according to Facebook, was allowed at the time only for the purpose of learning how to improve users’ experiences on the platform; and (c) GSR was not permitted at any time to sell or transfer such data to third-party partners. We are in the midst of a ‘blame game’ among Facebook, GSR and CA over the transfer of data between the parties and how it has been used in practice (e.g., to what extent Kogan’s model was actually used in Trump’s campaign). It is a magnificent mess, but this is not the space to delve into its small details. The greater question is what lessons will be learned and what corrections will be made following the revelations.

Mark Zuckerberg, founder and CEO of Facebook, gave testimony before the US Congress in two sessions: a joint session of the Senate Commerce and Judiciary Committees (10 April 2018) and a hearing of the House Energy and Commerce Committee (11 April 2018). [Zuckerberg declined a call to appear in person before a parliamentary committee of the British House of Commons.] Key issues about the use of personal data on Facebook are reviewed below in light of the opening statements and replies Zuckerberg gave to explain the policy and conduct of the company.

Most pointedly, Facebook is charged that despite receiving reports concerning GSR’s app and CA’s use of data in 2015, it failed to ensure in time that personal data in the hands of CA was deleted from their repositories and that users were warned about the infringement (before the 2016 US elections), and that it took at least two years for the social media company to confront GSR and CA more decisively. Zuckerberg answered in his defence that Cambridge Analytica had told them “they were not using the data and deleted it, we considered it a closed case”; he immediately added: “In retrospect, that was clearly a mistake. We shouldn’t have taken their word for it”. This line of defence is acceptable coming from an individual acting privately. But Zuckerberg is not in that position: he is the head of a network of two billion users. Despite his candid admission of a mistake, such conduct is not becoming of a company of the size and influence of Facebook.

At the start of both hearing sessions Zuckerberg voluntarily and clearly took personal responsibility and apologised for mistakes made by Facebook, while committing to take measures (some already taken) to prevent such mistakes from being repeated. A very significant admission by Zuckerberg in the House was his concession: “We didn’t take a broad view of our responsibility, and that was a big mistake” — it goes right to the heart of the problem in Facebook’s approach to the personal data of its users-members. Privacy of personal data may not seem worth money to the company (i.e., vis-à-vis revenue coming from business clients or partners), but the company’s whole network business apparatus depends on its user base. Zuckerberg committed that Facebook under his leadership will never give priority to advertisers and developers over the protection of users’ personal information. He will surely be held to these words.

Zuckerberg argued that the advertising model of Facebook is misunderstood: “We do not sell data to advertisers”. According to his explanation, advertisers describe to Facebook the target groups they want to reach, and Facebook traces those users and places the advertising items. It is less clear who composes and designs the advertising items, which also needs to be based on knowledge of the target consumers-users. However, there seems to be even greater ambiguity and confusion in distinguishing between the use of personal data in advertising by Facebook itself and the access to and use of such data by third-party apps hosted on Facebook, as well as in distinguishing between the types of data about users (e.g., profile, content posted, responses to others’ content) that may be used for marketing actions.

Zuckerberg noted that the ideal of Facebook is to offer people around the world free access to the social network, which means it has to feature targeted advertising. He suggested in the Senate that there will always be a pay-free version of Facebook, yet refrained from saying when, if ever, there will be a paid, advertising-free version. It remained unclear from his testimony what information is exchanged with advertisers and how. Zuckerberg insisted that users have full control over their own information and how it is used. He added that Facebook will not pass personal information to advertisers or other business partners, to avoid an obvious breach of trust, but it will continue to use such information to the benefit of advertisers because that is how its business model works (NYTimes.com, 10 April 2018). It should be noted that whereas users can choose who is allowed to see information like posts and photos they upload for display, that does not seem to cover other types of information about their activity on the platform (e.g., ‘likes’, ‘shares’, ‘follow’ and ‘friend’ relations) and how it is used behind the scenes.

Many users would probably want to continue to benefit from being exempt from paying a monetary membership fee, but they can still be entitled to some control over which adverts they value and which they reject. The smart systems used for targeted advertising may be less intelligent than they purport to be; more feedback from users could help assign them well-selected adverts of real interest, relevance and use to them, and thereby increase efficiency for advertisers.

At the same time, while Facebook may not sell information directly, the greater problem appears to be the information it allows apps of third-party developers to collect about users without their awareness (or rather their attention). In a late wake-up call at the Senate, Zuckerberg said that the company is reviewing app owners who obtain a large amount of user data or use it improperly, and will act against them. Following Zuckerberg’s effort to go into the details of the terms of service and to explain how advertising and apps work on Facebook, and especially how they differ, Issie Lapowsky reflects in Wired: “As the Cambridge Analytica scandal shows, the public seems never to have realized just how much information they gave up to Facebook”. Zuckerberg emphasised that an app can get access to raw user data from Facebook only by permission, yet this standard, according to Lapowsky, is “potentially revelatory for most Facebook users” (“If Congress Doesn’t Understand Facebook, What Hope Do Its Users Have”, Wired, 10 April 2018).

There can be great importance to how an app asks for the permission or consent of users to pull their personal data from Facebook, and how clearly and explicitly the request is presented so that users understand what they agree to. The new General Data Protection Regulation (GDPR) of the European Union, coming into effect within a month (May 2018), is specific on this matter: it requires explicit ‘opt-in’ consent for sensitive data and unambiguous consent for other data types. The request must be clear and intelligible, in plain language, separated from other matters, and include a statement of the purpose of the data processing attached to the consent. It remains to be seen how well this ideal standard is implemented, and whether it extends beyond the EU. Users are of course advised to read such requests for permission to use their data carefully, in whatever platform or app they encounter them, before they proceed. However, even if no information is concealed from users, they may not be attentive enough to comprehend the request correctly. Consumers engaged in shopping often attend to only some prices, remember them inaccurately, and rely on a more general ‘feeling’ about the acceptable price range or its distribution. If applying users’ data for personalised marketing is a form of price they are expected to pay, a company taking this route should approach the data as fairly as it sets monetary prices, regardless of how well its customers are aware of the price.

  • The GDPR specifies that personal data related to an individual is protected if it “can be used to directly or indirectly identify the person”. This leaves room for interpretation as to what types of data about a Facebook user are ‘personal’. If data is used, and even transferred, at an aggregate level of segments, there is little risk of identifying individuals; but for personally targeted advertising or marketing one needs data at the individual level.
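To make the consent requirements above concrete, here is a minimal sketch of how a purpose-bound, opt-in consent request might be recorded. This is an illustration under my own assumptions — the field names and structure are hypothetical, not mandated by the GDPR or drawn from Facebook’s systems:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One explicit consent, bound to a single stated purpose."""
    user_id: str
    purpose: str           # plain-language statement of the processing purpose
    data_categories: list  # e.g. ["profile", "likes"]; sensitive data needs explicit opt-in
    opted_in: bool         # must default to False -- no pre-ticked boxes
    timestamp: datetime

def request_consent(user_id: str, purpose: str, data_categories: list,
                    user_said_yes: bool) -> ConsentRecord:
    # Consent is valid only if the user actively opted in ('opt-in', not 'opt-out'),
    # and it is recorded separately from any other agreement.
    return ConsentRecord(user_id, purpose, data_categories,
                         opted_in=user_said_yes,
                         timestamp=datetime.now(timezone.utc))
```

The essential points the sketch tries to capture are that consent defaults to ‘no’, is attached to one stated purpose, and is kept separate from other matters.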

Zuckerberg agreed that some form of regulation of social media will be “inevitable” but cautioned that “We need to be careful about the regulation we put in place” (Fortune.com, 11 April 2018). Democratic House Representative Gene Green posed a question about the GDPR, which “gives EU citizens the right to opt out of the processing of their personal data for marketing purposes”. When Zuckerberg was asked, “Will the same right be available to Facebook users in the United States?”, he replied: “Let me follow-up with you on that” (The Guardian, 13 April 2018).

The willingness of Mark Zuckerberg to take responsibility for mistakes and apologise for them is commendable. It is regrettable, nevertheless, that Facebook under his leadership did not act a few years earlier to correct those mistakes in its approach and conduct. Facebook should be ready to act in time on its responsibility to protect its users from harmful use of data personally related to them. It can be optimistic and trusting, yet realistic and vigilant. Facebook will need to care as much for the rights and interests of its users as it does for its other stakeholders in order to gain the continued trust of all.

Ron Ventura, Ph.D. (Marketing)

Surveys, a major part of marketing research, seem to be in a perpetual state of change and development. Many of the changes in recent years are tied to technological advancement. About fifteen years ago online surveys — delivered over the Internet — began to rise as a dominant mode of survey administration; now researchers are pushed to run more of their surveys on mobile devices, namely smartphones and tablets, in addition to, or as a replacement for, administering them on desktop and laptop computers.

Yet some important distinctions between the two modes can make the transfer of surveys between them flawed. Just as it was wrong to suggest in the past that survey questionnaires administered in face-to-face interviews could be seamlessly transferred to phone interviews, it would be wrong today to suggest a seamless transfer of surveys from web browsers on desktops/laptops to mobile browsers (or apps).

In the latest Greenbook Research Industry Trends (GRIT) Report, for Q3-Q4 2015, the authors suggest that there is still much room for improvement in adjusting online survey questionnaires to run and display properly on mobile devices. They find that 45% of their respondents on the research supplier side and 30% on the research buyer (client) side claim that their companies design at least three quarters (75%-100%) of their online surveys to work effectively on mobile phones; however, “that tells us that over 50% of all surveys are NOT mobile optimized” (p. 14, capitals in the original). The authors thereby implicitly call on marketing researchers to do much more to get their online surveys fully mobile-optimised. But this is not necessarily a justified or desirable requirement, because not all online surveys are appropriate or applicable for answering on smartphones or tablets. There can be multiple reasons for a mismatch between these modes for administering a particular survey: the topic, the types of constructs measured and instruments used, the length of the questionnaire, and the target population relevant to the research. Consumers use mobile devices and personal computers differently (e.g., in purpose, depth and time), and this is likely to extend to how they approach surveys on these products.

  • The GRIT survey of marketing researchers was conducted in a sample of 1,497 respondents recruited by e-mail and social media channels, of whom 78% are on the supplier side and 22% on the client side. Nearly half (46%) originate in North America and a little more than a quarter (27%) come from Europe.

Concerns about the coverage and reach of a research population have followed online surveys from the beginning. Of the different approaches to constructing samples, including sampling frames (e.g., e-mail lists) and ad-hoc samples (i.e., website pop-up survey invitations), the panel methodology has become the most prevalent. But this approach is not free of limitations or weaknesses. Panels have a ‘peculiar’ property: if you do not join a panel, you have zero probability of being invited to participate in a survey. Mobile surveys may pose similar problems again, perhaps even more severely, because users of smartphones (not every mobile phone is able to load surveys), and moreover of tablets, constitute a sub-population that is not yet broad enough, and those users also have rather specific demographic and lifestyle characteristics.

  • Different sources of contact data and channels are used to approach consumers to participate in surveys. Companies conduct surveys among customers for whom they have e-mail addresses. Subscribers to news media websites may also be included in a survey panel of the publisher. Members of forums, groups or communities in social media networks may likewise be asked to take part in surveys (commissioned by the administrator).

Decreasing response rates in phone and face-to-face surveys were an early driver of online surveys; these difficulties have only got worse in recent years, so that online surveys remain the viable alternative, and in some situations are even superior. Online self-administered questionnaires (SAQs) of course have their own genuine advantages, such as the ability to present images and videos, interactive response tools, and greater freedom for respondents to choose when to fill in the questionnaire. However, as with former modes of survey data collection, response behaviour may differ between online surveys answered on personal computers and on mobile devices (consider the difficulty of controlling what respondents do when filling in SAQs on their own).

The GRIT report reveals that the most troubling aspects of panels for marketing researchers are the quantity and quality of respondents available through those sampling pools (top-2-box satisfaction: 36% and 26%, respectively). In particular, 33% are not at all satisfied or only slightly satisfied with the quality of respondents. The cost of panels also generates relatively low satisfaction (top-2-box 34%). Marketing researchers are more satisfied with the timeliness of fielding, the purchase process, the ease of accessing a panel, and customer service (49%-54%). [Note: the 33% compares with ~20% for ‘quantity’ and ‘cost’ and ~12% on other aspects.]

The GRIT report further identifies four quadrants of panel aspects based on satisfaction (top-2-box) versus (derived) importance. The quality and quantity of respondents available in panels occupy the ‘Weaknesses’ quadrant, as they generate less satisfaction while being of higher importance. Customer service and the purchase process form ‘Key Strengths’, being of higher importance and sources of higher satisfaction. Of the lower-importance aspects, cost is a ‘Vulnerability’ whereas access and timeliness are ‘Assets’. The ‘Weaknesses’ quadrant is especially troubling because it includes properties that define the essence of the panel as a framework for repeatedly extracting samples — its principal purpose. The assets and strengths in this case may not be sufficient to compensate for flaws in the product itself, the panel.
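For readers who wish to reproduce this kind of quadrant classification from their own panel ratings, a minimal sketch follows. The satisfaction figures echo those cited above where available, while the importance scores are hypothetical placeholders, not the GRIT data:

```python
# Classify panel aspects into the four GRIT-style quadrants by comparing
# each aspect's satisfaction (top-2-box %) and derived importance to the means.
aspects = {
    # aspect: (satisfaction, importance) -- importance values are illustrative
    "respondent quality":  (26, 80),
    "respondent quantity": (36, 75),
    "customer service":    (54, 70),
    "purchase process":    (51, 68),
    "cost":                (34, 45),
    "access":              (52, 40),
    "timeliness":          (49, 42),
}

mean_sat = sum(s for s, _ in aspects.values()) / len(aspects)
mean_imp = sum(i for _, i in aspects.values()) / len(aspects)

for name, (sat, imp) in aspects.items():
    if imp >= mean_imp:
        quadrant = "Key Strength" if sat >= mean_sat else "Weakness"
    else:
        quadrant = "Asset" if sat >= mean_sat else "Vulnerability"
    print(f"{name}: {quadrant}")
```

With these placeholder numbers, quality and quantity fall into ‘Weaknesses’, customer service and the purchase process into ‘Key Strengths’, cost into ‘Vulnerability’, and access and timeliness into ‘Assets’, matching the report’s map.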

Surveys allow researchers to study mental constructs, cognitive and affective: perceptions and beliefs, attitudes, preferences and intentions; more broadly, they may look into thoughts, feelings and emotions. Survey questionnaires entail specialised methods, instruments and tools for those purposes. Furthermore, surveys can be used to study concepts such as logical reasoning, inferences, and the relations and associations established by consumers. In the area of decision-making, researchers can investigate processes performed by consumers or shoppers, as reported by them. Advisedly, the findings and lessons on decision processes may be validated and expanded using other types of methods, such as verbal protocols, eye tracking and mouse tracking (web pages), as research participants perform pre-specified tasks. However, surveys should remain part of the research programme.

Much of the knowledge and understanding of consumers obtained through surveys cannot be gained from methods and techniques that do not directly converse with consumers. Data from recordings of behaviour or from measures of unconscious responses may lack important context from the consumer’s viewpoint, which can render the findings difficult to interpret correctly. Conscious statements by consumers on their thoughts, feelings, experiences and actions may not be fully accurate or complete, but they do represent what consumers have in mind, which often enough guides their behaviour — we just need to ask them in an appropriate and methodic way.

The examples below demonstrate why different approaches should be used collaboratively to complement each other, and how surveys can make their own contribution to the whole story:

  • Volumes of data on actions or operations performed by consumers, as entailed in the framework of Big Data, provide ‘snapshots’ or ‘slices’ of behaviour, but seem to lack the context of consumer goals or mindsets needed to meaningfully connect them. One has to infer or guess indirectly what made the behaviour occur as it did.
  • Big Data also refers to volumes of verbatim content in social media networks, where the sheer amount of data gives the illusion that it can replace input from surveys. However, only surveys can provide the kind of controlled and systematic measures of beliefs, attitudes and opinions needed to properly test research propositions or hypotheses.
  • Methods of neuroscience inform researchers about the neural correlates of sensory and mental activity in specific areas of the brain, but they do not tell them what the subject makes of those events. In other words, even if we could reduce thoughts, feelings and emotions to neural activity in the brain, we would miss the subjective experience of the consumers.

Marketing researchers should not be expected to shift all their online surveys to mobile devices, at least not as long as those devices co-exist with personal computers. The logic of the GRIT report is probably as follows: since more consumers spend more time on smartphones (and tablets), they should be allowed to choose, and be able, to respond to a survey on any of the computer-type products they hold, at a time and place convenient to them. That is indeed a commendably liberal and democratic stance, but it is not always in the best interest of the survey from a methodological perspective.

Mobile surveys can be very limiting in the amount and complexity of information a researcher may reliably collect through them. A short mobile survey (5-10 minutes at most), with questions that permit quick responses, is unlikely to be suitable for adequately studying many of the constructs discussed above so as to build a coherent picture of consumers’ mindsets and related behaviours. These surveys may be suitable for collecting particular types of information, and perhaps even have an advantage at this, as suggested shortly.

According to the GRIT report, 36% of researcher-respondents estimate that the online surveys their companies carry out take up to 10 minutes on average (short); 29% estimate that their surveys take 11-15 minutes (medium); and 35% give an average estimate of 16 minutes or more (long). The overall average stands at 15 minutes.

These duration estimates refer to online surveys in general, and the authors note that the longer surveys in particular would be unsuitable as mobile surveys. For example, 16% of respondents state that their online surveys take more than 20 minutes, which is unrealistic for mobile devices. At the other end, very short surveys (up to five minutes) are performed by 10%.

There are some noteworthy differences between research suppliers and clients. The main finding to notice is that clients are pressing for shorter surveys, such as may also be answerable on mobile devices:

  • Whereas just under 10% of suppliers perform surveys of up to 5 minutes on average, a little more than 15% of clients perform surveys of this average length.
  • Suppliers are more inclined to perform surveys of 11-15 minutes on average (approx. 33%) compared with clients (about 23%).
  • Suppliers also have a somewhat stronger propensity for surveys of 16-20 minutes (20% vs. 16% among clients).

Researchers on the supplier side appear to be more aware of, and sensitive to, the durations online surveys should take to achieve their research objectives, and are less ready to execute the very short surveys that clients push for.

  • Interestingly, the report shows that the average estimated length in practice is similar to the maximal length respondents think an online survey should take. The authors propose that these results can be summed up as “whatever we answered previously as the average length, is the maximal length”. They acknowledge not asking specifically about mobile surveys, for which the accepted maximum is 10 minutes. This limit is more in accordance with clients’ stated maximum for online surveys (52%), whereas only 36% of suppliers report such a goal (32% of suppliers choose 11-15 minutes as the maximum, above the expected maximum for mobile).

Online surveys designed for personal computers are also subject to time limits, in view of respondents’ expected attention spans, yet the limits can be less strict than for mobile devices. Furthermore, the PC mode allows more flexibility in the variety and sophistication of the questions and response scales applied. A smartphone does not encourage much reflective thought, and this must be taken into consideration. Desktops and laptops accommodate more complex tasks, usually executed in more comfortable settings (e.g., consumers tend to perform pre-purchase ‘market research’ on their personal computers and conduct quick last-minute queries during the shopping trip on their smartphones) — this also works to the benefit of online surveys on personal computers. (Tablets are still difficult to position, possibly closer to laptops than to smartphones.)

Online surveys for mobile devices and for desktops/laptops do not have to be designed with the same questionnaire content (adapting appearance to device and screen is just part of the matter). First, there is justification for designing surveys specifically for mobile devices. Such surveys may be most suitable for studying feedback on recent events or experiences, measuring responses to images and videos, and performing association tests. What these subjects have in common is that they are afforded by System 1 (automatic) thinking — intuition and quick responses (immediacy), emotional reactions, visual appeal (creativity), and associative thinking.

Second, it would be better to compose and design separate survey questionnaires, of different lengths, for personal computers and for mobile devices. Trying to impose an online survey of fifteen minutes on respondents using mobile devices runs a considerable risk of early break-off or, worse, of diminishing response quality as the survey goes on. At the least, a short version of the questionnaire should be channelled to the mobile device, as sketched below — though that still would not resolve issues of unfitting types of questions posed. Even worse, however, would be an attempt to shorten all online surveys to fit the time spans of mobile surveys, because this could make the surveys much less effective and useful as sources of information and miss much of their business value.
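As a minimal sketch of how such channelling could work at a survey’s entry point: the respondent’s device is detected (here by a simple user-agent heuristic) and the matching questionnaire version is served. The detection pattern and version labels are illustrative assumptions, not a prescription:

```python
import re

# Hypothetical questionnaire versions, differing in length and question types.
VERSIONS = {
    "mobile": "short_v1",    # <= 10 minutes, quick-response questions only
    "desktop": "full_v1",    # ~15 minutes, richer scales and grids
}

MOBILE_PATTERN = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)

def pick_questionnaire(user_agent: str) -> str:
    """Return the questionnaire version suited to the respondent's device."""
    device = "mobile" if MOBILE_PATTERN.search(user_agent) else "desktop"
    return VERSIONS[device]

# Example: an iPhone respondent is routed to the short mobile version.
print(pick_questionnaire("Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X)"))
```

In practice a survey platform would use a more robust device-detection service, but the design point stands: the version served, not just its layout, should depend on the device.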

Marketing researchers have to invest special effort to ensure that online surveys remain relevant and able to provide useful and meaningful answers to marketing and business questions. Reducing and degrading surveys just to obtain greater cooperation from consumers will only achieve the opposite — it will strengthen the position of the field of Big Data (which worries some researchers), as well as of other approaches that probe the unconscious. Instead, marketing researchers should improve and enhance the capabilities of surveys to provide intelligent and valuable insights, particularly by designing surveys that are best compatible with the mode in which the survey is administered.

Ron Ventura, Ph.D. (Marketing)
