Posts Tagged ‘Users’

Revelations about the Facebook – Cambridge Analytica affair last month (March 2018) provoked a heated public discussion about data privacy and users’ control over their personal information in social media networks, particularly on Facebook. The central allegation in this affair is that personal data from social media was misused for the winning presidential campaign of Donald Trump. It offers ‘juicy’ material for all those interested in American politics. But the importance of the affair goes much beyond that, because the impact of the concerns it has raised radiates to the daily lives of millions of users-consumers socially active on the Facebook platform; it could potentially touch a multitude of commercial marketing contexts (i.e., products and services) in addition to political marketing.

Having a user account as a member of the Facebook social media network is free of charge, a boon hard to resist. In Q2 of 2017 Facebook surpassed the mark of two billion active monthly users, double the former record of one billion reached five years earlier (Statista). No monetary price is explicitly demanded of users. Yet users are subject to alternative prices, embedded in their activity on Facebook, implicit and less noticeable as a cost to bear.

Some users may realise that the advertisements they receive and see are the ‘price’ they have to tolerate for not having to pay ‘in cash’ for socialising on Facebook. It is less of a burden if the content is informative and relevant to the user. What users are much less likely to realise is how personally related data (e.g., profile, posts and photos, other activity) is used to produce personally targeted advertising, and possibly other forms of direct offerings or persuasive appeals to take action (e.g., a user receives an invitation from a brand, based on a friend’s post, about a product purchased or photographed). The recent affair led to exposing — in news reports and the testimony of CEO Mark Zuckerberg before Congress — not only the direct involvement of Facebook in advertising on its platform but furthermore how permissive it has been in allowing third-party apps to ‘borrow’ users’ information from Facebook.

According to reports on this affair, psychologist Aleksandr Kogan developed with colleagues, as part of academic research, a model to deduce personality traits from the behaviour of users on Facebook. Aside from his position at Cambridge University, Kogan started a company named Global Science Research (GSR) to advance commercial and political applications of the model. In 2013 he launched an app on Facebook, ‘this-is-your-digital-life’, in which Facebook users would answer a self-administered questionnaire on personality traits and some personal background. In addition, the GSR app prompted respondents to give consent to pull personal and behavioural data related to them from Facebook. Furthermore, at that time the app could get access to limited information on friends of respondents — a capability Facebook removed no later than 2015 (The Guardian [1], BBC News: Technology, 17 March 2018).

Cambridge Analytica (CA) contracted with GSR to use its model and the data it collected. The app was able, according to initial estimates, to harvest data on as many as 50 million Facebook users; by April 2018 Facebook updated the estimate to 87 million. It is unclear how many of these users were involved in the project of Trump’s campaign, because for this project CA was interested specifically in eligible voters in the US; it is said that CA applied the model with data in other projects (e.g., pro-Brexit in the UK), and GSR made its own commercial applications with the app and model.

In simple terms, as can be learned from a more technical article in The Guardian [2], the model is constructed around three linkages:

(1) Personality traits (collected with the app) —> data on user behaviour on the Facebook platform, mainly ‘likes’ given by each user (possibly additional background information was collected via the app and from the users’ profiles);

(2) Personality traits —> behaviour in the target area of interest — in the case of Trump’s campaign, past voting behaviour (CA associated geographical data on users with statistics from the US electoral registry).

Since model calibration was based on data from a subset of users who responded to the personality questionnaire, the final stage of prediction applied a linkage:

(3) Data on Facebook user behaviour ( —> predicted personality ) —> predicted voting intention or inclination (applied to the greater dataset of Facebook users-voters)
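In code terms, the three linkages can be sketched as a simple two-stage fit. The sketch below is a minimal illustration only: the numbers are hypothetical, and plain one-variable least squares stands in for whatever method Kogan’s model actually used.

```python
# Illustrative sketch of the three-linkage prediction scheme described above.
# All data values and variable names ('likes', 'trait', 'vote') are invented
# for illustration; this is NOT the actual model.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b for 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Linkage (1): calibrate a trait score from 'likes' behaviour, using the
# small subset of users who answered the personality questionnaire.
likes_subset = [2, 5, 9, 12, 20]           # e.g., likes of a theme per user
trait_subset = [0.1, 0.3, 0.5, 0.6, 1.0]   # questionnaire trait score
a1, b1 = fit_linear(likes_subset, trait_subset)

# Linkage (2): calibrate vote inclination from the trait score, using users
# whose past voting behaviour could be cross-referenced.
vote_subset = [0.0, 0.2, 0.6, 0.7, 0.9]    # inclination toward the candidate
a2, b2 = fit_linear(trait_subset, vote_subset)

# Linkage (3): compose the two fits to score the much larger pool of users
# for whom only 'likes' behaviour is observed.
def predict_inclination(likes):
    trait = a1 * likes + b1    # predicted personality trait
    return a2 * trait + b2     # predicted voting inclination

scores = [predict_inclination(x) for x in [3, 15]]
```

The essential point of the scheme survives even in this toy form: the questionnaire respondents serve only to calibrate the model, which is then applied to the far larger pool of users who never answered any questionnaire.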

The Guardian [2] suggests that ‘just’ 32,000 American users responded to the personality-political questionnaire for Trump’s campaign (while at least two million users from 11 states were initially cross-referenced with voting behaviour). The BBC gives an estimate of as many as 265,000 users who responded to the questionnaire in the app, which corresponds to the larger pool of 87 million users-friends whose data was harvested.

A key advantage credited to the model is that it requires only data on ‘likes’ by users and does not have to use other detailed data from posts, personal messages, status updates, photos etc. (The Guardian [2]). However, the modelling concept raises some critical questions: (1) How many repeated ‘likes’ of a particular theme are required to infer a personality trait? (i.e., it should account for a stable pattern of behaviour in response to a theme or condition in different situations or contexts); (2) ‘Liking’ is frequently spurious and casual — ‘likes’ do not necessarily reflect thought-out agreement or strong identification with content or another person or group (e.g., ‘liking’ content on a page may not imply it personally applies to the user who likes it); (3) Since the app was allowed to collect only limited information on a user’s ‘friends’, how much of it could be truly relevant and sufficient for inferring the personality traits? On the other hand, for whatever traits could be deduced, data analyst and whistleblower Christopher Wylie, who brought the affair to the public, suggested that the project for Trump had picked up on various sensitivities and weaknesses (‘demons’ in his words). Personalised messages were devised accordingly to persuade or lure voters-users likely to favour Trump to vote for him. This is probably not the way users would want sensitive and private information about them to be utilised.

  • Consider users in need of help who follow and ‘like’ content of pages of support groups for bereaved families (e.g., of soldiers killed in service), those combating illnesses, or those facing other types of hardship (e.g., economic or social distress): making use of such behaviour for commercial or political gain would be unethical and disrespectful.

Although the app of GSR may have properly received the consent of users to draw information about them from Facebook, it is argued that deception was committed on three counts: (a) The consent was awarded for academic use of data — users were not giving consent to participate in a political or commercial advertising campaign; (b) Data on associated ‘friends’, according to Facebook, was allowed at the time only for the purpose of learning how to improve users’ experiences on the platform; and (c) GSR was not permitted at any time to sell or transfer such data to third-party partners. We are in the midst of a ‘blame game’ among Facebook, GSR and CA over the transfer of data between the parties and how it has been used in practice (e.g., to what extent the model of Kogan was actually used in Trump’s campaign). It is a magnificent mess, but this is not the space to delve into its small details. The greater question is what lessons will be learned and what corrections will be made following the revelations.

Mark Zuckerberg, founder and CEO of Facebook, gave testimony at the US Congress in two sessions: a joint session of the Senate Commerce and Judiciary Committees (10 April 2018) and before the House of Representatives Energy and Commerce Committee (11 April 2018). [Zuckerberg declined a call to appear in person before a parliamentary committee of the British House of Commons.] Key issues about the use of personal data on Facebook are reviewed below in light of the opening statements and replies given by Zuckerberg to explain the policy and conduct of the company.

Most pointedly, Facebook is charged that despite receiving reports concerning GSR’s app and CA’s use of data in 2015, it failed to ensure in time that personal data in the hands of CA was deleted from their repositories and that users were warned about the infringement (before the 2016 US elections), and that it took at least two years for the social media company to confront GSR and CA more decisively. Zuckerberg answered in his defence that Cambridge Analytica had told them “they were not using the data and deleted it, we considered it a closed case”; he immediately added: “In retrospect, that was clearly a mistake. We shouldn’t have taken their word for it”. This line of defence is acceptable coming from an individual acting privately. But Zuckerberg is not in that position: he is the head of a network of two billion users. Despite his candid admission of a mistake, such conduct is not becoming of a company the size and influence of Facebook.

At the start of both hearing sessions Zuckerberg voluntarily and clearly took personal responsibility and apologised for mistakes made by Facebook, while committing to take measures (some already done) to keep such mistakes from being repeated. A very significant admission made by Zuckerberg in the House was his concession: “We didn’t take a broad view of our responsibility, and that was a big mistake” — it goes right to the heart of the problem in Facebook’s approach to the personal data of its users-members. Privacy of personal data may not seem to be worth money to the company (i.e., vis-à-vis revenue coming from business clients or partners), but the whole network business apparatus of the company depends on its user base. Zuckerberg committed that Facebook under his leadership will never give priority to advertisers and developers over the protection of users’ personal information. He will surely be held to these words.

Zuckerberg argued that the advertising model of Facebook is misunderstood: “We do not sell data to advertisers”. According to his explanation, advertisers are asked to describe to Facebook the target groups they want to reach, Facebook traces them and then does the placement of advertising items. It is less clear who composes and designs the advertising items, a task which also needs to be based on knowledge of the target consumers-users. However, there seems to be even greater ambiguity and confusion in distinguishing between use of personal data in advertising by Facebook itself and access to and use of such data by third-party apps hosted on Facebook, as well as distinguishing between types of data about users (e.g., profile, content posted, responses to others’ content) that may be used for marketing actions.

Zuckerberg noted that the ideal of Facebook is to offer people around the world free access to the social network, which means it has to feature targeted advertising. He suggested in the Senate that there will always be a pay-free version of Facebook, yet refrained from saying when, if ever, there will be a paid, advertising-free version. It remained unclear from his testimony what information is exchanged with advertisers and how. Zuckerberg insisted that users have full control over their own information and how it is being used. He added that Facebook will not pass personal information to advertisers or other business partners, to avoid an obvious breach of trust, but it will continue to use such information to the benefit of advertisers because that is how its business model works (NYTimes.com, 10 April 2018). It should be noted that whereas users can choose who is allowed to see information like posts and photos they upload for display, that does not seem to cover other types of information about their activity on the platform (e.g., ‘likes’, ‘shares’, ‘follow’ and ‘friend’ relations) and how it is used behind the scenes.

Many users would probably want to continue to benefit from being exempt from paying a monetary membership fee, but they can still be entitled to some control over which adverts they value and which they reject. The smart systems used for targeted advertising could be less intelligent than they purport to be. Hence more feedback from users may help to assign them well-selected adverts that are of real interest, relevance and use to them, thereby increasing efficiency for advertisers.

At the same time, while Facebook may not sell information directly, the greater problem appears to be with the information it allows apps of third-party developers to collect about users without their awareness (or rather their attention). In a late wake-up call at the Senate, Zuckerberg said that the company is reviewing app owners who obtain a large amount of user data or use it improperly, and will act against them. Following Zuckerberg’s effort to go into details of the terms of service and to explain how advertising and apps work on Facebook, and especially how they differ, Issie Lapowsky reflects in Wired: “As the Cambridge Analytica scandal shows, the public seems never to have realized just how much information they gave up to Facebook”. Zuckerberg emphasised that an app can get access to raw user data from Facebook only by permission, yet this standard, according to Lapowsky, is “potentially revelatory for most Facebook users” (“If Congress Doesn’t Understand Facebook, What Hope Do Its Users Have?”, Wired, 10 April 2018).

There can be great importance in how an app asks for the permission or consent of users to pull their personal data from Facebook — how clearly and explicitly the request is presented so that users understand what they agree to. The new General Data Protection Regulation (GDPR) of the European Union, coming into effect within a month (May 2018), is specific on this matter: it requires explicit ‘opt-in’ consent for sensitive data and unambiguous consent for other data types. The request must be clear and intelligible, in plain language, separated from other matters, and include a statement of the purpose of the data processing to which consent is attached. It is yet to be seen how well this ideal standard is implemented, and whether it is extended beyond the EU. Users are of course advised to read such requests for permission to use their data carefully, in whatever platform or app they encounter them, before they proceed. However, even if no information is concealed from users, they may not be attentive enough to comprehend the request correctly. Consumers engaged in shopping often attend to only some prices, remember them inaccurately, and rely on a more general ‘feeling’ about the acceptable price range or its distribution. If applying users’ data for personalised marketing is a form of price they are expected to pay, a company taking this route should approach the data as fairly as it sets monetary prices, regardless of how well its customers are aware of the price.

  • The GDPR specifies that personal data related to an individual is to be protected if it “can be used to directly or indirectly identify the person”. This leaves room for interpretation of which types of data about a Facebook user are ‘personal’. If data is used, and even transferred, at the aggregate level of segments, there is little risk of identifying individuals; but for personally targeted advertising or marketing one needs data at the individual level.

Zuckerberg agreed that some form of regulation over social media will be “inevitable” but cautioned that “We need to be careful about the regulation we put in place” (Fortune.com, 11 April 2018). Democratic House Representative Gene Green posed a question about the GDPR, which “gives EU citizens the right to opt out of the processing of their personal data for marketing purposes”. When Zuckerberg was asked “Will the same right be available to Facebook users in the United States?”, he replied “Let me follow-up with you on that” (The Guardian, 13 April 2018).

The willingness of Mark Zuckerberg to take responsibility for mistakes and apologise for them is commendable. It is regrettable, nevertheless, that Facebook under his leadership did not act a few years earlier to correct those mistakes in its approach and conduct. Facebook should be ready to act in time on its responsibility to protect its users from harmful use of data personally related to them. It can be optimistic and trusting yet realistic and vigilant. Facebook will need to care as much for the rights and interests of its users as it does for its other stakeholders in order to gain the continued trust of all.

Ron Ventura, Ph.D. (Marketing)








Touch-screens are becoming the norm of display and interaction on mobile devices, from smartphones to tablets — devices with screen sizes in the range of 4” to 10”. The maximal area of the device’s face is dedicated to the screen, leaving a thin surrounding frame with enough space primarily for the physical ‘On’ button (e.g., awakening the screen, returning to the ‘Home’ display). Most controls for operating a smartphone or tablet and their applications are now virtual, represented as visual icons, symbols and keystrokes on the screen. Users can interact with the device (even for dialing a phone number) by pointing, swiping and similar hand (finger) gestures applied to the screen’s display. It all sounds and feels great, and mostly functions all right, but not all is bright — there is still much room for improvement and better fine-tuning.

The focal devices of this article are smartphones with screens normally between 4” and 5.4” and tablets that mostly carry screens of 7” to 10” (extra-size smartphones, also known as ‘phablets’, embody a screen larger than 5.4”). They essentially enlarge the real-estate of the screen by doing away with physical controls on the device (buttons and keypads). Operation of the device and interaction with its applications are delegated almost wholly to the touch of virtual controls and other finger gestures.

This new form of handheld computer-type device provides a highly advanced medium for viewing verbal and pictorial content and interacting with it through manual gestures. Touch-screens were already available at the turn of the century on Personal Digital Assistants (PDAs). The touch-screens of smartphones and tablets are nonetheless more powerful in several important respects: (1) they can be operated with the touch of fingers, without the need for a pen or stylus; (2) the screens are larger; and (3) the images are of much higher quality. The differences do not end here, if only to mention the communication abilities of the more recent mobile devices. Smartphones in particular can be said to converge a phone and a PDA into a single device, with some additional capabilities that neither mobile phones nor PDAs had in earlier times.

The first critical problem to address with touch-screen mobile devices concerns writing. A user is likely to encounter difficulties frequently when writing text with a virtual keyboard — it is rather easy to miss target character keystrokes. The difficulty is not simply in typing text but in getting the words spelled correctly, and overall in avoiding character typing errors. The difficulty of producing a text without errors is likely to be more acute and agitating with smartphones and the smaller tablets (i.e., 7-8”). It may also cause users to leave spaces in the wrong places, and conversely to concatenate words. Correcting errors can be all the more annoying when the user cannot find the direction arrows or point his or her finger at the right position for correction; backspacing is not useful if one has already moved to another line by the time the earlier error is noticed or any other correction of the text is needed.

Mobile devices foster writing correspondence texts (e.g., e-mails, chats, social media updates and comments) even faster than other modes, especially when users are in motion. People tend to write such correspondence more quickly and haphazardly, taking less care to avoid mistakes, and proofreading before sending is usually not a high priority or time-affordable. The result is that producing a well-thought-out and error-free text message on a touch-screen with a virtual keyboard may be an irritating mission (e.g., either abort the message or send it with some errors).

  • Writing alphanumeric text with a 12-key physical pad is hardly convenient, and is usually time-consuming. In that sense QWERTY-type keyboards, physical or virtual, are better. There is yet an important difference to notice: the keys on a physical keyboard (e.g., the Nokia E5 that followed the original Blackberry phones) can be quite small, but they feel like separated bumpers (i.e., giving the user tactile feedback where the finger rests), whereas a virtual keyboard is completely flat and smooth. The cost of the physical keyboard is of course the smaller screen.

Mistyping is mostly associated with failure to accurately ‘hit’ the intended character keystroke with one’s index finger, and often enough with the thumb (e.g., when in motion and only one hand is free to hold the device and write). That is because virtual character keys tend to be too small for the fingers we use for texting (less so on 10” tablet screens). The kinds of errors that may result are typing the wrong character, typing the same character inadvertently twice, or not typing the designated character at all. Apparently, failing to execute selected actions also occurs with images, such as when having to press virtual buttons or activate icon and text hyperlinks. These controls could be related to the device and its utilities or embedded in websites and installed apps. These issues are well explained by Steven Hoober in an article in UXmatters (“Common Misconceptions About Touch”, 18 March 2013). Hoober makes an important distinction between seeing text and icon targets clearly and touching them effectively, and he recommends target sizes for them (in measures of points and millimetres).

Hoober refers to an additional sensitive and critical consideration in preventing users from accidentally taking the wrong action: he calls this ‘preventing interference errors’. He clearly advises against placing controls for actions with opposite consequences too near each other, lest trying to touch-press one control result in activating the other, unwanted control. This applies especially to actions associated with catastrophic results or outcomes that are difficult to undo. For instance, he recommends sufficiently separating the locations of controls for Send and Delete actions (Hoober recommends a distance of at least 8mm and preferably 10mm between the centres of the controls [the target point of finger contact]).
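Hoober’s spacing rule lends itself to a simple design-time check. The sketch below is a hypothetical illustration: the function and variable names are invented, and the screen density (DPI) and pixel coordinates are assumed example values.

```python
# A minimal sketch of checking Hoober's 'interference error' spacing rule.
# The 8mm minimum comes from the text above; names and values are invented.

MIN_SEPARATION_MM = 8.0   # Hoober: at least 8mm, preferably 10mm

def px_to_mm(px, dpi):
    """Convert a pixel distance to millimetres for a screen of given DPI
    (one inch = 25.4mm)."""
    return px / dpi * 25.4

def controls_too_close(center_a, center_b, dpi):
    """True if two control centres (given in pixels) are closer than the
    minimum separation and thus risk interference errors."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dist_px = (dx * dx + dy * dy) ** 0.5
    return px_to_mm(dist_px, dpi) < MIN_SEPARATION_MM

# e.g., hypothetical 'Send' and 'Delete' buttons 80px apart on a 326-DPI
# phone screen: 80 / 326 * 25.4 is roughly 6.2mm, under the 8mm minimum.
risky = controls_too_close((100, 500), (180, 500), dpi=326)
```

A check of this kind could run in a design tool or a UI test, flagging pairs of destructive and constructive controls that sit too close on a given device’s screen density.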

Touch-screen devices do benefit from a larger screen real-estate for image display. But there is nonetheless competition over that real-estate among the contents of the display, and the competition can be quite tough, especially on devices with screens smaller than 7”. The competition is prominently between images of controls and the content of device utilities, webpages and apps. It applies primarily to the interface of a virtual keyboard, which requires a relatively large space (in some cases up to 50% of screen area). However, there could be other controls needed for operating the device and specific utilities, websites or apps (e.g., designers may have to give up some pictorial imagery in order to allow enough space for action controls like ‘Add to Basket’).

Focusing on the virtual keyboard: when called upon to write, it pops up and hides other content of the display (e.g., an e-mail message, a shopping webpage) in the lower part of the screen. It may hide content that the user actually needs to see while composing a message or responding to content in a website. The smaller the screen, the larger the part of the underlying display that is hidden, and the smaller the keystrokes have to be. Unlike a physical keyboard, the virtual one can at least be dropped out when not in use and called again when needed for writing. But it can be disturbing if every few moments one has to drop out the keyboard and surface it again to resume writing. With larger screens there should still be enough space for text in an e-mail message editor that one can scroll; with screens of 7” or less one may be able to see only up to four lines at a time, and even that in small type difficult to read (changing zoom may help but can also cause trouble — more below).

Virtual keyboards on mobile devices are split into two or three displays due to space limitations (e.g., Latin letters as for English or German [with some order variations], numeric figures and symbols, and an extra keyboard for non-Latin alphabets such as Hebrew, Arabic or Cyrillic). But in any particular keyboard display, some character keys or controls may have to be omitted for lack of space. As suggested above, it is most annoying when the direction arrows are eliminated (e.g., on a Samsung 7” tablet), because it makes it more difficult to move back and forth across a text while composing and editing it.

Relying on gestures can save screen real-estate and help to make interactions fluid and efficient, but working with a touch-screen has limitations. Raluca Budiu of Nielsen Norman Group (user experience research and usability experts, 19 April 2015) lists some of the main problems that may arise for users: (1) The leading problem concerns typing, and particularly the need to continuously divide attention between the content written and the keypad area; (2) Poor tactile feedback, small keypads and crowded keys make the typing experience more troublesome; (3) The target size of controls or keys has to be considerably larger with a touch interface, compared with a mouse, to optimise reaching time and minimise errors; (4) Since there can be many target areas on a touch-screen (especially of smartphones), it is easy to make accidental touch errors (see Hoober’s ‘interference error’) — some errors can “leave the user disoriented and unsure of what happened”. Budiu notes that respecting the Undo usability heuristic is all the more important with mobile devices.

References to those main problems could be found in the earlier paragraphs of this article. Two more issues are addressed below:

Scrolling over a touch-screen — Mobile devices do not provide a scroll bar; the user scrolls by swiping the index finger in a swoosh movement up or down over the touch-screen. The smaller the screen, and if one is in landscape mode, the more scrolling may be needed (shifting left and right is also possible). Trouble may start when the window display is populated by ‘clickable’ tiles or pictures: if the user does not swipe the finger quickly and lightly enough over the image, he or she may activate the underlying link rather than scroll across the window. When that happens, the user may arrive at a different window display and has to find the way back. More disturbing, when the content is online and connectivity is slow, the user may remain stuck for a long time before being able to return to the desired location of content and resume work.

Zooming and automatic change of size — Since type on touch-screens of mobile devices can be small and hard to read, one can zoom in to enlarge the display and the text in it. This is usually done by swiping the index finger and thumb away from each other over the screen (conversely, one can zoom out to reduce size but see more content by bringing the fingers closer together). But caution: one has to be accurate, and this does not always work so well. One may accidentally ‘blow up’ a picture over the whole screen, for instance. When writing an e-mail message, zooming can be helpful as one toggles between writing and reading the composed text. Yet these devices are smart, and sometimes they try to adjust the size for you according to the identified mode of use; sometimes this is appropriate, but on occasion it causes trouble and nuisance. In more drastic cases, whilst trying to enlarge the type on a webpage, the system may lock into a loop and continue zooming in until the user can see nothing coherent and has to start over.

  • Note that the scrolling and size problems were encountered much more frequently on a Samsung tablet, either 7” or 10”, than on Apple’s 10” iPad.

People may discover at times that although they were sure they could see exactly where their hand should reach and act, it somehow missed the target. That may happen because perception, augmented by cognitive conception, and the processes of location and action are not the same in the human brain. These processes are connected (i.e., they share and pass information between them) but are nonetheless distinct. Visual information flows and is processed in two pathways: (a) perceptual but non-conceptual information is passed through the ventral stream to the temporal lobe, where percepts are interpreted into meaningful images of scenes and objects; (b) visuospatial (location) and visuomotor (action) signals are transferred through the dorsal stream to the parietal lobe to guide, for instance, our manual movements. The ventral-temporal (semantic) visual system allows us to identify a target for action, while the dorsal-parietal (pragmatic) visual system is responsible in parallel for determining where the target is and how to act upon it. Furthermore, action requires only a subset of the information in percepts — including the size, shape and orientation of a target object — to complete a task, much less than what we perceive and even recognise as seeing. The conceptual identity of the target is mostly not required.

Jacob and Jeannerod (2003), who distinguish between semantic and pragmatic vision as cited above, argue that pragmatic vision processed in the parietal lobe is more complex and multi-layered than has been theorised and described in the literature on vision. Humans may believe they act on whatever they perceive (as an image), but in fact they usually act on the nonconscious signals that arrive directly at the parietal lobe. Recognising and identifying the target clearly and understanding what to do with it are therefore not enough — the target should be designed in a form that permits (affords) the visuomotor system performing the action correctly and efficiently. The semantic and pragmatic processes occur simultaneously. In some instances the semantic system may assist the pragmatic system, but usually deliberate intervention is not needed. A user should not have to tilt the tablet, for example, while trying to direct his finger accurately and slowly to touch the small backward or forward arrows of the browser on the touch-screen. This is an example of an effortful action users should not be driven to.

Using mobile devices with touch-screens has advantages and can be a gratifying experience. But there is also a lot that can be done to improve that experience, especially if the aspiration is that consumers will use these devices much more frequently for performing more tasks, and that they will use tablets more than desktop and laptop computers in the future. Although touch-screen mobile devices encourage the use of fingers, they should support the use of a pen or stylus, and perhaps even encourage it on smaller-screen devices (for typing and not just for drawing). It could also be helpful to enlarge the images of keystrokes, icons and symbols as one’s finger approaches any of these controls. These are just hints, and there are probably many more ways interaction designers can devise to improve mobile users’ experiences, making them more effective and enjoyable.

Ron Ventura, Ph.D. (Marketing)


Ways of Seeing: The Scope and Limits of Visual Cognition; Pierre Jacob and Marc Jeannerod, 2003; Oxford University Press.

Additional recommended reading:

Mobile Computing; Jesper Kjeldskov; In Encyclopedia of Human-Computer Interaction (2nd edition, Chapter 9); Interaction Design Foundation.





For the past two years the Internet company Yahoo has been under immense pressure: the management, led by CEO Marissa Mayer, in office since 2012, is working hard to reinvigorate the company's core online business with new, up-to-date technologies and, furthermore, to create more value, mainly from advertising. The board of directors is seeking to give management more time to find a way out of the difficult times, but it is struggling to fend off pressure from activist investors who demand a break-up of the company in order to salvage the real value they see captured in Yahoo through its stakes in external companies — Alibaba of China and Yahoo Japan. Yahoo is in a delicate and complex situation, carrying a danger that consumers-users will be left behind in the final business outcome.

The key criticism of Yahoo concerns the poor performance of its online advertising system, which lags behind other platforms such as Google (search) and Facebook (social media). The company's core business comprises its search engine and media content (news in various domains), which act as sources of advertising income (e.g., display ads, sponsored results). Display advertising is now active also in Yahoo Mail (the e-mail service).

Underlying the poor financial performance of the advertising system are mainly two problems: (a) inconvenient and technologically outdated utilities and tools for advertisers placing their orders for online ads (1); (b) a relatively low volume of search queries by Internet users, far behind Google in particular, and insufficient return visits to the different sections of Yahoo websites. For example, according to figures revealed by the New York Times, only ten percent (10%) of one billion monthly visitors to Yahoo websites return every day, suggesting weak brand attachment; the reported figure for Facebook is 65% (2). The problem may start with failing to persuade more Internet users to make Yahoo the start homepage on their browsers.

Yahoo may be suffering, nevertheless, from a broader problem of generating income from its online services. That is, the company should not rely only on income from advertising but create additional schemes that generate income from the use of its online services. Yahoo could monetise services, for instance, by charging users for premium plans (e.g., extended storage capacity, more advanced tools or features, increased customisation, access to extended content). Yahoo may further not have a wide enough range of services for which it can charge registered (logged-in) users a premium. Understandably, companies are reluctant to ask customers to pay for online services, but that reluctance may be an unaffordable privilege, as in the case of Yahoo. Moreover, charging price premiums for enhanced services is legitimate and can contribute to higher perceived quality or value among consumers.

The complexity of the situation can partly be explained by the claim of investors that a greater portion of Yahoo's market value arises from its stakes in Alibaba and Yahoo Japan than from its own activity. Yahoo originally (2005) held a stake of ~24% in the Chinese e-commerce company Alibaba. Shortly before Alibaba's initial public offering (IPO) in September 2014, that stake was valued at $40 billion. During the IPO, Yahoo sold 40% of that stake, as agreed with Alibaba at the latter's requirement. Yahoo eventually collected more than $9bn, available to reward shareholders or to re-invest in the company (how the funds were actually used is unpublished). Yahoo's remaining stake in Alibaba (~15%) was worth some $30bn in December 2015. Investors thought that not enough value stemmed from Yahoo's own activity before Alibaba's IPO, and some seem to believe the same remains apparent after the IPO.

The first two years of Mayer as CEO enjoyed a sense of improvement and optimism. Until the IPO of Alibaba, Yahoo acquired more than forty technology companies to bring fresh methods, tools and skills into the company. Yahoo's share price climbed from a low of under $20 to above $30 by the end of 2013, and reached $50 in late 2014. But after Alibaba's IPO, tensions with investors, especially the activist ones, escalated as patience with Mayer as well as with the board ran thin. The share price also started to decline back to $30 during 2015 (it has recovered to ~$36 since January 2016).

It must be noted that the board of directors, together with Mayer, did try to find solutions that would satisfy the investors while saving the core business of Yahoo. One plan considered was to sell Yahoo's remaining stake in Alibaba, but that solution was abandoned due to concerns about a looming large tax liability. Another solution, championed by Mayer, was to put the core media and search business of Yahoo on sale in one piece, but that plan too was recently suspended as the process failed to mature. The most serious prospective buyer was the US telecom company Verizon; its executives considered merging the activity of Yahoo with that of AOL, acquired last year, but worried about the company's ability to pull off such an integration effort in a short time (3).

  • Update note (July 2016): After all, a deal was done with Verizon to buy Yahoo for $4.8bn (excluding its stakes in Alibaba and Yahoo Japan).

In the second part of this article I examine the display and organization of Yahoo’s websites with a user-consumer viewpoint in mind — visual layout, sections and services on the website, composition of content, links, menus and other objects. The examination is focused more on the content and services Yahoo provides to its users rather than its advertising.

Yahoo runs multiple versions of its website in different countries and languages. The major part of the review centres on the Yahoo website in the United Kingdom as a pivot exemplar; references to other versions are made subsequently. Nevertheless, all eight additional websites visited highly resemble the UK website in appearance and composition. Through the examination I intend to argue that Yahoo has not organised and designed the homepages of its website versions appropriately to expose users to, and give them the necessary inducement to access, some of its core services that could also be important sources of income. Beyond the homepages, however, I also relate to the 'portfolio' of topical media sections and services that comprise the websites.

[Screenshot: Yahoo UK homepage. Some of the graphics on the page were not captured; the Yahoo title and news bar were supplemented.]

Two of Yahoo's services are primary assets: the search engine (Yahoo! Search) and the e-mail service (Yahoo! Mail). Both have accompanied the company's website in substance from its early days, and both are essential components of Yahoo's brand. The search facility is the gate to the enormous content of the Internet. The e-mail service, with its mailbox management utilities, is at the foundation of the company's invaluable customer base. Both have advanced over the years and added features, although there is argument over the nature of progress, particularly with regard to the search engine. A third asset of Yahoo is the media content of news stories and videos in various domains delivered on the website. On the left-hand side of the homepage appears a sidebar with links to services and news topics on the website; a 'global' heading bar appears on top of every webpage on Yahoo's site.

As important and interesting as the news media content may be, its preview takes grossly too much space on the homepage. Conversely, the search window for initial queries, while at the top, is marginalised on the page, nearly "drowning" in the news content. It sends a message to visitors that this feature is secondary to the media content. It is little wonder that, on the face of it, Internet users perceive Google as the universal search engine (in recent years Yahoo has relied on the search engines of Google and, previously, Microsoft's Bing). The icon-link to the e-mail service is not in a much better position, at the top right corner. Even though three links for Mail appear on the homepage — the icon to the right of the search window, at the top of the vertical sidebar, and on the left side of the heading bar — none of these positions is central. The allocation of space on the homepage is not reasonably proportional between these three assets. It suggests that Yahoo has become a media company and has practically discounted its two other assets.

The sidebar added to the website in the past two years is a welcome contribution, as it helps users quickly familiarise themselves with, or easily find, some key services and news topics on Yahoo's site. Nevertheless, the icon-links for those services and topics could receive better attention and salience in users' eyes and minds if they were arranged in a central area of the page adjacent to the Search window and Mail icon (e.g., beneath them). It would give Yahoo an opportunity to promote services or topics with greater income potential vis-à-vis visitors' interests and utility in using particular services. For example, the online cloud-based service Flickr, for storing, editing and showcasing photos, is hardly noticed on the head-bar, and barely at all on the sidebar (Flickr was acquired by Yahoo in 2005). If site users could see more instantly and clearly what functional (non-news) services Yahoo offers, it might be better understood why there is a Sign-In option separate from Mail.

  • Extra feature-services such as Contacts, Calendar, Notepad and Messenger (chat) are already included in Mail.

Yahoo highlights on its homepage general news, sport, entertainment and finance. On the 'homepage' of the news section one can find more categories, such as UK, World, Science & Tech, Motoring and Celebrity. Links to some of them appear on the sidebar of the UK homepage (e.g., Cars [Motoring], Celebrity). Interestingly, some news/media sections behave as more autonomous sites, and some have a different layout with a visual graphic display of tiles — Parenting, Style and Movies. (In the Italian version, the Beauty and Celebrity sections also exhibit a tile 'art' display.)

The news headlines with their snippets (briefs) are useful, but they do not belong on the homepage in so long a list. The 'ribbon' of images for selected stories would fit most appropriately on the homepage, with a focal story changing on top — that is all that needs to remain there (with some enhancements, such as a choice of category), while the additional headlines are relegated to the News 'homepage'. In the final display, a concise and elegant arrangement of the homepage would include the Search window and Mail/Sign-In icons, surrounded by a News showcase and a palette of selected services or media topics.

  • A visitor has to look deeper into the website to trace additional services that may be interesting and useful. A few examples: (1) the Finance section (news and more) includes a personalised utility, 'My Portfolios', for managing investments; (2) a page listing more services reveals Groups (discussion forums) and Shopping. Other features or services on the sidebar or head-bar refer to Weather, Mobile (downloading Yahoo apps), and Answers (a subdivision of Search offering peer-to-peer Q&A exchanges).

When the homepage of the UK website is compared with Yahoo's other country and language websites, it is most noticeable that some of the links on the sidebar and head-bar vary, apparently accounting for regional and cultural differences in public interests. Countries may also be affiliated or co-operating with different local content and service providers. For instance: Italy assigns more importance to Style, Beauty and Celebrity, also investing in more topical sections; France has a real-estate section (Immobilier) in affiliation with BFM TV; Australia has a TV section affiliated with PLUS7; and in Germany the Weather and Flickr services are represented on both sidebar and head-bar. It is further observed that the sidebar of Yahoo Australia includes many more links than other site versions.

Regarding the US website, some differences can be noted. First, subject titles appear above each news headline. Second, a reference to the social blogging site Tumblr appears on the head-bar (in addition to Flickr); it appears also on the Australian site but not on the other sites visited (Tumblr was acquired by Yahoo in 2013). Third, the US site mentions Shopping and Politics on its sidebar.

  • The Yahoo websites exhibit anomalies implying that the company refrains from promoting some of its own in-house or subsidiary services. For instance, Flickr and Tumblr are sidelined, and the latter is shown in just a couple of countries. The 'Shopping' product search for attractive retailer offers (powered by Nextag) is more often hidden, while Yahoo homepages provide links to eBay and Amazon.

In order to design in practice the most appropriate and effective composition and layout of the homepage, Yahoo may apply usability tests, eye tracking, and possibly also tracking of mouse movements and clicks. These three methodological approaches can be used in parallel or even simultaneously to derive findings that can support and complement each other in guiding the design process. Attention obviously should be paid to visual appeal of the page appearance in the final design. As suggested above, however, emphasis should be directed to the content and services provided by Yahoo as opposed to the advertising space.

The homepage, notwithstanding, is just the start of a visitor's journey on the website. Much depends, of course, on the quality of services and content in determining how long a visitor stays on the site: for example, how the mail, e-commerce (shopping), or photo service platforms compare with the competition. Particularly with respect to the search engine, continued utilisation relies on the relevance, credibility and timeliness (historical to up-to-date) of the results generated.

Yahoo provides specialised searches of websites and pages, images, videos, answers, products and more. Yet the company acquired in the past the AltaVista engine, which had an advantage in retrieving higher-quality and academic-level information sources and materials, but it was apparently submerged without leaving a trace; and, as indicated earlier, Yahoo has turned to the stronger capabilities of competitors at the expense of developing more of its own. Marissa Mayer aims instead to create leverage by developing a powerful, intelligent search engine for mobile devices in a mobile-friendly site/app. Even though the mobile-driven approach can be a move in the right direction for Yahoo, it may not resolve the problems suggested here as inherent in the online website, and sceptics doubt that the company in its current state has the skills and resources to accomplish those goals.

Yahoo has a lot at stake. It should not rely on users knowing how to get to its services independently or searching for their Internet addresses. The site, online or mobile, has to lend a hand and show users the way to the services it most wants them to visit and use, and there is no better place to start than the site's homepage. The solutions needed are not just about technology but also in the domain of marketing strategy and user-consumer online and mobile behaviour. Yet, judging by how events unfold at Yahoo, the decisions made could be driven by business and financial considerations above the heads of users-consumers.

  • The lessons for Yahoo should now be learnt by Verizon as it intends to merge the functions and capabilities of Yahoo and AOL, and probably rebrand them.

Ron Ventura, Ph.D. (Marketing)


(1) “Marissa’s Moment of Truth”, Jess Hempel, Fortune (Europe Edition), 14 May 2014, pp. 38-44.

(2) “Yahoo’s Suitors Are in the Dark About its Financial Details”, International New York Times, 16-17 April 2016.




There can hardly be a doubt that Internet users would be lost and unable to exploit the riches of information on the World Wide Web (WWW), and the Internet overall, without the aid of search engines (e.g., Google, Yahoo!, Bing). Anytime information is needed on a new concept or an unfamiliar topic, one turns to a search engine for help. Users search for information for various purposes in different spheres of life — formal and informal education, professional work, shopping, entertainment, and others. While for some tasks the relevant piece of information can be quickly retrieved from a single source chosen from the results list, oftentimes a rushed search that relies on the results in immediate sight is simply not enough.

And yet users of Web search engines, as revealed in research on their behaviour, tend to consider only results that appear on the first page (a page usually includes ten results). They may limit their search task even further by focusing on just the first “top” results that can be viewed on the screen, without scrolling down to the bottom of the first page. Users then also tend to proceed to view only a few webpages by clicking their links on the results list (usually up to five results)[1].

  • Research in this field is based mostly on analysis of query logs, but researchers also apply lab experiments and observation of users in-person while performing search tasks.     
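The kind of query-log analysis mentioned in this note can be illustrated with a minimal sketch. The log records and field names below are hypothetical, invented for illustration; they do not reflect any real search engine's log format, and the figures are made up rather than drawn from the cited research.

```python
from collections import Counter

# Hypothetical query log: one record per clicked result, with the
# session id and the 1-based rank of the result that was clicked.
log = [
    {"session": "s1", "rank": 1},
    {"session": "s1", "rank": 3},
    {"session": "s2", "rank": 2},
    {"session": "s2", "rank": 7},
    {"session": "s2", "rank": 14},
    {"session": "s3", "rank": 1},
]

def first_page_share(records, page_size=10):
    """Share of clicks landing on the first results page."""
    on_first = sum(1 for r in records if r["rank"] <= page_size)
    return on_first / len(records)

def clicks_per_session(records):
    """Number of clicked results in each search session."""
    return Counter(r["session"] for r in records)

print(first_page_share(log))     # share of clicks on page one
print(clicks_per_session(log))   # click counts per session
```

Aggregating such measures over many sessions is how the studies cited here estimate, for example, that most clicks fall on first-page results and that users open only a handful of links per query.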

Internet users refrain from going through results pages and stop short of exploring information sources located on subsequent pages that are nonetheless potentially relevant and helpful. It is important, however, to distinguish between search purposes, because not every type of search requires or benefits from looking beyond the first page. Firstly, our interest is in the class of informational search, whose purpose in general is to learn about a topic (the other recognised categories are navigational search and transactional/resource search)[2]. Secondly, we may distinguish between a search for more specific information and a search for learning more broadly about a topic. The goal of a directed search is to obtain information regarding a particular fact or a list of facts (e.g., the UK's prime minister in 1973, secretaries of state of the US in the 20th century). Although we could likely find answers to such questions from a single source (e.g., Wikipedia) found on the first page of results, it is advisable to verify the information with a couple of additional sources; that usually would be sufficient. An undirected search, on the other hand, aims to learn more broadly about a topic (e.g., the life and work of architect Frank Lloyd Wright, online shopping behaviour). The latter type of search is our main focus, since in this case ending a search too soon can be the more damaging to our learning or knowledge acquisition [3]. This may also be true for other types of informational search identified by Rose and Levinson, namely advice seeking and obtaining a list of sources to consult [2].

With respect to Internet users especially in the role of consumers, and to their shopping activities, a special class of topical search is associated with learning about products and services (e.g., features and attributes, goals and uses, limitations and risks, expert reviews and advice). Negative consequences of inadequate learning in this case may be salient economically or experientially to consumers (though perhaps not as serious for our knowledge base as in other domains of education).

The problem starts even before the stage of screening and evaluating information based on its actual content. That is, the problem is not one of selectively choosing sources that appear reliable or whose information seems relevant and interesting; nor is it one of selectively favouring information that supports our prior beliefs and opinions (i.e., a confirmation bias). The problem has to do with the tendency of people to consider and apply only that portion of information that is put in front of them. Daniel Kahneman pointedly labeled this human propensity WYSIATI — What You See Is All There Is — in his excellent book Thinking, Fast and Slow [4]. Its roots may be traced to the availability heuristic: the tendency of people to rely on the exemplars of a category presented, or on the ease of retrieving the first category instances from memory, in order to judge the frequency or probability of categories and events. The heuristic's effect extends also to errors in assessing size (e.g., using only the first items of a data series to assess its total size or sum). However, WYSIATI should better be viewed in the wider context of a distinction explained and elaborated by Kahneman between what he refers to as System 1 and System 2.

System 1 is intuitive and quick-to-respond, whereas System 2 is more thoughtful and deliberate. While System 2 is effortful, System 1 invests as little effort as possible in making a judgement or reaching a conclusion. System 1 is essentially associative (i.e., it draws on quick associations that come to mind), but it consequently also tends to jump to conclusions. System 2, on the other hand, is more critical and specialises in asking questions and seeking further required information (e.g., for solving a problem). WYSIATI is due to System 1 and can be particularly linked with other possible fallacies related to this system of fast thinking (e.g., representativeness, reliance on 'low numbers' or insufficient data). However, the slow-thinking System 2 is lazy — it does not hurry to intervene, and even when it is activated at the call of System 1, often enough it merely attempts to follow and justify the latter's fast conclusions [5]. We need to enforce our will in order to make System 2 think harder and improve, where necessary, on poorly-based judgements made by System 1.

Several implications of WYSIATI when using a Web search engine become apparent. It is appealing to follow a directive which says: the search results you see are all there is. It is in the power of System 1 to tell users utilising a search engine: there is no need to look further — consider the links to search hits immediately accessible on the first page, preferably those visible on screen from the top of the page, perhaps scrolling down to its bottom. Users should pause to ask whether the information proposed is sufficient or they need to look for more input.

  • Positioning a "ruler" at the bottom of every page, with page numbers and a Next button that searchers can click through to proceed to additional pages (e.g., in Google), is not helpful in this regard — such a ruler should appear also at the top of a page, to encourage or remind users to check subsequent pages whether or not they observe all the results on a given page.

Two major issues in employing sources of information are the relevance and credibility of their content. A user can take advantage of the text snippet quoted from a webpage under the hyperlinked heading of each result in order to initially assess whether it is relevant enough to enter the website. It is more difficult, however, to judge the credibility of websites as information sources, and operators of search engines may not be doing enough to help their users in this respect. Lewandowski is critical of an over-reliance of search engines on popularity-oriented measures as indicators of quality or credibility for evaluating and ranking websites and their webpages. He mentions: source-domain popularity; click and visit behaviour on webpages; links to the page from external pages, serving as recommendations; and ratings and "likes" by Internet users [6]. Popularity is not a very reliable, guaranteed indicator of quality (as is known for extrinsic cues of perceived product quality in general). A user of a search engine could be misguided into relying on the first results suggested by the engine in the confident belief that they must be the most credible. Search engines do use other criteria for their ranking, such as text-based tests (important for relevance) and freshness, but with respect to credibility or quality, the position of a webpage in the list of results could be misleading.

  • Searchers should consider on their own whether the source (company, organisation or other entity) is familiar and has a good reputation in the relevant field, and then judge the content itself. Yet, Lewandowski suggests that search engines should give priority in their ranking and positioning of results to entities that are recognised authorities, appreciated for their knowledge and practice in the domain concerned [7]. (Note: it is unverified to what extent search engines indeed use this kind of appraisal as a criterion.)

Furthermore, organic results are not immune to marketing-driven manipulations. Paid advertised links normally appear on a side bar or at the top or bottom of pages, mainly the first one, and they may also be flagged as "ads"; searchers can thus easily distinguish them and choose how to treat them. Yet the position of a webpage in the organic results list may be "assisted" by techniques of search engine optimisation (SEO) that increase its frequency of retrieval, for example through popular keywords or tags in webpage content, or through promotional (non-ad) links to the page. Users should beware of satisficing behaviour, relying only on early results, and be willing to look somewhat deeper into the results list on subsequent pages (e.g., at least 3-4 pages, sometimes reaching page 10). Surprisingly instructive and helpful information may be found on webpages that appear on later results pages.

  • A principal rule of information economics may serve users well: keep browsing results pages and considering the links proposed until the additional information appears only marginally relevant and helpful, and no longer justifies the additional time spent browsing. Following this criterion suggests no rule-of-thumb for the number of pages to view — in some cases it may be sufficient to consider two results pages, while in others it could be worth considering even twenty.
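The stopping rule in this note can be sketched as a simple procedure: keep viewing pages while the marginal benefit of the next page exceeds the time cost of viewing it. The relevance scores and cost threshold below are illustrative assumptions, not measured values.

```python
def pages_worth_viewing(page_relevance, cost_per_page):
    """Browse successive results pages while the marginal relevance
    of the next page still exceeds the time cost of viewing it."""
    viewed = 0
    for relevance in page_relevance:
        if relevance <= cost_per_page:
            break  # marginal benefit no longer justifies the time
        viewed += 1
    return viewed

# Assumed relevance per page, typically declining over pages.
decaying = [0.9, 0.6, 0.35, 0.2, 0.1, 0.05]
print(pages_worth_viewing(decaying, cost_per_page=0.25))
```

With a low time cost many pages may be worth viewing, and with a high one only a couple, which matches the point that no fixed rule-of-thumb for the number of pages follows from this criterion.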

Another aspect of search behaviour concerns the composition of queries and the transitions between queries during a session. It is important to balance sensibly and efficiently between the number of queries used and the number of results pages viewed on each search trial. Web searchers tend to compose relatively short queries, about 3-4 keywords on average in English (German queries run 1-2 words, since German includes many composite words). Users make relatively little use of logical operators. They do, however, update and change queries when they run into difficulty in finding the information they seek; it becomes a problem when they give up on a query too quickly because the needed information did not surface at once. Users also switch between strings of keywords and phrases in natural language. Yet updating the query (e.g., replacing or adding a word) frequently changes the results list only marginally. The answer to a directed search may sometimes be found just around the corner, that is, in a webpage whose link appears on the second or third results page. And as said earlier, it is worth checking 2-3 answers or sources before moving on. It is therefore wise at least to eye-scan the results on 2-4 pages (e.g., based on heading and snippet) before concluding that the query was not accurate or effective enough.

  • First, users of Web search engines may apply logical operators to define and focus their area of interest more precisely (as well as other criteria of advanced search, for example time limits). Additionally, they may try the related query strings suggested by the search engine at the bottom of the first page (e.g., in Google). Users can also refer to special domain databases (e.g., news, images) shown on the top tab. Yahoo! Search, furthermore, offers on its first page a range of result types from different databases mixed with general Web results, and Google suggests references to academic articles from its Google Scholar database for "academic" queries.
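As an illustration of how such operators narrow a query, here is a small helper that composes a query string using the conventions common to major engines (quoted exact phrases, OR between alternatives, a minus sign for exclusion). The helper itself is hypothetical; only the operator syntax reflects actual search-engine usage.

```python
def build_query(phrases=(), any_of=(), exclude=()):
    """Compose a search query string using common operators:
    quoted phrases, OR between alternatives, '-' for exclusion."""
    parts = [f'"{p}"' for p in phrases]          # exact phrases
    if any_of:
        parts.append("(" + " OR ".join(any_of) + ")")
    parts.extend(f"-{word}" for word in exclude)  # excluded terms
    return " ".join(parts)

query = build_query(
    phrases=["Frank Lloyd Wright"],
    any_of=["biography", "buildings"],
    exclude=["quiz"],
)
print(query)  # "Frank Lloyd Wright" (biography OR buildings) -quiz
```

A focused query of this kind tends to shift relevant results onto the first pages, reducing the need to dig through many pages in the first place.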

The way Internet users perceive their own experience with search engines can be revealing. In a 2012 survey by the Pew Research Center on Internet & American Life, 56% of (adult) respondents expressed strong confidence in their ability to find the information they need by using a search engine, and an additional 37% said they were somewhat confident. Also, 29% said they are always able to find the information they look for and 62% said they can find it most of the time, together a vast majority of 91%. Additionally, American respondents were mostly satisfied with the information found, saying that it was accurate and trustworthy (73%), and thought that the relevance and quality of results had improved over time (50%).

Internet users appear to set themselves modest information goals and become satisfied with the information they gather, perhaps too quickly. They may not appreciate enough the possibilities and scope of information that search engines can lead them to, or may simply be over-confident in their search skills. As suggested above, a WYSIATI approach could drive searchers of the Web to end their search too soon. They need to make the effort, willingly, to overcome this tendency as the task demands, putting System 2 to work.

Ron Ventura, Ph.D. (Marketing)


(1) As cited by Dirk Lewandowski (2008), Search Engine User Behaviour: How Can Users Be Guided to Quality Content, Information Service & Use, 28, pp. 261-268 http://eprints.rclis.org/16078/1/ISU2008.pdf ; also see for example research by Bernard J. Jansen and Amanda Spink (2006) on How Are We Searching the World Wide Web.

(2) Daniel E. Rose & Danny Levinson (2004), Understanding User Goals in Web Search, ACM WWW Conference, http://facweb.cs.depaul.edu/mobasher/classes/csc575/papers/www04–rose.pdf

(3) Dirk Lewandowski (2012), Credibility in Web Search Engines, In: Online Credibility and Digital Ethos: Evaluating Computer-Mediated Communication, S. Apostel & M. Folk (Eds.), Hershey, PA: IGI Global (viewed at http://arxiv.org/ftp/arxiv/papers/1208/1208.1011.pdf, 8 July ’14).

(4) Daniel Kahneman (2011), Thinking, Fast and Slow, Penguin Books.

(5) Ibid. 4.

(6) Ibid. 3.

(7) Ibid. 1 (Lewandowski 2008).


