
Posts Tagged ‘Decision Making’

The discipline of consumer behaviour is by now well versed in the distinction between System 1 and System 2 modes of thinking, relating in particular to consumer judgement and decision making, with implications for marketing and retail management. Much gratitude is owed to the Nobel laureate in economics Daniel Kahneman for bringing these thinking systems to the attention of the wider public (i.e., beyond academia) in his book “Thinking, Fast and Slow” (2012). ‘System 1’ and ‘System 2’, though not always under these labels, were identified and elaborated by psychologists well before Kahneman’s book, as the author himself notes. Kahneman, however, succeeds in clarifying the concepts of these different modes of thinking while linking them to phenomena studied in his own previous research, most notably in collaboration with the late Amos Tversky.

In a nutshell: System 1’s type of thinking is automatic, associative and intuitive; it tends to respond quickly, but consequently it is at higher risk of jumping to wrong conclusions. It is the ‘default’ type of thinking that guides human judgement, decisions and behaviour much of the time. On the other hand, System 2’s type of thinking is deliberative, logical, critical, and effortful; it involves deeper concentration and more complex computations and rules. System 2 has to be called to duty voluntarily, activating rational thinking and careful reasoning. Whereas thinking represented by System 1 is fast and reflexive, that of System 2 is slow and reflective.

Kahneman describes and explains the role, function and effect of System 1 and System 2 in various contexts, situations and problems. In broad terms, thinking of the System 1 type comes first; System 2 either passively adopts impressions, intuitive judgements and recommendations made by System 1, or actively kicks in for more orderly examination and correction (alas, it tends to be lazy and is not in a hurry to volunteer). To give a taste, below is a selection of situations and problems in which Kahneman demonstrates the important differences between these two modes of thinking, how they operate and the outcomes they produce:

  • Illusions (e.g., visual, cognitive)
  • Use of memory (e.g., computations, comparisons)
  • Tasks requiring self-control
  • Search for causal explanations
  • Attending to information (“What You See Is All There Is”)
  • Sets and prototypes (e.g., ‘average’ vs. ‘total’ assessments)
  • Intensity matching
  • ‘Answering the easier question’ (simplifying by substitution)
  • Predictions (also see correlation and regression, intensity matching, representativeness)
  • Choice in opt-in and opt-out framing situations (e.g., organ donation)
  • Note: In other contexts presented by Kahneman (e.g., the illusion of validity [a stock-picking task], choice under Prospect Theory), the author does not connect them explicitly to System 1 or System 2, so their relation to the two systems can only be inferred indirectly by the reader.

In order to gain a deeper understanding of System 1 and System 2, we should inspect the detailed aspects that differentiate between these thinking systems. The concept of the two systems actually emerges from binding multiple dual-process theories of cognition together, thus appearing to be a larger, cohesive theory of modes of thinking. Each dual-process theory usually focuses on a particular dimension that distinguishes between two types of cognitive processes the human mind may utilise. However, those dimensions ‘correlate’ or ‘co-occur’, and a given theory often adopts aspects from other similar theories or adds supplementary properties; the dual-system conception is hence built on this convergence. The aspects or properties used to describe the process in each type of system are extracted from those dual-process theories. A table presented by Stanovich (2002) helps to show how System 1 and System 2 contrast in various dual-process theories. Some of those theories are (for brevity, S1 and S2 are used below to refer to each system):

  • S1: Associative system / S2: Rule-based system (Sloman)
  • S1: Heuristic processing / S2: Analytic processing (Evans)
  • S1: Tacit thought process / S2: Explicit thought process (Evans and Over)
  • S1: Experiential system / S2: Rational system (Epstein)
  • S1: Implicit inference / S2: Explicit inference (Johnson-Laird)
  • S1: Automatic processing / S2: Controlled processing (Shiffrin and Schneider)

Note: Evans and Wason referred to Type 1 vs. Type 2 processes as early as 1976.

  • Closer to consumer behaviour: Central processing versus peripheral processing in the Elaboration Likelihood Model (Petty, Cacioppo & Schumann) posits a dual-process theory of routes to persuasion.

Each dual-process theory provides a rich and comprehensive portrayal of two different thinking modes. The theories complement each other, but they do not necessarily depend on each other. The boundaries between the two types of process are not very sharp; features of the systems are not all exclusive, in the sense that a particular property associated with a process of System 1 may occur in a System 2 process, and vice versa. Furthermore, the processes also interact with one another, particularly in that System 2 relies on products of thought from System 1, either approving them or using them as a starting point for further analysis. Occasionally, though, System 2 may generate reasons merely to justify a choice made by System 1 (e.g., a consumer likes a product for the visual appearance of its packaging or its design).

Stanovich follows the table of theories with a comparison of properties describing System 1 versus System 2 as derived from a variety of dual process theories, but without attributing them to any specific theory (e.g., holistic/analytic, relatively fast/slow, highly contextualized/decontextualized). Comparative lists of aspects or properties have been offered by other researchers as well. Evans (2008) formed a comparative list of more than twenty attributes which he divided into four clusters (describing System 1/System 2):

  • Cluster 1: Consciousness (e.g., unconscious/conscious, automatic/controlled, rapid/slow, implicit/explicit, high capacity/low capacity)
  • Cluster 2: Evolution (e.g., evolutionary old/recent, nonverbal/linked to language)
  • Cluster 3: Functional characteristics (e.g., associative/rule-based, contextualized/abstract, parallel/sequential)
  • Cluster 4: Individual differences (universal/heritable, independent of/linked to general intelligence, independent of/limited by working memory capacity).

Listings of properties collated from different sources (models, theories), interpreted as integrative profiles of System 1 and System 2 modes of thinking, may create the misconception that the distinction between the two systems represents an over-arching theory. Evans questions whether it is really possible and acceptable to tie the various theories of different origins under a common roof, presented as an over-arching cohesive theory of two systems (he identifies problems residing mainly with ‘System 1’). It may be more appropriate to approach the dual-system presentation as a paradigm or framework that helps one grasp the breadth of aspects that may distinguish between two types of cognitive processes and obtain a more comprehensive picture of cognition. The properties need not all co-occur as constituents of a whole profile of one system or the other. In certain domains of judgement or decision problems, a set of properties may jointly describe the process entailed. Some dual-process theories may take different perspectives on a similar domain, and hence the aspects derived from them are related and appear to co-occur.

  • Evans confronts a more widely accepted ‘sequential-interventionist’ view (as described above) with a ‘parallel-competitive’ view.

People use a variety of procedures and techniques to form judgements, make decisions or perform any other kind of cognitive task. Stanovich relates the structure, shape and level of sophistication of the mental procedures or algorithms of thought humans can apply to their intelligence or cognitive capacity, positioned at the algorithmic level of analysis. Investing more effort in the more complicated techniques or algorithms entailed in rational thinking is a matter of volition, positioned at the intentional level (borrowed from Dennett's theorizing on consciousness).

However, humans do not engage a great part of the time in thought close to the full extent of their cognitive capacity (e.g., in terms of depth and efficiency). According to Stanovich, we should distinguish between cognitive ability and thinking dispositions (or styles). The styles of thinking a person applies do not necessarily reflect everything one is cognitively capable of. Put succinctly, the fact that a person is intelligent does not mean that he or she has to think and act rationally; one has to choose to do so and invest the required effort. When one does not, the door opens for smart people to act stupidly. Furthermore, the way a person is disposed to think is most often selected and executed unconsciously, especially when the thinking disposition or style is relatively fast and simple. Cognitive styles entailed in System 1, characterised as intuitive, automatic, associative and fast, serve to ease the cognitive strain on the brain, and they are most likely to occur unconsciously or preconsciously. Still, being intuitive and using heuristics does not imply that a person will end up acting stupidly; some would argue that an intuitive decision can be more sensible than one made when trying to think rationally. Much depends on how thinking in the realm of System 1 happens: one who rushes while applying an inappropriate heuristic, or relying on an unfitting association, becomes more likely to act stupidly (or plainly, to be 'stupid').

Emotion and affect are more closely linked to System 1. Yet emotion should not be viewed simply as a disruptor of rationality. As proposed by Stanovich, emotions may fulfill an important adaptive regulatory role: serving as interrupt signals necessary to achieve goals, avoiding entanglement in complex rational thinking that only keeps one away from a solution, and reducing a problem to manageable dimensions. In some cases emotion does not disrupt rationality but rather helps one choose when it is appropriate and productive to apply a rational thinking style (e.g., use an optimization algorithm, initiate counterfactual thinking). By switching between the two modes of thinking described as System 1 and System 2, one has the flexibility to choose when and how to reason or be rational, and emotion may play the positive role of a guide.

The dual-system concept provides a way of looking broadly at the cognitive processes that underlie human judgement and decision making. System 1's mode of thinking is particularly adaptive in that it allows a consumer to quickly sort out large amounts of information and navigate through complex and changing environments. System 2's mode of thinking is the 'wise counselor' that can be called upon to analyse the situation more deeply and critically, and to provide a 'second opinion' like an expert; however, it intervenes 'on request', when it receives persuasive signals that its help is required. Consideration of the aspects distinguishing between these two modes of thinking can help marketing and retail managers better understand how consumers conduct themselves, and cater to their needs, concerns, wishes and expectations. Adopting this viewpoint can especially help, for instance, in the area of 'customer journeys': studying how thinking styles direct or lead the customer or shopper through a journey (including emotional signals), anticipating reactions, and devising methods that can alleviate conflicts and reduce friction in interactions with customers.

Ron Ventura, Ph.D. (Marketing)

References:

(1)  Thinking, Fast and Slow; Daniel Kahneman, 2012; Penguin Books.

(2) Rationality, Intelligence, and Levels of Analysis in Cognitive Science (Is Dysrationalia Possible); Keith E. Stanovich, 2002; in Why Smart People Can Be So Stupid (Robert J. Sternberg editor)(pp. 124-158), New Haven & London: Yale University Press.

(3) Dual-Processing Accounts of Reasoning, Judgment and Social Cognition; Jonathan St. B. T. Evans, 2008; Annual Review of Psychology, 59, pp. 255-278. (Available online at psych.annualreviews.org, doi: 10.1146/annurev.psych.59.103006.093629).

 



A shopper may well know what types of products he or she is planning to buy in a store, but what products the shopper will come out with is much less certain. Frequently there will be some additional, unplanned products in the shopper's basket. This observation is most often demonstrated in the case of grocery shopping in supermarkets, but it is likely to hold true in other types of stores as well, especially large ones like department stores, fashion stores, and DIY or home improvement stores.

There can be a number of reasons or triggers for shoppers to consider additional products to purchase during the shopping trip itself — products forgotten and reminded of by cues that arise while shopping, attractiveness of visual appearance of product display (‘visual lift’), promotions posted on tags at the product display (‘point-of-purchase’) or in hand-out flyers, and more. The phenomenon of unplanned purchases is very familiar, and the study of it is not new. However, the behaviour of shoppers during their store visit that leads to this outcome, especially the consideration of product categories in an unplanned manner, is not understood well enough. The relatively new methodology of video tracking with a head-mounted small camera shows promise in gaining better understanding of shopper behaviour during the shopping trip; a research article by Hui, Huang, Suher and Inman (2013) is paving the way with a valuable contribution, particularly in shedding light on the relations between planned and unplanned considerations in a supermarket, and the factors that may drive conversion of the latter into purchases (1).

Shopper marketing is an evolving specialisation that gains increasing attention in marketing and retailing. It concerns activities consumers perform in a 'shopper mode' and is strongly connected with, or contained within, consumer marketing. Innovations in this sub-field by retailers and manufacturers span digital activities, multichannel marketing, store atmospherics and design, in-store merchandising, and shopper marketing metrics and organisation. However, carrying out more effective and successful shopper marketing programmes requires closer collaboration between manufacturers and retailers: more openness to each party's perspective and priorities (e.g., in interpretation of shopper insights), sharing of information, and coordination (2).

In-Store Video Tracking allows researchers to observe the shopping trip as it proceeds from the viewpoint of the shopper, literally. The strength of this methodology is in capturing the dynamics of shopping (e.g., with regard to in-store drivers of unplanned purchases). Unlike other approaches (e.g., RFID, product scanners), the video tracking method enables tracking acts of consideration, whether followed or not by purchase (i.e., putting a product item in the shopping cart).

For video tracking, a shopper is asked to wear, with the help of an experimenter, a headset belt that contains the portable video equipment, including a small video camera, a view/record unit, and a battery pack. It is worn like a Bluetooth headset. In addition, the equipment used by Hui et al. included an RFID transmitter that made it possible to trace the shopper's location throughout his or her shopping path in the supermarket.

Like any research methodology, video tracking has its strengths and advantages as well as its weaknesses and limitations. With the camera it is possible to capture the shopper's field of vision during a shopping trip; the resulting video is stored in the view/record unit. However, without an eye-tracking (infrared) device, the camera may not point accurately to the positions of products considered (by eye fixation) in the field of vision. Still, the video supports at least approximate inferences when a product is touched or moved, or when the head-body posture and gesture suggest from which display a shopper considers products (i.e., the 'frame' closes in on a section of the display). It is further noted that difficulties in calibrating an eye-tracking device in motion may impair the accuracy of locating fixations. The video camera seems sufficient and effective for identifying product categories as targets of consideration and purchase.

Furthermore, unlike video filmed by cameras hanging from a store's ceiling, the head-mounted camera records the scene at eye level rather than from high above, making it easier to notice what the shopper is doing (e.g., in aisles), and it follows the shopper all the way, not just in selected sections of the store. Using a head-mounted camera is also more ethical than relying on surrounding cameras (often CCTV security cameras). On the other hand, head-mounted devices (e.g., camera, eye-tracker), which are not the most natural things to wear whilst shopping, raise concerns of sampling bias (self-selection) and of possibly changing the shopper's behaviour; proponents argue that shoppers quickly forget about the device (devices are now made lighter) as they engage in shopping, but the issue is still under debate.

Video tracking is advantageous over RFID and product scanners for the study of unplanned purchase behaviour because it captures acts of consideration: the RFID method alone (3) traces the path of the shopper but not what one does in front of the shelf or stand display, and a scanner method records which products are purchased but not which are considered. The advantage of the combined video + RFID approach, according to Hui and his colleagues, is in providing them "not only the shopping path but also the changes in the shoppers' visual field as he or she walks around the store" (p. 449).

The complete research design included two interviews conducted with each shopper-participant — before the shopping trip, as a shopper enters the store, and after, on the way out. In the initial interview, shoppers were asked in which product categories they were planning to buy (aided by a list to choose from), as well as other shopping aspects (e.g., total budget, whether they brought their own shopping list). At the exit the shoppers were asked about personal characteristics, and the experimenters collected a copy of the receipt from the retailer’s transaction log. The information collected was essential for two aspects in particular: (a) distinguishing between planned and unplanned considerations; and (b) estimating the amount of money remaining for the shopper to make unplanned purchases out of the total budget (‘in-store slack’ metric).
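The 'in-store slack' metric lends itself to a simple arithmetic sketch. The function and variable names below are mine, not the authors', and the paper's actual operationalisation of the metric may differ:

```python
def in_store_slack(total_budget, planned_spend):
    """Mental budget left for unplanned purchases: what remains of the
    shopper's total trip budget after the planned purchases.
    Floored at zero, since a shopper cannot reserve a negative amount."""
    return max(total_budget - planned_spend, 0.0)

# A shopper with a $100 budget who plans $70 of purchases enters the
# store with $30 of slack available for unplanned items.
print(in_store_slack(100.0, 70.0))  # 30.0
```

The interviews at the entrance supply the two inputs: the stated total budget and the planned categories from which planned spending can be estimated.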

237 participants were included in the analyses. Overall, shopper-participants planned to purchase from approximately 5.5 categories; they considered on average 13 categories in total, of which fewer than 5 were planned considerations (median 5.6). 37% of the participants carried a list prepared in advance.

Characteristics influencing unplanned consideration: The researchers first sought to identify personal and product characteristics that significantly influence the probability of making an unplanned consideration in a given product category (a latent-utility likelihood model was constructed). From this they could infer which characteristics contribute to considering more categories in an unplanned manner. The model showed, for instance, that older shoppers and female shoppers are likely to engage in unplanned consideration in a greater number of product categories. Conversely, shoppers who are more familiar with a store (its layout and the location of products) and those carrying a shopping list tend to consider fewer product categories in an unplanned manner.
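The flavour of such a latent-utility model can be sketched with a logistic specification. The coefficient values below are invented for illustration and are not the paper's estimates; only their signs follow the reported directions (older and female shoppers: positive; store familiarity and carrying a list: negative):

```python
import math

def p_unplanned_consideration(age, is_female, store_familiarity, has_list,
                              b0=-1.0, b_age=0.02, b_female=0.4,
                              b_familiarity=-0.5, b_list=-0.6):
    """Probability of an unplanned consideration in a given category,
    modelled as a logistic function of a latent utility.
    All coefficients are hypothetical, chosen only to match the signs
    of the effects reported in the study."""
    utility = (b0 + b_age * age + b_female * is_female
               + b_familiarity * store_familiarity + b_list * has_list)
    return 1.0 / (1.0 + math.exp(-utility))  # logistic link

# An older shopper without a list is predicted to be more prone to
# unplanned consideration than a younger shopper with a list.
p_older = p_unplanned_consideration(age=65, is_female=0,
                                    store_familiarity=0.5, has_list=0)
p_younger = p_unplanned_consideration(age=25, is_female=0,
                                      store_familiarity=0.5, has_list=1)
```

In the actual study the latent utility also carries product-level terms (hedonic score, promotion, complementarity), estimated per category rather than with fixed coefficients.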

At the product level, a higher hedonic score for a product category is positively associated with a greater incidence of unplanned consideration of it. Products that are promoted in the store's weekly flyer at the time of a shopper's visit are also more likely to receive an unplanned consideration from the shopper. Hui et al. further revealed effects of complementarity relations: products that were not planned beforehand for purchase (B) but are closer complements of products in a shopper's 'planned basket' (A) gain a greater likelihood of being considered in an unplanned manner ('A –> B lift'). [The researchers present a two-dimensional map detailing which products are more proximate and thus more likely to be paired together, not yet dependent on their purchase.]
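The intuition behind an 'A –> B lift' can be shown with a toy computation over shopping trips. The data and category names are hypothetical, and the paper estimates this relation within its utility model rather than by raw counting:

```python
def unplanned_lift(trips, planned_a, unplanned_b):
    """Ratio of P(B considered unplanned | A planned) to the baseline
    P(B considered unplanned). A value above 1 suggests that planning A
    'lifts' unplanned consideration of its complement B."""
    with_a = [t for t in trips if planned_a in t["planned"]]
    base = sum(unplanned_b in t["unplanned"] for t in trips) / len(trips)
    cond = sum(unplanned_b in t["unplanned"] for t in with_a) / len(with_a)
    return cond / base

# Hypothetical trips: pasta on the planned list co-occurs with
# unplanned consideration of pasta sauce.
trips = [
    {"planned": {"pasta"}, "unplanned": {"pasta sauce"}},
    {"planned": {"pasta"}, "unplanned": {"pasta sauce", "candy"}},
    {"planned": {"milk"},  "unplanned": set()},
    {"planned": {"milk"},  "unplanned": {"candy"}},
]
print(unplanned_lift(trips, "pasta", "pasta sauce"))  # 2.0
```

Pairs with high lift would sit close together on a complementarity map of the kind the researchers present.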

Differences in behaviour between planned and unplanned considerations: Unplanned considerations tend to be made more haphazardly, while standing farther from display shelves and involving fewer product touches; planned considerations, conversely, entail greater 'depth'. Unplanned considerations tend to occur a little later in the shopping trip (the gap in timing is not very convincing). An unplanned consideration is less likely to entail reference to a shopping list; the list serves in "keeping the shopper on task", making the shopper less prone to divert to unplanned considerations. Shoppers during an unplanned consideration are also less likely to refer to discount coupons or to in-store flyers/circulars. Interestingly, however, some of the patterns found in this analysis change as an unplanned consideration turns into a purchase.

Importantly, unplanned considerations are less likely to conclude with a purchase (63%) than planned considerations (83%). This raises the question: what can make an unplanned consideration result in a purchase?

Drivers of purchase conversion of unplanned considerations: Firstly, unplanned considerations that result in a purchase take longer (40 seconds on average) than those that do not (24 seconds). Secondly, shoppers get closer to the shelves and touch more product items before concluding with a purchase; the greater ‘depth’ of the process towards unplanned purchase is characterised by viewing fewer product displays (‘facings’) within the category — the shopper is concentrating on fewer alternatives yet examines those selected more carefully (e.g., by picking them up for a closer read). Another conspicuous finding is that shoppers are more likely to refer to a shopping list during an unplanned consideration that is going to result in a purchase — a plausible explanation is that the shopping list may help the shopper to seek whether an unplanned product complements a product on the list.

The researchers employed another (latent-utility) model to investigate more systematically the drivers likely to lead unplanned considerations to result in a purchase. The model supported, for example, that purchase conversion is more likely in categories of higher hedonic products. It corroborated the notions about 'depth' of consideration as a driver of purchase and the role of a shopping list in realising complementary unplanned products as supplements to the 'planned basket'. It also showed that interacting with service staff for assistance increases the likelihood of concluding with a purchase.

  • Location in the store matters: an aisle is a relatively more likely place for an unplanned consideration to occur, and an unplanned consideration that happens there has a better chance of resulting in a purchase. The authors recommend assigning service staff to be present near aisles.

Complementarity relations were analysed once again, this time in the context of unplanned purchases. The analysis, as visualised in a new map, indicates that proximity between planned and unplanned categories enhances the likelihood of an unplanned purchase: if a shopper plans to purchase in category A, then the closer category B is to A, the more likely the shopper is to purchase in category B, given that it is considered. Hui et al. note that distances in the maps for considerations and for purchase conversion of unplanned considerations are not correlated, hence implying that unplanned consideration and the purchase decision are two different dimensions of the decision process. This is a salient result because it distinguishes between engaging in consideration and the decision itself. The researchers caution, however, that in some cases the distinction between consideration and a choice decision may be false and inappropriate, because the two may happen rapidly in a single step.

  • The latent distances in the maps are also uncorrelated with physical distances between products in the supermarket (i.e., the complementarity relations are mental).

The research shows that while a promotion (coupons or in-store flyers) for an unplanned product has a significant effect in increasing the probability of its consideration, it does not contribute to the probability of its purchase. This evidence further points to a separation between consideration and decision. The authors suggest that a promotion may attract shoppers to consider a product, but most are uninterested in buying, and hence it has no further effect on their point-of-purchase behaviour. The researchers suggest that retailers can apply their model of complementarity to proactively invoke consideration by triggering a real-time promotion on a mobile shopping app for products associated with those on a shopper's digital list, "so a small coupon can nudge this consideration into a purchase".

But some reservations should be made about the findings regarding promotions. An available promotion can increase the probability of a product being considered in an unplanned manner, yet shoppers are less likely to look at their coupons or flyers at the relevant moment. Conversely, the existence of a promotion does not contribute to purchase conversion of an unplanned consideration, but shoppers are more likely to refer to their coupons or flyers during unplanned considerations that result in a purchase. A plausible explanation that resolves this apparent inconsistency is that reference to a promotional coupon or flyer is more concrete from the shopper's viewpoint than the mere availability of a promotion; shoppers may not be aware of some of the promotions the researchers account for. In the article, the researchers do not directly address promotional information that appears on tags at the product display; such promotions may affect shoppers differently from flyers or distributed coupons (paper, or digital via a mobile app), because tags are more readily visible at the point of purchase.

One of the dynamic factors examined by Hui et al. is the 'in-store slack', the mental budget reserved for unplanned purchases. Reserving a larger slack increases the likelihood of unplanned considerations. Furthermore, at the moment of truth, the larger the in-store slack remaining at the time of an unplanned consideration, the more likely the shopper is to take a product from the display to purchase. However, the computations used in the analyses of dynamic changes in each shopper's in-store slack appear to assume that shoppers keep track of how much they have already spent on planned products at various moments of the trip and remain aware of their budget, an assumption that is not very realistic. The approach in the research is very clever, and yet consumers may not be so sophisticated: they may exceed their in-store slack, possibly because they are not very good at keeping to their budget (e.g., a tendency exacerbated by the use of credit cards) or at making arithmetic computations fluently.

Finally, shoppers may be subject to a dynamic trade-off between their self-control and the in-store slack. As the shopping trip progresses and the remaining in-store slack shrinks, the shopper becomes less likely to allow an unplanned purchase; yet he or she may also become more easily tempted to consider and buy in an unplanned manner, because the strength of one's self-control is depleted by active decision-making. In addition, a shopper who avoided making a purchase on the last occasion of unplanned consideration is more likely to purchase a product on the next unplanned occasion; this negative 'momentum' effect means that following an initial effort at self-control, subsequent attempts are more likely to fail as a result of the depletion of self-control strength.

The research of Hui, Huang, Suher and Inman offers multiple insights for retailers as well as manufacturers, and much material for further thought, study and planning. The video tracking approach reveals patterns and drivers of shopper behaviour in unplanned considerations and how they relate to planned considerations. The methodology is not without limitations; viewing and coding the video clips is notably time-consuming. Nevertheless, this research takes us a step forward towards better understanding and knowledge to act upon.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) Deconstructing the “First Moment of Truth”: Understanding Unplanned Consideration and Purchase Conversion Using In-Store Video Tracking; Sam K. Hui, Yanliu Huang, Jacob Suher, & J. Jeffrey Inman, 2013; Journal of Marketing Research, 50 (August), pp. 445-462.

(2) Innovations in Shopper Marketing: Current Insights and Future Research Issues; Venkatesh Shankar, J. Jeffrey Inman, Murali Mantrala, & Eileen Kelley, 2011; Journal of Retailing, 87S (1), pp. S29-S42.

(3) See other research on path data modelling and analysis in marketing and retailing by Hui with Peter Fader and Eric Bradlow (2009).


A new film this year, “Sully”, tells the story of US Airways Flight 1549, which landed safely on the water of the Hudson River on 15 January 2009 following drastic damage to the plane’s two engines. This article is specifically about the decision process of the captain, Chesley (Sully) Sullenberger, with the backing of his co-pilot (first officer) Jeff Skiles; the film helps to highlight some instructive and interesting aspects of human judgement and decision-making in an acute crisis situation. Furthermore, the film shows how those cognitive processes contrast with computer algorithms and simulations, and why the ‘human factor’ must not be ignored.

There were altogether 155 people on board the Airbus A320 aircraft on its Flight 1549 from New York to North Carolina: 150 passengers and five crew members. The story unfolds while following Sully in the aftermath of the incident, during the investigation by the US National Transportation Safety Board (NTSB), which he faced together with Skiles. The film (directed by Clint Eastwood, featuring Tom Hanks as Sully and Aaron Eckhart as Skiles, 2016) is based on Sullenberger’s autobiographical book “Highest Duty: My Search for What Really Matters” (2009). Additional resources, such as interviews and documentaries, were also used in preparing this article.

  • The film is excellent, recommended for its way of delivering the drama of the story during and after the flight, and for the acting of the leading actors. A caution to those who have not seen the film: the article includes some ‘spoilers’. On the other hand, facts of this flight and the investigation that followed were essentially known before the film.

This article is not explicitly about consumers, although the passengers, as customers, were obviously directly affected by the conduct of the pilots, as it saved their lives. The focus, as presented above, is on the decision process of Captain Sullenberger. We may expect that such an extraordinarily positive outcome of the flight, rescued from a dangerous circumstance, would have a favourable impact on the image of US Airways, the airline that employs such talented flight crew members. But improving corporate image or customer service and relationships were not the relevant considerations during the flight; saving lives was.

Incident Schedule: Less than two minutes after take-off (at ~15:27), a flock of birds (Canada geese) struck both engines of the aircraft. It is vital to realise that from that moment, the flight lasted less than four minutes. The captain took control of the plane from his co-pilot immediately after the impact with the birds, and then had between 30 seconds and one minute to decide where to land. Just 151 seconds passed from the impact with the birds until the plane was directly above the Hudson River, approaching for landing on the water. Finally, impact with the water occurred 208 seconds after the impact with the birds (at ~15:30).

Using Heuristics: The investigators of the NTSB told Sully (Hanks) about flight calculations performed in their computer simulations, and argued that, according to the simulation results, landing on the Hudson River, a highly risky type of crash-landing, had not been inevitable. In response, Sully said that it had been impossible for him and Skiles to perform all those detailed calculations during the four minutes of flight after the birds struck the aircraft’s engines; he relied instead on what he saw with his own eyes in front of him — the course of the plane and the terrain below as the plane glided with no engine power.

The visual guidance Sully describes using to navigate the plane resembles a type of ‘gaze heuristic’ identified by professor Gerd Gigerenzer (1). In the example given by Gigerenzer, a player who tries to catch a ball flying in the air does not have time to calculate the trajectory of the ball from its initial position, speed and angle of projection. Moreover, the player would also have to take into account wind, air resistance and ball spin. The ball would be on the ground by the time the player completed the necessary estimations and computations. An alternative intuitive strategy (heuristic) is to ‘fix gaze on the ball, start running, and adjust one’s speed so that the angle of gaze remains constant’. The situation of the aircraft is of course different, more complex and perilous, but a similar logic seems to hold: navigating the plane safely towards the terrain surface (land or water) when there is no time for any advanced computation (the pilot’s gaze would have to be fixed on the terrain beneath, towards a prospective landing ‘runway’). Winter winds in New York City on that freezing day probably made the landing task even more complicated. But in the few minutes available to Sully, he found this type of ‘gaze’ or eyesight-guided rule the most practical and helpful.
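The elegance of the gaze heuristic is that it substitutes one simple control rule for a whole physics computation. A minimal toy simulation can illustrate the point; the parameter values, the 2-D geometry and the function names below are illustrative assumptions, not Gigerenzer's formulation:

```python
import math

def simulate_catch(vx=8.0, vy=15.0, g=9.81, dt=0.01, gaze_angle=math.radians(40)):
    """Toy 2-D sketch of the gaze heuristic (illustrative values only).
    The catcher never computes the ball's trajectory; at each time step he
    simply repositions himself so that the angle of gaze (elevation) to the
    ball stays constant. As the ball descends, holding the angle constant
    steers him to the landing point."""
    t, ball_x, ball_y = 0.0, 0.0, 0.0
    catcher_x = 40.0  # arbitrary starting position
    while True:
        t += dt
        ball_x = vx * t                       # projectile motion of the ball
        ball_y = vy * t - 0.5 * g * t * t
        if ball_y <= 0:                       # ball reaches the ground
            return catcher_x, ball_x
        # Keep the gaze angle constant: stay at a horizontal distance of
        # ball_y / tan(angle) from the ball. No trajectory is ever computed.
        catcher_x = ball_x + ball_y / math.tan(gaze_angle)

catcher_end, ball_landing = simulate_catch()
print(abs(catcher_end - ball_landing))  # the catcher ends within a step of the ball
```

Despite knowing nothing of projectile physics, the simulated catcher arrives essentially where the ball lands, which is the crux of the heuristic's power.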

Relying on Senses: Sullenberger made extensive use of his senses (visual, auditory, olfactory) to collect all the information he could from his surrounding environment. To start with, the pilots could see the birds coming at them right before some of them crashed into the engines; this evidence was crucial to identifying instantly the cause of the problem, though they still needed some time to assess the extent of the damage. In an interview on CBS’s programme 60 Minutes (with Katie Couric, February 2009), Sully says that he saw the smoke coming out of both engines, smelled the burned flesh of the birds, and subsequently heard a hushing noise from the engines (i.e., made by the remaining blades). He could also feel the trembling of the broken engines. This multi-modal sensory information, in addition to the failure to restart the engines, convinced him that they were lost (i.e., unable to produce thrust). Sully also monitored throughout the information from the various gauges and clocks on the cockpit dashboard in front of him (while Skiles read to him from the manuals). The captain was thus attentive to multiple visual stimuli (including and beyond the visual guidance heuristic) throughout his decision process, from early judgement to acting on his decision to land on the water of the Hudson River.

Computer algorithms can ‘pick up’ and process all the technical information of the aircraft displayed to the pilots in the cockpit. The algorithms may also apply additional measurements in their computations (e.g., climate conditions) and perhaps data from sensors installed in the aircraft. But computer algorithms cannot ‘experience’ the flight event like the pilots. Sully could ‘feel the aircraft’: he could almost simultaneously and rapidly perceive the sensory stimuli he received in the cockpit, within and outside the cabin, and respond to them (e.g., make a judgement). Information available to him seconds after the impact with the birds gave him indications about the condition of the engines that the algorithms used in the simulations could not receive. That point became clear in the dispute that emerged between Sully and the investigating committee over the condition of one of the engines. The investigators claimed that early tests and simulations suggested one of the engines was still functioning and could have allowed the pilots to bring the plane to land at one of the nearby airports (returning to La Guardia or diverting to Teterboro in New Jersey). Sully (Hanks) disagreed and argued that his indications were clear that the engine in question was badly damaged and non-functional; both engines had lost thrust. Sully was proven right: the committee eventually reported that missing parts of the disputed engine had been found and showed that the engine was indeed non-functional, disproving the early tests.

Timing and the Human Factor: Captain Sullenberger had, furthermore, a strong argument with the investigating committee of the NTSB about its simulations attempting to reconstruct or replicate the sequence of events during the flight. The committee argued that pilots in a flight simulator ‘virtually’ made successful landings at both La Guardia and Teterboro airports when the simulator computer was given the data of the flight. Sully (Hanks) found a problem with those live but virtual simulations: they assumed the pilots could know immediately where it was possible to land, and the simulating pilots were instructed accordingly. Sully and Skiles indeed knew immediately the cause of the damage, but they still needed time to assess its extent before Sully could decide how to react. Therefore, they could not actually have turned the plane towards one of those airports right after the bird strike as the simulating pilots did. The committee ignored the human factor, as Sully argued: it had taken him up to a minute to realise the extent of the damage and his decision options.

Sully’s conversation with the air traffic controllers demonstrates, step by step and in real time, his assessments that he could not make it to La Guardia or alternatively to Teterboro — both were effectively considered — before concluding that the aircraft might end up in the water of the Hudson. Then the captain directed the plane straight above the river in approach for the crash-landing. One may also note how brief his response statements to the air controller were. Sully was confident that landing on the Hudson was “the only viable alternative”, as he said in his interview with CBS. In the film, Sully (Hanks) tells Skiles (Eckhart), during a recuperating break outside the committee hall, that he has no question left in his mind that they did the right thing.

Given Sully’s strong resistance, the committee ordered additional flight simulations in which the pilots were “held” waiting for 35 seconds, to account for the time needed to assess the damage before attempting to land anywhere. Following this minimal delay, the simulating pilots failed to land safely at either La Guardia or Teterboro. It was evident that those missing seconds were critical to arriving in time to land at those airports. Worse still, the committee had to admit (as shown in the film) that the pilots had made multiple attempts (17) in their simulations before ‘landing’ successfully at those airports. The human factor of evaluating the situation before making a sound decision in this kind of emergency must not be ignored.

Delving a little deeper into the event helps one realise how difficult the situation was. The pilots were trying to execute a three-part checklist of instructions. They were not told, however, that those instructions had been written for a loss of both engines at a much higher altitude than theirs just after take-off. The NTSB’s report (AAR-10-03) finds that the dual engine failure at a low altitude was critical: it allowed the pilots too little time to complete the existing three-part checklist. In an interview with Newsweek in 2015, Sullenberger said of that challenge: “We were given a three-page checklist to go through, and we only made it through the first page, so I had to intuitively know what to do.” The NTSB committee further accepts in its report that landing at La Guardia could have succeeded only if started right after the bird strike, but, as explained earlier, that was unrealistic; importantly, it notes Sullenberger’s realisation that an attempt to land at La Guardia “would have been an irrevocable choice, eliminating all other options”.

The NTSB also commends Sullenberger in its report for operating the Auxiliary Power Unit (APU). The captain asked Skiles to try operating the APU after their failed attempt to restart the engines; Sully decided to take this action before they could reach the item on the APU in the checklist. Operating the APU was, according to the NTSB, most beneficial in keeping electrical power available on board.

Notwithstanding Sully’s judgement and decision-making capabilities, his decision to land on the waters of the Hudson River could have ended miserably without his experience and skill as a pilot in executing it rightly. He had 30 years of experience as a commercial pilot in civil aviation since 1980 (with US Airways and its predecessors), and before that had served in the US Air Force in the 1970s as a pilot of military jets (Phantom F-4). The danger in landing on water is that the plane may fail to reach the surface level, parallel to the water; one of the wings might then hit the water, break up and cause the whole plane to capsize and break apart in the water (as happened in a flight in 1996). That Sully succeeded in safely “ditching” on the water surface is far from obvious.

The performance of Sullenberger, from decision-making to execution, seems extraordinary. His judgement and decision capacity in these flight conditions may be exceptional; it is unclear whether other pilots could perform as well as he did. Human judgement is not infallible; it may be subject to biases and errors and succumb to information overload. It is not too difficult to think of examples of people making bad judgements and decisions (e.g., in finance, health, etc.). Yet Sully has demonstrated that a high capacity for human judgement and sound decision-making exists, and we can be optimistic about that.

It is hard, and not straightforward, to extend conclusions from flying airplanes to other areas of activity. In one respect, however, there are helpful lessons to learn from this episode: thinking more deeply and critically about the replacement of human judgement and decision-making with computer algorithms, machine learning and robotics. Such algorithms work best in familiar and repeated events or situations. But in new and less familiar situations, and in less ordinary and more dynamic conditions, humans may perform more promptly and appropriately. Computer algorithms can often be very helpful, but they are not always and necessarily superior to human thinking.

This kind of discussion is needed, for example, with respect to self-driving cars, a very active field in industry these days, pairing automakers with technology companies to install autonomous driving systems in cars. Google is planning to create ‘driverless’ cars without a steering wheel or pedals; its logic is that humans should no longer be involved in driving: “Requiring a licensed driver be able to take over from the computer actually increases the likelihood of an accident because people aren’t that reliable” (2). This claim is excessive and questionable. We have to distinguish carefully between computer aid to humans and the replacement of human judgement and decision-making with computer algorithms.

Chesley (Sully) Sullenberger allowed himself, as the flight captain, to be guided by his experience, intuition and common sense to land the plane safely and save the lives of all passengers and crew on board. He was wholly focused, as he told CBS, on “solving this problem”: the task of landing the plane without casualties. He recruited his best personal resources and skills for this task, and his success may give everyone hope and strengthen belief in human capacity.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Gut Feelings: The Intelligence of the Unconscious”, Gerd Gigerenzer, 2007, Allen Lane (Penguin Books).

(2) “Some Assembly Required”, Erin Griffith, Fortune (Europe Edition), 1 July 2016.

 

Read Full Post »

Department stores have been competing hard for more than thirty years to overcome the challenges posed to them by shopping centres and malls. They keep refreshing their interior designs, merchandising and marketing methods to remain relevant, up to date, and especially reinvigorated for the younger generations of shoppers. Department stores and shopping centres are two different retailing models for offering a wide array of product categories, and accompanying services, within enclosed built environments — different in the requirements and responsibilities of managing them, in their structures, and most importantly in the shopping experiences they create. There is enough room in consumers’ lives for shopping both ways.

Shopping centres may be found in the central areas of cities and on their outskirts, on main roads at city gates and in suburban neighbourhoods. A shopping mall, according to the original American model, is a shopping centre characterised by a location outside the city centre, housed in a single- or two-floor building spread over a large area, with a large parking lot free of charge. But shopping centres or malls nowadays exhibit such a variety of architectural structures and styles of interior design, at different sizes and locations, that the distinction between the terms has become quite vague and less important.

Department stores traditionally belong in city centres. They are also typically housed in their own dedicated buildings (e.g., 5 to 7 floors, including one or two underground floors). Each floor in a contemporary store hosts one or more departments (e.g., cosmetics, accessories, menswear, furniture, electric goods and electronics/digital) or amenities (e.g., restaurants). That was not the case in the early days (1850s-1920s), when the retail space open to the public included only up to three floors and the rest of the building was used for production, staff accommodation and other administrative functions; the range of products was much smaller. So the department store as we know it today follows the format redeveloped in the 1930s and progressed further soon after World War II. The styles of interior design and visual merchandising, nevertheless, have certainly changed several times over the years.

There is, however, another, more recent format of department store, which resides within a shopping centre. It is a reduced and condensed exemplar of the ‘classic’ department store, probably not how consumers most often perceive and think of such stores. But the reduced store version is perhaps not so much a problem as its location. Shopping centres invite retail chains of department stores to open a branch as an anchor store on their premises, and it seems a necessary move by the retailers to maintain visibility and presence amid the threat the shopping centres pose to them. This venture also allows the retailer to extend its reach to shoppers away from city centres. Yet one may question whether it serves the interests of the department store retailer as much as those of the shopping centre’s proprietor. Being more limited in space and scope of products, while surrounded by a few hundred other shops and stores under the same roof, the department store could easily get lost and vanish from shopper attention in the crowded space. It should be much more difficult for the store to remain conspicuous in this kind of environment, especially when shoppers can refer to a selection of specialist shops in any category they are interested in, almost next door.

When a shopper enters a respectable department store, he or she tends to get absorbed in it. The variety of products on display, lights and colours, brand signs, and furnishing and fixtures in different shapes and styles pull you in, making you forget the outer world. The shopper may find almost anything one needs and seeks, whether for wearing, decorating the living room, or working in the kitchen, enough to forget there is a street and other shops and stores out there. Think of stores, just for illustration, such as KaDeWe in Berlin, Selfridges in London, La Rinascente in Milan, or Printemps in Paris: that is the magic of a department store. Of course there are many other stores of this type from different chains, in different styles and atmospherics (which may vary between departments within the same store), in the main cities of each country. For instance, Marks & Spencer opened its modern flagship store in a glass building at the turn of the century in Manchester, not in London. Not long afterwards Selfridges also opened a store in Manchester, and then in Birmingham. Printemps and Galeries Lafayette sit next to each other on Boulevard Haussmann in Paris; both are very elegant, though the latter looks more glittering and artistic, appearing even more upscale and luxurious than the former. Now Galeries Lafayette is planning its most modern department store concept yet, to open on the Champs-Élysées.

That is not the impression and feeling one gets in a shopping centre. Although a centre can be absorbing and entertaining in its own way, it is usually the centre’s environment as a whole that absorbs, much less any single shop or store. Even in the larger stores, the shopper is never too far from being exposed again to other retail outlets that can be quickly accessed. In the shopping centre or mall, a shopper moves around between shops and stores, reviews and compares their brand and product selections, and at any point can easily return to “feel free” walking in the public pathways of the centre, eye-scanning other stores. It is a different manner and form of shopping experience for a consumer than visiting a department store.

The rise of branding and consumer brands since the 1980s has also had an important impact on trade, organisation and visual merchandising in department stores, as in other types of stores. There is a much stronger emphasis in floor layouts on organisation by brand, particularly in fashion (clothing and accessories) departments. The course of the shopping trip is affected as a result: shoppers are driven to search first by brand rather than by attribute of the product type they seek. That is, a shopper would search and examine a variety of articles (e.g., shirts, trousers, sweaters, jackets) displayed in a section dedicated to a particular brand before seeing similar articles from other brands. This can make the trip more tiresome if one is looking for a type of clothing by fabric, cut or fit, colour or visual pattern. But not everything on a floor is always sorted into brand sections, like a shop-in-shop; often a shopper may find concentrated displays of items such as shirts or raincoats in different models from several brands. Furthermore, there is still continuity on a floor, so that one can move around, take along articles from different brands to compare and fit together, and then pay for everything at the same cashier.

In some cases, especially for more renowned and luxury brands, the shop-in-shop arrangement is formal: the brand is given more autonomy to run its dedicated “shop” (known as a concession), making its own merchandising decisions and employing its own personnel for serving and selling to customers. The flexibility of shoppers may be somewhat more restricted when buying from brand concessions. However, even where some “brand shops” are more formal, much of the merchandising is already segregated into brand sections, and shoppers frequently cannot easily tell apart formal and less formal business arrangements for brand displays. The sections assigned to specific brands are usually not physically enclosed and fully separated from other areas: some look more like “booths”, others are more widely open at the front facing a pathway. Significantly, shoppers can still feel they are walking in the same space of a department or floor, and then move smoothly to another type of department (e.g., from men’s or women’s fashion to home goods). That kind of continuity and flexibility while shopping is not afforded when wandering between individual shops and stores in a shopping centre or mall. The segregation of the floor layout into dominant brand sections or “shops” within a department store (and some architectural elements) can blur the lines and make the department store seem more similar to a shopping centre, but not quite. The shopping experiences remain distinct in nature and flavour.

  • “With so many counters rented out to other retailers, it is as though the modern department store has returned to the format of the early nineteenth-century bazaar.” (English Shops and Shopping, Kathryn A. Morrison, 2003, Yale University Press/English Heritage.)

Department stores have gone through salient changes, even transformations, over the years. As early as the 1930s, stores started a transition to an open-space layout, removing partitions between the old-time rooms to allow for larger halls on each floor. Other changes were more pronounced after World War II and into the 1950s, such as permitting self-service, reducing shoppers’ reliance on sellers, and accordingly displaying merchandise more openly, visible and accessible to shoppers within arm’s reach. These developments have altered the dynamics of shopping and paved the way for creative advances in visual merchandising.

Department stores have also introduced more supporting services (e.g., repairs of various kinds, photo processing, orders and deliveries, gift lists, cafeterias and restaurants). In the new millennium, department stores joined the digital scene, added online shopping, and expanded other services and interactions with consumers through online and mobile channels. In more recent years we have also witnessed a resurgence of emphasis on food, particularly high-quality food and delicatessen. Department stores have opened food halls that include merchandise for sale (fresh and packaged) and bars where shoppers can eat freshly made dishes from different types of food and cuisines (e.g., KaDeWe, La Rinascente, Jelmoli in Zürich).

Department stores in Israel have always been on a smaller scale than their counterparts overseas, a modest version. But they suffered greatly with the emergence of shopping centres. The only chain that still exists today (“HaMashbir”) was originally established in 1947 by the largest labour union organisation in the country. After the first American-style mall opened near Tel-Aviv in 1985, the chain started to decline; as more shopping centres opened their gates, the stores became outdated and lost the interest of consumers. By the end of the 1990s the chain had come near collapse, until it was salvaged in 2003 by a private businessman (Shavit) who took it upon himself to rebuild and revive it.

The chain now has 39 branches across the country, but they are mostly far from the scale of those abroad, and about half are located in shopping centres. Yet in 2011 HaMashbir opened its first large multi-category store in the centre of Jerusalem, occupying 5,000 square metres over seven floors. The stores seem to have gone through a few rounds of remodelling before settling on their current look and style. They are overall elegant but not fancy, less luxurious and brand-laden, intended to better accommodate middle-class consumers and to attract families.

It is rather surprising that Tel-Aviv still awaits a full-scale department store. The chain has stores in two shopping centres in Tel-Aviv but none left on main streets. In at least two leading shopping centres the stores have shrunk over the years, and one of them is gone. The latter, once located in a lucrative and most popular shopping mall in a northern suburb, was reduced from two floors to a single floor and gave up its fashion department amid the plenitude of competing fashion stores in the mall, until it eventually closed down. Another store remains near Tel-Aviv in “Ayalon Mall”, the first mall in Israel.

Tel-Aviv has the population size (400,000) and flow of visitors on weekdays (more than a million) to justify a world-class store on a main street. Such a store also has the potential to increase the city’s attraction for tourists. The deterrents for the retail chain are likely to be high real-estate prices, the difficulty of finding a building suitable for housing the store, and the competition from existing shopping centres as well as from stores in high-street shopping districts. Yet especially in a city like Tel-Aviv, a properly designed and planned department store is most likely to become a shopping and leisure institution and a centre of activity for the many who live, work or tour in the city.

Shopping centres and department stores can exist side by side because they are essentially different models and concepts of an enriched retail complex in an enclosed environment. Unlike the shopping centre, the department store is a world of retail in itself, not an assortment of individual retail establishments. The department store engages shoppers through its structure, design and function, given the powers the retailer has to plan and manage the large store as an integrated retailing space. Consequently, a department store engenders customer experiences that differ from those of a shopping centre in the customers’ shopping trips or journeys and in how they spend their leisure time in the store. One just has to look at the flows of people who flock through the doors of department stores in major cities, most of all as weekends draw nearer.

Ron Ventura, Ph.D. (Marketing)

Read Full Post »

Human thinking processes are rich and variable, whether in search, problem solving, learning, perceiving and recognising stimuli, or decision-making. But people are subject to limitations on the complexity of their computations and especially on the capacity of their ‘working’ (short-term) memory. As consumers, they frequently have to struggle with large amounts of information on numerous brands, products or services with varying characteristics, available from a variety of retailers and e-tailers, stretching consumers’ cognitive abilities and patience. Wait no longer: a new class of increasingly intelligent decision aids is being put forward to consumers by the evolving field of Cognitive Computing. Computer-based ‘smart agents’ will get smarter; most importantly, they will be more human-like in their thinking.

Cognitive computing is set to upgrade human decision-making, consumers’ in particular. According to IBM, a leader in this field, cognitive computing is built on methods of Artificial Intelligence (AI), yet intends to take that field a leap forward by making it “feel” less artificial and more similar to human cognition. That is, a human-computer interaction will feel more natural and fluent if the thinking processes of the computer more closely resemble those of its human users (e.g., manager, service representative, consumer). Dr. John E. Kelly, SVP at IBM Research, provides the following definition in his white paper introducing the topic (“Computing, Cognition, and the Future of Knowing”): “Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans. Rather than being explicitly programmed, they learn and reason from interactions with us and from their experiences with their environment.” The paper seeks to rebut claims of any intention behind cognitive computing to replace human thinking and decisions. The motivation, as suggested by Kelly, is to augment human ability to understand and act upon the complex systems of our society.

Understanding natural language was for a long time a human cognitive competence that computers could not imitate. However, comprehension of natural language, in text or speech, is now considered one of the important abilities of cognitive computing systems. Another important ability concerns the recognition of visual images and objects embedded in them (face recognition, for example, receives particular attention). Furthermore, cognitive computing systems are able to process and analyse unstructured data, which constitutes 80% of the world’s data according to IBM. They can extract contextual meaning so as to make sense of unstructured data (verbal and visual). This is a marked difference between the new cognitive computing systems and traditional information systems.

  • The Cognitive Computing Forum, which organises conferences in this area, lists a dozen characteristics integral to those systems. In addition to (a) natural language processing and (b) vision-based sensing and image recognition, they are likely to include machine learning, neural networks, algorithms that learn and adapt, semantic understanding, reasoning and decision automation, sophisticated pattern recognition, and more (note that some of the methodologies on this list overlap). They also need to exhibit common sense.

The power of cognitive computing derives from its combination of cognitive processes attributed to the human brain (e.g., learning, reasoning) with the enhanced computation (complexity, speed) and memory capabilities of advanced computer technologies. In terms of intelligence, it is acknowledged that cognitive processes of the human brain are superior to what computers could achieve through conventional programming. Yet the actual performance of human cognition (‘rationality’) is bounded by memory and computation limitations. Hence, we can employ cognitive computing systems capable of handling much larger amounts of information than humans can, while using cognitive (‘neural’) processes similar to humans’. Kelly posits in IBM’s paper: “The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense and ethical values.” It is not yet sufficiently understood how cognitive processes physically occur in the human central nervous system. But, it is argued, knowledge and understanding of their operation or neural function is growing enough for at least some of them to be emulated by computers. (This argument refers to the concept of different levels of analysis that may and should prevail simultaneously.)

The distinguished scholar Herbert A. Simon studied thinking processes from the perspective of information processing theory, which he championed. In the research he and his colleagues conducted, he traced and described in a formalised manner the strategies and rules that people utilise to perform different cognitive tasks, especially solving problems (e.g., his comprehensive work with Allen Newell on Human Problem Solving, 1972). In his theory, any strategy or rule specified — from elaborate optimising algorithms to short-cut rules (heuristics) — is composed of elementary information processes (e.g., add, subtract, compare, substitute). Conversely, strategies may be joined into higher-level compound information processes. Strategy specifications were subsequently translated into computer programmes for simulation and testing.

Simon's main objective was to gain a better understanding of human thinking and the cognitive processes involved therein. He stated that computer "thinking" is programmed in order to simulate human thinking, as part of an investigation aimed at understanding the latter (1). Thus, Simon did not explicitly aim to overcome the limitations of the human brain but rather to simulate how the brain may work around those limitations to perform various tasks. His approach, followed by other researchers, was based on recording how people perform given tasks and testing the efficacy of the resulting process models through computer simulations. This course of research differs from the goals of the novel cognitive computing.

  • We may identify multiple levels in research on cognition: an information processing level (‘mental’), a neural-functional level, and a neurophysiological level (i.e., how elements of thought emerge and take form in the brain). Moreover, researchers aim to obtain a comprehensive picture of brain structures and areas responsible for sensory, cognitive, emotional and motor phenomena, and how they inter-relate. Progress is made by incorporating methods and approaches of the neurosciences side-by-side with those of cognitive psychology and experimental psychology to establish coherent and valid links between those levels.

Simon created explicit programmes of the steps required to solve particular types of problems, though he also aimed to develop more generalised programmes able to handle broader categories of problems (e.g., the General Problem Solver, embodying the Means-End heuristic) and other cognitive tasks (e.g., pattern detection, rule induction) that may also be applied in problem solving. Cognitive computing, however, seeks to reach beyond explicit programming and construct guidelines for far more generalised processes that can learn from and adapt to data, and handle broader families of tasks and contexts. If necessary, computers would generate their own instructions or rules for performing a task. In problem solving, computers are taught not merely how to solve a problem but how to look for a solution.
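The flavour of Simon's explicit programmes can be conveyed with a toy version of the Means-End heuristic: repeatedly apply the operator that most reduces the difference between the current state and the goal. The numeric state and the operator set below are hypothetical stand-ins for the symbolic objects of the General Problem Solver; this is a minimal sketch, not Simon's actual programme.

```python
def means_end_search(state, goal, operators, max_steps=50):
    """Greedy Means-End search: at each step pick the operator that
    leaves the smallest remaining difference from the goal."""
    steps = []
    for _ in range(max_steps):
        if state == goal:
            return steps
        # evaluate every operator on the current state; keep the best
        name, op = min(operators.items(),
                       key=lambda kv: abs(goal - kv[1](state)))
        state = op(state)
        steps.append(name)
    return steps  # may stop short of the goal; a sketch, not a guarantee

# Hypothetical elementary operators acting on a numeric state
ops = {"add5": lambda s: s + 5, "add1": lambda s: s + 1, "double": lambda s: s * 2}
plan = means_end_search(state=3, goal=17, operators=ops)  # a sequence of operator names
```

The greedy difference-reduction above can get stuck on harder problems, which is one reason the full GPS also employed sub-goaling; the sketch only shows the core idea.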

While cognitive computing can employ greater memory and computation resources than are naturally available to humans, the aim is not truly to create a fully rational system. The computer cognitive system should retain some properties of bounded rationality, if only to maintain resemblance to the original human cognitive system. First, forming and selecting heuristics is an integral property of human intelligence. Second, cognitive computing systems try to exhibit common sense, which may not be entirely rational (i.e., based on good instincts and experience), and to introduce effects of emotions and ethical or moral values that may alter or interfere with rational cognitive processes. Third, cognitive computing systems are allowed to err:

  • As Kelly explains in IBM’s paper, cognitive systems are probabilistic, meaning that they have the power to adapt and interpret the complexity and unpredictability of unstructured data, yet they do not “know” the answer and therefore may make mistakes in assigning the correct meaning to data and queries (e.g., IBM’s Watson misjudged a clue in the quiz game Jeopardy against two human contestants — nonetheless “he” won the competition). To reflect this characteristic, “the cognitive system assigns a confidence level to each potential insight or answer”.
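The idea of attaching a confidence level to each potential answer can be sketched in a few lines. The candidate answers, their raw evidence scores and the softmax-style normalisation below are illustrative assumptions, not IBM's actual scoring method.

```python
import math

def rank_with_confidence(candidates):
    """Rank candidate answers and attach a normalised confidence level.

    `candidates` maps each answer to a raw evidence score; confidences
    are softmax-normalised so they sum to 1 (illustrative choice).
    """
    exp_scores = {ans: math.exp(s) for ans, s in candidates.items()}
    total = sum(exp_scores.values())
    return sorted(((ans, e / total) for ans, e in exp_scores.items()),
                  key=lambda pair: pair[1], reverse=True)

# The system returns its best answer *together with* its confidence,
# making explicit that it does not "know" and may be wrong.
answers = rank_with_confidence({"Toronto": 1.2, "Chicago": 2.8, "Boston": 0.5})
best, confidence = answers[0]
```

A downstream application could then act only when the top confidence clears a threshold, and otherwise defer to a human, which is how the probabilistic character of such systems is typically handled.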

Applications of cognitive computing are gradually growing in number (e.g., experimental projects built on Watson, with IBM's cooperation and support). They may not be targeted directly at consumers at this stage, but consumers are seen as the end-beneficiaries. The first users could be professionals and service agents who help consumers in different areas. For example, applied systems in development and trial would:

  1. help medical doctors in identifying (cancer) diagnoses and advising their patients on treatment options (it is projected that such a system will “take part” in doctor-patient consultations);
  2. perform sophisticated analyses of financial markets and their instruments in real-time to guide financial advisers with investment recommendations to their clients;
  3. assist account managers or service representatives to locate and extract relevant information from a company’s knowledge base to advise a customer in a short time (CRM/customer support).

The health-advisory platform WellCafé by Welltok provides an example of an application aimed directly at consumers: the platform guides consumers on healthy behaviours recommended for them, and its new assistant, Concierge, lets them converse in natural language to get help on resources and programmes personally relevant to them, as well as on various health-related topics (e.g., dining options). (2)

Consider domains such as cars, tourism (vacation resorts), or real estate (second-hand apartments and houses). In these domains consumers may encounter a tremendous amount of information on numerous options, with many attributes to consider (for cars there may also be technical detail that is more difficult to digest). A cognitive system would have to help the consumer study the market environment (e.g., organising the information from sources such as company websites and professional and peer reviews [social media], detecting patterns in structured and unstructured data, screening and sorting) and learn the consumer's preferences and habits in order to prioritise and construct personally fitting recommendations. Additionally, it is noteworthy that in any of these domains visual information (e.g., photographs) can be most relevant and valuable to consumers in their decision process — the visual appeal of car models, mountain or seaside holiday resorts, and apartments cannot be discarded. Cognitive computing assistants may raise very high consumer expectations.

Cognitive computing aims to mimic human cognitive processes, performed by intelligent computers with enhanced resources on behalf of humans. The capabilities of such a system would assist consumers, or the professionals and agents who help them, with decisions and other tasks — saving them time and effort (and sometimes frustration), and providing well-organised information with customised recommendations for action that users would feel they have reached themselves. Time and experience will tell how comfortably people interact and engage with human-like intelligent assistants, how productive they indeed find them, and whether using a cognitive assistant comes to feel like the most natural thing to do.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. “Thinking by Computers”; Herbert A. Simon, 1966/2008; reprinted in Economics, Bounded Rationality and the Cognitive Revolution, Massimo Egidi and Robin Marris (eds.), pp. 55-75; Edward Elgar.

2. The examples given above are described in IBM’s white paper by Kelly and in: “Cognitive Computing: Real-World Applications for an Emerging Technology”; Judith Lamont (Ph.D.), 1 Sept. 2015; KMWorld.com


From a consumer viewpoint, choice situations should be presented in a clear and comprehensible manner that facilitates consumers’ correct understanding of what is at stake and helps them choose the alternative that most closely fits their needs or preferences. But policy makers may go further and design choices so as to direct consumers towards an alternative that is, in the policy makers’ judgement, desirable or recommended.

Humans (unlike economic persons, or Econs) are very likely to be influenced in their decisions by the way a choice problem is presented; even when unintentional, such influence is almost unavoidable. Sometimes, however, an intervention to influence a decision-maker is made intentionally. Choice architecture relates to how choice problems are presented: the way the problem is organised and structured, and how alternatives are described, including tools or techniques that may be used to guide a decision-maker to a particular choice alternative. Richard Thaler and Cass Sunstein have called such tools ‘nudges’, and the designer of the choice problem is referred to as a ‘choice architect’. In their book, “Nudge: Improving Decisions About Health, Wealth and Happiness” (2009), the researchers were nonetheless very specific about the kinds of nudging they support and advocate (1). A nudge may be likened to a light push of a consumer out of his or her ‘comfort zone’ towards a particular choice alternative (e.g., an action, a product), but it should be harmless, and consumers should remain free to accept or reject it.

Thaler and Sunstein argue that in some cases more action is needed to ‘nudge’ consumers in the right direction. That is because consumers, as Humans, often do not consider the choice situation and alternatives carefully enough, tend to err, and may not do what would actually be in their own best interest. It may be added that consumers’ preferences may not be well established, and when these are unstable it could make it even more difficult for consumers to find an alternative that fits their preferences closely. Hence, the authors recommend acting in a careful, corrective manner that guides consumers towards an alternative that a policy maker assesses will serve them better (e.g., health-care, savings). Yet they insist that no nudging intervention should be imposed on the consumer. They call their approach ‘libertarian paternalism’: a policy maker may tell consumers what alternative would be right for them, but the consumer is eventually left with the freedom to choose how to act. They state that:

To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting the fruit at eye level counts as a nudge. Banning junk food does not.

Thaler and Sunstein suggest six key principles, or types, of nudges: (a) Defaults; (b) Expect error (i.e., nudges designed to accommodate human error); (c) Give feedback (nudges reliant on social influence may be included here); (d) Understanding ‘mappings’ (i.e., a match between a choice made and its welfare outcome, such as consumption experience); (e) Structure complex choices; (f) Incentives. The authors discuss and propose how to use those tools in dealing with choice issues such as complexity and a status quo bias (inertia) (e.g., applied to student loans, retirement pensions and savings, medication plans).

Let’s look at some examples of how choice architecture may influence consumer choice:

A default may be set up to determine what happens if a consumer makes no active choice (e.g., ‘too difficult to choose’, ‘too many options’) or to induce the consumer to take a certain action. Defaults can change the significance of opt-in and opt-out choice methods. A basic opt-in could ask a consumer to tick a box if she agrees to participate in a given programme. Now consider a slight change: pre-ticking the box as the default — if the consumer does not wish to join, she can uncheck the box (opt-out). A more explicit default-and-opt-out combination could state up front (e.g., in a heading) that the consumer is automatically enrolled in the programme, and that if she declines she should send an e-mail to the organiser. If inclusion in a programme is the default, and consumers have to opt out of it, many more will end up enrolled than if they had to actively approve their participation. Yet the effect may vary depending on the ease of opting out (just unchecking the box vs. sending a separate e-mail). Defaults of this type may be used for benign purposes such as subscription to an e-newsletter, as well as for sensitive purposes like organ donation (2).

  • A default option is particularly attractive when the ‘alternative’ action is actually choosing from a long list of other alternatives (e.g., mutual and equity funds for investment).
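The default effect described above can be illustrated with a toy calculation. All the figures below (the share of consumers who actively decide, and the assumption that active deciders split evenly) are invented purely for illustration; real enrollment data would of course differ.

```python
def enrollment_rate(action_rate, default_enrolled):
    """Toy model of the default effect.

    `action_rate` is the share of consumers who actively tick or untick
    the box; everyone else stays with whatever the default prescribes.
    Active deciders are assumed (for illustration) to split evenly.
    """
    active_joiners = action_rate * 0.5      # half of the active deciders join
    passive = 1.0 - action_rate             # the rest keep the default
    return active_joiners + (passive if default_enrolled else 0.0)

opt_in = enrollment_rate(action_rate=0.4, default_enrolled=False)   # must tick to join
opt_out = enrollment_rate(action_rate=0.4, default_enrolled=True)   # pre-ticked box
```

Under these assumed numbers the pre-ticked default enrolls four times as many consumers as the opt-in version, even though the active deciders behave identically in both conditions; that is the whole force of the default.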

Making a sequence of choice decisions is a recurring purchase activity. As a simple example, suppose you have to construct a list of items that you want to purchase (e.g., songs to compile, books to order) by choosing one item from each of a series of choice sets. Presenting the choice sets in increasing order of size is likely to encourage the chooser to enter a maximising mind-set — starting with a small set, it is easier to examine all options in the set closely before choosing, and as the set size increases the chooser will keep trying to examine options exhaustively. When starting with a large choice set and decreasing the size thereafter, the opposite happens: the chooser enters a simplifying, satisficing mind-set. Thus, across choice sets, a chooser in the increasing-order condition is likely to search more deeply and examine more options overall. As described by Levav, Reinholtz and Lin, consumers are “sticky adapters” (3). When constructing an investment portfolio, for instance, a financial policy maker may nudge investors to examine more of the available funds, bonds and equities by dividing them into classes presented as choice sets in increasing order of size (up to a reasonable limit).
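The ordering nudge itself is a one-liner once the choice sets exist; the fund classes below are hypothetical examples, and the sketch only arranges the presentation order, which is all the choice architect controls here.

```python
def order_choice_sets(choice_sets, increasing=True):
    """Arrange choice sets by size, smallest first by default.

    Sketch of the presentation nudge described by Levav, Reinholtz and
    Lin: starting small encourages an exhaustive, maximising mind-set
    that 'sticks' as the sets grow.
    """
    return sorted(choice_sets, key=len, reverse=not increasing)

# Hypothetical investment classes of different sizes
fund_classes = [
    ["gov-bond A", "gov-bond B"],
    ["corp-bond A", "corp-bond B", "corp-bond C"],
    ["equity A", "equity B", "equity C", "equity D"],
]
presented = order_choice_sets(fund_classes)   # smallest class shown first
```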

Multiple aspects of choice design or architecture arise in the context of mass customization. Take the case of price: a question arises whether to specify the cost of each level of a customized attribute (actually the price premium for upgraded levels vs. a baseline level) or only the total price of the final product designed. One view argues that providing detailed price information for the levels of quality attributes allows consumers to consider the monetary implications of choosing an upgraded level on each attribute; this is less difficult than trying to extract the marginal cost of each chosen level from the total price. Including prices for levels of quality attributes leads consumers to choose intermediate attribute levels more frequently (compared with a by-alternative choice set) (4). A counter view posits that carefully weighing price information on each attribute is not so easy (consumers report higher subjective difficulty), and actually causes consumers to be too cautious and configure products that are less expensive but also of lower quality. Hence, providing a total price for the resulting product could be sufficient and more useful for customers (5). It is hard to give a conclusive design suggestion in this case.

In a last example, the form in which calorie information is provided on restaurant menus matters no less than posting it at all. As recent research by Parker and Lehmann shows, it is quite possible to overdo it (6). Consistent with other studies, the researchers find that when calorie figures are posted next to food dishes, consumers choose items with lower calorie content on average than from a similar traditional menu with no calorie figures. However, separating the low-calorie items from their original food-type categories (e.g., salads, burgers) into a new group, as some restaurants do, may eliminate the advantage of calorie posting. While the logic of a separate group is that it is more conspicuous and easier for diners to attend to, it could instead make it easier for them to exclude those items from consideration. Nevertheless, some qualification is needed, as the title given to the group also matters.

Parker and Lehmann show that organising the low-calorie items in a separate group explicitly titled as such (e.g., “Low Calories”, “Under 600 Calories”) attenuates the posting effect, thus eliminating the advantage of inducing consumers to order lower-calorie items. The title is important because it makes it easier for consumers to screen out the whole category from consideration (e.g., as unappealing on the face of it). The researchers demonstrate that giving the group a positive name unrelated to calories (e.g., “Eddie’s Favourites”, “Fresh and Fit”) generates less rejection and makes it no more likely to be screened out than other categories. In a menu that is simply calorie-posted, consumers are more likely to trade off the calories against other information on a food item, such as its composition and price. But if consumers are helped to screen out the low-calorie group as a way of simplifying their decision process at an early stage, they will also ignore those items’ calorie details.

  • An additional explanation can be suggested for disregarding the low-calorie items when grouped together: if those items are mixed into categories of similar food types, each item stands out as ‘low calorie’ and is perceived as different and more important. If, on the other hand, the low-calorie items are aggregated in a set-aside group, they are more likely to be perceived collectively as of diminished importance or appeal, and be ignored together (cf. [7]). Therefore, creating a separate group of varied items pulled out of all the other groups sends the wrong message to consumers and may nudge them in the wrong direction.

Both public and private policy makers can use nudging. But there are limitations deserving attention, especially with regard to private (business) policy makers. Companies sometimes act out of the belief that in order to recruit customers they should present complex alternative plans (e.g., mobile telecoms, insurance, bank loans), which includes obscuring vital details and making comparisons between alternatives very difficult. They see nudging tools that are meant to reduce the complexity of consumer choice as playing against their interest (e.g., if choice is complex it will be easier for the company to capture [trap in] the customer). That counters the intention of Thaler and Sunstein, who stand against this kind of practice.

To help customers see more clearly the relation, and match, between their patterns of service usage and the cost they are required to pay, Thaler and Sunstein propose a nudge scheme called RECAP: Record, Evaluate, and Compare Alternative Prices. The scheme entails companies publishing, in readily accessible channels (e.g., websites), full details of their service and price plans, as well as providing existing customers with periodic reports that show how their level of usage on each component of service contributes to total cost. These measures of increased transparency would help customers understand what they pay for, monitor and control their costs, and reconsider from time to time their current service plan vis-à-vis alternative plans of the same provider and those of competitors. The problem is that service providers are usually reluctant to hand over such detailed information of their own good will. Public regulators may have to require companies to create a RECAP scheme, or perhaps nudge them to do so.
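The periodic report at the heart of RECAP could be sketched roughly as below. The service components, tariffs and usage figures are invented for illustration; the point is only that each component's contribution to the total bill is made explicit so the customer can compare plans.

```python
def recap_report(price_per_unit, usage):
    """Sketch of a RECAP-style usage report (hypothetical tariffs).

    Returns (breakdown, total): for each service component, the units
    used, the unit price, and that component's contribution to cost.
    """
    breakdown = []
    total = 0.0
    for component, units in usage.items():
        cost = units * price_per_unit[component]
        total += cost
        breakdown.append((component, units, price_per_unit[component], cost))
    return breakdown, total

# Assumed plan and one month's usage, for illustration only
tariffs = {"minutes": 0.05, "texts": 0.02, "data_gb": 2.00}
usage = {"minutes": 300, "texts": 150, "data_gb": 4}
breakdown, total_cost = recap_report(tariffs, usage)
```

Running the same usage figures through a competitor's tariff table would give the comparison Thaler and Sunstein have in mind, which is exactly the step providers are reluctant to make easy.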

In the lighter scenario, companies avoid nudging techniques that work to the benefit of consumers, out of concern that these would hurt their own interests. In the worse scenario, companies misinterpret nudging and use tools that actively manipulate consumers into choices that are not in their benefit (e.g., highlighting a more expensive product the consumer does not really need). Thaler and Sunstein are critical of public or private (business) policy makers who conceive and apply nudges in their own self-interest. They dedicate more effort, however, to countering objections to government intervention in consumers’ affairs and popular suspicions of malpractice by branches of government (i.e., issues of major concern in the United States that may not be fully understood in other countries). Of course, it is important not to turn a blind eye to harmful usage of nudges by public as well as private choice architects.

There are many opportunities to use nudging tools cleverly to guide and assist consumers. Yet there can be a thin line between imposed choice and free choice, or between obtrusive and libertarian paternalism. Designing and implementing nudging tools can therefore be a delicate craft, advisably a matter primarily for expert choice architects.

Ron Ventura, Ph.D. (Marketing)

Notes:

(1) “Nudge: Improving Decisions About Health, Wealth and Happiness”; Richard H. Thaler and Cass R. Sunstein, 2009; Penguin Books (updated edition).

(2) Ibid 1, and: “Beyond Nudges: Tools of a Choice Architecture”; Eric J. Johnson and others, 2012; Marketing Letters, 23, pp. 487-504.

(3) “The Effect of Ordering Decisions by Choice-Set Size on Consumer Search”; Jonathan Levav, Nicholas Reinholtz, & Claire Lin, 2012; Journal of Consumer Research, 39 (October), pp. 585-599.

(4) “Contingent Response to Self-Customization Procedures: Implications for Decision Satisfaction and Choice”; Ana Valenzuela, Ravi Dhar, & Florian Zettelmeyer, 2009; Journal of Marketing Research, 46 (December), pp. 754-763.

(5) “Marketing Mass-Customized Products: Striking a Balance Between Utility and Complexity”; Benedict G.C. Dellaert and Stefan Stremersch, 2005; Journal of Marketing Research, 42 (May), pp. 219-227.

(6) “How and When Grouping Low-Calorie Options Reduces the Benefits of Providing Dish-Specific Calorie Information”; Jeffrey R. Parker and Donald R. Lehmann, 2014; Journal of Consumer Research, 41 (June), pp. 213-235.

(7) Johnson et al. (see #2).


The EXPO 2015 exhibition in Milano, which is coming to a close at the end of October, has concentrated on the future of agriculture and food on our planet. The urgency of these topics is elevated by the adverse conditions of climate change (warming) and shortage of water, which are predicted to worsen further. The EXPO is generally a prime opportunity for countries to promote their nation-brands. This time countries were invited to showcase their advanced scientific and technological capabilities by offering programmes and solutions to overcome the environmental and economic challenges of agriculture and food provision.

The supermarket retailer Coop of Italy has taken a different direction, though within the realm of its business specialisation: Coop Italia presents at EXPO 2015 its vision of how shopping will be conducted in future supermarkets. It has put on stage a functioning model of a supermarket store (Future Food District / il supermercato del futuro) where detailed product information is displayed on large digital screens, and check-out and payment are performed at computer-automated terminals. Almost obviously, such a supermarket will require even fewer human service personnel than are found in stores today.

  • Coop Italia covers online (in Italian) a range of aspects such as food retailing, shopping, technology, and the future of food itself.

Coop Italia: Future Food District at EXPO 2015

It should be emphasised that the experimental supermarket of Coop at EXPO Milano is not just for demonstration: visitors can actually collect food products into their shopping baskets and purchase them at the end of their trip. At the store’s front and on the upper level, a visitor/shopper may find fresh produce and packaged food products displayed on shelves. From there he or she may descend to the lower level to find mostly refrigerated and frozen products. Having selected products from the display area, the shopper may go to the self-service scan-and-pay terminals and finalise the purchase (payment can be made by credit and debit cards or in cash).

The prospective format offers, according to Coop Italia, new interactions between consumers, products and producers. Mainly, consumers can observe and read from digital display screens much more information on products and their producers than has traditionally been possible in supermarkets. The screens usually hang above the shelf cabinets or refrigerators at about head level. When the shopper points to a particular product’s title and image on the nearest screen, a variety of details in text and graphics, and a larger pictorial image of the product, appear on screen. Besides the essentials of product name, size measures and price, additional information may be presented on product components and nutritional values (e.g., calories, sugar, salt, fat, protein, fibres), and on its source (e.g., producer company and country of origin). This facility should save shoppers the effort of straining their eyes to read small print on product packages, where packaging is relevant at all. The information is also displayed in a friendlier and more comprehensible form (e.g., using understandable terms, illustrated visually in graphic charts). These enhancements of the future shopping experience owe much to advanced display technology and data visualization.

Occasionally the visitor/shopper may also see sales statistics and more background on the growing and production of the product of interest, with emphasis on nutritional and health implications. Coop Italia suggests that presenting more of these kinds of information will give consumers better direction on preferred or recommended food products in future times (e.g., given new constraints on food provision). Thus Coop connects to the general issue of the future of food at the focus of EXPO 2015.

Coop Italia: Future Food District at EXPO 2015

On site, the supermarket space looked elegant and modern. The large black screens hanging overhead, positioned at an angle like “\”, definitely signalled a change in the visual scene of the store. It was the first cue to be noticed as to how the future supermarket could be different. The screens were easily discernible, but their arrangement was not in any way disturbing to the eye; one could quickly get used to them. Activating the display and viewing information for any chosen product was intriguing and to some extent even entertaining. On one hand it felt like “playing” while shopping; on the other hand it increased interest in the products considered, if only out of curiosity rather than for purchase. The information presented was usually helpful and of practical value for decision-making. Overall, the future supermarket model appeared to enrich the shopping experience.

There were some impediments in practice, however. Getting the screen to display information for a desired product was not always smooth and easy. It was not clear, for instance, whether one should raise a chosen product item up to the screen above or just point towards the image of the relevant product (visitors could be seen trying both). Whatever sensors were supposed to identify the gesture of the shopper’s hand or the product itself, they were occasionally not satisfactorily responsive. Most screens were located on top, out of shoppers’ reach, and therefore the question was: how do I get the system to recognize my choice of product? But perhaps it was also a matter of more training by the shopper to get it right (gamers should have better success with such a system).

Screens on-top and as panels on the door-side of refrigerators


Additionally, it sometimes felt that the information displayed changed too quickly, not giving enough time to review parts of the data provided. Information on each product was usually shown in two or three “shots” (i.e., the first portion of product information was replaced by the display of the next portion). Since the shopper has no control over the duration of display, it could sometimes be irritating when, as a shopper, I could not review a data figure of interest in time. One should remember, though, that a shopper is usually not alone, and the same screen may have to serve multiple customers within a few minutes, so a single shopper may be allowed just a brief time to inspect the most needed information. The stress on shoppers might be felt particularly during peak shopping hours. Hence, shoppers may benefit from the convenience of viewing information on large screens, but when necessary they should be able to switch to the private screens of their mobile devices to continue reviewing product information.

  • It is noted that Coop Italia provides QR codes for products that shoppers can scan to access the product information on their own devices (and possibly conduct the purchase online).

Regardless of the technology employed, Coop deserves congratulations for the visually appealing layout and arrangement of the product display, and for its orderliness and cleanliness. It was evident that great care was invested in setting up and maintaining the supermarket. Since this is indeed an experimental stage for the future supermarket, it is reasonable to expect that work to improve the performance and usability of the installed technology will continue. By the time it arrives, the younger generations will most likely be ready for the concept. In summary, the shopping experience ‘nel supermercato del futuro’ was positive and encouraging.

 


How is Coop Italia perceived following its initiative? Naturally, Coop would expect its Future Food District initiative to have a positive effect on the company’s image. Feedback received from consumers following their visit to the future supermarket included (most frequent responses, cited from a video clip):

  • The Coop demonstrates that it is modern and up-to-date (48%)
  • The Coop demonstrates that it has at heart the future of the planet and its inhabitants (29%)
  • The Coop demonstrates that it keeps in line with the new requirements of consumers (27%)
  • The Coop anticipates the future (19%)
  • The Coop is looking to generate curiosity and interest (13%)

But 16.5% also indicated that Coop has gone too far ahead of its time and that consumers are not yet ready for all this technology, and 15% argued that Coop may risk distancing those who are not familiar with the technology. Hence, the technological advances may be welcome, yet it could be too early to implement them at this time.

 


The EXPO exhibition in Milano this year was enormous in scope and fascinating; it was well organised and instructive. All countries presented products and other artefacts, images and models standing for some of their national and cultural assets and symbols, emphasising, as much as possible for each country, environmental considerations and priorities. The differences in scale between countries’ exhibits, however, were striking. There was also large diversity in the level of sophistication of presentation, in the technologies used and in other display aids applied. In particular, some countries focused more on high-tech techniques while others relied mainly on low-tech features.

Country exhibits hosted in shared pavilions by theme (e.g., cacao and chocolate, coffee, rice, bio-Mediterranean, arid zones) were modest; those countries also related only moderately to projects or developments for resolving agricultural and food challenges. But even among the smaller exhibits it would be unfair to speak of homogeneity, because some countries were enlightening exceptions that managed to put up impressive and interesting exhibits.

Countries exhibiting in their own pavilions blended more expansively their traditional assets with programmes and technological solutions dedicated specifically to the challenges of future agriculture and food. It must be noted that some pavilions were impressive in their architecture per se. But the country pavilions also proved that size is not everything: diversity in the level of effort invested, in ingenuity and in richness was discernible among those pavilion exhibitions. Furthermore, the variation in quality, originality and interest of exhibits did not seem to be accounted for merely by differences in economic power or resources.

Israel Pavilion at EXPO 2015: A Vertical Field


 

