
The discipline of consumer behaviour is by now well versed in the distinction between System 1 and System 2 modes of thinking, particularly as it relates to consumer judgement and decision making, with implications for marketing and retail management. Much gratitude is owed to Daniel Kahneman, Nobel Prize laureate in economics, for bringing the concept of these thinking systems to the attention of the wider public (i.e., beyond academia) in his book “Thinking, Fast and Slow” (2012). As Kahneman himself notes, ‘System 1’ and ‘System 2’, though not always under these labels, were identified and elaborated by psychologists well before his book. Kahneman, however, succeeds in clarifying the concepts of these different modes of thinking while linking them to phenomena studied in his own earlier research, most notably in collaboration with the late Amos Tversky.

In a nutshell: System 1’s type of thinking is automatic, associative and intuitive; it tends to respond quickly, but consequently it is at higher risk of jumping to wrong conclusions. It is the ‘default’ type of thinking that guides human judgement, decisions and behaviour much of the time. On the other hand, System 2’s type of thinking is deliberative, logical, critical, and effortful; it involves deeper concentration and more complex computations and rules. System 2 has to be called to duty voluntarily, activating rational thinking and careful reasoning. Whereas thinking represented by System 1 is fast and reflexive, that of System 2 is slow and reflective.

Kahneman describes and explains the role, function and effect of System 1 and System 2 in various contexts, situations or problems. In broad terms: thinking of the System 1 type comes first; System 2 either passively adopts the impressions, intuitive judgements and recommendations of System 1 or actively kicks in for more orderly examination and correction (alas, it tends to be lazy, not in a hurry to volunteer). Just to give a taste, below is a selection of situations and problems in which Kahneman demonstrates the important differences between these two modes of thinking, how they operate and the outcomes they produce:

  • Illusions (e.g., visual, cognitive)
  • Use of memory (e.g., computations, comparisons)
  • Tasks requiring self-control
  • Search for causal explanations
  • Attending to information (“What You See Is All There Is”)
  • Sets and prototypes (e.g., ‘average’ vs. ‘total’ assessments)
  • Intensity matching
  • ‘Answering the easier question’ (simplifying by substitution)
  • Predictions (also see correlation and regression, intensity matching, representativeness)
  • Choice in opt-in and opt-out framing situations (e.g., organ donation)
  • Note: Other contexts presented by Kahneman (e.g., the illusion of validity [a stock-picking task], choice under Prospect Theory) are not connected explicitly to System 1 or System 2, so their relation to the two systems can only be inferred indirectly by the reader.

In order to gain a deeper understanding of System 1 and System 2, we should inspect the detailed aspects that differentiate these thinking systems. The concept of the two systems actually emerges from binding together multiple dual-process theories of cognition, thus appearing to be a larger, cohesive theory of modes of thinking. Each dual-process theory usually focuses on a particular dimension that distinguishes between two types of cognitive processes the human mind may utilise. However, those dimensions ‘correlate’ or ‘co-occur’, and a given theory often adopts aspects from other similar theories or adds supplementary properties; the dual-system conception is hence built on this convergence. The aspects or properties used to describe the process in each type of system are extracted from those dual-process theories. A table presented by Stanovich (2002) helps to see how System 1 and System 2 contrast in various dual-process theories. Some of those theories are (for brevity, S1 and S2 are used below to refer to each system):

  • S1: Associative system / S2: Rule-based system (Sloman)
  • S1: Heuristic processing / S2: Analytic processing (Evans)
  • S1: Tacit thought process / S2: Explicit thought process (Evans and Over)
  • S1: Experiential system / S2: Rational system (Epstein)
  • S1: Implicit inference / S2: Explicit inference (Johnson-Laird)
  • S1: Automatic processing / S2: Controlled processing (Shiffrin and Schneider)

Note: Evans and Wason referred to Type 1 vs. Type 2 processes as early as 1976.

  • Closer to consumer behaviour: the Elaboration Likelihood Model (Petty, Cacioppo & Schumann) posits a dual-process theory of routes to persuasion, distinguishing central processing from peripheral processing.

Each dual-process theory provides a rich portrayal of two different thinking modes. The theories complement each other, but they do not necessarily depend on each other. The boundaries between the two types of process are not very sharp; the features of the systems are not all exclusive, in the sense that a property associated with a System 1 process may occur in a System 2 process, and vice versa. Furthermore, the processes also interact with one another, particularly in that System 2 relies on products of thought from System 1, either approving them or using them as a starting point for further analysis. Occasionally, however, System 2 may merely generate reasons to justify a choice already made by System 1 (e.g., a consumer likes a product for the visual appearance of its packaging or its design).

Stanovich follows the table of theories with a comparison of properties describing System 1 versus System 2 as derived from a variety of dual-process theories, but without attributing them to any specific theory (e.g., holistic/analytic, relatively fast/slow, highly contextualized/decontextualized). Comparative lists of aspects or properties have been offered by other researchers as well. Evans (2008) formed a comparative list of more than twenty attributes, which he divided into four clusters (describing System 1/System 2):

  • Cluster 1: Consciousness (e.g., unconscious/conscious, automatic/controlled, rapid/slow, implicit/explicit, high capacity/low capacity)
  • Cluster 2: Evolution (e.g., evolutionary old/recent, nonverbal/linked to language)
  • Cluster 3: Functional characteristics (e.g., associative/rule-based, contextualized/abstract, parallel/sequential)
  • Cluster 4: Individual differences (universal/heritable, independent of/linked to general intelligence, independent of/limited by working memory capacity)

Listings of properties collated from different sources (models, theories) and interpreted as integrative profiles of System 1 and System 2 may foster the misconception that the distinction between the two systems constitutes an over-arching theory. Evans questions whether it is really possible, or acceptable, to tie the various theories of different origins under a common roof presented as an over-arching, cohesive theory of two systems (he locates the problems mainly with ‘System 1’). It may be more appropriate to treat the dual-system presentation as a paradigm or framework that helps one grasp the breadth of aspects that can distinguish between two types of cognitive processes and thereby obtain a more comprehensive picture of cognition. The properties are not truly required to co-occur as constituents of a whole profile of one system or the other. In certain domains of judgement or decision problems, a set of properties may jointly describe the process entailed; some dual-process theories take different perspectives on a similar domain, and hence the aspects derived from them are related and appear to co-occur.

  • Evans contrasts the more widely accepted ‘sequential-interventionist’ view (as described above) with a ‘parallel-competitive’ view.
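
To make the sequential-interventionist account concrete, here is a minimal illustrative sketch in Python (it is not taken from Kahneman, Stanovich or Evans; the function names and the monitoring test are hypothetical simplifications). System 1 proposes a fast, intuitive answer to the well-known ‘bat-and-ball’ problem discussed by Kahneman; a cheap monitoring check then decides whether System 2 needs to intervene and recompute the answer deliberately.

```python
# Illustrative sketch of the 'sequential-interventionist' dual-process flow.
# Problem (Kahneman): a bat and a ball cost 1.10 together; the bat costs
# 1.00 more than the ball. How much does the ball cost?

def system1_intuition(total, difference):
    """Fast, associative answer: simply 'peel off' the salient difference."""
    return total - difference              # 0.10 -- intuitive but wrong


def constraints_hold(ball, total, difference):
    """Cheap monitoring check that can trigger System 2 intervention."""
    bat = ball + difference
    return abs((bat + ball) - total) < 1e-9


def system2_deliberation(total, difference):
    """Slow, rule-based answer: solve 2 * ball + difference = total."""
    return (total - difference) / 2         # 0.05 -- correct


def answer(total=1.10, difference=1.00):
    guess = system1_intuition(total, difference)
    if constraints_hold(guess, total, difference):
        return guess                         # System 2 passively endorses System 1
    return system2_deliberation(total, difference)   # System 2 kicks in


print(round(answer(), 2))                    # 0.05, after System 2 intervenes
```

Under the parallel-competitive view, by contrast, both types of process would run from the outset and compete to determine the response, rather than System 2 waiting for a signal to intervene.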

People use a variety of procedures and techniques to form judgements, make decisions, or perform any other kind of cognitive task. Stanovich relates the structure, shape and level of sophistication of the mental procedures or algorithms of thought humans can apply to their intelligence or cognitive capacity, positioned at the algorithmic level of analysis. Investing more effort in the more complicated techniques or algorithms entailed in rational thinking is a matter of volition, positioned at the intentional level (a notion borrowed from Dennett’s theorizing on consciousness).

However, humans do not, for a great part of the time, engage in thought close to the full of their cognitive capacity (e.g., in terms of depth and efficiency). According to Stanovich, we should distinguish between cognitive ability and thinking dispositions (or styles). The styles of thinking a person applies do not necessarily reflect everything he or she is cognitively capable of. Put succinctly, the fact that a person is intelligent does not mean that he or she has to think and act rationally; one has to choose to do so and invest the required effort. When one does not, the door opens for smart people to act stupidly. Furthermore, the way a person is disposed to think is most often selected and executed unconsciously, especially when the thinking disposition or style is relatively fast and simple. Cognitive styles entailed in System 1, characterised as intuitive, automatic, associative and fast, serve to ease the cognitive strain on the brain, and they are most likely to occur unconsciously or preconsciously. Still, being intuitive and using heuristics does not imply that a person will end up acting stupidly; some would argue that an intuitive decision can be more sensible than one made when trying to think rationally. Much depends on how thinking in the realm of System 1 happens: if one rushes while applying an inappropriate heuristic or relying on an unfitting association, he or she becomes more likely to act stupidly (or, plainly, to ‘be stupid’).

Emotion and affect are more closely linked to System 1. Yet emotion should not be viewed ultimately as a disruptor of rationality. As proposed by Stanovich, emotions may fulfill an important adaptive regulatory role: serving as interrupt signals necessary for achieving goals, avoiding entanglement in complex rational thinking that only keeps one away from a solution, and reducing a problem to manageable dimensions. In some cases emotion does not disrupt rationality but rather helps one choose when it is appropriate and productive to apply a rational thinking style (e.g., use an optimization algorithm, initiate counterfactual thinking). By switching between the two modes of thinking described as System 1 and System 2, one has the flexibility to choose when and how to act in reason or be rational, and emotion may play the positive role of a guide.

The dual-system concept provides a way of looking broadly at the cognitive processes that underlie human judgement and decision making. System 1’s mode of thinking is particularly adaptive in that it allows a consumer to quickly sort through large amounts of information and navigate complex and changing environments. System 2’s mode of thinking is the ‘wise counselor’ that can be called upon to analyse the situation more deeply and critically and provide a ‘second opinion’, like an expert. However, it intervenes ‘on request’, when it receives persuasive signals that its help is required. Considering the aspects that distinguish these two modes of thinking can help marketing and retail managers better understand how consumers conduct themselves and cater to their needs, concerns, wishes and expectations. This viewpoint can be especially helpful, for instance, in the area of ‘customer journeys’: studying how thinking styles direct or lead the customer or shopper through a journey (including emotional signals), anticipating reactions, and devising methods that can alleviate conflicts and reduce friction in interactions with customers.

Ron Ventura, Ph.D. (Marketing)

References:

(1) Thinking, Fast and Slow; Daniel Kahneman, 2012; Penguin Books.

(2) Rationality, Intelligence, and Levels of Analysis in Cognitive Science (Is Dysrationalia Possible); Keith E. Stanovich, 2002; in Why Smart People Can Be So Stupid (Robert J. Sternberg, editor) (pp. 124-158), New Haven & London: Yale University Press.

(3) Dual-Processing Accounts of Reasoning, Judgment and Social Cognition; Jonathan St. B. T. Evans, 2008; Annual Review of Psychology, 59, pp. 255-278. (Available online at psych.annualreviews.org, doi: 10.1146/annurev.psych.59.103006.093629).

 


Human thinking processes are rich and variable, whether in search, problem solving, learning, perceiving and recognizing stimuli, or decision making. But people are subject to limits on the complexity of their computations and especially on the capacity of their ‘working’ (short-term) memory. As consumers, they frequently have to struggle with large amounts of information on numerous brands, products or services with varying characteristics, available from a variety of retailers and e-tailers, stretching consumers’ cognitive abilities and patience. They need wait no longer: a new class of increasingly intelligent decision aids is being put forward to consumers by the evolving field of Cognitive Computing. Computer-based ‘smart agents’ will get smarter; most importantly, they will be more human-like in their thinking.

Cognitive computing is set to upgrade human decision making, consumers’ in particular. According to IBM, a leader in this field, cognitive computing builds on methods of Artificial Intelligence (AI) yet intends to take the field a leap forward by making it “feel” less artificial and more similar to human cognition. That is, human-computer interaction will feel more natural and fluent if the thinking processes of the computer more closely resemble those of its human users (e.g., a manager, service representative, or consumer). Dr. John E. Kelly, SVP at IBM Research, provides the following definition in his white paper introducing the topic (“Computing, Cognition, and the Future of Knowing”): “Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans. Rather than being explicitly programmed, they learn and reason from interactions with us and from their experiences with their environment.” The paper seeks to rebut claims of any intention behind cognitive computing to replace human thinking and decisions. The motivation, as suggested by Kelly, is to augment the human ability to understand and act upon the complex systems of our society.

Understanding natural language has long been a human cognitive competence that computers could not imitate. However, comprehension of natural language, in text or speech, is now considered one of the important abilities of cognitive computing systems. Another important ability concerns the recognition of visual images and of objects embedded in them (face recognition, in particular, receives much attention). Furthermore, cognitive computing systems are able to process and analyse unstructured data, which constitutes 80% of the world’s data according to IBM. They can extract contextual meaning so as to make sense of unstructured data (verbal and visual). This is a marked difference between the new cognitive computing systems and traditional information systems.

  • The Cognitive Computing Forum, which organises conferences in this area, lists a dozen characteristics integral to those systems. In addition to (a) natural language processing and (b) vision-based sensing and image recognition, these are likely to include machine learning, neural networks, algorithms that learn and adapt, semantic understanding, reasoning and decision automation, and sophisticated pattern recognition, among others (note that some of the methodologies on this list overlap). They also need to exhibit common sense.
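
As a toy illustration of what ‘making sense of unstructured data’ can mean at the simplest level, the sketch below matches a free-form consumer query against a few candidate topics by word overlap and returns the best match with a crude confidence share. Real cognitive computing systems rely on far richer machine learning and semantic models; the topics, scoring and names here are hypothetical.

```python
# Toy illustration: matching a free-form consumer query (unstructured text)
# against candidate topics by word overlap. The topics and scoring are
# hypothetical simplifications, not how real cognitive systems work.
import re
from collections import Counter

TOPICS = {
    "family car": "spacious safe reliable family car low fuel consumption",
    "city car": "small compact city car easy parking cheap to run",
    "beach resort": "seaside beach resort warm sun relaxing hotel",
    "mountain resort": "mountain hiking resort cool air scenery trails",
}

def tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_topic(query):
    """Rank topics by token overlap; return the winner with a crude confidence share."""
    q = tokens(query)
    scores = {name: sum((q & tokens(desc)).values()) for name, desc in TOPICS.items()}
    winner = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return winner, scores[winner] / total

print(best_topic("looking for a safe, spacious car for the family"))
# ('family car', 0.8)
```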

The power of cognitive computing is derived from its combination of cognitive processes attributed to the human brain (e.g., learning, reasoning) with the enhanced computation (complexity, speed) and memory capabilities of advanced computer technologies. In terms of intelligence, it is acknowledged that the cognitive processes of the human brain are superior to what computers can achieve through conventional programming. Yet the actual performance of human cognition (‘rationality’) is bounded by memory and computation limitations. Hence, we can employ cognitive computing systems that are capable of handling much larger amounts of information than humans can, while using cognitive (‘neural’) processes similar to humans’. Kelly posits in IBM’s paper: “The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense and ethical values.” It is not yet sufficiently understood how cognitive processes physically occur in the human central nervous system. But, it is argued, knowledge and understanding of their operation or neural function is growing and is sufficient for emulating at least some of them by computers. (This argument refers to the concept of different levels of analysis that may, and should, prevail simultaneously.)

The distinguished scholar Herbert A. Simon studied thinking processes from the perspective of information-processing theory, which he championed. In the research he and his colleagues conducted, he traced and described in a formalised manner the strategies and rules that people utilise to perform different cognitive tasks, especially solving problems (e.g., his comprehensive work with Allen Newell on Human Problem Solving, 1972). In his theory, any strategy or rule specified, from elaborate optimizing algorithms to short-cut rules (heuristics), is composed of elementary information processes (e.g., add, subtract, compare, substitute); strategies may in turn be joined into higher-level compound information processes. Strategy specifications were subsequently translated into computer programmes for simulation and testing.
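
The compositional idea can be illustrated with a brief, hypothetical sketch in the spirit of Simon’s work (it does not reproduce his actual programmes): an elementary ‘compare to an aspiration level’ process is combined into a compound satisficing strategy that accepts the first option meeting every aspiration.

```python
# Hypothetical sketch in the spirit of Simon: a satisficing choice strategy
# composed of an elementary 'compare to aspiration level' information process.

def meets_aspiration(value, aspiration):
    """Elementary process: compare one attribute value to an aspiration level."""
    return value >= aspiration

def satisfice(options, aspirations):
    """Compound strategy: accept the first option that meets every aspiration."""
    for option in options:
        if all(meets_aspiration(option[attr], level) for attr, level in aspirations.items()):
            return option
    return None   # no option is satisfactory; aspirations could then be lowered

laptops = [
    {"name": "A", "battery_hours": 6, "ram_gb": 8},
    {"name": "B", "battery_hours": 10, "ram_gb": 16},
    {"name": "C", "battery_hours": 12, "ram_gb": 32},
]
print(satisfice(laptops, {"battery_hours": 9, "ram_gb": 16}))
# {'name': 'B', 'battery_hours': 10, 'ram_gb': 16}
```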

Simon’s main objective was to gain a better understanding of human thinking and the cognitive processes involved therein. He proclaimed that computer thinking is programmed in order to simulate human thinking, as part of an investigation aimed at understanding the latter (1). Thus, Simon did not explicitly aim to overcome the limitations of the human brain but rather to simulate how the brain may work around those limitations to perform various tasks. His approach, followed by other researchers, was based on recording how people perform given tasks and testing the efficacy of the process models through computer simulations. This course of research differs from the goals of the novel cognitive computing.

  • We may identify multiple levels in research on cognition: an information processing level (‘mental’), a neural-functional level, and a neurophysiological level (i.e., how elements of thought emerge and take form in the brain). Moreover, researchers aim to obtain a comprehensive picture of brain structures and areas responsible for sensory, cognitive, emotional and motor phenomena, and how they inter-relate. Progress is made by incorporating methods and approaches of the neurosciences side-by-side with those of cognitive psychology and experimental psychology to establish coherent and valid links between those levels.

Simon created explicit programmes of the steps required to solve particular types of problems, though he also aimed at developing more generalised programmes that would be able to handle broader categories of problems (e.g., the General Problem Solver, embodying the means-end heuristic) and other cognitive tasks (e.g., pattern detection, rule induction) that may also be applied in problem solving. Cognitive computing, however, seeks to reach beyond explicit programming and to construct guidelines for far more generalised processes that can learn and adapt to data and handle broader families of tasks and contexts. If necessary, computers would generate their own instructions or rules for performing a task. In problem solving, computers are taught not merely how to solve a problem but how to look for a solution.
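
A minimal sketch of the means-end heuristic conveys the flavour of this kind of search (the numeric state and the operators are toy choices for illustration, not Simon’s own formulation): repeatedly apply the operator that most reduces the difference between the current state and the goal state.

```python
# Minimal sketch of means-end analysis (the heuristic embodied in the
# General Problem Solver): repeatedly apply the operator that most reduces
# the difference between the current state and the goal. The numeric state
# and the operators are toy choices for illustration.

def means_end(start, goal, operators):
    state, plan = start, []
    while state != goal:
        # pick the operator whose result is closest to the goal
        name, op = min(operators.items(), key=lambda kv: abs(goal - kv[1](state)))
        if abs(goal - op(state)) >= abs(goal - state):
            break                      # no operator reduces the difference; give up
        state = op(state)
        plan.append(name)
    return plan

ops = {
    "add 1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "subtract 3": lambda x: x - 3,
}
print(means_end(start=2, goal=11, operators=ops))
# ['double', 'double', 'add 1', 'add 1', 'add 1']
```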

While cognitive computing can employ greater memory and computation resources than are naturally available to humans, there is no real attempt to create a fully rational system. The computer cognitive system should retain some properties of bounded rationality, if only to maintain resemblance to the original human cognitive system. First, forming and selecting heuristics is an integral property of human intelligence. Second, cognitive computing systems try to exhibit common sense, which may not be entirely rational (i.e., it is based on good instincts and experience), and to introduce effects of emotions and ethical or moral values that may alter or interfere with rational cognitive processes. Third, cognitive computing systems are allowed to err:

  • As Kelly explains in IBM’s paper, cognitive systems are probabilistic, meaning that they have the power to adapt and interpret the complexity and unpredictability of unstructured data, yet they do not “know” the answer and therefore may make mistakes in assigning the correct meaning to data and queries (e.g., IBM’s Watson misjudged a clue in the quiz game Jeopardy against two human contestants — nonetheless “he” won the competition). To reflect this characteristic, “the cognitive system assigns a confidence level to each potential insight or answer”.
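
A loose illustration of such confidence-scored answering is sketched below; the scoring mechanism is hypothetical and is not IBM’s actual implementation. Candidate answers are scored against accumulated evidence, the scores are normalised into confidence levels, and the system responds only when the best candidate clears a threshold.

```python
# Loose illustration of probabilistic, confidence-scored answering
# (a hypothetical mechanism, not IBM Watson's actual implementation):
# score candidate answers against evidence, normalise the scores into
# confidence levels, and answer only above a threshold.

def answer_with_confidence(evidence, threshold=0.5):
    """evidence maps each candidate answer to a raw evidence score."""
    total = sum(evidence.values()) or 1.0
    confidences = {cand: score / total for cand, score in evidence.items()}
    best = max(confidences, key=confidences.get)
    if confidences[best] < threshold:
        return None, confidences        # too uncertain to 'buzz in'
    return best, confidences

evidence = {"Chicago": 4.8, "Toronto": 1.2, "Boston": 0.5}
print(answer_with_confidence(evidence))
# ('Chicago', {'Chicago': ~0.74, 'Toronto': ~0.18, 'Boston': ~0.08})
```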

Applications of cognitive computing are gradually growing in number (e.g., experimental projects with the cooperation and support of IBM on Watson). They may not be targeted directly for use by consumers at this stage, but consumers are seen as the end-beneficiaries. The users could first be professionals and service agents who help consumers in different areas. For example, applied systems in development and trial would:

  1. help medical doctors in identifying (cancer) diagnoses and advising their patients on treatment options (it is projected that such a system will “take part” in doctor-patient consultations);
  2. perform sophisticated analyses of financial markets and their instruments in real-time to guide financial advisers with investment recommendations to their clients;
  3. assist account managers or service representatives to locate and extract relevant information from a company’s knowledge base to advise a customer in a short time (CRM/customer support).

The health-advisory platform WellCafé by Welltok provides an example of an application aimed at consumers: the platform guides consumers on healthy behaviours recommended for them, and its new assistant, Concierge, lets them converse in natural language to get help on resources and programmes personally relevant to them, as well as on various health-related topics (e.g., dining options). (2)

Consider domains such as cars, tourism (vacation resorts), or real estate (second-hand apartments and houses). Consumers may encounter a tremendous amount of information in these domains, covering numerous options and many attributes to consider (for cars there may also be technical detail that is more difficult to digest). A cognitive system would have to help the consumer study the market environment (e.g., organising information from sources such as company websites and professional and peer reviews [social media], detecting patterns in structured and unstructured data, screening and sorting) and learn the consumer’s preferences and habits in order to prioritise options and construct personally fitting recommendations. It is also noteworthy that in any of these domains visual information (e.g., photographs) could be most relevant and valuable to consumers in their decision process: the visual appeal of car models, mountain or seaside holiday resorts, and apartments cannot be discarded. Cognitive computing assistants may thus raise very high consumer expectations.
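
To make the ‘prioritise and construct personally fitting recommendations’ step concrete, here is a minimal sketch; the attribute names, weights and listings are hypothetical, and a real system would learn such weights from behaviour and combine many more signals (including unstructured reviews and images). Each option is scored by a weighted match to the consumer’s preferences and a ranked short-list is returned.

```python
# Minimal sketch: rank options by a weighted match to a consumer's preferences.
# The attributes, weights and listings are hypothetical; attribute values are
# assumed pre-normalised to a 0-1 scale (e.g., derived from listings and reviews).

def score(option, preferences):
    """Weighted sum of attribute matches (higher is better)."""
    return sum(weight * option.get(attr, 0.0) for attr, weight in preferences.items())

def shortlist(options, preferences, top_n=2):
    ranked = sorted(options, key=lambda o: score(o, preferences), reverse=True)
    return [(o["name"], round(score(o, preferences), 2)) for o in ranked[:top_n]]

apartments = [
    {"name": "Riverside 2-bed", "price_fit": 0.6, "location": 0.9, "condition": 0.7},
    {"name": "Old-town studio", "price_fit": 0.9, "location": 0.8, "condition": 0.4},
    {"name": "Suburb 3-bed", "price_fit": 0.7, "location": 0.4, "condition": 0.9},
]
preferences = {"price_fit": 0.5, "location": 0.3, "condition": 0.2}   # learned weights
print(shortlist(apartments, preferences))
# [('Old-town studio', 0.77), ('Riverside 2-bed', 0.71)]
```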

Cognitive computing aims to mimic human cognitive processes, performed by intelligent computers with enhanced resources on behalf of humans. Applying the capabilities of such systems would assist consumers, or the professionals and agents who help them, with decisions and other tasks: saving them time and effort (and sometimes frustration), and providing them with well-organised information and customised recommendations for action that users feel they could have reached themselves. Time and experience will tell how comfortably people interact and engage with these human-like intelligent assistants, and how productive they indeed find them, so that using a cognitive assistant becomes the most natural thing to do.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. “Thinking by Computers”, Herbert A. Simon, 1966/2008, reprinted in Economics, Bounded Rationality and the Cognitive Revolution, Massimo Egidi and Robin Marris (eds.) (pp. 55-75), Edward Elgar.

2. The examples given above are described in IBM’s white paper by Kelly and in: “Cognitive Computing: Real-World Applications for an Emerging Technology”, Judit Lamont (Ph.D.), 1 Sept. 2015, KMWorld.com
