
Posts Tagged ‘Computers’

Human thinking processes are rich and variable, whether in search, problem solving, learning, perceiving and recognizing stimuli, or decision-making. But people are subject to limitations on the complexity of their computations, and especially on the capacity of their ‘working’ (short-term) memory. As consumers, they frequently have to struggle with large amounts of information on numerous brands, products or services with varying characteristics, available from a variety of retailers and e-tailers, stretching their cognitive abilities and patience. They need wait no longer: the evolving field of Cognitive Computing is putting forward a new class of increasingly intelligent decision aids. Computer-based ‘smart agents’ will get smarter; most importantly, they will be more human-like in their thinking.

Cognitive computing is set to upgrade human decision-making, consumers’ in particular. According to IBM, a leader in this field, cognitive computing builds on methods of Artificial Intelligence (AI) yet intends to take the field a leap forward by making it “feel” less artificial and closer to human cognition. That is, human-computer interaction will feel more natural and fluent if the thinking processes of the computer more closely resemble those of its human users (e.g., manager, service representative, consumer). Dr. John E. Kelly, SVP at IBM Research, provides the following definition in his white paper introducing the topic (“Computing, Cognition, and the Future of Knowing”): “Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans. Rather than being explicitly programmed, they learn and reason from interactions with us and from their experiences with their environment.” The paper seeks to rebut claims that cognitive computing intends to replace human thinking and decisions. The motivation, as suggested by Kelly, is to augment the human ability to understand and act upon the complex systems of our society.

Understanding natural language was long a human cognitive competence that computers could not imitate. Comprehension of natural language, in text or speech, is now considered one of the important abilities of cognitive computing systems. Another important ability concerns the recognition of visual images and the objects embedded in them (face recognition, for example, receives particular attention). Furthermore, cognitive computing systems are able to process and analyse unstructured data, which constitutes 80% of the world’s data according to IBM. They can extract contextual meaning so as to make sense of unstructured data, verbal and visual alike. This marks a clear difference between the new cognitive systems and traditional information systems.

  • The Cognitive Computing Forum, which organises conferences in this area, lists a dozen characteristics integral to those systems. In addition to (a) natural language processing and (b) vision-based sensing and image recognition, they are likely to include machine learning, neural networks, algorithms that learn and adapt, semantic understanding, reasoning and decision automation, sophisticated pattern recognition, and more (note that some of the methodologies on this list overlap). They also need to exhibit common sense.

The power of cognitive computing derives from combining cognitive processes attributed to the human brain (e.g., learning, reasoning) with the enhanced computation (complexity, speed) and memory capabilities of advanced computer technologies. In terms of intelligence, it is acknowledged that the cognitive processes of the human brain are superior to anything computers have achieved through conventional programming. Yet the actual performance of human cognition (‘rationality’) is bounded by memory and computation limitations. Hence, we can employ cognitive computing systems that are capable of handling much larger amounts of information than humans can, while using cognitive (‘neural’) processes similar to ours. Kelly posits in IBM’s paper: “The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense and ethical values.” It is not yet sufficiently understood how cognitive processes physically occur in the human central nervous system. But, it is argued, knowledge and understanding of their operation at the neural-functional level has grown sufficiently for computers to emulate at least some of them. (This argument refers to the concept of different levels of analysis that may, and should, prevail simultaneously.)

The distinguished scholar Herbert A. Simon studied thinking processes from the perspective of information processing theory, which he championed. In the research he and his colleagues conducted, he traced and described in a formalised manner the strategies and rules that people utilise to perform different cognitive tasks, especially solving problems (e.g., his comprehensive work with Allen Newell on Human Problem Solving, 1972). In his theory, any strategy or rule specified — from elaborate optimizing algorithms to short-cut rules (heuristics) — is composed of elementary information processes (e.g., add, subtract, compare, substitute); conversely, strategies may be joined into higher-level compound information processes. Strategy specifications were subsequently translated into computer programmes for simulation and testing.
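
To make this concrete, here is a minimal sketch, in Python, of how an elementary information process (a simple comparison test) might be composed into a higher-level choice strategy, in the spirit of Simon’s satisficing heuristic. The alternatives, attributes and aspiration levels are invented for the illustration; this is not a reconstruction of Simon’s actual programmes.

```python
# Toy illustration: an elementary information process ('compare') composed
# into a compound 'satisficing' strategy. All data here are hypothetical.

def compare(value, aspiration):
    """Elementary process: does an attribute value meet its aspiration level?"""
    return value >= aspiration

def satisfice(alternatives, aspirations):
    """Compound strategy: accept the first alternative whose every
    attribute passes the elementary comparison; otherwise return None."""
    for name, attributes in alternatives:
        if all(compare(attributes[k], a) for k, a in aspirations.items()):
            return name
    return None

cars = [("Car A", {"economy": 6, "safety": 9}),
        ("Car B", {"economy": 8, "safety": 8})]
print(satisfice(cars, {"economy": 7, "safety": 8}))  # -> Car B
```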

Simon’s main objective was to gain a better understanding of human thinking and the cognitive processes involved therein. He proclaimed that computer thinking is programmed in order to simulate human thinking, as part of an investigation aimed at understanding the latter (1). Thus, Simon did not explicitly aim to overcome the limitations of the human brain, but rather to simulate how the brain may work around those limitations to perform various tasks. His approach, followed by other researchers, was based on recording how people perform given tasks and testing the efficacy of the process models through computer simulations. This course of research differs from the goals of the new cognitive computing.

  • We may identify multiple levels in research on cognition: an information processing level (‘mental’), a neural-functional level, and a neurophysiological level (i.e., how elements of thought emerge and take form in the brain). Moreover, researchers aim to obtain a comprehensive picture of brain structures and areas responsible for sensory, cognitive, emotional and motor phenomena, and how they inter-relate. Progress is made by incorporating methods and approaches of the neurosciences side-by-side with those of cognitive psychology and experimental psychology to establish coherent and valid links between those levels.

Simon created explicit programmes of the steps required to solve particular types of problems, though he also aimed to develop more generalised programmes able to handle broader categories of problems (e.g., the General Problem Solver, embodying the Means-End heuristic) and other cognitive tasks (e.g., pattern detection, rule induction) that may also be applied in problem solving. Cognitive computing, however, seeks to reach beyond explicit programming and to construct guidelines for far more generalised processes that can learn and adapt to data, and handle broader families of tasks and contexts. If necessary, computers would generate their own instructions or rules for performing a task. In problem solving, computers are taught not merely how to solve a problem but how to look for a solution.
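
The core of the Means-End heuristic is to repeatedly apply the operator that most reduces the difference between the current state and the goal state. It can be sketched in a few lines; the numeric states, operators and difference measure below are invented for the illustration and are far simpler than anything the General Problem Solver handled.

```python
# Toy means-end search: at each step apply the operator that most reduces
# the difference between the current state and the goal.

def difference(state, goal):
    return abs(goal - state)

def means_end(start, goal, operators, max_steps=20):
    state, path = start, []
    for _ in range(max_steps):
        if state == goal:
            return path
        # pick the operator whose result lies closest to the goal
        op = min(operators, key=lambda f: difference(f(state), goal))
        state = op(state)
        path.append(op.__name__)
    return None  # greedy difference-reduction can fail; give up politely

def add_one(s): return s + 1
def double(s): return s * 2

print(means_end(3, 14, [add_one, double]))
# -> ['double', 'double', 'add_one', 'add_one']
```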

While cognitive computing can employ greater memory and computation resources than are naturally available to humans, the aim is not truly to create a fully rational system. The computer cognitive system should retain some properties of bounded rationality, if only to maintain resemblance to the original human cognitive system. First, forming and selecting heuristics is an integral property of human intelligence. Second, cognitive computing systems try to exhibit common sense, which may not be entirely rational (being based on good instincts and experience), and to introduce effects of emotions and ethical or moral values that may alter or interfere with rational cognitive processes. Third, cognitive computing systems are allowed to err:

  • As Kelly explains in IBM’s paper, cognitive systems are probabilistic: they have the power to adapt to and interpret the complexity and unpredictability of unstructured data, yet they do not “know” the answer and may therefore make mistakes in assigning the correct meaning to data and queries (e.g., IBM’s Watson misjudged a clue in the quiz show Jeopardy! against two human contestants — nonetheless “he” won the competition). To reflect this characteristic, “the cognitive system assigns a confidence level to each potential insight or answer”.
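
A minimal sketch of that principle: candidate answers carry evidence scores, and the scores are normalised into confidence levels over the set of candidates. The candidates and scores below are invented, and the softmax is just one plausible way to turn scores into confidences, not necessarily Watson’s method.

```python
import math

# Toy probabilistic answerer: evidence scores for candidate answers are
# normalised (here with a softmax) into confidence levels. The candidates
# and scores are invented; this is not Watson's actual scoring pipeline.

def with_confidence(scored_candidates):
    scores = [s for _, s in scored_candidates]
    m = max(scores)                           # stabilise the exponents
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    ranked = [(cand, e / total)
              for (cand, _), e in zip(scored_candidates, exps)]
    return sorted(ranked, key=lambda pair: -pair[1])

answers = [("Toronto", 2.1), ("Chicago", 1.7), ("Boston", 0.4)]
for candidate, confidence in with_confidence(answers):
    print(f"{candidate}: confidence {confidence:.2f}")
```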

Applications of cognitive computing are gradually growing in number (e.g., experimental projects built on Watson with the cooperation and support of IBM). They may not be targeted directly at consumers at this stage, but consumers are seen as the end-beneficiaries; the first users are likely to be professionals and service agents who help consumers in different areas. For example, applied systems in development and trial aim to:

  1. help medical doctors in identifying (cancer) diagnoses and advising their patients on treatment options (it is projected that such a system will “take part” in doctor-patient consultations);
  2. perform sophisticated analyses of financial markets and their instruments in real-time to guide financial advisers with investment recommendations to their clients;
  3. assist account managers or service representatives to locate and extract relevant information from a company’s knowledge base to advise a customer in a short time (CRM/customer support).

The health-advisory platform WellCafé by Welltok provides an example of an application aimed directly at consumers: the platform guides consumers on healthy behaviours recommended for them, and its new assistant, Concierge, lets them converse in natural language to get help on resources and programmes personally relevant to them, as well as on various health-related topics (e.g., dining options). (2)

Consider domains such as cars, tourism (vacation resorts), or real estate (second-hand apartments and houses). Consumers may encounter tremendous amounts of information in these domains, covering numerous options and many attributes (for cars there may also be technical detail that is harder to digest). A cognitive system would help the consumer study the market environment (e.g., organising information from sources such as company websites and professional and peer reviews [social media], detecting patterns in structured and unstructured data, screening and sorting options) while learning the consumer’s preferences and habits, in order to prioritize and construct personally fitting recommendations; a sketch of such scoring follows below. Additionally, visual information (e.g., photographs) could be most relevant and valuable to consumers in any of these domains: the visual appeal of car models, mountain or seaside holiday resorts, and apartments cannot be discarded from the decision process. Cognitive computing assistants may raise very high consumer expectations.
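
As a minimal sketch of the prioritisation step only: alternatives scored by a weighted sum of normalised attributes, with the weights standing in for preferences the system has learned. All names, attributes and numbers are invented; a real system would combine this with the text and image understanding described above.

```python
# Toy consumer decision aid: score alternatives by a weighted sum of
# normalised attributes, with weights standing in for learned preferences.
# Attributes, weights, and listings are all invented for illustration.

def score(option, weights):
    return sum(weights[attr] * option[attr] for attr in weights)

def recommend(options, weights, top_n=2):
    return sorted(options, key=lambda o: -score(o, weights))[:top_n]

# attribute values normalised to 0..1 (price already inverted, so that
# 'cheaper' scores higher)
apartments = [
    {"name": "Apt 1", "price": 0.8, "location": 0.6, "condition": 0.5},
    {"name": "Apt 2", "price": 0.5, "location": 0.9, "condition": 0.7},
    {"name": "Apt 3", "price": 0.9, "location": 0.4, "condition": 0.9},
]
prefs = {"price": 0.3, "location": 0.5, "condition": 0.2}  # learned over time
for apt in recommend(apartments, prefs):
    print(apt["name"], round(score(apt, prefs), 2))
```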

Cognitive computing aims to mimic human cognitive processes, performed by intelligent computers with enhanced resources on behalf of humans. The capabilities of such systems would assist consumers, or the professionals and agents who help them, with decisions and other tasks: saving them time and effort (and sometimes frustration), and providing well-organised information with customised recommendations for action that users would feel they have reached themselves. Time and experience will tell how comfortably people interact and engage with human-like intelligent assistants, how productive they actually find them, and whether using a cognitive assistant comes to feel like the most natural thing to do.

Ron Ventura, Ph.D. (Marketing)

Notes:

1. “Thinking by Computers”, Herbert A. Simon, 1966/2008; reprinted in Economics, Bounded Rationality and the Cognitive Revolution, Massimo Egidi and Robin Marris (eds.) [pp. 55-75], Edward Elgar.

2. The examples given above are described in IBM’s white paper by Kelly and in: “Cognitive Computing: Real-World Applications for an Emerging Technology”, Judith Lamont (Ph.D.), 1 Sept. 2015, KMWorld.com


Think of this: you walk into your car, close the door, and fasten your seat belt. Then you key in your destination on a panel or simply say “Go to XXX”, press a button, and your self-driving car gets on its way. During the travel you may read, eat a light meal, or do some work on your laptop computer/tablet device. This scenario is not so imaginary: the development and testing of autonomous or driverless cars is already in progress, and the first models may be marketed and hit the roads just a few years from now. Some may want to see it as a dream come true, when every person can have his or her own private chauffeur installed in the car. For some, the new robotic car may recall KITT, the clever talking sports car in the popular futuristic TV series Knight Rider from the 1980s (featuring David Hasselhoff). However one relates to the concept of a self-driving car, it is likely to change dramatically the whole experience of travelling in a car, especially in the driver’s seat.

Google’s autonomous car appears to be the most publicised and ambitious venture of this kind, but Google is just one of the players. Projects in this evolving technological field have called for collaboration between technology companies or academic research labs, responsible essentially for creating the sensors, computer vision and information technologies required for navigating and operating the automated cars, and automakers (e.g., Toyota, Audi, Renault-Nissan) that provide the vehicles. While the relevant devices and technologies may already exist, they have to be particularly accurate to be self-reliant, and they must communicate properly with the ordinary car’s systems to control them safely in real time; the effort to achieve those targets is still in progress.

Google’s elaborate system equips an autonomous car with a radar, laser range finders (aka lidars), and associated software. The extended capabilities of this system allow the car to independently and smoothly join traffic on freeways/highways, cross intersections, make right and left turns, and pass slower vehicles. The system is estimated to cost $70,000 per car (1).

MobilEye Vision Technologies, a high-tech company based in Israel, offers an alternative approach based on cameras only to collect all the visual information necessary from the scene of the road. For MobilEye, the driverless car challenge seems a clear extension of its existing capabilities in developing and producing Advanced Driver Assistance Systems: camera-driven applications that alert human drivers to collision risks (e.g., pedestrians starting to cross the street, insufficient distance from the car in front, or passing the legal speed limit) (2). The competence of MobilEye’s system for a driverless car is at this time more limited than Google’s, but that may be attributed partly to the fact that the system currently uses a single camera at the front windshield. A car equipped with it is hence capable of self-driving only in a single lane on freeways; yet it can detect traffic lights, slow down to a complete stop, and then resume the journey at freeway speed, as sketched below. The capabilities and performance of the system are expected to improve: company officials say they plan to enhance it with a wide-angle camera and additional side-mounted and rear-facing cameras. They aim to match the capabilities of Google’s autonomous system with a technological solution that is much more cost-effective to put on the road (1).
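
The single-lane behaviour just described (cruise, slow to a stop at a red light, resume on green) can be pictured as a simple state machine. The sketch below is a schematic illustration of that behaviour only, not MobilEye’s actual control logic.

```python
# Schematic state machine for the single-lane behaviour described above:
# cruise, slow down for a red light, hold at standstill, resume on green.

CRUISE, SLOWING, STOPPED = "CRUISE", "SLOWING", "STOPPED"

def next_state(state, light, speed_kmh):
    if state == CRUISE and light == "red":
        return SLOWING
    if state == SLOWING:                      # keep braking until standstill
        return STOPPED if speed_kmh == 0 else SLOWING
    if state == STOPPED and light == "green":
        return CRUISE                         # resume to freeway speed
    return state

state = CRUISE
for light, speed in [("red", 90), ("red", 40), ("red", 0), ("green", 0)]:
    state = next_state(state, light, speed)
    print(light, speed, "->", state)
```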

The most urgent and vital issue to address with respect to driverless cars has to be road safety. It is the motivation most frequently suggested for the transition from human to robotic driving: a computer-based system would behave more reliably on the road than a human driver and would therefore lead to a considerable reduction in road accidents.

Car accidents are often caused by a driver’s misjudgement of a situation on the road in a matter of seconds, leading to the wrong action. But accidents also occur because drivers make dangerous moves, believing overconfidently that they can pull them off (e.g., running a red light, passing a slower car without sufficient distance from other cars or a clear view of oncoming traffic, speeding). A robotic system may indeed be able to prevent many accidents in both kinds of circumstances: its estimates (e.g., of distance; see the sketch below) would be more accurate, and the computer algorithms it utilizes would make more reliable decisions, certainly not subject to human tendencies of risk-seeking and whims. Human judgement is fallible, and intuitive quick decisions can be misguided. But intuition is on many occasions very effective in identifying obstacles, irregularities and hazards, and therefore helps avoid personal harm or accidents. It allows drivers to make sufficiently accurate decisions in a short time, which is especially important when time is pressing. Gut feelings also play an important guiding role. Yet when more time is available, drivers can plan their path and re-examine their intuitive judgement.
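
To illustrate the kind of estimate at stake, here is a toy time-to-collision check, a standard notion in collision warning: the gap to the car ahead divided by the closing speed. The numbers and the warning threshold are invented for the example.

```python
# Toy time-to-collision (TTC) check: distance to the car ahead divided by
# closing speed. Collision-warning systems rely on estimates of this kind;
# the threshold and readings below are invented for illustration.

def time_to_collision(gap_m, own_speed_ms, lead_speed_ms):
    closing = own_speed_ms - lead_speed_ms
    if closing <= 0:
        return float("inf")          # not closing in on the lead car
    return gap_m / closing

ttc = time_to_collision(gap_m=30.0, own_speed_ms=25.0, lead_speed_ms=15.0)
if ttc < 4.0:                        # hypothetical warning threshold, seconds
    print(f"Brake: collision in {ttc:.1f}s at current speeds")
```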

Sadly, drivers get into dangerous situations because they let themselves be distracted, willingly or unintentionally, from whatever happens on the road (e.g., operating and talking on the mobile phone, kids quarrelling in the back seat). Thus, MobilEye’s video demonstrations of how its warning system helps avoid an accident (e.g., a pedestrian ahead) focus on incidents in which the driver is distracted, perhaps operating his music player or searching in his handbag, something he should not have been doing in the first place. However, the logic that, since this kind of behaviour and other human fallibilities cannot be completely prevented and efforts to educate and train people to drive better are ineffective, we should pass control indefinitely to robotic systems, is normatively flawed and even dangerous: it allows people to feel less responsible. Nevertheless, an autonomous system may be welcome for resolving specific incidents, when distraction by other activities cannot be delayed or when fatigue sets in.

An interesting question then arises: how well will robotic driving systems be able to anticipate human behaviour on the roads? Assuming that the human driver keeps his or her eyes on the road, who will more successfully detect a pedestrian about to step into the road from between parked cars, the driver or the robotic system with its sensors? Will the latter respond in time without human intervention? While there are some fascinating projections about how the new cars will affect urban life (e.g., parking, traffic lights, building construction (3)), there is as yet a lack of convincing evidence that driverless cars are ready for crowded, busy urban areas. Furthermore, replacing the fleet of cars on the roads can be expected to take years (auto experts suggest that the first models will be commercially available as early as 2020 and that most cars will be autonomous by 2040 to 2050). This is not likely to be a smooth transition period, and transport and urban policy-makers must prepare carefully for it. In particular, they should address how effectively driverless cars can anticipate and respond to errors or misconduct by human drivers, and the risk of accidents caused by human drivers who misunderstand how self-driving cars manoeuvre or even try to outsmart the robotic cars.

It is therefore essential that autonomous cars operate in mixed modes of human and robotic control during the transition period, and probably beyond it. David Friedman, Deputy Administrator of the National Highway Traffic Safety Administration in the US, identified in an interview with The Wall Street Journal five levels of automation, from “0” (all-human) to “4” (full automation). The intermediate Level 3 indicates “limited automation”: the car uses assisted positioning technologies but requires the human driver to retake control from time to time (4). Although human judgement is imperfect, human drivers should be given flexibility in relying on automated driving and be allowed to intervene occasionally. John Markoff of The New York Times/IHT reports that the Toyota-Google car (Level 3 [WSJ]) made him feel more detached from the operation of the “robot”, while the Audi-MobilEye car made him better realise what it takes for a “robot” to drive a car (1). Nevertheless, there is no definite answer to the question of what is correct to do in critical moments: should the human driver trust the system to do its job, or interfere and take control? Markoff felt less confident when the car in front slowed ahead of a stoplight (on the road down to the Dead Sea) and it “took all of my willpower”, in his words, to trust the car and not intervene. That system probably still has to be improved, but such episodes are likely to keep occurring. On the one hand, computer algorithms are likely to handle most road and traffic conditions better, and the driver should sit back and trust the robot. On the other hand, the driver should be advised not to engage too deeply in activities like reading or playing a video game, and to remain conscious of the road, prepared to take control in complex and less normal situations.
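
A schematic way to picture the Level 3 dilemma is a loop in which the system drives while its own confidence in the situation is high and hands control back when confidence drops. The threshold and readings below are invented; real handover logic is far more involved.

```python
# Schematic 'limited automation' (Level 3) loop: the system keeps control
# while its confidence in the situation is high and requests a human
# takeover when it drops. Threshold and readings are invented.

TAKEOVER_THRESHOLD = 0.6

def control_mode(confidence, human_ready):
    if confidence >= TAKEOVER_THRESHOLD:
        return "AUTONOMOUS"
    # below threshold: hand over if the human is attentive, else stop safely
    return "HUMAN" if human_ready else "EMERGENCY_STOP"

for conf, ready in [(0.9, True), (0.7, True), (0.4, True), (0.4, False)]:
    print(f"confidence={conf:.1f} -> {control_mode(conf, ready)}")
```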

Introducing driverless cars may, furthermore, have significant implications for the computerised car’s connectivity with, and use of, external information resources, and consequently for our privacy versus convenience. Thinking in particular of an information giant like Google, it is difficult to imagine that the company will not use the flow of information it may receive from cars for marketing purposes. True, much information can already be gathered, utilised and shared by existing navigation applications. Yet employing an autonomous driving system will involve even larger volumes and more types of information, and collecting that information will be justified by the operational requirements of the system, which will be difficult to argue with (e.g., information from Google’s sensors on a car can be matched at any time with cloud-based data sets). That is, the autonomous system, a navigation application in the car, and external information resources will hold continuous “conversations” as the car drives.

Therefore, during a future autonomous car drive, you may not be left so free to read or do your work. It will become more likely that as you pass near a restaurant you receive an alert that you have not visited it lately, and that as you approach a DIY store you are notified of its great discount deals, and so on. The system will know much better which business establishments of interest the car is going to pass and when it is expected to reach them; the sensors may also detect brand signage of interest on other vehicles or on the roadsides, consult external information resources, and send a message to the driver. That is not to mention the history of the car’s pathways that can be gathered, accumulated and saved in external databases. The opportunities for business enterprises and marketers are enormous and are just starting to be revealed. It would be convenient for digital-oriented consumers to receive some of those messages, but it would also come at the growing cost of losing the privacy of their whereabouts.
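
The mechanics behind such location-triggered messages are simple to sketch: compare the car’s position against points of interest and fire an alert within a given radius. The coordinates, radius and establishments below are invented for the illustration.

```python
import math

# Toy proximity alert: compare the car's position with points of interest
# and fire a message inside a given radius. Coordinates, radius, and the
# POI list are invented for illustration.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

pois = [("Steakhouse", 40.7460, -73.9855), ("DIY store", 40.7410, -73.9897)]
car = (40.7455, -73.9860)

for name, lat, lon in pois:
    if haversine_m(car[0], car[1], lat, lon) < 250:   # 250 m alert radius
        print(f"Alert: you are passing near {name}")
```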

The cars of the future are expected to be ever more electronically wired, connected (wirelessly) to the Internet, and fitted with sensors, processors, and computer applications/applets. Some experts suggest that the car’s dashboard controls will actually become virtual, displayed on the driver’s smartphone instead of being embedded in the car; overall, analogue instruments are expected to be replaced by digital ones (5). This could change considerably the task of car repair, requiring more involvement of electronics and computer experts vis-à-vis mechanics and electricians, and it would probably complicate the care and maintenance of the car by its owner. The car will be more susceptible to sudden shutdown due to software failure or malfunction; the owner will have to keep the car’s various software up to date, wirelessly from the Internet or by a USB key (5); and it may even become necessary to install anti-virus protection software on the car.

Finally, technological visionaries and proponents of self-driving robotic cars should keep in mind that driving gives pleasure to many car owners, beyond the benefit of getting them from place to place. A law-abiding driver who simply enjoys the experience should not be deprived of it. However, not every journey is enjoyable, and driving enjoyment should be balanced against releasing the human driver from the effects of fatigue and stress. A self-driving system may therefore prove greatly positive and desirable after an extended period of driving on a long trip, on a monotonous straight freeway, in city centres, and in traffic jams. Yet don’t be surprised if your car drives you off the road to a nearby steakhouse at its discretion.

Ron Ventura, Ph.D. (Marketing)

Sources:

(1) “Low-Cost System Offers Clues to Fast-Approaching Future of Driverless Car”, John Markoff, The International Herald Tribune (Global Edition of The New York Times), 29 May 2013. (See the original article in the NYT, with a short demo video by MobilEye, at: http://www.nytimes.com/2013/05/28/science/on-the-road-in-mobileyes-self-driving-car.html?pagewanted=all&_r=0 )

(2) Website of MobilEye Vision Technologies (www.mobileye.com — see Products pages).

(3) “Driverless Cars Could Reshape the City of the Future”, Nick Bilton, The Boston Globe (Online), 8 July 2013. http://www.bostonglobe.com/business/2013/07/07/driverless-cars-could-reshape-cities/SuUfDpWx9qs9Db3mxr7hRN/story.html

(4) “Self-Driving Car Sparks New Guidelines”, Joseph B. White, The Wall Street Journal (Online, WSJ.com), 30 May 2013. http://online.wsj.com/article/SB10001424127887323728204578515081578077890.html

(5) “Automobiles Ape the iPhone”, Seth Fletcher, Fortune (European Edition), 20 May 2013, Volume 167, No. 7, p. 25.
