Think of this: you get into your car, close the door, and fasten your seat belt. Then you key in your destination on a panel, or simply say "Go to XXX", press a button, and your self-driving car gets on its way. During the journey you may read, eat a light meal, or do some work on your laptop or tablet. This scenario is not so imaginary: the development and testing of autonomous or driverless cars is already in progress, and the time when the first models are marketed and hit the roads may be just a few years away. Some may see it as a dream come true in which every person has his or her own private chauffeur, installed in the car. For others the new robotic car may bring to mind KITT, the clever, talking sports car of the popular futuristic 1980s TV series Knight Rider (featuring David Hasselhoff). However one relates to the concept of a self-driving car, it is likely to dramatically change the whole experience of travelling in a car, especially in the driver's seat.
Google's autonomous car appears to be the most publicised and ambitious venture of this kind, but Google is just one of the players. Projects in this evolving technological field have called for collaboration between technology companies or academic research labs, which create the sensors, computer vision, and information technologies required for navigating and operating the automated cars, and automakers (e.g., Toyota, Audi, Renault-Nissan), which provide the vehicles. While the relevant devices and technologies may already exist, they have to be particularly accurate to be self-reliant, and they must communicate properly with the ordinary car's systems in order to control them safely in real time; the effort to achieve those targets is still in progress.
Google's elaborate system equips an autonomous car with radar, laser range finders (lidars), and associated software. The system's extended capabilities allow the car to independently and smoothly join traffic on freeways/highways, cross intersections, make right and left turns, and pass slower vehicles. Its cost is estimated at $70,000 per car (1).
MobilEye Vision Technologies, a high-tech company based in Israel, offers an alternative approach based on cameras alone to collect all the visual information needed from the road scene. For MobilEye, the driverless car challenge seems a natural extension of its existing capabilities in developing and producing Advanced Driver Assistance Systems: camera-driven applications that alert human drivers to collision risks (e.g., pedestrians starting to cross the street, insufficient distance from the car in front, or exceeding the legal speed limit) (2). MobilEye's driverless system is at present more limited than Google's, but that may be attributed partly to the fact that it currently uses a single camera at the front windshield. Hence, a car equipped with the system can self-drive only in a single lane on freeways; yet it can detect traffic lights, slow down to a complete stop, and then resume the journey at freeway speed. The system's driving capabilities and performance are expected to improve, though: company officials say they plan to enhance it with a wide-angle camera and additional side-mounted and rear-facing cameras. They aim to match the capabilities of Google's autonomous system with a technological solution that is much more cost-effective to put on the road (1).
The most urgent and vital issue to address with respect to driverless cars has to be road safety. It is the motivation most frequently cited for making the transition from human to robotic driving: a computer-based system would behave more reliably on the road than a human driver and would therefore lead to a considerable reduction in road accidents.
Car accidents are often caused by a driver misjudging a situation on the road in a matter of seconds and consequently taking the wrong action. But accidents also occur because drivers make dangerous moves, believing overconfidently that they can pull them off (e.g., running a red light, passing a slower car without sufficient distance from other cars or a clear view of oncoming traffic, speeding). A robotic system may indeed be able to prevent many accidents in either circumstance: its estimates (e.g., of distance) would be more accurate, and its computer algorithms would make more reliable decisions, certainly not subject to human tendencies towards risk-seeking and whims. Human judgement is fallible, and quick intuitive decisions can be misguided. Yet intuition is on many occasions very effective in identifying obstacles, irregularities, and hazards, and therefore helps avoid personal harm or accidents. It allows drivers to make sufficiently accurate decisions in a short time, which is especially important when time limitations are in force. Gut feelings also play an important guiding role. When more time is available, drivers can plan their path and re-examine their intuitive judgement.
Sadly, drivers get into dangerous situations because they distract themselves, willingly or unintentionally, from what is happening on the road (e.g., operating and talking on a mobile phone, kids quarrelling in the back seat). Thus, MobilEye's video demonstrations of how their warning system helps to avoid an accident (e.g., a pedestrian ahead) focus on incidents when the driver is distracted, perhaps operating his music player or searching in his bag, something he should not have been doing in the first place. However, the logic that, since this kind of behaviour and other human failings cannot be completely prevented, and efforts to educate and train people to drive better are ineffective, we should pass control indefinitely to robotic systems, is normatively flawed and even dangerous: it allows people to feel less responsible. Nevertheless, an autonomous system may be welcome for resolving specific incidents when attention to other activities cannot be delayed or when fatigue sets in.
An interesting question to pose, then, is: how well will robotic driving systems be able to anticipate human behaviour on the roads? Assuming the human driver keeps his or her eyes on the road, who will more successfully detect a pedestrian about to step into the road from between parked cars, the driver or the robotic system with its sensors? Will the latter respond in time without human intervention? While there are some fascinating projections about how the new cars will impact urban life (e.g., parking, traffic lights, building construction (3)), there is as yet a lack of convincing evidence that driverless cars are ready for crowded, busy urban areas. Furthermore, replacing the fleet of cars on the roads can be expected to take years (auto experts suggest that the first models will be commercially available as early as 2020 and that most cars will be autonomous between 2040 and 2050). The transition period is not likely to be smooth; transport and urban policy makers must prepare for it carefully. In particular, they should address how effectively driverless cars can anticipate and respond to the errors or misconduct of human drivers, and the risk of accidents caused by human drivers who misunderstand how self-driving cars manoeuvre or even try to outsmart the robotic cars.
It is therefore all the more essential that autonomous cars operate in mixed modes of human and robotic control during the transition period, and beyond it. David Friedman, Deputy Administrator of the US National Highway Traffic Safety Administration, identified in an interview with The Wall Street Journal five levels of automation, from 0 (all-human) to 4 (full automation). The intermediate Level 3 indicates "limited automation": the car uses assisted positioning technologies but requires the human driver to retake control from time to time (4). Although human judgement is imperfect, human drivers should be given flexibility in relying on automated driving and be allowed to intervene occasionally. John Markoff of The New York Times/IHT reports that the Toyota-Google car (Level 3 [WSJ]) made him feel more detached from the operation of the "robot", while the Audi-MobilEye car made him better appreciate what it takes for a "robot" to drive a car (1). Nevertheless, there is no definite answer to the question of what to do in critical moments: should the human driver trust the system to do its job, or intervene and take control? Markoff felt less confident when the car in front slowed ahead of a stoplight (on the road down to the Dead Sea), and it "took all of my willpower", in his words, to trust the car and not intervene. That system probably still has to be improved, but such episodes are likely to keep recurring. On the one hand, computer algorithms are likely to deal with road and traffic conditions better most of the time, and the driver should sit back and trust the robot. On the other hand, the driver should be advised not to engage too deeply in activities like reading or playing a video game, and to remain conscious of the road, prepared to take control in complex and less normal situations.
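To make the levels scheme concrete, here is a minimal sketch in Python. The level names follow the 0-to-4 scale described above, but the handoff rule and its inputs are hypothetical illustrations of the Level 3 dilemma, not an actual vehicle interface.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Five levels of vehicle automation, 0 (all-human) to 4 (full automation)."""
    NO_AUTOMATION = 0         # the human performs all driving tasks
    FUNCTION_SPECIFIC = 1     # a single automated function, e.g. cruise control
    COMBINED_FUNCTION = 2     # at least two functions automated together
    LIMITED_SELF_DRIVING = 3  # the car drives itself, but the human must retake control at times
    FULL_SELF_DRIVING = 4     # the car performs all driving for the entire trip

def human_must_take_over(level: AutomationLevel, situation_is_complex: bool) -> bool:
    """Hypothetical handoff rule: below Level 3 the human always drives;
    at Level 3 control returns to the human in complex situations;
    at Level 4 the system handles everything."""
    if level < AutomationLevel.LIMITED_SELF_DRIVING:
        return True
    if level == AutomationLevel.LIMITED_SELF_DRIVING:
        return situation_is_complex
    return False
```

The sketch shows why Level 3 is the uneasy middle ground Markoff describes: the answer to "who is driving right now?" depends on a judgement about how complex the situation is, and human and machine may judge that differently.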
Introducing driverless cars may, furthermore, have significant implications for the computerised car's connectivity to, and use of, external information resources, and consequently for our privacy versus convenience. Thinking in particular of an information giant like Google, it is difficult to imagine that the company will not use the flow of information it may receive from cars for marketing purposes. True, much information can already be gathered, utilised, and shared by existing navigation applications. Yet employing an autonomous driving system will involve even greater volumes and expanded types of information; collecting it will be justified by the operational requirements of the system, which will be difficult to argue with (e.g., information from Google's sensors on a car can be matched at any time against cloud-based data sets). That is, the autonomous system, a navigation application in the car, and external information resources will hold continuous "conversations" as the car drives.
Therefore, during a future autonomous car ride, you may not be left so free to read or do your work. It will become more likely that as you pass near a restaurant you receive an alert that you have not visited it lately, and as you approach a DIY store you are notified of its great discount deals, and so on. The system will know much better which business establishments of interest the car is going to pass and when it is expected to reach them; the sensors may also detect brand signage of interest on other vehicles or on the roadside, consult external information resources, and send a message to the driver. That is not to mention the travel history of a car that can be gathered, accumulated, and saved in external databases. The opportunities for businesses and marketers are enormous, and they are only starting to be revealed. It would be convenient for digitally oriented consumers to receive some of those messages, but it would also come at the growing cost of losing the privacy of their whereabouts.
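The mechanics behind such location-based messages are simple to sketch. The following Python example matches the car's GPS position against a list of points of interest using the standard haversine distance; the establishments, coordinates, alert radius, and message wording are all hypothetical illustrations, not any vendor's actual system.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS points (haversine formula)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical points of interest a marketing platform might track.
POINTS_OF_INTEREST = [
    {"name": "Corner Restaurant", "lat": 32.0810, "lon": 34.7805},
    {"name": "DIY Depot", "lat": 32.0900, "lon": 34.7900},
]

def proximity_alerts(car_lat, car_lon, radius_m=200.0):
    """Return an alert message for every point of interest within radius_m of the car."""
    return [
        f"You are near {poi['name']} - check today's offers"
        for poi in POINTS_OF_INTEREST
        if distance_m(car_lat, car_lon, poi["lat"], poi["lon"]) <= radius_m
    ]
```

A real deployment would of course also draw on the driver's visit history and cloud-based profiles, which is precisely where the privacy cost discussed above comes in: every query in this loop leaves a record of where the car was and when.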
The cars of the future are expected to be increasingly electronically wired, wirelessly connected to the Internet, and supplemented with sensors, processors, and computer applications/applets. Some experts suggest that the car's dashboard controls will actually become virtual, displayed on the driver's smartphone instead of embedded in the car; overall, analogue instruments are expected to be replaced by digital ones (5). This could considerably change the nature of car repairs, requiring greater involvement of electronics and computer experts relative to mechanics and electricians, and it would probably make care and maintenance of the car by its owner more complex. The car will be more susceptible to sudden shutdown due to software failure or malfunction; the owner will have to keep the various software installed on the car updated, wirelessly from the Internet or by a USB key (5); and it may thereby also become necessary to install anti-virus protection software on the car.
Finally, technological visionaries and proponents of self-driving robotic cars should keep in mind that, beyond getting them from place to place, driving gives pleasure to many car owners. A law-abiding driver who simply enjoys the experience should not be deprived of it. However, not every journey is enjoyable, and driving enjoyment should be balanced against releasing the human driver from the effects of fatigue and stress. A self-driving system may therefore prove greatly positive and desirable after an extended period of driving on a long trip, on a monotonous straight freeway, in city centres, and in traffic jams. Yet don't be surprised if your car drives you off the road to a nearby steakhouse at its own discretion.
Ron Ventura, Ph.D. (Marketing)
(1) “Low-Cost System Offers Clues to Fast-Approaching Future of Driverless Car”, John Markoff, The International Herald Tribune (Global Edition of The New York Times), 29 May 2013 (see the original article in NYT with a short demo video of MobilEye at: http://www.nytimes.com/2013/05/28/science/on-the-road-in-mobileyes-self-driving-car.html?pagewanted=all&_r=0 )
(2) Website of MobilEye Vision Technologies (www.mobileye.com — see Products pages).
(3) “Driverless Cars Could Reshape the City of the Future”, Nick Bilton, The Boston Globe (Online), 8 July 2013. http://www.bostonglobe.com/business/2013/07/07/driverless-cars-could-reshape-cities/SuUfDpWx9qs9Db3mxr7hRN/story.html
(4) “Self-Driving Car Sparks New Guidelines”, Joseph B. White, The Wall Street Journal (Online WSJ.com), 30 May 2013. http://online.wsj.com/article/SB10001424127887323728204578515081578077890.html
(5) “Automobiles Ape the iPhone”, Seth Fletcher, Fortune (European Edition), 20 May 2013, Volume 167, No. 7, p. 25.