There is a widespread expectation that autonomous or driverless cars of the sort being developed by Google will soon become commercially available and active on public roads. A recent Slate article makes some strong arguments for why that expectation may be premature:
But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company’s vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you’d ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company’s headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.
Another issue is what happens to driverless cars when they get into a situation where they cannot function (say, a construction site with temporary stop lights, or a turn onto a road that isn’t mapped). I can’t see passengers being very happy when their car simply won’t go any further and they have to abandon it and find some other form of transport.
Google’s Lame Demo Shows Us How Far Its Robo-Car Has Come
…
The ride was more carefully choreographed than a Taylor Swift concert. I pressed the big black “Go” button, and the car rolled away with a whir. It made a few turns, and maxed out at around 15 mph. A Google employee stepped in front of me, and the car slowed and let him continue on his way unhindered. A car pulled up alongside me, and the Google Car slowed to ensure we didn’t collide. Then a cyclist made a similar move, and the car responded in a similar fashion. I saw the car make the exact same trip 10 times in all.
…
Making those predictions is likely the most crucial work the team is doing, and it’s based on the huge amount of time the cars have spent dealing with the real world. Anything one car sees is shared with every other car, and nothing is forgotten. From that data, the team builds probabilistic models for the cars to follow.
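Google hasn’t published what those probabilistic models actually look like, so here is only a minimal sketch of the idea — fleet-pooled observation counts turned into conditional probabilities that a planner can query. The context labels and the 10% caution threshold are invented for illustration:

```python
from collections import Counter, defaultdict

# A minimal sketch, NOT Google's actual system: a fleet-pooled
# frequency model of what other road users do in a given context.
# Every car contributes observations ("nothing is forgotten");
# the planner queries the pooled estimates to decide how
# cautiously to behave.

class BehaviourModel:
    def __init__(self):
        # context -> Counter of observed next actions
        self.counts = defaultdict(Counter)

    def observe(self, context, action):
        """Log one observed (context, action) pair from any car."""
        self.counts[context][action] += 1

    def prob(self, context, action):
        """P(action | context), with add-one smoothing for unseen cases."""
        c = self.counts[context]
        total = sum(c.values())
        return (c[action] + 1) / (total + 2)

model = BehaviourModel()
# Observations shared across the whole fleet:
for _ in range(40):
    model.observe("pedestrian_at_kerb_looking_at_road", "steps_into_road")
for _ in range(160):
    model.observe("pedestrian_at_kerb_looking_at_road", "stays_on_kerb")

p_cross = model.prob("pedestrian_at_kerb_looking_at_road", "steps_into_road")
if p_cross > 0.1:  # risk-averse threshold, invented for illustration
    print(f"P(crosses)={p_cross:.2f}: slow down and prepare to yield")
```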
The trouble is, a fully driverless car needs to operate safely in all environments. “You don’t really need a map to do simple lane-keeping,” says John Ristevski, HERE’s grandiosely named vice-president of reality capture. “But if you’re on a five-lane freeway, you need to know which of those five lanes you’re in, which are safe to traverse, and at what exact point that exit ramp is coming up.”
The trouble is, road markings can wear away or disappear under snow. And modern laser-surveying sensor systems (called LIDARs, after light detection and ranging) may not be accurate in those conditions. LIDARs calculate distances by illuminating a target with laser light and measuring the time it takes for the light to bounce back to the source. Radar does much the same thing with radio waves. In cars, LIDARs and radars have an effective range of around 50 metres, but that can shrink significantly in rain or when objects are obscured by vehicles ahead. Even the smartest car travelling at motorway speeds can “see” only around a second and a half ahead. What HD maps give self-driving cars is the ability to anticipate turns and junctions far beyond sensors’ horizons.
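The time-of-flight arithmetic in that paragraph is easy to check for yourself — a quick sketch (my numbers, apart from the article’s 50-metre range):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_seconds):
    """Time-of-flight ranging: the pulse travels out and back,
    so the target distance is half the round trip."""
    return C * round_trip_seconds / 2

# A 50 m target returns the pulse in about a third of a microsecond:
print(f"round trip for 50 m: {2 * 50 / C * 1e9:.0f} ns")

# How far ahead *in time* a 50 m sensor horizon lets you see
# at motorway speed (~70 mph, about 31 m/s):
speed = 31.0  # m/s
print(f"look-ahead: {50 / speed:.1f} s")  # ~1.6 s, the article's figure
```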
Even more important for an autonomous vehicle is the ability to locate itself precisely; an error of a couple of metres could place a car on the wrong side of the road. Commercial GPS systems are accurate only to around 5 metres, but can be wrong by 50 metres in urban canyons and fail completely in tunnels. HD maps, however, can include a so-called localisation layer that works with a variety of sensors to position a car within centimetres.
http://www.economist.com/news/science-and-technology/21696925-building-highly-detailed-maps-robotic-vehicles-autonomous-cars-reality
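To make that “localisation layer” idea concrete, here is a toy sketch of the geometric core: the HD map supplies surveyed landmark positions, the car measures ranges to them, and a least-squares solve pins the position down to centimetres. Everything here (the landmark layout, the 5 cm noise figure, the Gauss-Newton solver) is my own illustration, not how any particular vendor does it:

```python
import numpy as np

# The HD map stores surveyed positions of landmarks (signs, poles,
# kerb corners); the car measures its distance to each with lidar
# and solves for the position consistent with those ranges.
# Real systems fuse this with GPS, odometry, etc.; this is just
# the geometric core, assuming perfect data association.

landmarks = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 20.0]])  # from the map
true_pos = np.array([12.0, 5.0])
ranges = np.linalg.norm(landmarks - true_pos, axis=1)          # lidar ranges
ranges += np.random.normal(0, 0.05, size=3)                    # 5 cm noise

# Gauss-Newton iteration on the range residuals:
est = np.array([10.0, 10.0])            # rough seed, e.g. from GPS
for _ in range(10):
    diffs = est - landmarks
    dists = np.linalg.norm(diffs, axis=1)
    J = diffs / dists[:, None]          # Jacobian of range w.r.t. position
    est -= np.linalg.lstsq(J, dists - ranges, rcond=None)[0]

print(f"error: {np.linalg.norm(est - true_pos) * 100:.1f} cm")  # centimetres
```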
Driver Behaviours In A World of Autonomous Mobility
These are the behaviours and practices that will mainstream in our self-driving urban landscape.
“The prevalence of captcha street furniture, itself autonomous and reconfigurable, introduced by residents looking to filter out autonomous vehicles from passing through their neighbourhoods. Introduced by one of the early pioneers of Baidu’s Self Driving Car project, with an acute sense of algorithm. (The opposite will also be true, with human-drivers filtered out of many contexts, it will be interesting to see how our cities are carved up.)”
But pedestrians may be able to force self-driving cars to brake with confidence, given the regulatory contours that the cars’ firmware will have to conform to. In a paper published last fall in the Journal of Planning Education and Research, Adam Millard-Ball lays out three ways this could go: cities will be effectively no-go zones for self-driving cars as pedestrians blithely step into the road; pedestrians will be scared off by the cars’ cameras and the possibility of being identified by facial recognition if they break the rules; or drivers will take control of their cars rather than chilling with their smartphones, believing that pedestrians will be scared off by the possibility of a human driver failing to brake in time.
How Pedestrians Will Defeat Autonomous Vehicles
The ‘game of chicken’ which could be a serious problem for driverless cars
Pedestrians, Autonomous Vehicles, and Cities
Adam Millard-Ball
Autonomous vehicles, popularly known as self-driving cars, have the potential to transform travel behavior. However, existing analyses have ignored strategic interactions with other road users. In this article, I use game theory to analyze the interactions between pedestrians and autonomous vehicles, with a focus on yielding at crosswalks. Because autonomous vehicles will be risk-averse, the model suggests that pedestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift toward pedestrian-oriented urban neighborhoods. At the same time, autonomous vehicle adoption may be hampered by their strategic disadvantage that slows them down in urban traffic.
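The crosswalk game is simple enough to sketch numerically. The payoff numbers below are invented (only their ordering matters): the pedestrian steps out when the expected payoff beats waiting, and the only thing that changes between a human driver and a risk-averse AV is the probability that the vehicle fails to yield:

```python
# A toy version of Millard-Ball's crosswalk "game of chicken".
# The pedestrian steps out if the expected payoff beats waiting
# (payoff 0), given the chance p that the oncoming vehicle fails
# to yield.

WIN, COLLISION = 1.0, -100.0  # pedestrian's (made-up) payoffs

def pedestrian_crosses(p_no_yield):
    expected = (1 - p_no_yield) * WIN + p_no_yield * COLLISION
    return expected > 0.0

# A human driver might be distracted or call the pedestrian's bluff:
print(pedestrian_crosses(p_no_yield=0.03))    # False: too risky, wait

# A regulation-bound, risk-averse AV reliably detects and yields:
print(pedestrian_crosses(p_no_yield=0.0001))  # True: cross with impunity
```

With these numbers the break-even point is roughly a 1% chance of not yielding: a human driver’s occasional inattention is enough to keep pedestrians on the kerb, while an AV that is guaranteed to stop invites them into the road.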
Charlie Miller made headlines in 2015 as part of the team that showed it was possible to remote-drive a Jeep Cherokee over the internet, triggering a 1.4 million vehicle recall; now, he’s just quit a job at Uber where he was working on security for future self-driving taxis, and he’s not optimistic about the future of this important task.
To start with, self-driving cabs will be — by definition — fully computerized. Other car hacks have relied on hijacking the minority of vehicle functions that were controlled by computers, but on a self-driving car, everything is up for grabs. Also: by design, there may be no manual controls (and even if there are, they’ll be locked against random intervention by taxi passengers!).
It gets worse: passengers have unsupervised physical access to the car. In information security, we generally assume that if attackers can get unsupervised physical access to a device, all bets are off (this is sometimes called the evil maid attack, as one of the common threat-models is a hotel chambermaid who accesses a laptop while the owner is out of their room). Someone who wants to attack a self-driving taxi only needs to hail it — and worse still, ports like the OBD2 can’t be blocked, under penalty of federal law.
Securing Driverless Cars From Hackers Is Hard. Ask the Ex-Uber Guy Who Protects Them
Study Finds Automatic Braking With Rearview Cameras, Sensors Can Cut Backup Crashes By 78 Percent
Running a Tesla Model 3 on Autopilot off the Road with GPS Spoofing
Jim Hackett, the boss of Ford, acknowledges that the industry “overestimated the arrival of autonomous vehicles”. Chris Urmson, a linchpin in Alphabet’s self-driving efforts (he left in 2016), used to hope his young son would never need a driving licence. Mr Urmson now talks of self-driving cars appearing gradually over the next 30 to 50 years. Firms are increasingly switching to a more incremental approach, building on technologies such as lane-keeping or automatic parking. A string of fatalities involving self-driving cars has scotched the idea that a zero-crash world is anywhere close. Markets are starting to catch on. In September Morgan Stanley, a bank, cut its valuation of Waymo by 40%, to $105bn, citing delays in its technology.
…
Another lesson is that machine-learning systems are brittle. Learning solely from existing data means they struggle with situations that they have never seen before. Humans can use general knowledge and on-the-fly reasoning to react to things that are new to them—a light aircraft landing on a busy road, for instance, as happened in Washington state in August (thanks to humans’ cognitive flexibility, no one was hurt). Autonomous-car researchers call these unusual situations “edge cases”. Driving is full of them, though most are less dramatic. Mishandled edge cases seem to have been a factor in at least some of the deaths caused by autonomous cars to date. The problem is so hard that some firms, particularly in China, think it may be easier to re-engineer entire cities to support limited self-driving than to build fully autonomous cars (see article).
Modern AI technology has been far more successful. Billions of people use it every day, mostly without noticing, inside their smartphones and internet services. Yet despite this success, the fact remains that many of the grandest claims made about AI have once again failed to become reality, and confidence is wavering as researchers start to wonder whether the technology has hit a wall. Self-driving cars have become more capable, but remain perpetually on the cusp of being safe enough to deploy on everyday streets. Efforts to incorporate AI into medical diagnosis are, similarly, taking longer than expected: despite Dr Hinton’s prediction, there remains a global shortage of human radiologists.
…
The resulting systems can do some tasks, such as recognising images or speech, far more reliably than those programmed the traditional way with hand-crafted rules, but they are not “intelligent” in the way that most people understand the term. They are powerful pattern-recognition tools, but lack many cognitive abilities that biological brains take for granted. They struggle with reasoning, generalising from the rules they discover, and with the general-purpose savoir faire that researchers, for want of a more precise description, dub “common sense”. The result is an artificial idiot savant that can excel at well-bounded tasks, but can get things very wrong if faced with unexpected input.
Without another breakthrough, these drawbacks put fundamental limits on what AI can and cannot do. Self-driving cars, which must navigate an ever-changing world, are already delayed, and may never arrive at all. Systems that deal with language, like chatbots and personal assistants, are built on statistical approaches that generate a shallow appearance of understanding, without the reality. That will limit how useful they can become. Existential worries about clever computers making radiologists or lorry drivers obsolete—let alone, as some doom-mongers suggest, posing a threat to humanity’s survival—seem overblown.
Self-driving cars work in the same way as other applications of machine learning. Computers crunch huge piles of data to extract general rules about how driving works. The more data, at least in theory, the better the systems perform. Tesla’s cars continuously beam data back to headquarters, where it is used to refine the software. On top of the millions of real-world miles logged by its cars, Waymo claims to have generated well over a billion miles’ worth of data using ersatz driving in virtual environments.
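That “more data, better performance” claim is the standard supervised-learning picture, and easy to illustrate on a toy problem (nothing to do with driving — just the shape of the curve): held-out error falls as the training set grows, but only down to the noise floor, which is where the edge-case trouble below begins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: fit the same model on growing training sets
# drawn from a noisy task and watch held-out error fall.

def make_data(n):
    x = rng.uniform(-1, 1, size=(n, 5))
    true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
    y = x @ true_w + rng.normal(0, 0.5, size=n)    # noisy labels
    return x, y

x_val, y_val = make_data(1000)                     # held-out set

for n in (10, 100, 1000, 10000):
    x, y = make_data(n)
    w = np.linalg.lstsq(x, y, rcond=None)[0]       # least-squares fit
    err = np.mean((x_val @ w - y_val) ** 2)
    print(f"n={n:>6}: validation MSE {err:.3f}")   # falls toward the
                                                   # noise floor of 0.25
```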
The problem, says Rodney Brooks, an Australian roboticist who has long been sceptical of grand self-driving promises, is that deep-learning approaches are fundamentally statistical, linking inputs to outputs in ways specified by their training data. That leaves them unable to cope with what engineers call “edge cases”—unusual circumstances that are not common in those training data. Driving is full of such oddities. Some are dramatic: an escaped horse in the road, say, or a light aircraft making an emergency landing on a highway (as happened in Canada in April). Most are trivial, such as a man running out in a chicken suit. Human drivers usually deal with them without thinking. But machines struggle.
One study, for instance, found that computer-vision systems were thrown when snow partly obscured lane markings. Another found that a handful of stickers could cause a car to misidentify a “stop” sign as one showing a speed limit of 45mph. Even unobscured objects can baffle computers when seen in unusual orientations: in one paper a motorbike was classified as a parachute or a bobsled. Fixing such issues has proved extremely difficult, says Mr Seltz-Axmacher. “A lot of people thought that filling in the last 10% would be harder than the first 90%”, he says. “But not that it would be ten thousand times harder.”
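The sticker attack has a tidy small-scale analogue. For a linear classifier, the input gradient tells you exactly which tiny, bounded perturbation flips the label; the weights and feature values below are made up, but the mechanism is the same one adversarial patches exploit against real sign classifiers:

```python
import numpy as np

# Toy analogue of the sticker attack on a linear classifier.
# All weights and features are invented for illustration.

w = np.array([2.0, -1.0, 0.5])     # weights of a "trained" classifier
b = -0.2

def predict(x):
    return "stop" if x @ w + b > 0 else "speed_limit_45"

x = np.array([0.4, 0.3, 0.2])      # a clean "stop" sign feature vector
print(predict(x))                   # -> stop

# FGSM-style perturbation: nudge each feature a small step against
# the sign of the score's gradient, i.e. against sign(w).
eps = 0.25
x_adv = x - eps * np.sign(w)
print(np.abs(x_adv - x).max())      # each feature moved by only eps
print(predict(x_adv))               # -> speed_limit_45
```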
Mary “Missy” Cummings, the director of Duke University’s Humans and Autonomy Laboratory, says that humans are better able to cope with such oddities because they can use “top-down” reasoning about the way the world works to guide them in situations where “bottom-up” signals from their senses are ambiguous or incomplete. AI systems mostly lack that capacity and are, in a sense, working with only half a brain. Though they are competent in their comfort zone, even trivial changes can be problematic. In the absence of the capacity to reason and generalise, computers are imprisoned by the same data that make them work in the first place. “These systems are fundamentally brittle,” says Dr Cummings.
‘Peak hype’: why the driverless car revolution has stalled
https://www.theguardian.com/technology/2021/jan/03/peak-hype-driverless-car-revolution-uber-robotaxis-autonomous-vehicle
AI researchers build machines, give them certain specific objectives and judge them to be more or less intelligent by their success in achieving those objectives. This is probably OK in the laboratory. But, says Russell, “when we start moving out of the lab and into the real world, we find that we are unable to specify these objectives completely and correctly. In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.”
https://www.theguardian.com/commentisfree/2021/dec/25/worried-about-super-intelligent-machines-they-are-already-here
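Russell’s point bites as soon as you try to write such an objective down. The sketch below is entirely invented — the candidate actions, features, and weights are placeholders — but it shows the problem: two equally defensible weightings of speed, safety, legality, and comfort command different manoeuvres in the same situation:

```python
# Invented example of the objective-specification problem: a planner
# scores candidate actions with a weighted cost, and the behaviour
# depends entirely on how you chose the weights.

candidates = {
    # action: (minutes_lost, collision_risk, rule_violation, discomfort)
    "brake_hard_for_sheep": (0.5, 0.001, 0.0, 0.8),
    "swerve_across_line":   (0.0, 0.010, 1.0, 0.5),
    "maintain_speed":       (0.0, 0.200, 0.0, 0.0),
}

def cost(features, w_time, w_safety, w_legal, w_comfort):
    t, risk, illegal, discomfort = features
    return (w_time * t + w_safety * risk
            + w_legal * illegal + w_comfort * discomfort)

for weights in [dict(w_time=1,  w_safety=1000, w_legal=5, w_comfort=1),
                dict(w_time=20, w_safety=50,   w_legal=5, w_comfort=1)]:
    best = min(candidates, key=lambda a: cost(candidates[a], **weights))
    print(weights, "->", best)   # first picks braking, second swerving
```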
Cruise Confirms Robotaxis Rely On Human Assistance Every Four To Five Miles
https://m.slashdot.org/story/421179