The fantasy and reality of self-driving vehicles

If, like us, you’ve become pretty frustrated by all the hype surrounding AVs – how they’re within a whisker of taking over the ‘ultimate driving experience’ from us mere mortals, and how, in the blink of an eye, road tolls will plummet and panel beaters will be a thing of the past – then this post is for you.

Trying to get a balanced view from all the guff the auto industry stacks on a compliant media that desperately needs the advertising is next to impossible. But if you’re really interested in finding out where we are now, and where we’re heading, here are a couple of articles that should really help.

The first, “Self-driving cars will need people, too”, is by Michael Nees, assistant professor of psychology at Lafayette College and a member of the Human Factors and Ergonomics Society. It first appeared in The Conversation, an online publication that bills itself as having academic rigour and journalistic flair – which indeed it does. The second, “Stem cells and self-driving cars: Why we look stupid predicting the technology of the future”, is from Melbourne-based economist Jason Murphy’s Thomas the Think Engine blog.

 

Self-driving cars will need people, too

Self-driving cars are expected to revolutionize the automobile industry. Rapid advances have led to working prototypes faster than most people expected. The anticipated benefits of this emerging technology include safer, faster and more eco-friendly transportation.

Until now, the public dialogue about self-driving cars has centered mostly on technology. The public’s been led to believe that engineers will soon remove humans from driving. But researchers in the field of human factors — experts on how people interact with machines — have shown that we shouldn’t ignore the human element of automated driving.

High expectations for removing human drivers

Automation is the technical term for when a machine – here a complex array of sensors and computers – takes over a task that was formerly accomplished by a human being.

Many people assume that automation can replace the person altogether. For example, Google, a leader in the self-driving car quest, has removed steering wheels from prototype cars.  Mercedes-Benz promotional materials show self-driving vehicles with rear-facing front seats. The hype on self-driving cars implies that the driver will be unneeded and free to ignore the road.

The public also has begun to embrace this notion. Studies show that people want to engage in activities such as reading, watching movies, or napping in self-driving cars, and also that automation encourages these distractions. A study in France even indicated that riding while intoxicated was a perceived benefit.

Automation still requires people

Unfortunately, these expectations will be difficult to fulfill. Handing control of a process to a computer rarely eliminates the need for human involvement. The reliability of automated systems is imperfect.

Tech innovators know from experience that automation will fail at least some of the time. Anticipating inevitable automation glitches, Google recently patented a system in which the computers in “stuck” self-driving cars will contact a remote assistance center for human help.

Yet the perception that self-driving cars will perform flawlessly has a strong foothold in the public consciousness already. One commentator recently predicted the end of automotive deaths. Another calculated the economic windfall of “free time” during the commute.

Self-driving technologies will undoubtedly be engineered with high reliability in mind, but will it be high enough to cut the human out of the loop entirely?

A recent example was widely reported in the media as an indicator of the readiness of self-driving technology. A Delphi-engineered self-driving vehicle completed a cross-country trip. The technology drove 99% of the way without any problems.

This sounds impressive — the human engineers watching at the wheel during the journey took emergency control of the vehicle in only a handful of instances, such as when a police car was present on the shoulder or a construction zone was painted with unusual line markings.

These scenarios are infrequent, but they’re not especially unusual for a long road trip. In large-scale deployment, however, a low individual automation failure rate multiplied by hundreds of millions of vehicles on US highways will result in a nontrivial number of problems.
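The scale argument above can be made concrete with a rough back-of-envelope sketch in Python. Every number in it (fleet size, trips per day, per-trip glitch rate) is an assumption chosen purely for illustration, not a figure from the article:

```python
# Back-of-envelope sketch: all numbers below are illustrative assumptions.
vehicles = 250_000_000   # assumed rough size of the US vehicle fleet
trips_per_day = 2        # assumed average trips per vehicle per day
glitch_rate = 1e-6       # assumed chance of an automation glitch per trip

# Even a one-in-a-million per-trip failure rate adds up at fleet scale.
daily_glitches = vehicles * trips_per_day * glitch_rate
print(f"Expected glitches per day: {daily_glitches:,.0f}")  # prints 500
```

The point is not the particular numbers but the shape of the arithmetic: any per-trip failure rate, however small, gets multiplied by an enormous number of trips.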

Further, today’s most advanced prototypes are supported by teams of engineers dedicated to keeping a single vehicle safely on the road. Individual high-tech pit crews won’t be possible for every self-driving car on the road of the future.

People need to be able to take control

How will flaws in automation technology be addressed? Despite Google’s remote assistance center patent, the best option remains intervention by the human driver. But engineering human interactions with self-driving cars will be a significant challenge.

We can draw insights from aviation, as many elements of piloting planes already have been taken over by computers. Automation works well for routine, repetitive tasks, especially when the consequences of automation mistakes are minor – think automatic sewing machines or dishwashers.

The stakes are higher when automation failures can cause harm. People may rely too much on imperfect automation or become out-of-practice and unable to perform tasks the old-fashioned way when needed.

Several recent plane accidents have been attributed to failures in the way pilots interact with automation – for example, pilots responding inappropriately to correctable situations when the automation fails.

A term – automation surprises – has even been coined to describe when pilots lose track of what the automation is doing. This is a quintessential human factors problem, characterized not by flaws with either automation or pilots per se, but instead by failures in the design of the human-technology interaction.

When machines take over, the work required of the human is typically not removed — sometimes it is not even reduced — as compared to before the automation was implemented. Rather, the job becomes different.

Instead of manual work, the human is relegated to the role of a monitor – one who constantly watches to detect and correct technology failures. The problem is that people are not especially well-suited for this tedious job. It’s not surprising that drivers retaking manual control from automation need up to 40 seconds to return to normal, baseline driving behaviors.

Tech + driver = cooperative effort

All of this is not to say that self-driving cars will fail to deliver benefits; they will undoubtedly transform the driving experience. But to develop this promising technology, human factors must be considered.

For example, multimodal displays that use a combination of visual, auditory, and tactile (touch) information may be useful for keeping the driver informed about what the automation is doing. Adaptive automation – where the computer strategically gives some control of the car back to the driver at regular intervals – may be able to keep the human engaged and ready to respond when needed.

The technology-centric expectations currently being fostered overlook the substantial body of science on the human element of automation. If other examples of automation, including aviation, can provide any insight, focusing on technology to the exclusion of the human it serves may be counterproductive.

Instead, engineers, researchers, and the general public should see vehicle automation as a cooperative effort between humans and technology — one where the human plays a vital, active role in systems that optimize the interaction between the driver and the technology. A key element will likely require designing new, innovative ways to keep the driver in the loop and informed about the status of automated systems. In other words, “self-driving” cars will need people, too.

 

Stem cells and self-driving cars: Why we look stupid predicting the technology of the future

We’ve all seen technology proceed like greased lightning. In my lifetime, we’ve gone from typewriters to internet-enabled laptops. We’ve seen smartphones go berserk and enormous progress in survival rates for cancer.

These fields have transformed. It is tempting to predict more exponential change in the field you’re most excited by. For example, last night I watched a couple of documentaries on stem cell research that were mind-blowingly exciting. But caution is needed.

The fields in which we see progress are affected by survivorship bias. We don’t see the frustrated scientists trying and failing to revolutionise other fields. Look around you and much is as it was 100 years ago. I’m sitting on a wooden chair at a wooden table, wearing woollen socks and leather shoes.

The alphabet is the same as it was, and so is my keyboard layout. There’s a clock on the wall telling me the time with two rotating hands. I just got over a common cold. I’m eating brown rice and snow peas. It could be 1850 – if not for the MacBook.

So not everything is on the brink of revolution, which is why I have to pull back on my former enthusiasm for autonomous cars. I admit I was focused on the potential upsides – in traffic, in accidents, in parking – and on the successes Google has had with its autonomous car program. Google is backing the project, appointing the old head of Ford. But even Google fails sometimes, as with Google Wave.

“The benefits are so great that we will force ourselves to accept them, even with a few risks,” I told myself. But then I started thinking about the development path, and I became significantly cooler on the chances of success. Autonomous cars will only break through once they are trusted.

TRUST. Humans set a very high bar for risk in situations where they perceive they are not in control. (This is why people object to tiny risks of living downwind of a polluter and won’t let their kids walk to school, but still eat chips and drive fast.)

Autonomous cars won’t just have to prove they are safer than humans at driving, but much safer – for car occupants, other road users, pedestrians, wildlife and pets.

THE LONG TAIL. Computer-operated cars are probably already better than humans at driving in traffic on freeways and on busy roads. Humans are dreadful at mundane, repetitive tasks that require paying attention.

Computers could do this part. But car crashes can happen in odd moments. This is where humans excel. We dominate computers at dealing with problems we never saw before. Humans will remain best at dealing with things like:

  • A big black garbage bag blows onto the road but we know we needn’t swerve as we can tell it is light by the way it moves.
  • Kangaroos are on the side of the road, so we’d better slow down because they often jump in front of the car.
  • It’s Saturday afternoon, there’s just been a football match, some sort of fight is happening on the side of the road, and you know someone could easily step out into traffic as part of the brawl.
  • etc, etc.

Many serious crashes occur in scenarios that are in the long tail of distributions. Machine learning will not cover them all, so there will remain a few scenarios (I predict on the basis of statistics alone) in which autonomous cars continue to perform predictably worse than humans despite the best efforts of programmers.
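The long-tail argument can also be made concrete with a tiny simulation. This sketch assumes, purely for illustration, that driving scenario “types” follow a Zipf-like, heavy-tailed frequency distribution; even a large testing program then leaves many rare types never observed at all:

```python
import random

random.seed(0)

# Illustrative assumption: scenario frequencies follow a Zipf-like heavy tail.
n_types = 10_000                                        # distinct scenario "types"
weights = [1 / rank for rank in range(1, n_types + 1)]  # rank 1 is most common

# Simulate the scenarios encountered during a large testing program.
encounters = random.choices(range(n_types), weights=weights, k=100_000)
unseen = n_types - len(set(encounters))
print(f"Scenario types never encountered: {unseen} of {n_types}")
```

However many encounters you simulate, a heavy-tailed distribution leaves a slice of rare scenarios unobserved – which is the statistical basis for predicting that a few failure modes will persist despite the programmers’ best efforts.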

RISKY LAUNCH. Other types of software can launch with “beta phases” where failure is embarrassing, but not catastrophic. By contrast, the testing that will have to happen before any serious real-world traffic experiments involving autonomous cars will be enormous.

Google’s experiments driving round California are good, but still limited in scope and scale. A few high-profile crashes will be enough to set a very high technical and legal bar for autonomous cars. The concept of surrendering one’s life to a machine is a staple of science fiction because it touches a real issue in human psychology – control.

MANY OPPORTUNITIES FOR SETBACK. It is not just technical problems that can hold up autonomous cars indefinitely. Political, road engineering, PR and software challenges will impede getting autonomous cars to the point where people trust them and forgive their mistakes.

For just one example, the FBI is opposed to driverless cars, according to a brand new report. Solving that will be tricky. And when it is solved another impediment will arise.

There are a lot of failure points. I suspect – again on the basis of pure statistics – that one will resolve into a big sticking point for a long time to come.

What role for motorbikes in this all-autonomous future?

SUCCESS WON’T LOOK LIKE SUCCESS. Cars will continue to have more and more sensors and autonomous capabilities. But during this time, non-autonomous cars will continue to be sold.

Traffic will be mixed for at least the next 50 years. Some freeways and highways will perhaps be autonomous-only. But not places where there are pedestrians, bicycles, shops, parking, and of course traffic lights. So the benefits of full autonomy will not be realised for a very long time. Don’t hold your breath.

The upside of the failure of the fields about which we are most excited is that we might get blindsided by a revolution in a field where we didn’t expect any improvement. Nanotechnology, GM foods, high-speed trains, smell-o-vision: any of these could be the one in which a breakthrough happens that turns out to be incredibly positive.