A lot of research is going into how autonomous cars react to their environment. This includes being able to navigate roads, read street signs, and interpret traffic lights, even in bad lighting and inclement weather.
But that's the easy part; unlike people, roadway infrastructure doesn't really change much and is rather predictable. The difficulty comes from self-driving cars having to figure out the nuances that human drivers take for granted, such as communicating with pedestrians on a busy city street.
Through subtle cues such as eye contact and hand signals, we can usually tell whether a person at a crosswalk has acknowledged that we're heading their way in a two-ton moving hunk of metal. An experienced driver can decode a pedestrian's intent through body language alone.
When you add in factors such as age (is it a teen or a senior citizen crossing the street?), time of day, and weather conditions (does that person under an umbrella in a downpour even see me?), the decision process for self-driving cars gets far more complicated. In these everyday scenarios, even a short trip across town, in which drivers make dozens of instantaneous decisions, could lead to what UC Berkeley assistant professor Anca Dragan calls the "freezing robot problem" for autonomous cars.
Dragan, who heads Berkeley's InterACT Lab that focuses on "algorithms that move beyond the robot's function in isolation and accounts for interaction and coordination with end-users," recently told Wired that even in common situations "anything the car could do is too risky because there is some worst-case human action that would lead to a collision." She believes that while developers of autonomous technology are focusing on how self-driving cars will react to their surroundings, a built-in blind spot is how people will react to robot cars.
"An autonomous car's actions will influence human actions, whether we like it or not," Dragan adds. "Cars need to start accounting for this influence."
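To see why purely defensive planning "freezes," consider a toy sketch (not Dragan's actual algorithm, just an illustration of worst-case reasoning): if a planner scores each candidate maneuver by its worst possible outcome over every pedestrian reaction it can imagine, and every maneuver has some catastrophic worst case, then no maneuver ever looks acceptable. The action names and costs below are hypothetical.

```python
# Hypothetical cost table: rows are car actions, columns are possible
# human reactions. COLLISION marks an unacceptably bad outcome.
COLLISION = float("inf")

costs = {
    "proceed": {"pedestrian_waits": 1, "pedestrian_steps_out": COLLISION},
    "slow":    {"pedestrian_waits": 2, "pedestrian_darts": COLLISION},
    "stop":    {"pedestrian_waits": 5, "rear_car_hits_us": COLLISION},
}

def worst_case(action):
    """Worst-case cost of an action over all modeled human reactions."""
    return max(costs[action].values())

# Under pure worst-case reasoning, no action clears the bar: every
# option has an infinite worst case, so the planner "freezes."
acceptable = [a for a in costs if worst_case(a) < COLLISION]
print(acceptable)  # []
```

This is why, as Dragan argues, the car's model has to include how its own behavior *influences* pedestrians (an inching-forward car makes darting out less likely), rather than treating every human reaction as equally probable.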
Some are, including a mysterious driverless van that was spotted in the Arlington, Virginia, area in August.
World's First 'Seat Suit'
After the van was investigated by a local television reporter, Ford revealed it was part of a study in conjunction with the Virginia Tech Transportation Institute (VTTI) to devise a way of "communicating a vehicle's intent by soliciting real-world reactions to a self-driving car on public roads."
"Understanding how self-driving vehicles impact the world as we know it today is critical to ensuring we're creating the right experience for tomorrow," John Shutko, a human factors technical specialist for Ford, said in a statement. And that's why VTTI developed the world's first "seat suit."
The study was designed primarily to investigate how pedestrians and autonomous vehicles communicate, not to provoke reactions to a ghost car per se. But to get a genuine response from people when they see an apparently empty vehicle cruising public roads, VTTI dressed up the driver to blend into the seat of an autonomous Transit Connect van.
Aside from the seat suit, the only tell that the van is autonomous is the lighting used to communicate with pedestrians. Ford and VTTI first considered using a text display, as employed on other autonomous test vehicles, but realized that this "would require people all understand the same language," Ford said. Instead, they settled on simple lighting displays, "since such signals for turning and braking indication are already standardized and widely understood."
They also rejected using symbols because they "historically have low recognition among consumers." But judging from the way a group of teenagers react to the faux driverless van in the video above—and if researchers really want to get people's attention and communicate in a universal language—maybe they should try emoji.