Applied to services like parcel delivery and waste collection, autonomous technology also promises to address worker shortages and reduce congestion.
Technological progress and regulatory approval are key enablers of autonomous vehicles and their mass adoption. But one factor will make or break their success: public acceptance.
Public acceptance is not only about whether people feel comfortable riding inside a driverless car; it also extends to how other road users interact with the vehicle. Early trials of driverless cars have been met with public backlash, including attacks on vehicles and threats against operators.
Researchers have linked this backlash and lack of trust to the absence of non-verbal communication cues between vehicles and other road users. For manually driven vehicles, these non-verbal cues, such as eye contact and gestures, are essential to ensuring safety, understanding and trust.
This is particularly pertinent in unstructured traffic and in situations involving vulnerable road users, such as when a pedestrian crosses the road in front of a vehicle or when pedestrians and vehicles share the same space.
To overcome the lack of non-verbal cues, studies have investigated ways to communicate the intention of autonomous vehicles, their awareness – what they ‘see’ – and even their ‘emotions’.
For example, a vehicle could express its frustration when being challenged by a pedestrian deliberately stepping into its path. This vehicle-to-pedestrian communication is achieved through designing so-called external human-machine interfaces that may take the form of simple text or visual displays attached to the front of the vehicle, projections onto the road, and auditory alerts.
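To illustrate how such an interface might be driven in software, the sketch below maps a perceived pedestrian situation to an external message. It is a minimal, hypothetical example: the message set, thresholds and PedestrianState fields are assumptions made for this illustration, not the design of any real manufacturer's system.

```python
# Hypothetical sketch of an external human-machine interface (eHMI) controller.
# All names and thresholds are illustrative assumptions, not a real system.

from dataclasses import dataclass
from enum import Enum, auto


class EhmiMessage(Enum):
    YIELDING = auto()     # e.g. text display: "Waiting for you to cross"
    AWARE = auto()        # e.g. light band tracking the pedestrian: "I see you"
    PROCEEDING = auto()   # e.g. road projection of the vehicle's planned path
    ALERT = auto()        # e.g. auditory signal after repeated deliberate blocking


@dataclass
class PedestrianState:
    distance_m: float       # distance from the vehicle, in metres
    in_vehicle_path: bool   # is the pedestrian inside the planned trajectory?
    blocking_count: int     # how often they have deliberately blocked the vehicle


def choose_message(p: PedestrianState, vehicle_is_stopping: bool) -> EhmiMessage:
    """Map a perceived pedestrian situation to an eHMI message (illustrative rules)."""
    if p.in_vehicle_path and p.blocking_count >= 3:
        # The 'frustration' case described above: repeated deliberate blocking.
        return EhmiMessage.ALERT
    if p.in_vehicle_path or (p.distance_m < 5 and vehicle_is_stopping):
        return EhmiMessage.YIELDING
    if p.distance_m < 15:
        return EhmiMessage.AWARE
    return EhmiMessage.PROCEEDING


if __name__ == "__main__":
    pedestrian = PedestrianState(distance_m=4.0, in_vehicle_path=True, blocking_count=0)
    print(choose_message(pedestrian, vehicle_is_stopping=True))  # EhmiMessage.YIELDING
```

In practice, the chosen message would then be rendered through whichever channel the vehicle provides: a text or light display, a road projection or an auditory alert.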
In a more distant future, this information could be communicated through augmented reality, as suggested by a study proposing to overlay autonomous vehicles with relevant visual cues.
External human-machine interfaces are a necessary evolution of the traditional blinker, which became widely adopted by car manufacturers in the late 1940s. At their core, fully autonomous vehicles are robots, requiring more advanced communication channels than simple light signals.
But unlike the depiction of Johnny Cab’s driverless taxi in the movie Total Recall, they don’t come with a humanoid robot driver. Instead, they are part of an automated infrastructure, where the city itself becomes a distributed robot.
This kind of conceptual shift opens up new perspectives on how road users will interact with autonomous vehicles in future cities, which are expected to come in different shapes and sizes beyond just driverless cars.
Pedestrians may be able to assist delivery robots that get stuck in the snow or lost in the woods. Starship Technologies, the company behind the iconic white delivery robot, even encourages passersby to give their robots a helping hand when they get stuck.
But this relies on bidirectional communication and the ability of the robot to understand human input, which was clearly not the case for a rogue robot driving right into a crime scene.
It is a long road toward robots roaming the city, and getting the human-machine interaction right for public acceptance is challenging. This is not only because of the cost of fully functional autonomous vehicles but also because of the risks associated with real-world studies.
To reduce these risks, many researchers have turned to studying digital replicas or 360-degree recordings of real-world situations in a virtual reality environment.
Results from these studies are promising, suggesting that external human-machine interfaces can improve the trust of road users in autonomous vehicles. In a study of crossing scenarios, 81 per cent of participants reported they felt safer if an external display communicated the vehicle’s intention.
In more complex scenarios, such as those in unregulated mixed-traffic environments, a combination of several factors was found to contribute to study participants expressing trust in the vehicle.
These factors include observing the vehicle’s interactions with other pedestrians; implicit cues, such as the vehicle slowing down; and explicit cues, such as external light signals showing intent and awareness.
Historically, car manufacturers have primarily focused on the safety of their passengers. It is also a business strategy, as manufacturers can use safety as a selling point. Mercedes-Benz’s manager of driver assistance systems even publicly stated that the company would prioritize the safety of passengers over pedestrians in its autonomous vehicle technology.
As we move closer to a future in which the city and its infrastructure become automated, this approach requires rethinking. Doing so is key to encouraging active transport and transitioning towards greener and healthier urban living, a goal also reflected in the United Nations’ Sustainable Development Goal 11.
Manufacturers of autonomous vehicles have a unique opportunity to be a leader in this emerging and rapidly growing market by complementing their focus on technology with a strong understanding of social and human challenges.
The successful adoption of autonomous vehicles in and by cities also requires innovative policies. Regulatory bodies need to emphasize how these vehicles interact with pedestrians and other vulnerable road users not only algorithmically but also through vehicle-to-pedestrian communication channels.
Instead of regulating how cars adopt automation, what might policies look like if we think about cars more as robots that operate in close proximity to people?