Last modified on December 7, 2023
Situation-aware multimodal interaction in autonomous vehicles
Situation-aware multimodal interaction is an approach that recognizes that the way a driver interacts with in-vehicle systems may depend on situational factors such as traffic conditions, weather, and the driver’s condition. By developing interfaces and systems that adapt to the dynamic context of the driving environment, it aims to increase safety, reduce distraction, and improve the overall driver experience.
In the context of autonomous vehicles, this becomes even more important as the vehicle takes on more tasks traditionally performed by the driver. Autonomous vehicles must not only perceive and understand the environment, but also interact with passengers in a way that is both informative and reassuring. The AWARE2ALL project partners are addressing these challenges by bringing cutting-edge multisensory HMI technologies, combining visual, audio, and haptic elements, to increasingly autonomous vehicles.
The project aims to design efficient communication between the vehicle and drivers, passengers, and pedestrians, as well as efficient driving assistance. AWARE2ALL demonstrators feature innovative technologies such as sound-emitting panels and haptic feedback delivered through seat covers, which will be used in both autonomous and semi-autonomous driving contexts. These technologies enable seamless interaction between vehicles and passengers, improving individuals’ awareness of their surroundings, transportation mode, and potential risks.
AWARE2ALL partners will also test and improve existing formal models, such as the Modality Suitability Prediction Model (MSPM), for selecting the most appropriate communication channel(s) for warnings or information provision. The model also takes the criticality level of the situation into account as a parameter.
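To make the idea concrete, the kind of channel selection described above can be sketched as a simple scoring function. Note that this is a minimal illustrative sketch, not the actual MSPM: the modality names, suitability scores, and the way criticality lowers the selection threshold are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of criticality-aware modality selection.
# The scores, modality names, and threshold rule below are illustrative
# assumptions, not parameters of the actual MSPM.

def select_modalities(suitability, criticality, threshold=0.5):
    """Return modalities whose score passes a criticality-adjusted threshold.

    suitability: dict mapping modality name -> suitability score in [0, 1],
                 reflecting the current situation (e.g. audio scores low in
                 a noisy cabin)
    criticality: situation criticality in [0, 1]; higher criticality lowers
                 the bar so that more redundant channels are engaged
    """
    adjusted_threshold = threshold * (1.0 - criticality)
    chosen = [m for m, score in suitability.items() if score >= adjusted_threshold]
    # List the most suitable channel first
    return sorted(chosen, key=lambda m: suitability[m], reverse=True)

# Example: a scenario where ambient noise makes audio less suitable
scores = {"visual": 0.8, "audio": 0.3, "haptic": 0.6}
print(select_modalities(scores, criticality=0.2))  # ['visual', 'haptic']
print(select_modalities(scores, criticality=0.9))  # ['visual', 'haptic', 'audio']
```

In this sketch, a low-criticality warning uses only the best channels, while a highly critical situation engages redundant channels (for example adding haptics and audio to a visual alert), which mirrors the multisensory redundancy the project's demonstrators are built around.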
For more information, please contact Christian Bolzmacher at firstname.lastname@example.org
Source: The original article was published here