In highly automated driving (HAD) there are still many driving scenarios in which the driver needs to take over control. The reasons for taking over range from limitations of the ego sensors or recognition algorithms to required information, e.g. infrastructure information such as traffic light states, which cannot be derived from in-vehicle sensor observations. What they all have in common is that any reaction, from the driver as well as from a driver assistance feature, needs to come in time. This becomes clear when looking at the limited range of sensors: the driver may want the speed to be reduced well before a speed sign is reached, or to be warned in time to take over control when the section of road suitable for autonomous driving ends. It is clear that more than high-quality in-vehicle sensor processing is needed to obtain the wide range of HD information required for automated driving.
Imagine the following scenario: your vehicle is equipped with all the sensors necessary for HAD. Your car drives on a motorway suitable for HAD while you are sleeping. During an overtaking manoeuvre, the sensors needed for HAD stop working due to a technical issue, and you cannot take over. Fortunately, the vehicle is able to bring itself into a safe state. But while the vehicle can most likely navigate to the emergency lane, stopping there still leaves the passengers in a dangerous situation.
That is where our approach comes into play: by fusing the environment model of the endangered vehicle in the described scenario with environment information from other sources, the vehicle can reach a safe state that does not put the passengers at risk. For instance, it can drive to the next parking lot, even if this is several kilometers away from the position where its sensors stopped working.
Join Nicole’s presentation on April 18th at 11:30 a.m.