Diary of a road warrior: 10 best practices for ADAS and AD testing

ADAS and automated driving testing vehicle header image

 


Testing ADAS Level 2–4 systems has required us to completely overhaul legacy approaches to testing.

The most obvious reason for this is the sheer number of sensors and actuators involved in any given feature. Whereas we used to test one sensor at a time, today we easily have upward of 20 sensors to specify.

These sensors (short- and long-range RADAR, mono and stereo cameras, sonar and LiDAR) need to act in concert, so we must implement several layers of sensor fusion.

For example, when it comes to the actuators of the braking system, up to seven distinct systems are able to apply the brakes.

As a result, a failure cannot easily be traced to a single root cause; it can be a complex combination of several factors.

 

ADAS and automated driving testing vehicle at salt lake

During the test drives I covered around 10,000 miles and collected over 20 TB of logging data

 

Imagine expanding this to ADAS and AD functionalities such as ACC, LKA, EBA and valet parking, or even convenience features like blind-spot monitoring, night vision, automatic high beam and steering beams, to name just a few.
We have a task of gargantuan proportions in front of us.

With this kind of complexity, best practices are mandatory:

The following is a “Top 10” checklist for today’s increasingly complex ADAS and automated driving testing.
It is based on my company’s decades of experience testing systems for leading Tier 1s and car makers, as well as my own recent experience logging thousands and thousands of miles across the US and around Tokyo in support of EB Assist Test Lab, a new cloud-based testing and validation solution.

 

ADAS and automated driving testing vehicle Lincoln MKZ

This Lincoln MKZ is equipped with driving sensors and used as one of Elektrobit’s vehicles for ADAS and automated driving testing

 

1. Filter or annotate your data as early and as close to the source as you can, as part of a multi-step approach to annotation

  • I do this to ensure that each step takes advantage of the actual capabilities of the system at that point in the chain.

    This is the single best piece of advice I can share with anyone who embarks on a test project.

    Whenever you tag a recording in real- or near real-time around a new, unique or unusual use-case, you can dramatically reduce the amount of data you ultimately need.

    This logic should be applied throughout the entire process of data acquisition; a minimal sketch of what a shared tag record could look like follows below.
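As a rough sketch of what a multi-step approach can look like, here is a minimal Python example. The `Tag` record, the stage names and the file name are my own illustrative assumptions rather than part of any particular tool; the point is simply that every stage (driver, logger, upload station, back end) attaches tags to the same recording so that later stages can filter on them.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tag:
    """A single annotation attached to a recording at some stage (illustrative)."""
    stage: str        # e.g. "driver", "logger", "upload-station", "backend"
    label: str        # e.g. "cut-in", "low-light", "threshold-exceeded"
    start_s: float    # offset into the recording, in seconds
    end_s: float

@dataclass
class Recording:
    path: str
    tags: List[Tag] = field(default_factory=list)

    def add_tag(self, stage: str, label: str, start_s: float, end_s: float) -> None:
        self.tags.append(Tag(stage, label, start_s, end_s))

    def is_relevant(self) -> bool:
        # Later stages only keep recordings that were tagged somewhere upstream.
        return bool(self.tags)

# Each step of the pipeline adds what it knows, as early as possible.
rec = Recording("drive_2020_02_14.bag")
rec.add_tag("driver", "unusual-object-on-road", start_s=812.0, end_s=845.0)
rec.add_tag("logger", "lateral-accel-threshold", start_s=910.5, end_s=913.0)
print(f"upload candidate: {rec.is_relevant()}, tags: {len(rec.tags)}")
```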

 

2. Make your driver and/or passenger responsible for manual annotation

  • Traditionally the driver drives and that’s it!

    Introducing ways for the driver to easily tag data as it comes in (by pushing a button on the steering wheel or by voice control, for example) makes it possible to quickly know which parts of the recording might contain something relevant; a minimal sketch follows below.
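As a minimal sketch of the idea, assuming a hypothetical callback that fires when the steering-wheel button is pressed or a voice command is recognized; the function and file names below are illustrative, not a real API.

```python
import json
import time

TAG_FILE = "drive_tags.jsonl"  # sidecar file next to the raw recording (assumed layout)

def on_driver_tag(label: str = "driver-marker") -> None:
    """Append a timestamped marker; called when the driver presses the tag button
    or issues a voice command (the actual trigger wiring is vehicle-specific)."""
    marker = {
        "timestamp": time.time(),   # wall-clock time; map to recording offset later
        "label": label,
        "source": "driver",
    }
    with open(TAG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(marker) + "\n")

# Example: the driver hits the button after spotting something unusual.
on_driver_tag("possible-false-brake-intervention")
```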

 

3. Run algorithms in the car data logger to identify the most relevant data

  • You can do many checks inside the car data logger, saving time and frustration later on.

    These can be as simple as checking that the recording is consistent and not corrupted.

    Another example is if the values of signals exceed pre-defined thresholds — say, for acceleration — and therefore must be tagged.

    Doing this dramatically reduces the amount of data that needs to be uploaded; the sketch below shows what such checks can look like.
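The sketch below shows roughly what I mean, with a timestamp-consistency check and a simple acceleration-threshold check; the signal layout and the 6 m/s² threshold are illustrative assumptions, not values from any specific project.

```python
from typing import List, Tuple

ACCEL_THRESHOLD_MS2 = 6.0  # illustrative longitudinal-acceleration limit

def check_consistency(timestamps: List[float]) -> bool:
    """A recording is suspect if its timestamps are not strictly increasing."""
    return all(t1 < t2 for t1, t2 in zip(timestamps, timestamps[1:]))

def tag_threshold_violations(timestamps: List[float],
                             accel: List[float]) -> List[Tuple[float, str]]:
    """Return (timestamp, label) pairs wherever the acceleration signal
    exceeds the pre-defined threshold."""
    return [(t, "accel-threshold-exceeded")
            for t, a in zip(timestamps, accel)
            if abs(a) > ACCEL_THRESHOLD_MS2]

ts = [0.00, 0.02, 0.04, 0.06]
ax = [0.3, 1.1, 7.2, 0.9]                  # hard braking spike at t = 0.04 s
assert check_consistency(ts)
print(tag_threshold_violations(ts, ax))    # [(0.04, 'accel-threshold-exceeded')]
```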

 

4. Ensure your upload station takes advantage of the pre-annotated data and can properly prioritize uploads to the cloud or your on-premise data center

  • It will invariably take a lot of time to upload all the recorded data, whether it’s to on-premise storage, a cloud provider or some hybrid solution.
  • You’ve already annotated your data, so take one extra step and determine which data should go where (see the sketch after this list). Data that is not tagged or annotated should not be uploaded to a high-cost storage location, as it will invariably be deleted or archived later on.

    Delete it now and save money!

    This is especially relevant if your teams are located in the same country or region and for urgent/priority projects, as your teams can start working on the relevant data as soon as it’s uploaded.

    Also, be sure that the upload station provides updates on its current status to the relevant individuals.
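As a hedged sketch of the prioritization logic: recordings with tags go to fast, expensive storage first, untagged recordings are discarded or sent to a cold archive, and everything is ordered by a priority field. The tier names and the priority scheme are assumptions for illustration only.

```python
from typing import Dict, List

def plan_uploads(recordings: List[Dict]) -> List[Dict]:
    """Decide a storage target per recording and order the uploads.
    Each recording dict carries 'name', 'tags' and 'priority' (lower = sooner)."""
    plan = []
    for rec in recordings:
        if not rec["tags"]:
            plan.append({**rec, "target": "discard-or-cold-archive"})
        else:
            plan.append({**rec, "target": "hot-cloud-storage"})
    # Untagged data last, then by priority, so teams get relevant data first.
    return sorted(plan, key=lambda r: (not r["tags"], r.get("priority", 99)))

drives = [
    {"name": "drive_042", "tags": [], "priority": 5},
    {"name": "drive_043", "tags": ["cut-in", "low-light"], "priority": 1},
    {"name": "drive_044", "tags": ["driver-marker"], "priority": 2},
]
for entry in plan_uploads(drives):
    print(entry["name"], "->", entry["target"])
```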

 

5. Build a custom testing “platform” that meets your needs

  • Establishing a best-of-breed partner ecosystem is the only way to guarantee you can truly benefit from the different offerings on the market.

    Such an ecosystem should include a fast-upload data center, hyperscalers and cloud providers to handle all potential use cases, and of course software and hardware partners.

    Partnering with companies that are experienced and understand your needs will ensure your resources are well utilized.

  • There is no one-size-fits-all solution, and each company has different needs—oftentimes needs differ even inside a single corporation.

    All organizations, however, have a few common goals when it comes to their testing approaches, and these include flexibility, cost effectiveness and speed.

    Depending on your use-case each of these will eventually play a role.

  • The approach used by some OEMs and Tier 1s is to try to build it all themselves, but this isn’t efficient or cost effective.

    These companies should instead focus on improving their products and bringing them to market more quickly and efficiently.

    In the end, this is what will differentiate them from their competitors.

 

ADAS and automated driving test vehicle on the road

During test drives you will encounter unusual real-world scenarios

 

6. Create a direct pipeline from your logger to the cloud that will allow the driver to share a snippet of a recording if something unusual is happening

  • This direct pipeline approach ensures that an engineer can investigate the issue ASAP and provide a fix or insights as well as determine whether the test drive is still relevant.

    This is critical to avoid dealing with unnecessary data.

  • Examples range from a wrong version of software running on the ECU to updated software behaving unusually under certain conditions, such as low light; the sketch below illustrates the basic snippet-and-upload idea.
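A minimal sketch of the snippet idea, assuming the recording is addressable by time offset; the `upload` function below is a stand-in for whatever transfer mechanism (a cellular link to a cloud SDK, for instance) you actually use.

```python
from typing import List, Tuple

Sample = Tuple[float, bytes]   # (timestamp in seconds, raw frame/message)

def cut_snippet(samples: List[Sample], event_s: float,
                before_s: float = 10.0, after_s: float = 20.0) -> List[Sample]:
    """Keep only the samples around the event the driver flagged."""
    return [s for s in samples if event_s - before_s <= s[0] <= event_s + after_s]

def upload(snippet: List[Sample]) -> None:
    # Placeholder for the real transfer to the cloud back end.
    print(f"uploading {len(snippet)} samples for remote analysis")

recording = [(t / 10.0, b"...") for t in range(0, 12_000)]   # ~20 min at 10 Hz
upload(cut_snippet(recording, event_s=845.0))
```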

 

7. More data is not better. The diversity of your dataset will drive your success

  • Being able to identify how much coverage you have and being critical about your training dataset will ensure the best results.

    Being critical here also keeps unnecessary data out of your testing pipeline.

  • You must clearly define metrics to rank and evaluate your data.

    Don’t underestimate the value of tools that let you explore your recordings and datasets with different criteria: for example, a heatmap of traffic-sign positions, showing where they were collected and at what time of day (a toy version is sketched after this list).
  • Several companies collect and archive as much data as they can for Hardware-in-the-Loop (HIL) tests. They make sure they don’t miss a thing.

    However, after analyzing the archived data from one vendor to determine its true value, I found that a tiny fraction—less than 10%—was relevant.

    The rest was duplicate information, which made the whole exercise very expensive and time-consuming.

  • Another important aspect is the quality of the recorded data.

    You must record your data with highly accurate timestamps (on the order of 25 ns) so that you can later replay it in HIL farms with microsecond accuracy.
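As a toy example of the kind of exploration tooling mentioned above, the snippet below bins traffic-sign detections by a coarse location cell and the hour of day; sparse cells point to gaps in diversity. Real tooling would work on proper geodata and detection metadata, and the field names here are assumptions.

```python
from collections import Counter
from datetime import datetime
from typing import Dict, List

def coverage_histogram(detections: List[Dict]) -> Counter:
    """Count traffic-sign detections per (coarse lat/lon cell, hour of day).
    Sparse or empty cells reveal where the dataset lacks diversity."""
    counts: Counter = Counter()
    for det in detections:
        cell = (round(det["lat"], 2), round(det["lon"], 2))   # roughly 1 km grid
        hour = datetime.fromtimestamp(det["timestamp"]).hour
        counts[(cell, hour)] += 1
    return counts

detections = [
    {"lat": 37.7913, "lon": -122.3920, "timestamp": 1_580_000_000},
    {"lat": 37.7911, "lon": -122.3924, "timestamp": 1_580_000_300},  # same cell, same hour
    {"lat": 35.6813, "lon": 139.7670, "timestamp": 1_580_050_000},
]
for key, n in coverage_histogram(detections).items():
    print(key, n)
```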

 

8. Test drive and simulation data should be treated equally

  • Being able to address them equally is key: your data-management system should be able to search the data regardless of its source (see the sketch after this list).
  • I strongly recommend securing relevant data from third-party companies.

    While the sensor setup won’t be the same, the extracted scenario will be valuable.
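What searching “regardless of the source” can mean in practice, as a small sketch: one metadata index in which a recording’s origin (test drive, simulation, third party) is just another field you can filter on or ignore. The schema below is illustrative, not a real product’s data model.

```python
from typing import Dict, List, Optional

index: List[Dict] = [
    {"id": "rec-001", "source": "test-drive", "tags": ["cut-in", "rain"]},
    {"id": "sim-104", "source": "simulation", "tags": ["cut-in", "night"]},
    {"id": "ext-007", "source": "third-party", "tags": ["cut-in"]},
]

def find(tag: str, source: Optional[str] = None) -> List[str]:
    """Search by tag; the data source is an optional filter, not a silo."""
    return [e["id"] for e in index
            if tag in e["tags"] and (source is None or e["source"] == source)]

print(find("cut-in"))                        # all three, regardless of origin
print(find("cut-in", source="simulation"))   # only the simulated recording
```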

 

9. Identify unusual real-world scenarios to open up a new world of possibilities

  • A few examples of scenarios I’ve personally encountered include:

    • Life-size imagery on the side of a large semi-truck.
    • Chrome that acts like a mirror.
    • A mannequin ejected from the back of a pickup truck.
    • A plow with a trailer drifting sideways.
    • A ladder lying open on the highway.
    • A police car slowing traffic by continuously changing lanes with its sirens on.

  • These examples can later be reproduced in simulation and diversified into thousands of variants.

    The challenge is to create only plausible scenarios, something that can be enforced by the simulation engine and its rules and/or by the scenario description language used; a toy parameter-sweep sketch follows below.
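A toy sketch of how that diversification with plausibility constraints can work: sweep a few parameters of a captured scenario and discard combinations that violate simple physical rules. The parameter names and limits below are made up for illustration.

```python
import itertools
from typing import Dict, List

def plausible(ego_speed_mps: float, cut_in_gap_m: float) -> bool:
    """Reject physically implausible variants (illustrative rules only)."""
    return 0.0 < ego_speed_mps <= 40.0 and cut_in_gap_m >= 2.0

def generate_variants() -> List[Dict]:
    speeds = [10.0, 20.0, 30.0, 45.0]   # 45 m/s will be rejected
    gaps = [1.0, 5.0, 15.0]             # a 1 m cut-in gap will be rejected
    lighting = ["day", "dusk", "night"]
    return [{"ego_speed_mps": v, "cut_in_gap_m": g, "lighting": li}
            for v, g, li in itertools.product(speeds, gaps, lighting)
            if plausible(v, g)]

variants = generate_variants()
print(f"{len(variants)} plausible variants kept out of 36 combinations")
```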

 

10. Future-proof your toolchain using industry standards

  • To ensure the toolchain you’re creating is usable by others and future-proof, look to industry standards such as OpenDRIVE, OpenSCENARIO and OpenLABEL.

    Doing so will also ensure it is interoperable with other systems you may wish to use later on.
  • Assuming you achieve your current objectives by using clean interfaces and standards, these will also enable you to quickly onboard a new partner at any time; a small example of reading such standard files follows below.
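As a rough illustration of what building on these standards buys you, the sketch below reads basic metadata from OpenDRIVE (.xodr) and OpenSCENARIO (.xosc) files using only the Python standard library; the element and attribute names follow the ASAM schemas as I understand them, so verify them against the official specifications before relying on this.

```python
import xml.etree.ElementTree as ET
from typing import Dict

def opendrive_road_lengths(path: str) -> Dict[str, float]:
    """Total length per road id from an ASAM OpenDRIVE (.xodr) file."""
    root = ET.parse(path).getroot()                 # root element: <OpenDRIVE>
    return {road.get("id", "?"): float(road.get("length", 0.0))
            for road in root.findall("road")}

def openscenario_description(path: str) -> str:
    """Scenario description from the <FileHeader> of an OpenSCENARIO (.xosc) file."""
    root = ET.parse(path).getroot()                 # root element: <OpenSCENARIO>
    header = root.find("FileHeader")
    return header.get("description", "") if header is not None else ""

# Usage (file names are placeholders for your own data):
# print(opendrive_road_lengths("town01.xodr"))
# print(openscenario_description("cut_in_variation.xosc"))
```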

 


To draw a conclusion…

In quite a few of these best practices, I speak to the role of the driver in the process.

If there is one key “lesson-learned” from my time as a road warrior, it’s that the driver can simplify the process for all involved by playing a more active role.

While often overlooked, the simplicity and ease of use of the system in the hands of the driver will positively impact the quality of the collected data.

As the driver, you’re the person responsible for the entire chain of events, and this really puts you in the position of storm chaser!

From personal experience, I can tell you that I want to find something I’ve never seen before!

As a driver, you also want quick feedback on what should be collected, and understanding the why enables you to collect better data.

I hope my 10 tips will help you with your own ADAS and automated driving testing in the future.

Cheers, Jeremy


 

About Jeremy

Jeremy Dahan is a former head of technology research at Elektrobit, where he managed business development for the company’s efforts and partnerships in Silicon Valley.

Jeremy is itching to get back on the open road again, having logged thousands of miles driving across the US and around Japan in EB’s own test vehicle in 2019 and early 2020.

Follow Jeremy on LinkedIn

 

Author


Jeremy Dahan
Former Head of Technology Research
at Elektrobit