The HUD projects information on the vehicle and surrounding traffic, warnings, and navigation cues onto the windscreen. Because important information appears directly in the driver's field of view, it contributes substantially to safety and comfort. The AR HUD amplifies this effect with augmented reality (AR): virtual cues are placed directly into the driving scene in front of the vehicle, so that cues and the real environment merge into a single overall picture. These anticipatory, precise displays let the driver grasp a situation faster. The AR HUD also plays a major part in making the reactions of advanced driver assistance systems (ADAS), for example a lane correction by the Active Lane Assist, comprehensible, so that they do not unsettle the driver. In this way the AR HUD builds trust in ADAS and already lays the groundwork for the acceptance that autonomous driving will need in the future. After all, the success of this technology also depends on consumer acceptance, which in Germany currently stands below 50 per cent [1]. With the legal framework for autonomous driving in Germany in place since 2022 [2], however, this vision of the future has moved another step closer. For the AR HUD to fulfil this pioneering role, its flawless functioning must be validated. The ASAP Gruppe, whose portfolio for the (AR) HUD spans complete development, performs this validation for its customers using scenario-based testing and Keyword-Driven Testing.
Sensor fusion and data extrapolation for real-time displays
Since the virtual cues are intended to be projected directly onto the real environment in front of the vehicle, the display must operate in real time. Accordingly, instead of the simple signal logic previously used in HUDs, the AR HUD requires sensor data fusion as well as extrapolation of all data. This is because all input data relevant for ADAS functions – the input from all sensors and cameras in the vehicle – are also critical for the AR HUD display. For example, Adaptive Cruise Control (ACC) automatically decelerates the vehicle when a relevant driving situation, such as a slower vehicle ahead, necessitates it. In this case, the AR HUD must clearly and unambiguously display the braking manoeuvre as a virtual cue in the real driving situation involving the vehicle ahead. The density of input signals for the AR HUD is accordingly very high. A complete specification for validation is therefore not feasible, as there are infinitely many scenarios in which static and dynamic objects must be recognised and fused into a coherent overall picture using sensor data.
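To make the fusion step concrete, the following is a minimal Python sketch of how object lists from two sensors might be merged into one environment model. Everything here (the `Detection` type, the nearest-neighbour gating, the trust-radar-for-distance rule) is an illustrative assumption, not ASAP's or any OEM's actual fusion stack:

```python
from dataclasses import dataclass

# Illustrative only: a minimal sketch of fusing camera and radar object
# lists into one environment model for the AR HUD. Real ADAS stacks use
# far richer association and tracking (e.g. Kalman filters over time).

@dataclass
class Detection:
    sensor: str      # "camera", "radar", or "fused"
    x: float         # longitudinal distance to the ego vehicle [m]
    y: float         # lateral offset [m]
    obj_class: str   # e.g. "vehicle", "pedestrian"

def fuse(camera: list[Detection], radar: list[Detection],
         gate: float = 2.0) -> list[Detection]:
    """Nearest-neighbour association: merge detections that refer to
    the same physical object, keep unmatched ones as they are."""
    fused = []
    unmatched_radar = list(radar)
    for cam in camera:
        match = min(unmatched_radar,
                    key=lambda r: (r.x - cam.x) ** 2 + (r.y - cam.y) ** 2,
                    default=None)
        if match and (match.x - cam.x) ** 2 + (match.y - cam.y) ** 2 < gate ** 2:
            # Trust radar for distance, camera for classification.
            fused.append(Detection("fused", match.x, match.y, cam.obj_class))
            unmatched_radar.remove(match)
        else:
            fused.append(cam)
    return fused + unmatched_radar
```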
An example selection of variable parameters that must still result in error-free recognition – such as identifying a pedestrian – and subsequently trigger a virtual cue tailored to the driver by the AR Creator includes: the pedestrian's size and walking speed, the angle between pedestrian and vehicle, lighting conditions, weather, road surface, and surrounding objects such as trees and signs. Evaluating all these parameters in every possible combination is simply impossible. A further challenge for validation is the extrapolation of all data. For AR HUD displays to be meaningful and add value for the driver, they must appear in real time. Because camera and sensor signals arrive with a delay, the data must be pre-calculated so that the AR HUD can make a prediction. When entering a curve, for instance, if the system detects another vehicle a few metres ahead of the user's vehicle, the AR Creator must factor the upcoming road layout into its calculation and adjust the displayed virtual cues accordingly. Given this complex functionality, real driving tests or traditional testing methods alone cannot validate the AR HUD in a time- and cost-efficient manner.
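The latency compensation can be illustrated with a deliberately simplified sketch: a constant-velocity prediction of where a detected object will be by the time the HUD frame is actually rendered. Real systems use richer motion models and curve geometry; the function name and the 120 ms latency budget below are assumptions for illustration only:

```python
# Illustrative sketch of latency compensation: given a detection that is
# `delay` seconds old, predict where the object will be when the frame
# is actually drawn. A constant-velocity model only shows the principle.

def extrapolate(x: float, v: float, delay: float, render_time: float) -> float:
    """Predict the longitudinal position at display time.

    x: last measured distance to the object [m]
    v: measured relative velocity [m/s] (negative = closing in)
    delay: age of the sensor measurement [s]
    render_time: additional time until the HUD frame is shown [s]
    """
    return x + v * (delay + render_time)

# A vehicle measured 40 m ahead, closing at 5 m/s, with 120 ms of total
# latency, is drawn at its predicted position of 39.4 m:
print(extrapolate(40.0, -5.0, 0.08, 0.04))  # -> 39.4
```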
Keyword-driven testing for automation
ASAP therefore relies on scenario-based testing, the approach to validating automated driving established in the PEGASUS research project [3]: instead of specifying every case exhaustively, representative driving scenarios are defined whose parameters can be varied. While scenario-based testing significantly simplifies test execution, the large number of diverse scenarios increases the complexity of test automation. Thousands of test cases must be automated so that they can run entirely without manual intervention. To achieve this, the descriptions of test cases and driving scenarios must first be transferred automatically into the corresponding tools. In addition, test automation is responsible for integrating the entire toolchain (around 12 different tools are used in AR HUD validation alongside the AR HUD control unit and the high-performance control unit containing the AR Creator) and for ensuring that all tools interact seamlessly and automatically. For example, test automation ensures that all tools are launched automatically at the start of a test run and that a tool for comparing the actual and expected display images is activated at the right moment.
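How such an orchestration layer can start the toolchain and trigger the image comparison at the right moment is sketched below. All tool names and commands (`scenario_player`, `image_compare`, and so on) are invented placeholders, since the actual toolchain is not public; only the pattern of launching, waiting, comparing, and tearing down is the point:

```python
import subprocess
from time import sleep

# Hypothetical orchestration sketch: a test-automation layer launches
# the toolchain for a run and triggers an image comparison at the right
# moment. Commands below are placeholders, not real tools.

TOOLS = [
    ["scenario_player", "--scenario", "cut_in_01.xosc"],
    ["hud_frame_grabber", "--out", "actual_frames/"],
]

def run_test(expected_dir: str) -> bool:
    procs = [subprocess.Popen(cmd) for cmd in TOOLS]   # start all tools
    try:
        sleep(5.0)  # wait until the scenario reaches the display event
        # Trigger the comparison of actual vs. expected HUD images.
        result = subprocess.run(["image_compare", "actual_frames/", expected_dir])
        return result.returncode == 0
    finally:
        for p in procs:  # tear the toolchain down after every run
            p.terminate()
```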
To speed up the implementation of test cases and reduce the effort required for test automation, ASAP combines scenario-based testing with Keyword-Driven Testing for AR HUD validation. In this method of test case description, standardised in ISO/IEC/IEEE 29119-5, individual test steps are stored in a database in a format that is both human- and machine-readable. For each defined test step, known as a keyword, ASAP first writes a corresponding script, enabling automated execution. A test step might contain, for example, the command to activate a specific tool. All finalised keywords (test steps) are universally applicable and can be parameterised within the database. For instance, if the ACC function requires a line displayed by the AR HUD to be a specific colour, this parameter can be stored. The result is a set of reusable test steps that only need to be parameterised with different input values. Reading the test steps to assemble a test case is then itself automated, so the test automation is in turn partly automated. This approach significantly reduces the time required to produce the vast number of test cases needed for AR HUD validation.
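A minimal sketch of this mechanism, with invented keyword names and parameters (ASAP's actual database schema is not public): each keyword is backed by a script, and test cases are just sequences of parameterised keywords, so the same step can be reused with different input values:

```python
# Minimal keyword-driven testing sketch. Each keyword maps to a script
# (here: a Python function); test cases are sequences of keywords with
# parameters. Keyword names and parameters are illustrative only.

KEYWORDS = {}

def keyword(name):
    def register(fn):
        KEYWORDS[name] = fn   # store the step script under its keyword
        return fn
    return register

@keyword("activate_tool")
def activate_tool(tool: str):
    print(f"starting {tool}")

@keyword("check_line_colour")
def check_line_colour(feature: str, colour: str):
    print(f"checking that the {feature} line is {colour}")

def run_test_case(steps):
    """Execute a test case given as (keyword, parameters) pairs."""
    for name, params in steps:
        KEYWORDS[name](**params)

# The same keywords, parameterised differently, form different test cases:
run_test_case([
    ("activate_tool", {"tool": "scenario_player"}),
    ("check_line_colour", {"feature": "ACC", "colour": "green"}),
])
```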
Another advantage of Keyword-Driven Testing is that any changes need only be made once centrally in the database to the relevant keyword, and these updates are then automatically applied to all test cases. Further benefits arise from the fact that all test steps are stored in the database in a format that is both human- and machine-readable. This allows real test drives to be reproduced using the documented data and repeated virtually as often as needed until the desired validation outcome is achieved or the scenario’s consistency is verified. Additionally, virtual test runs can be validated during real test drives, as test cases are available not only as scripts but also in a human-readable format for test drivers.
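The dual human/machine readability can likewise be sketched. Below, a test case is stored as plain text that a test driver could follow on a real drive, and the same text is parsed into keyword calls for the runner from the previous sketch; the pipe-separated format is an illustrative assumption, not ASAP's actual storage format:

```python
# A test case stored in a format both humans and machines can read:
# one step per line, "keyword | key=value" pairs. A test driver can
# follow it during a real drive, and the keyword runner shown in the
# previous sketch can replay it virtually as often as needed.

TEST_CASE = """
activate_tool       | tool=scenario_player
check_line_colour   | feature=ACC, colour=green
"""

def parse(text):
    steps = []
    for line in text.strip().splitlines():
        name, _, params = line.partition("|")
        kwargs = dict(p.strip().split("=") for p in params.split(",") if "=" in p)
        steps.append((name.strip(), kwargs))
    return steps

run_test_case(parse(TEST_CASE))  # reuses run_test_case from above
```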
Through its new approach, a combination of scenario-based and Keyword-Driven Testing, ASAP reduces the effort required for test preparation and execution, ultimately ensuring time- and cost-efficient as well as comprehensive validation of the AR HUD. To optimise this process further, ASAP is currently developing a dedicated test bench. Thanks to its modular design, all interfaces are easily accessible, and hardware configurations and prototypes can be swapped with little effort. In future, this will make test runs even more efficient, allowing the AR HUD, once fully validated, to lead the way into the future of autonomous driving.
References
[1] Automated Driving: German drivers are sceptical:
www.next-mobility.de/automatisiertes-fahren-deutsche-autofahrer-sind-skeptisch-a-1047782/
[2] Law on autonomous driving comes into force:
www.bmvi.de/SharedDocs/DE/Artikel/DG/gesetz-zum-autonomen-fahren.html
[3] PEGASUS Research Project. Safely validating automated driving:
www.pegasusprojekt.de/de/about-PEGASUS