Article | Electrics/Electronics Services

AR HUD Validation with Scenario-Based and Keyword-Driven Testing

Pioneering in two ways: The AR Head-Up Display (HUD) not only provides clear and realistic real-time information directly in the driver’s line of sight but also builds trust in driver assistance systems (DAS), paving the way for autonomous driving. However, just as with highly automated DAS, the AR HUD faces countless situations and parameter combinations, making a complete specification of test cases for validation impossible. Traditional testing approaches alone cannot meet these challenges.

 

The ASAP Group, a development partner for the automotive industry, addresses this through scenario-based testing, a method originally developed for the ADAS domain. ASAP has adapted this approach for infotainment systems, combining it with keyword-driven testing to reduce both time and cost while ensuring comprehensive validation.

Image: AR HUD in the vehicle

The HUD projects information about the vehicle and traffic, warnings, and navigation instructions onto the windscreen. By presenting important information directly in the driver's field of vision, it makes a major contribution to safety and comfort. Combined with Augmented Reality (AR), this effect is amplified in the AR HUD: through virtual cues placed directly in the driving situation in front of the vehicle, an augmented reality emerges in which cues and current events in the real environment merge into one overall picture. The anticipatory and precise displays allow the driver to grasp a situation more quickly. Moreover, the AR HUD plays a key role in making the reactions of DAS, such as a lane correction by the Active Lane Assist, easy to understand, so that they do not cause uncertainty for the driver.

In this way, the AR HUD builds trust in DAS and already lays the groundwork for the acceptance that autonomous driving will need in the future. After all, the success of this technology also depends on consumer acceptance, which in Germany currently still stands below 50 per cent [1]. However, since the legal framework for autonomous driving in Germany was put in place in 2022 [2], this vision of the future has moved another step within reach. For the AR HUD to live up to its pioneering role, its flawless functioning must be validated. The ASAP Group, whose range of services for the (AR) HUD covers complete development, handles this validation for its customers, applying scenario-based testing and keyword-driven testing.

 

New challenges with high-performance control units

The new approaches to validation are necessary, among other reasons, because future vehicle architectures at many OEMs will be based on central high-performance control units. This necessitates new working methods and processes: while functions were previously distributed across many control units in a vehicle, future vehicle generations will rely on only three to five centralised high-performance control units responsible for logic and functionality. These will be combined with simpler control units for regulation and component actuation—early models with this centralised approach are already in series production.

Using the example of an AR HUD, this means that the majority of the logic of the former HUD control unit is mapped into a software module (AR Creator), which is part of a high-performance control unit. The AR HUD control unit itself only contains basic functionalities, such as displaying the video stream generated by the AR Creator. This fundamentally changes development and validation: since the high-performance control unit is responsible not only for the AR Creator but also for the logic and functionality of numerous other components, far more interfaces and approximately ten to twelve times the amount of underlying source code must be considered.
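The resulting division of responsibilities can be pictured as follows: the AR Creator on the high-performance control unit holds the logic and produces the video stream, while the AR HUD control unit only displays it. The class and field names in this minimal Python sketch are assumptions for illustration, not the actual software interfaces.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One rendered overlay image with its timestamp (placeholder fields)."""
    timestamp_ms: int
    pixels: bytes

class ARCreator:
    """Software module on the high-performance control unit: holds the logic."""
    def render_overlay(self, fused_objects: list, timestamp_ms: int) -> Frame:
        # In reality this consumes fused sensor data, navigation input, etc.;
        # here we only indicate that a video frame is produced.
        return Frame(timestamp_ms=timestamp_ms, pixels=b"\x00")

class HUDControlUnit:
    """Thin display unit: no logic beyond showing the incoming video stream."""
    def show(self, frame: Frame) -> None:
        print(f"displaying frame from t={frame.timestamp_ms} ms")

# Data flow: the high-performance control unit renders, the HUD ECU displays.
creator, hud = ARCreator(), HUDControlUnit()
hud.show(creator.render_overlay(fused_objects=[], timestamp_ms=0))
```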

The many dependencies between individual functions require an overall understanding of the interrelationships that goes far beyond knowledge of the component within one's own area of responsibility, which increases coordination effort. While validation used to be downstream of development, with the use of central high-performance control units testing is now carried out alongside development. This allows potential errors to be identified earlier in the development process, though validation becomes more challenging: testing is conducted at the end of each sprint in the iterative development model, with dynamic adjustments to the features being tested.

The consequence for validation: it must become significantly faster and more flexible. Additionally, the complexity of the AR HUD’s functionality itself presents entirely new challenges for validation, prompting ASAP to take new approaches in this area.
 

Sensor fusion and data extrapolation for real-time displays

Since the virtual cues are intended to be projected directly onto the real environment in front of the vehicle, the display must operate in real time. Accordingly, instead of the simple signal logic previously used in HUDs, the AR HUD requires sensor data fusion as well as extrapolation of all data. This is because all input data relevant for ADAS functions – the input from all sensors and cameras in the vehicle – are also critical for the AR HUD display. For example, Adaptive Cruise Control (ACC) automatically decelerates the vehicle when a relevant driving situation, such as a slower vehicle ahead, necessitates it. In this case, the AR HUD must clearly and unambiguously display the braking manoeuvre as a virtual cue in the real driving situation involving the vehicle ahead. The density of input signals for the AR HUD is accordingly very high. A complete specification for validation is therefore not feasible, as there are infinitely many scenarios in which static and dynamic objects must be recognised and fused into a coherent overall picture using sensor data.
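As a rough illustration of the fusion step, the following Python sketch merges camera and radar detections of the same object into a single track. The simple distance gating, field names, and numbers are simplifying assumptions for illustration, not the actual fusion logic of the AR Creator.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # e.g. "camera" or "radar"
    x_m: float        # longitudinal distance to the object in metres
    speed_mps: float  # relative speed in metres per second

def fuse(detections: list[Detection], gate_m: float = 2.0) -> list[dict]:
    """Group detections lying within `gate_m` of each other and average them."""
    tracks: list[dict] = []
    for det in sorted(detections, key=lambda d: d.x_m):
        for track in tracks:
            if abs(track["x_m"] - det.x_m) < gate_m:
                # merge into the existing track; equal weights keep it simple
                track["x_m"] = (track["x_m"] + det.x_m) / 2
                track["speed_mps"] = (track["speed_mps"] + det.speed_mps) / 2
                track["sensors"].append(det.sensor)
                break
        else:
            tracks.append({"x_m": det.x_m, "speed_mps": det.speed_mps,
                           "sensors": [det.sensor]})
    return tracks

# Camera and radar both see the slower vehicle ahead (the ACC example above):
print(fuse([Detection("camera", 31.8, -3.1), Detection("radar", 32.2, -2.9)]))
```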

An example selection of variable parameters that must still result in error-free recognition – such as identifying a pedestrian – and subsequently trigger a virtual cue tailored to the driver by the AR Creator includes: the pedestrian's size and walking speed, the angle between the pedestrian and the vehicle, lighting conditions, weather, road surface, and objects like trees and signs. Evaluating all these parameters in every possible combination is simply impossible. Another challenge for validation is the extrapolation of all data. For meaningful AR HUD displays that add value for the driver, these must occur in real time. Due to signal delays from cameras and sensors, the data must be pre-calculated so the AR HUD can make a prediction. For instance, if the vehicle is entering a curve and the system detects another vehicle a few metres ahead, the AR Creator must calculate the upcoming road layout accordingly and adjust the virtual cues displayed. Given the complex functionality of the AR HUD, real driving tests or traditional testing methods alone are insufficient for time- and cost-efficient validation.
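The extrapolation itself can be pictured as latency compensation: a measurement that is already a few milliseconds old is projected forward to the display time. The following minimal sketch assumes a constant-velocity model and purely illustrative numbers.

```python
def extrapolate_position(x_m: float, v_mps: float, sensor_age_s: float) -> float:
    """Predict where the object is *now*, given a measurement that is
    `sensor_age_s` seconds old (constant-velocity assumption)."""
    return x_m + v_mps * sensor_age_s

# A vehicle measured 30 m ahead, closing at 3 m/s, with 120 ms sensor latency:
predicted = extrapolate_position(x_m=30.0, v_mps=-3.0, sensor_age_s=0.120)
print(f"predicted distance for the AR overlay: {predicted:.2f} m")  # 29.64 m
```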

Scenario-based testing for dynamic workflows

The ASAP Group has therefore adapted scenario-based testing, originally used in the ADAS domain, for application in AR HUD validation. Taking the PEGASUS project into account, ASAP ensures effective and efficient test execution while addressing risk aspects: the PEGASUS research project, carried out by OEMs and numerous partners from industry and academia, aims to establish “generally accepted quality criteria, tools, methods, scenarios, and situations for the approval of highly automated driving functions” [3] to accelerate the realisation of autonomous driving. ASAP’s adaptation of scenario-based testing for AR HUD validation is aligned with the research findings of the PEGASUS project, thereby reducing the complexity of validation that arises from the nearly infinite number of potential test cases. Unlike requirements-based testing, which ASAP also employs in parallel for static, targeted checks, scenario-based testing allows for the validation of dynamic processes. These include, for instance, speed changes or various traffic scenarios (e.g., exiting a roundabout) with variable environments (road users) and diverse environmental conditions (rain, snow, fog, etc.).

The test design experts at ASAP are responsible for specifying both the required scenarios and the test cases. In scenario specification, they first define all static and dynamic objects that are part of a scenario, such as an overtaking manoeuvre in an urban setting. The description includes detailed information on all environmental data, including the parameter ranges of the defined objects, such as all possible distances and speeds of a leading vehicle. This outlines how a scenario is fundamentally expected to unfold.
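What such a scenario specification could look like in machine-readable form is sketched below. The schema and field names are assumptions for illustration, not ASAP's actual format; the point is that a scenario carries parameter ranges rather than concrete values.

```python
from dataclasses import dataclass

@dataclass
class ScenarioSpec:
    """A scenario described by parameter ranges, not concrete values."""
    name: str
    lead_vehicle_distance_m: tuple[float, float]  # (min, max) in metres
    lead_vehicle_speed_kmh: tuple[float, float]   # (min, max) in km/h
    weather: tuple[str, ...]                      # discrete conditions

urban_overtake = ScenarioSpec(
    name="overtaking manoeuvre, urban",
    lead_vehicle_distance_m=(5.0, 80.0),
    lead_vehicle_speed_kmh=(0.0, 50.0),
    weather=("clear", "rain", "snow", "fog"),
)
```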

In contrast, the abstraction level is much lower for test case specification. Here, test runs with specific values for all objects involved in the test case are defined to ensure that the overtaking manoeuvre described as a scenario can be correctly executed. Using these defined driving scenarios and test procedures, including the expected results (test cases), ASAP then validates the data transmission from the AR Creator to the AR HUD control unit.

This new type of validation in the infotainment domain offers numerous advantages. Scenario-based testing makes time- and cost-efficient validation of the AR HUD possible in the first place. With traditional testing methods that use residual bus simulation, incoming signals and values would have to be predefined manually. However, given the countless parameters in all possible combinations, a manually created residual bus simulation for the AR HUD would not be feasible within a reasonable timeframe. Another significant advantage is that this approach also validates the correct extrapolation of data, which is essential for the AR HUD but challenging to verify. Boundary values of parameter ranges and expected values in the test cases can be precisely defined. For example, this allows the pre-calculation of a curve and the corresponding display of a virtual cue by the AR Creator to be checked for the required exact match.
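Building on the hypothetical ScenarioSpec sketch above, a test case specification then pins each parameter to a concrete value. The following sketch derives test runs from the boundary values of the parameter ranges, together with an expected result per run; the naming of the reference images is invented for illustration.

```python
from itertools import product

def boundary_cases(spec: ScenarioSpec) -> list[dict]:
    """One concrete test run per combination of range boundaries and weather."""
    cases = []
    for d, v, w in product(spec.lead_vehicle_distance_m,  # (min, max)
                           spec.lead_vehicle_speed_kmh,   # (min, max)
                           spec.weather):
        cases.append({
            "scenario": spec.name,
            "lead_vehicle_distance_m": d,
            "lead_vehicle_speed_kmh": v,
            "weather": w,
            # expected result, e.g. a reference image of the virtual cue
            "expected": f"overlay_{d:.0f}m_{v:.0f}kmh_{w}.png",
        })
    return cases

print(len(boundary_cases(urban_overtake)))  # 2 * 2 * 4 = 16 concrete test runs
```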

Keyword-driven testing for automation

While scenario-based testing significantly simplifies test execution, the large number of diverse scenarios increases complexity for test automation. Thousands of test cases must be automated to ensure they can run entirely without manual intervention. To achieve this, the descriptions of test cases and driving scenarios must first be transferred automatically into the corresponding tools. Additionally, test automation is responsible for integrating the entire toolchain—around 12 different tools are used in AR HUD validation alongside the AR HUD control unit and the high-performance control unit containing the AR Creator—and for ensuring that all tools interact seamlessly and automatically. For example, test automation ensures that all tools are automatically launched at the start of a test run and that a tool for comparing the actual and expected display images is activated at the appropriate moment.
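This orchestration role can be pictured as follows: launch every tool in the chain, execute the test, trigger the image comparison at the right moment, and shut everything down again. The sketch below uses placeholder commands and a stubbed comparison, since the real toolchain is not public.

```python
import subprocess

# Placeholder commands standing in for the roughly 12 real tools in the chain.
TOOLCHAIN = [["echo", "restbus-simulation"], ["echo", "scenario-player"]]

def images_match(actual: bytes, expected: bytes) -> bool:
    """Stand-in for the image-comparison tool mentioned above."""
    return actual == expected

def run_test_case(case_id: str) -> bool:
    # 1. launch every tool of the chain automatically at the start of the run
    procs = [subprocess.Popen(cmd) for cmd in TOOLCHAIN]
    try:
        # 2. execute the scenario and capture the AR HUD output
        #    (stubbed here; in reality the scenario player drives the rig)
        actual = expected = b"frame"
        # 3. trigger the comparison of actual vs. expected display image
        return images_match(actual, expected)
    finally:
        for proc in procs:  # 4. shut the chain down again
            proc.terminate()

print(run_test_case("urban_overtake_case_01"))
```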

To speed up the implementation of test cases and reduce the effort required for test automation, ASAP combines scenario-based testing with Keyword-Driven Testing for AR HUD validation. In this method of test case description, standardised in ISO/IEC/IEEE 29119-5, individual test steps are stored in a database in a format that is both human- and machine-readable. For each defined test step, known as a keyword, ASAP first writes a corresponding script, enabling automated execution. A test step might involve, for example, the command to activate a specific tool. All finalised keywords (test steps) are universally applicable and can be parameterised within the database. For instance, if the ACC feature requires a line displayed by the AR HUD to be a specific colour, this parameter can be stored. As a result, reusable test steps are created, which only need to be parameterised with different input values. The reading of the test steps to compose a test case is then automated as well, so that the test automation itself is partially automated. This approach significantly reduces the time required for the vast number of test cases needed for AR HUD validation.
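The pattern behind Keyword-Driven Testing can be sketched in a few lines: each keyword maps to one executable script, and a test case is merely a parameterised sequence of keywords read from a store. The keyword names and parameters below are invented examples, not ASAP's actual keyword set.

```python
KEYWORDS = {}  # keyword name -> executable script

def keyword(name: str):
    """Register the script that implements one keyword (test step)."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("activate_tool")
def activate_tool(tool: str):
    print(f"activating {tool}")

@keyword("check_line_colour")
def check_line_colour(feature: str, colour: str):
    print(f"checking that the {feature} line is {colour}")

# A test case as it might be stored in the database: human- and
# machine-readable, built from reusable steps with different parameters.
test_case = [
    ("activate_tool", {"tool": "frame_grabber"}),
    ("check_line_colour", {"feature": "ACC", "colour": "green"}),
]

for name, params in test_case:
    KEYWORDS[name](**params)  # automated reading and execution of the steps
```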

Another advantage of Keyword-Driven Testing is that any changes need only be made once centrally in the database to the relevant keyword, and these updates are then automatically applied to all test cases. Further benefits arise from the fact that all test steps are stored in the database in a format that is both human- and machine-readable. This allows real test drives to be reproduced using the documented data and repeated virtually as often as needed until the desired validation outcome is achieved or the scenario’s consistency is verified. Additionally, virtual test runs can be validated during real test drives, as test cases are available not only as scripts but also in a human-readable format for test drivers.

Through its new approach—a combination of scenario-based and Keyword-Driven Testing—ASAP reduces the effort required for test preparation and execution, ultimately ensuring time- and cost-efficient as well as comprehensive validation of the AR HUD. To further optimise this process, ASAP is currently developing a dedicated test bench. With its modular design, all interfaces are easily accessible, and hardware configurations and prototypes can be replaced effortlessly. In the future, this will enable even more efficient execution of test runs, allowing the AR HUD to lead the way into the future of autonomous driving after final approval.

References
[1] Automated driving: German drivers are sceptical:
www.next-mobility.de/automatisiertes-fahren-deutsche-autofahrer-sind-skeptisch-a-1047782/

[2] Law on autonomous driving comes into force:
www.bmvi.de/SharedDocs/DE/Artikel/DG/gesetz-zum-autonomen-fahren.html

[3] PEGASUS research project: Safely validating automated driving:
www.pegasusprojekt.de/de/about-PEGASUS