Last modified on July 19, 2023

Gaps to be addressed in the Common Evaluation Methodology

At the ARCADE workshop on the Common Evaluation Methodology (CEM), held on 23 November 2020, a long list of methodology gaps was identified (see Proceedings, Chapter 4). Evaluating CCAM and following FESTA can be difficult and complex; the methodology helps, but it does not remove that complexity.

The ARCADE team further analysed the gaps and identified where information to address them can be expected. The gaps are organised along the FESTA-V.

The information needed to (partially) address the gaps can be classified into four categories:

  1. FESTA: the topic is addressed in the FESTA handbook, version 8.
  2. Process: processes such as harmonisation, standardisation or management, not further specified. These processes could, for example, be performed by standardisation bodies, but are currently outside the scope of an evaluation methodology.
  3. Projects: the topic is addressed by CCAM projects running after 2020.
  4. CCAM call: the topic will be addressed by one of the upcoming CCAM projects from the 2021 and 2022 calls.

The table was first created at the end of 2021 and should be seen as an inventory of information sources that could be used to address the gaps. You are invited to suggest additions and report new developments (please use the Feedback form).
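The four categories above are, in effect, tags on each gap. Purely as an illustration (this is not part of the FESTA or FAME material, and all names in the sketch are hypothetical), the inventory could be held as a small data structure and filtered by the source from which progress is expected:

```python
from dataclasses import dataclass

# Hypothetical sketch: the gap inventory as a filterable data structure.
# The entries below are a small sample taken from the table in this section.
@dataclass
class Gap:
    festa_step: str      # FESTA-V step, e.g. "6. Study design"
    description: str
    sources: list[str]   # where progress is expected from

GAPS = [
    Gap("4. Research questions and hypotheses",
        "Method to define and prioritise research questions", ["FESTA"]),
    Gap("3. Use cases",
        "Method to define, find and use edge cases", ["EU Hi-Drive"]),
    Gap("10. Impact assessment",
        "Methods for evaluating impacts on certain accident types",
        ["FESTA", "EU V4Safety"]),
]

def gaps_addressed_by(source: str) -> list[Gap]:
    """Return all gaps for which progress is expected from `source`."""
    return [g for g in GAPS if source in g.sources]

for g in gaps_addressed_by("FESTA"):
    print(g.description)
```

Such a representation makes it straightforward to answer questions like "which gaps depend on a single project?", but the table below remains the authoritative form of the inventory.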

Gap (organised along the FESTA V) – Progress expected from:

1. Implementation plan
1.1. Framework for coordinating FOTs with multiple locations, OEMs and countries – Project management
1.2. Hard to compare FOTs between multiple locations, countries or OEMs – Harmonisation
2. Function identification and description
2.1. Common description method (ontology, terminology, format) for ODD, use cases and services (and secondly requirements, vehicles, functions, accidents) – Standardisation
3. Use cases
3.1. Common source for describing ODD in terms of driving behaviour, accidents, scenarios and edge cases – Standardisation
3.2. Method to define, find and use edge cases – EU Hi-Drive
4. Research questions and hypotheses
4.1. Method to define and prioritise research questions – FESTA
4.2. Method to define future scenarios – FESTA
5. Performance indicators
5.1. Common set of safety indicators with a known relation to safety impact – FESTA; EU Hi-Drive
5.2. Accepted set of indicators or model for communication and positioning – EU HEADSTART
6. Study design
6.1. Approaches for achieving a realistic and rich user experience with prototype vehicles – Complex by nature; no approach identified
6.2. Method to compare human and automated driving – L3Pilot
6.3. Reference to compare new services against: what is the baseline for a service in an impact assessment? – EU V4Safety
6.4. Method to define and measure a clear baseline for a FOT impact assessment: what counts as "better"? – FESTA; EU V4Safety
6.5. Shared assumptions on human driving, or shared human driving models – HORIZON-CL5-2022-D6-01-03
6.6. Method to validate cybersecurity – HORIZON-CL5-2021-D6-01-04
6.7. Method to balance the scale of an experiment against the generalisation acceptable in an impact assessment – FESTA
6.8. Methodology that can be scaled down for small projects or that handles multiple scales of research questions – Micro-FESTA
7. Ethical and legal issues
7.1. Need for an innovation-friendly framework for running pilots, FOTs and operation – Project management
8. Data acquisition
8.1. Data sharing: approaches to handling lack of data and lack of willingness to share – Data Sharing Framework
8.2. Guidelines for an efficient and effective process for obtaining public road test permission – National type approval bodies
8.3. Common solutions for data management (release, flow, models, formats) – Standardisation; Data Sharing Framework; L3Pilot Common Data Format
8.4. Agreed principles for data sharing between industry and research, respecting the industrial sensitivity of the data – Data Sharing Framework
8.5. Practical solutions for GDPR-compatible handling of video – Data Sharing Framework
8.6. Overview of urban traffic environments – EU FAME
9. Data analysis
9.1. Lack of accident data for socio-economic impact assessment – EU V4Safety
9.2. Approaches to simulate at multiple levels of detail (sensor, AD function, vehicle, traffic, city), including the effect of vehicle-level safety strategies at traffic level – EU V4Safety; EU Hi-Drive
9.3. Standardisation of the modelling of scenarios and AD functions in simulations – Harmonisation
9.4. Handling diversity in sensors, data sources, locations and formats, preferably in an automated process – Industry
10. Impact assessment
10.1. Shared framework to get from KPIs to assessment (data evaluation architecture, combining various test results) – FESTA
10.2. Shared assumptions to be used in impact assessment and generalisation – EU FAME
10.3. Shared assumptions on changing human behaviour with a higher share of CCAM – EU Hi-Drive
10.4. Shared assumptions to estimate the impact on VRUs – EU V4Safety
10.5. Accepted method to get from test results with prototypes to an impact assessment for mature, full-scale CCAM – FESTA
10.6. Shared future scenarios for the generalisation of impact assessments – EU FAME
10.7. Methods for evaluating impacts on certain accident types – FESTA; EU V4Safety
10.8. Methods to evaluate AI processes and decisions – HORIZON-CL5-2022-D6-01-05

Feedback form

Have feedback on this section? Let us know!
