Methodology for trustworthy AI in CCAM

21 November 2023

The EC-funded project AI4CCAM has just released its first report, Methodology for Trustworthy Artificial Intelligence in Connected, Cooperative and Automated Mobility (CCAM).

The methodology relies on current European guidelines, namely the report Trustworthy Autonomous Vehicles produced by the Joint Research Centre of the European Commission (2021), which is a first instantiation, within the scope of autonomous vehicles, of earlier initiatives including the AI Act (European Commission, 2021) and the Ethics Guidelines for Trustworthy AI (High-Level Expert Group on Artificial Intelligence, 2019). It also builds on the developments of the confiance.ai program, a multi-sector research program addressing the trustworthiness of AI in critical systems.

The methodology proposed by AI4CCAM is based on a macro-level decomposition of the phases of a pipeline for ensuring trustworthiness when developing a given AI-based system for CCAM. Within this pipeline, the project's specific activities are delimited at a high level, and trustworthiness properties are targeted for each phase. These trustworthiness attributes follow current developments at EU level. The properties identified in the confiance.ai program are provided as support to complete the identified trustworthiness attributes, depending on the use case under study.

The methodology will first be instantiated in one of the AI4CCAM use cases addressing complementary views on AI use and perception: AI-enhanced ADAS for trajectory perception. Subsequent project activities should see it applied to the other use cases.

Download the report
