Aim

The aim of this challenge is to build and assess Trustworthy AI, and in this way to lower the obstacles for Walloon companies that want to deploy AI solutions in regulated sectors.

What's at stake

AI systems have become increasingly complex, evolving from models built on hand-crafted rules (with human supervision and intervention) to models that create other models. Many behave as black boxes, which creates mistrust and prevents end-users from relying on otherwise highly effective technologies. This evolution has pushed the European Union and regulators in many industries to look for ways to certify AI-based critical systems (e.g., in airplanes or cars) before they are manufactured and used. The need to certify AI technologies prior to market deployment is a significant barrier to the adoption of AI, especially in sensitive sectors such as aeronautics, space, and medicine, which are precisely the sectors where AI could provide valuable support to human decision-making.

Challenges

The core challenge is to establish a complete framework for the trustworthy development of artificial intelligence. Europe has made this one of its priorities in order to differentiate itself from other major competitive markets. Yet the framework must offer clear and applicable guidelines, so that it actually serves its initial objective of encouraging companies to adopt these technologies.

Properties of Trustworthy AI

- Explainability (XAI)
- Robustness and safety (HUMBLE AI)
- Resilience to adversarial attacks (see the sketch after this list)
- Qualification of the data used (FAIRNESS)
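
To make "resilience to adversarial attacks" concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks a robust model must withstand. This is an illustrative example only, not ARIAC code: model, x, y, and the value of epsilon are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # FGSM (Goodfellow et al., 2015): take one signed-gradient step
        # on the input so as to increase the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

A model is considered more robust the less its predictions change on such perturbed inputs, which is one way the robustness property above can be assessed empirically.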

Tasks


The first two years of the ARIAC project focused on the explainability of deep learning models:

  • Study of explainability methods for the classification of medical images (see the sketch after this list)
  • Investigation of the metrics to be used to evaluate these explainability methods
  • Explainability of nonlinear dimensionality-reduction algorithms (t-SNE)
  • Explainability of transformers
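
As a concrete illustration of the first task, here is a minimal vanilla-gradient saliency sketch, one common explainability method for image classifiers. It is a hedged example under assumed names, not the project's actual code: model, image, and target_class are hypothetical placeholders.

    import torch

    def saliency_map(model, image, target_class):
        # Vanilla gradient saliency: the magnitude of the gradient of the
        # target-class score with respect to each input pixel.
        model.eval()
        image = image.clone().detach().requires_grad_(True)  # shape (C, H, W)
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()
        # Reduce over colour channels to get one heat value per pixel.
        return image.grad.abs().max(dim=0).values

Overlaying the resulting heat map on a medical image highlights which regions drove the classification, which is exactly what the evaluation metrics in the second task are meant to judge.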

For the rest of the project, the challenge focuses on the quality of the data (detection of bias and adjustment of the models), on robustness (generalization of the models), and on safety (quantification of uncertainty and detection of out-of-distribution data).
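
As a simple illustration of out-of-distribution detection, here is a sketch of the maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017), which flags inputs on which a classifier is insufficiently confident. The threshold value is an illustrative assumption, not a project parameter.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the class dimension.
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def is_out_of_distribution(logits, threshold=0.7):
        # Maximum-softmax-probability baseline: flag inputs whose
        # top-class confidence falls below a chosen threshold.
        confidence = softmax(logits).max(axis=-1)
        return confidence < threshold

More elaborate methods exist, but this baseline shows the underlying idea: a trustworthy model should signal when an input lies outside the distribution it was trained on instead of guessing confidently.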


Challenge leads