EXAIGON

– Explainable AI systems for gradual industry adoption


Docking and Explainable AI


Recent rapid advances in Artificial Intelligence (AI) promise substantial benefits to society in the near future. AI systems are becoming ubiquitous and are disrupting industries such as healthcare, transportation, manufacturing, robotics, retail, banking, and energy. However, before AI systems can be deployed in social environments and in industrial, business-critical applications, several challenges to their trustworthiness must be addressed: lack of transparency and interpretability, lack of robustness, and inability to generalize to situations beyond their past experience.

Explainable AI (XAI) aims to remedy these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry, and high-level policy makers, all concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights.
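To make the black-box problem concrete, the sketch below shows one widely used model-agnostic explanation technique, permutation importance: an opaque model is trained, then each input feature is shuffled on held-out data to measure how much the model relies on it. This is an illustrative example only, not a method developed in EXAIGON; the dataset, model, and library choices (scikit-learn) are assumptions made for the sake of a self-contained demonstration.

    # Minimal sketch of a post-hoc, model-agnostic explanation
    # via permutation importance (scikit-learn; illustrative only).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "black box": accurate, but its internal decision logic is opaque.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature on held-out data and measure the drop in score;
    # large drops mark features the model depends on for its predictions.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")

Techniques in this family explain which inputs drive a prediction without opening the model itself; part of the XAI research agenda is understanding when such post-hoc accounts are faithful to the model's actual behaviour and when they are not.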

The EXAIGON project (2020-2024) will deliver research and competence building on XAI, covering both algorithm design and human-machine co-behaviour, to meet society's and industry's standards for deploying trustworthy AI systems in social environments and business-critical applications.