Responsible AI

The way modern AI systems are trained and used raises significant concerns. This work package involves creating XAI (explainable AI) methods to understand what an AI system has learned and to correct its shortcomings. This work is important because it contributes to a more responsible use of AI by making systems more robust, trustworthy, transparent, and fair.
Our research involves providing an AI system with a task, for example, classifying whether applicants should receive a loan based on historical data. The AI system tunes its parameters and learns a high-accuracy solution. However, modern AI learns a vast number of patterns, which makes it very challenging to understand what the system has come to rely on. On the occasions when we do manage to understand what an AI system has learned, we often find that it relies on shortcuts and spurious correlations that perform well on the historical data but fail in deployment. We hope that our work will result in a set of tools for both practitioners and researchers, enabling them to better understand and rectify what their AI systems have learned.
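The failure mode described above can be illustrated with a small, self-contained sketch. Everything here is hypothetical and not part of the project's actual tooling: we generate synthetic "loan" data where income genuinely predicts repayment while a second feature (think of it as a proxy like a zip code) is spuriously correlated with the label in the historical data only. A simple logistic regression trained on that data picks up the shortcut; a basic XAI technique, permutation importance, exposes the reliance, and the correlation's absence at deployment shows the accuracy drop.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Synthetic loan data. `income` causally drives repayment;
    `proxy` merely copies the label with probability `spurious_corr`."""
    income = rng.normal(0.0, 1.0, n)
    y = (income + rng.normal(0.0, 0.5, n) > 0).astype(float)
    proxy = np.where(rng.random(n) < spurious_corr,
                     y, rng.integers(0, 2, n).astype(float))
    return np.column_stack([income, proxy]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0).astype(float) == y).mean())

def permutation_importance(w, b, X, y, n_repeats=20):
    """Drop in accuracy when each feature is shuffled in turn:
    a large drop means the model leans on that feature."""
    base = accuracy(w, b, X, y)
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores.append(accuracy(w, b, Xp, y))
        drops.append(base - float(np.mean(scores)))
    return drops

# Historical data: the proxy almost always matches the label.
X_hist, y_hist = make_data(5000, spurious_corr=0.95)
w, b = train_logreg(X_hist, y_hist)

# Deployment data: the spurious correlation is gone.
X_dep, y_dep = make_data(5000, spurious_corr=0.0)

hist_acc = accuracy(w, b, X_hist, y_hist)
dep_acc = accuracy(w, b, X_dep, y_dep)
imp_income, imp_proxy = permutation_importance(w, b, X_hist, y_hist)

print(f"historical accuracy: {hist_acc:.2f}")   # high
print(f"deployment accuracy: {dep_acc:.2f}")    # markedly lower
print(f"importance(income): {imp_income:.2f}, importance(proxy): {imp_proxy:.2f}")
```

In this toy setup the permutation importance of the proxy dwarfs that of income, flagging the shortcut before deployment: exactly the kind of diagnosis that lets a practitioner intervene, for instance by removing the proxy feature or reweighting the training data.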