TRUST - Trustworthy AI
The purpose of this work package is to reinforce a common understanding of safe and responsible AI, specifically:
- Establish trust in safe and responsible AI
- Ensure privacy preservation in AI technologies
- Create guidelines for sustainable and beneficial use of AI
- Develop principles for explainable and transparent AI
- Develop principles for independent assurance of AI deployment
Trust in AI is a necessary condition for the scalability and societal acceptance of these technologies. Without trust, innovation can stall. This research investigates, from an interdisciplinary perspective, the multiple dimensions of trust raised by the deployment of AI, and builds tools, methods, and a framework for assuring the safe and responsible deployment of AI in industry and society. This work package aims to answer the question: How can such tools address the safety and needs of individuals, organizations, and society at large, covering both non-technical and technical issues? The research will address issues related to safety, explainability, transparency, bias, privacy, and robustness, as well as human-machine interactions and co-behaviour, all in the context of industry regulations and societal expectations.
Short description: Create a searchable repository of trust-in-AI and assurance guidelines and regulations, to be shared with partners and updated throughout the whole of the NorwAI project.
Time perspective: Start Jan 2021, continuous throughout project
Short description: Review the current trust-in-AI and assurance guidelines and regulations in place affecting the applications and innovation pilots of the various work packages (WP4-8).
Time perspective: Start Jan 2021, Report deliverable end Dec 2021
Review of AI regulations and governance
Trusting technology is to understand its performance
Fundamentally, we are starting to see parts of society increasingly rely on decisions and activities made by machines and models that are now fuelled with live data.
Frank Børre Pedersen, VP and Programme Director at DNV, says that the starting point for trusting the technology is understanding its performance.
Research activities - Visions and plans
NorwAI’s interdisciplinary design of two pillars, one studying the impact of AI on society and the other exploring the trustworthiness of AI solutions, is key, says Asun Lera St Claire, Programme Director, Digital Assurance, DNV Group Research and Development.