Work Package 3

Trustworthy AI – TRUST

The purpose of this work package is to reinforce a common understanding of safe and responsible AI, specifically to:

  1. Establish trust in safe and responsible AI
  2. Ensure privacy preservation in AI technologies
  3. Create guidelines for sustainable and beneficial use of AI
  4. Develop principles for explainable and transparent AI
  5. Develop principles for independent assurance of AI deployment

Trust in AI is a necessary condition for the scalability and societal acceptance of these technologies; without trust, innovation can stall. This research investigates, from an interdisciplinary perspective, the multiple dimensions of trust raised by the deployment of AI, and builds tools, methods, and a framework for assuring the safe and responsible deployment of AI in industry and society. The work package aims to answer the question: how can such tools address the safety and needs of individuals, organizations, and society at large, covering both technical and non-technical issues? The research will address safety, explainability, transparency, bias, privacy, and robustness, as well as human-machine interaction and co-behaviour, all in the context of industry regulations and societal expectations.

 

People

Projects

Stories

Research activities - Visions and plans

“NorwAI’s interdisciplinary design of two pillars, one to study the impact of AI on society and a second to explore the trustworthiness of AI solutions, is key,” says Asun Lera St. Clair, Programme Director, Digital Assurance, DNV Group Research and Development.
