Research group for AI, ethics and philosophy (AEP)

Research areas/ research interests:

Artificial intelligence is increasingly affecting human and social life across diverse domains. As it does so, pressing ethical and political concerns about accountability, responsibility, and power arise. These concerns are connected with, and should be informed by, basic questions about what forms of understanding, intelligence, and communication AI systems make possible or engender. NTNU's research group on AI, ethics, and philosophy brings together philosophers, social scientists, and technologists exploring these foundational issues about AI.

From the point of view of ethics and social studies of technology, topics of concern include the susceptibility of AI systems to bias; their role in facilitating surveillance; how they shift the balance of power among private citizens, the state, and major corporations; and how they enable large-scale, systematic manipulation and deception. Questions of interest here also include the hopes voiced by some for a morally salutary role of AI in creating so-called moral machines or in fostering moral enhancement.

From the point of view of theories of computation, cognition, and communication, in theoretical computer science, cognitive science, and philosophy, questions explored include what forms of meaning, understanding, or representation can be attributed to AI systems. This bears on the issue of what sorts of explanation or intelligibility increasingly opaque AI systems may admit of. Notably, it bears upon the extent to which such systems can properly be explained in broadly common-sense or agent-like terms, an issue that must inform which notions of accountability have application.

Ethics/applied ethics: Bias in algorithms; Surveillance capitalism; AI/moral enhancement; Moral machines; Research ethics

Theoretical-philosophical: Cognition/philosophy of mind; Neuroscience/consciousness; AI/language; Systems biology; Machine learning; AWS

Empirical, social science: Computer-driven public management; Context-dependence of AI systems; The power balance between citizen and state



May Thorseth:

  • 2021: Ethical Aspects of Digital Competence in the Norwegian Defense Sector, a cooperative project between PRIO and NTNU, funded by the Norwegian Ministry of Defence. May Thorseth is a project partner.
  • 2020: Digital Threats in the Defence Sector: A Threat to Democracy. Research project funded by the Norwegian Defence Department. Research partners: NTNU (Gjøvik and Trondheim) and NORDE (the Norwegian Council for Digital Ethics). May Thorseth is a project partner.
  • ULTIMATE – indUstry water-utiliTy symbiosis for a sMarter wATer society, 1 June 2020 – 30 May 2024, under Horizon 2020. May Thorseth is Ethics Officer, a member of the Project Management Team, and a member of WP 4: examining the socio-political and governance context for WSIS.

Pinar Øzturk:

The EU project Mitigating Diversity Biases of AI in the Labor Market (BIAS for short) studies the use of AI in the labor market and how to detect and mitigate bias and unfairness in the various cognitive processes and decisions involved in recruitment.

Fairness in recruitment-related decisions, in particular the use of AI in screening applications and shortlisting candidates, is one of the key objectives.

Two principles underlying our understanding of fairness are:

  1. The definition of fairness is context-sensitive.
  2. Understanding and defining context requires a multidisciplinary effort.
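The first principle can be illustrated with a small sketch (a hypothetical example, not the BIAS project's own method): two standard fairness metrics from the algorithmic-fairness literature, demographic parity and equal opportunity, can disagree on the very same screening decisions, which is one reason no single context-free definition of fairness suffices.

```python
# Hypothetical illustration: the same shortlisting decisions can satisfy one
# fairness metric while violating another. Data below is invented toy data.

def selection_rate(decisions):
    """Fraction of applicants shortlisted (1 = shortlisted)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Fraction of *qualified* applicants who were shortlisted."""
    hits = [d for d, q in zip(decisions, qualified) if q]
    return sum(hits) / len(hits)

# Toy screening outcomes for two applicant groups.
group_a = {"decisions": [1, 1, 0, 0], "qualified": [1, 1, 0, 0]}
group_b = {"decisions": [1, 0, 1, 0], "qualified": [1, 1, 1, 0]}

# Demographic parity: equal shortlisting rates across groups -> satisfied.
print(selection_rate(group_a["decisions"]))  # 0.5
print(selection_rate(group_b["decisions"]))  # 0.5

# Equal opportunity: equal rates among the qualified -> violated.
print(true_positive_rate(group_a["decisions"], group_a["qualified"]))  # 1.0
print(true_positive_rate(group_b["decisions"], group_b["qualified"]))  # ~0.67
```

Which of the two metrics matters more depends on the recruitment context, hence the need for the multidisciplinary effort named in the second principle.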

Link to the project:


The project's own website:

Ibrahim A. Hameed: 

  • Development of self-service solutions that facilitate increased waste recycling (Utvikling av sjølvbetjeningsløysingar som legg til rette for auka gjenvinning av avfall), R&D project 328280, NOK 130,000.
  • Marine plastic pollution sweet spot – WP leader.
  • Ocean Plastic Policy (PlastOPol), RFF, NOK 0.5 million, 2020.

Espen Stabell: 

  • An Algorithm for Hard Choices? Abstract: Let 'hard choices' be choice situations where, for a pair of alternatives x and y, x is judged to be neither better nor worse than y, nor are the two equally good. Is there an algorithm for rational choice in these arguably ubiquitous cases of 'incomparability' or 'parity'/'rough equality'? I discuss this question in the context of artificial intelligence (AI) and, more specifically, in cases of AI decision-making with ethical stakes. I defend the view that a morally and rationally defensible strategy in these cases might be to choose on the grounds of second-order considerations of 'moral identity'. I then argue that an algorithm for this kind of second-order or reflective deliberation will be hard, if not impossible, to develop.
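The structure of a hard choice can be made concrete in code (a hypothetical sketch for illustration only, not the paper's proposal): a pairwise comparison over several incommensurable value dimensions that returns no verdict when neither option dominates, with a crude fallback that models second-order 'moral identity' considerations as dimension weights.

```python
from typing import Optional

def compare(x: dict, y: dict) -> Optional[str]:
    """Return 'better', 'worse', 'equal', or None (a hard choice: on a par)."""
    dims = x.keys()
    if all(x[d] == y[d] for d in dims):
        return "equal"
    if all(x[d] >= y[d] for d in dims):
        return "better"  # x dominates y on every dimension
    if all(x[d] <= y[d] for d in dims):
        return "worse"   # y dominates x on every dimension
    return None  # neither better, worse, nor equal: 'incomparability'/'parity'

def choose(x: dict, y: dict, identity_weight: dict) -> str:
    """First-order comparison; on parity, fall back to second-order weights
    that crudely model which option better expresses the agent's identity."""
    verdict = compare(x, y)
    if verdict in ("better", "equal"):
        return "x"
    if verdict == "worse":
        return "y"
    score = lambda a: sum(identity_weight[d] * a[d] for d in a)
    return "x" if score(x) >= score(y) else "y"

# Invented example: honesty and loyalty pull in opposite directions.
x = {"honesty": 3, "loyalty": 1}
y = {"honesty": 1, "loyalty": 3}
print(compare(x, y))                                # None: a hard choice
print(choose(x, y, {"honesty": 2, "loyalty": 1}))   # x
```

The fallback is, of course, exactly what the abstract argues is hard to formalize: fixed numeric weights are a caricature of reflective deliberation about moral identity.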

Anders Nes: 

My research interests principally concern questions arising at the intersection between AI and issues in the philosophy of psychology and philosophy of mind. They include:  

  • What forms of representation and meaning do AI-systems support?
  • What forms of perception do AI-systems enable?
  • What forms of consciousness, or self-consciousness, do AI-systems support or enable?
  • To what extent are current, or near-future AI-systems capable of mind-reading, i.e., roughly, of psychological explanation of and attributions to other agents? 
  • What forms of explanation are the outputs of AI-systems susceptible to; in particular, are they susceptible to anything like the sort of reasons-invoking explanation to which human actions are susceptible? 



5 December 2023: Guest lecture by Dagfinn Døhl Dybvig: "Filosofi, innovasjon og kunstig intelligens" ("Philosophy, innovation and artificial intelligence"), 13.00–15.00, meeting room 233, Låven/Teams

Previous Events

11 October 2023: Workshop on ethical aspects of the use of AI in decision-making. Arranged by the AI, Ethics and Philosophy Research Group (AEP) in collaboration with the Center for Sustainable ICT (CESICT) and the Programme for Applied Ethics (PAE)

Photo from workshop, PAE research group
Photo: Roger Søraa

7 January 2022: Memo from meeting on the call for NFR proposals on Artificial Intelligence, Robotics and Autonomous Systems

3 December 2021: AI, Ethics and Philosophy seminar

10 December 2020: Workshop for Research group AI, Ethics and Philosophy

Research group leaders

Other members