Research group for AI, ethics and philosophy (AEP)
Research areas/ research interests:
Artificial intelligence is increasingly affecting human and social lives across diverse domains. As it does so, pressing ethical and political concerns about accountability, responsibility, and power arise. These concerns are connected with, and should be informed by, basic questions about what forms of understanding, intelligence, and communication AI systems make possible or engender. NTNU’s research group on AI, ethics, and philosophy brings together philosophers, social scientists, and technologists exploring these foundational issues about AI.
From the point of view of ethics and social studies of technology, topics of concern include the susceptibility of AI systems to bias; their role in facilitating surveillance; how they shift the balance of power among private citizens, the state, and major corporations; and how they enable large-scale, systematic manipulation and deception. Questions of interest here also include the hopes some have voiced for a morally salutary role for AI, whether in creating so-called moral machines or in fostering moral enhancement.
From the point of view of theories of computation, cognition, and communication, in theoretical computer science, cognitive science, and philosophy, questions explored include what forms of meaning, understanding, or representation can be attributed to AI systems. This bears on the issue of what sorts of explanation or intelligibility increasingly opaque AI systems may admit of. Notably, it bears upon the extent to which such systems can properly be explained in broadly common-sense or agent-like terms, an issue that must inform which notions of accountability have application.
Ethics/applied ethics: Bias in algorithms; Surveillance capitalism; AI/moral enhancement; Moral machines; Research ethics
Theoretical-philosophical: Cognition/philosophy of mind; Neuroscience/consciousness; AI/language; Systems biology; Machine learning; AWS
Empirical, social science: Computer-driven public management; Context-dependence of AI systems; Power balance between citizen and state
- IFR (Dept. of Philosophy and Religious Studies)
- IDI (Dept. of Computer Science)
- NTNU AI Lab
- KULT (Centre for Technology and Society)
- INB (Dept. of Neuromedicine and Movement Science)
- Department of ICT and Natural Sciences (NTNU Ålesund)
- Department of Art and Media Studies
- Department of Social Anthropology
- 2021 Ethical Aspects of Digital Competence in the Norwegian Defense Sector, cooperative project PRIO and NTNU, funded by the Norwegian Ministry of Defence. May Thorseth is project partner.
- 2020 Digital Threats in the Defence Sector: A Threat to Democracy. Research project funded by the Norwegian Ministry of Defence. Research partners: NTNU (Gjøvik and Trondheim) and NORDE (Norwegian Council for Digital Ethics). May Thorseth is project partner.
- ULTIMATE – indUstry water-utiliTy symbiosis for a sMarter wATer society, 1 June 2020 – 30 May 2024, under Horizon 2020. May Thorseth is Ethics Officer, member of the Project Management Team, and member of WP 4: Examine the socio-political and governance context for WSIS.
The EU project, Mitigating Diversity Biases of AI in the Labor Market, called BIAS for short, focuses on studying the use of AI in the labor market and how to detect and mitigate bias and unfairness in various cognitive processes and decisions involved in the recruitment process.
Fairness in recruitment-related decisions, in particular in the use of AI for screening applications and short-listing candidates, is one of the project’s key objectives.
Two principles underlying our understanding of fairness are:
- The definition of fairness is context-sensitive
- Understanding and defining context requires a multidisciplinary effort.
Project website: https://www.biasproject.eu
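To make the idea of detecting bias in screening decisions concrete, the following is a minimal illustrative sketch, not the BIAS project's actual method: it computes a simple demographic-parity gap (the difference in short-listing rates between groups) over a toy log of screening decisions. The group labels, data format, and choice of metric are all assumptions made for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Gap between the highest and lowest short-listing rate across groups.

    `decisions` is an iterable of (group_label, shortlisted) pairs.
    A gap of 0 means all groups were short-listed at the same rate.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, shortlisted in decisions:
        total[group] += 1
        selected[group] += int(shortlisted)
    rates = {g: selected[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening log: group "a" short-listed at 0.5, group "b" at 0.25
log = [("a", True), ("a", True), ("a", False), ("a", False),
       ("b", True), ("b", False), ("b", False), ("b", False)]
print(demographic_parity_gap(log))  # 0.25
```

This single-number metric is deliberately crude; as the principles above note, what counts as fair is context-sensitive, so any real audit would need context-specific metrics chosen in a multidisciplinary process.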
Ibrahim A. Hameed:
- Development of self-service solutions that facilitate increased waste recycling (R&D project 328280, NOK 130,000).
- Marine plastic pollution sweet spot – WP leader - https://www.ntnu.no/sustainability/calls/marine-plastic-pollution-sweet-spot
- Ocean Plastic Policy (PlastOPol). RFF, NOK 0.5 million, 2020.
- An Algorithm for Hard Choices? Abstract: Let ‘hard choices’ be choice situations where, of a pair of alternatives x and y, x is judged to be neither better nor worse than y, nor are they equally good. Is there an algorithm for rational choice in these arguably ubiquitous cases of ‘incomparability’ or ‘parity’/‘rough equality’? I discuss this question in the context of artificial intelligence (AI) and, more specifically, in cases of AI decision-making with ethical stakes. I defend the view that a morally and rationally defensible strategy in these cases might be to choose on the grounds of second-order considerations of ‘moral identity’. I then argue that an algorithm for this kind of second-order or reflective deliberation will be hard, if not impossible, to develop.
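The structure of a hard choice in the abstract above can be sketched in code, with the caveat that this is a deliberately simplified toy model, not a proposal: it reduces value to a single number with a tolerance margin, which real incomparability resists, and it leaves the second-order "moral identity" deliberation as exactly the step no algorithm here supplies.

```python
def compare(value_x, value_y, margin):
    """Classify a pairwise comparison on a single toy value scale.

    Within `margin`, unequal alternatives are 'on a par': neither
    better, nor worse, nor precisely equally good.
    """
    if value_x > value_y + margin:
        return "x better"
    if value_y > value_x + margin:
        return "y better"
    if value_x == value_y:
        return "equally good"
    return "on a par"  # the hard-choice zone: no first-order ranking decides

print(compare(10.0, 10.4, margin=1.0))  # on a par
```

When `compare` returns "on a par", first-order evaluation has run out; the abstract's claim is that resolving such cases by second-order reflection on who one wants to be is precisely what resists algorithmic treatment.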
My research interests principally concern questions arising at the intersection between AI and issues in the philosophy of psychology and philosophy of mind. They include:
- What forms of representation and meaning do AI-systems support?
- What forms of perception do AI-systems enable?
- What forms of consciousness, or self-consciousness, do AI-systems support or enable?
- To what extent are current, or near-future AI-systems capable of mind-reading, i.e., roughly, of psychological explanation of and attributions to other agents?
- What forms of explanation are the outputs of AI-systems susceptible to; in particular, are they susceptible to anything like the sort of reasons-invoking explanation to which human actions are susceptible?
- 7 January 2022: Memo from meeting on call for NFR proposal Artificial Intelligence, Robotics and Autonomous Systems
- 3 December 2021: AI, Ethics and Philosophy seminar
- 10 December 2020: Workshop for Research group AI, Ethics and Philosophy
Research group leaders
Ibrahim A. Hameed, PhD, Professor, +47-70161306, +47-41315695, firstname.lastname@example.org, Department of ICT and Natural Sciences
Saleh Abdel-Afou Alaliyat, Associate Professor, +47-70161530, email@example.com, Department of ICT and Natural Sciences
Heather Broomfield, Department of Public and International Law, University of Oslo
Erlend M. Dons, Assistant Professor, +47-73596678, firstname.lastname@example.org, Department of Philosophy and Religious Studies
Jussi Haukioja, Professor, jussi.email@example.com, Department of Philosophy and Religious Studies
Aurora Hoel, Professor, Vice Dean of Arts and Innovation, +47-90152598, firstname.lastname@example.org, Department of Art and Media Studies
Asle H. Kiran, Associate Professor, +47-92628292, email@example.com, Department of Philosophy and Religious Studies
Jonathan Knowles, Professor, +47-73590629, firstname.lastname@example.org, Department of Philosophy and Religious Studies
Gitte Koksvik, Associate Professor, +47-73591751, email@example.com, Department of Social Anthropology
Miri Kyselo, Associate Professor, m.firstname.lastname@example.org, Department of Philosophy and Religious Studies
Sofia Moratti, Associate Professor, +47-73413064, email@example.com, Department of Interdisciplinary Studies of Culture
Ronny Selbæk Myhre, Assistant Professor, ronny.firstname.lastname@example.org, Department of Philosophy and Religious Studies
Anders Nes, Professor, +47-99521468, email@example.com, Department of Philosophy and Religious Studies
Rune Nydal, Associate Professor, +47-73551149, firstname.lastname@example.org, Department of Philosophy and Religious Studies
Espen Dyrnes Stabell, Researcher, +47-73596462, +47-41046535, email@example.com
Inga Strumke, Associate Professor, inga.firstname.lastname@example.org, Department of Computer Science
Roger Andre Søraa, Associate Professor, roger.email@example.com, Department of Interdisciplinary Studies of Culture