Prominent work by our fellows
Learn about NorwAI’s productive fellows
The Norwegian Research Center for AI Innovation (NorwAI) is funding and educating 25 PhD candidates and postdoctoral researchers during its lifetime, and five years into the period a number of fellows have achieved their goals. Scientific articles and accepted conference papers form an important part of their training and are a prerequisite for completing doctoral theses. In 2025, the center has published (preliminary figures) four journal articles and thirteen conference papers.
Here are some highlights of the research conducted by our new AI scholars:
When recommendations discriminate
Bjørnar Vassøy's work focuses on uncovering weaknesses in today's recommendation systems. We constantly receive automated recommendations, and many people follow them without carefully considering alternatives or consequences.
But are the recommendations always relevant, and are they fair so that all user groups receive equally good suggestions? Or could it be that two job seekers who are the same in every way, except for gender, can be recommended different career paths?
Bjørnar Vassøy, a PhD candidate at NTNU, has investigated these questions in his doctoral thesis, which he submitted in the fall of 2025.
He has developed methods to detect systematic biases and proposed solutions to address these unfairness issues. He is scheduled to defend his doctoral thesis in the spring of 2026.

Evaluation of conversational agents
Weronika Łajewska and Nolwenn Barnard at the University of Stavanger worked on conversational agents. The two PhD candidates, both funded by NorwAI, defended their doctoral theses this summer.
The purpose of their work can be summarized as follows: the development of conversational agents requires not only research on transparent and grounded methods for response generation, but also reliable and reproducible methods for evaluating these interactive systems.
What was achieved
The research has established methods for building systems that are both reliable and scalable. By ensuring transparent responses, user trust is increased and the risk of misinformation is minimized. At the same time, the use of user simulation makes it possible to test the systems faster and more cost-effectively than with manual testing. The developed resources, which include both high-quality test collections and model implementations, have been made publicly available. The result is a more sustainable development process that produces higher-quality responses.


When AI is introduced and changes everyday work
A team of PhD candidates at NTNU, led by Associate Professor Nhien Nguyen, studies how organizations implement artificial intelligence and how employees adapt to the changes it brings. The findings show a learning process that requires coordination, reflection and responsible practice.
Alae Ajraoui’s doctoral research, including a case study of six Norwegian companies and a literature review, has identified how the process unfolds across strategic, tactical and operational levels, including how organizational learning develops along the way.
Serinha Murgorgo’s project gives organizations a concrete path to operationalizing ethical AI, an area where they need clear guidance. She is developing a framework in which policies, roles and resources enable processes and routines such as system design, risk assessments and audits, supported by a culture of communication and training.
Jessica Steppe explores how employees can meaningfully collaborate with generative AI to increase creativity in Norwegian organizations. Fifty-five interviews with knowledge workers show that employees can increase creativity when they start with concrete ideas and develop them further by seeking out different perspectives.
Nhien Nguyen herself has contributed to a study on how clinicians respond when AI predictions conflict with their own professional judgment. The work reveals that opaque AI predictions can trigger doubt, while explainable systems foster more productive dialogue, better judgment, and increased trust in AI. The study concludes that explainability strengthens collaboration between humans and AI in high-risk clinical settings, but that overreliance on AI output can gradually weaken critical reflection.
New method for time series analysis
PhD student Abdul-Kazeem Shamba at NTNU authored a scientific paper on contrastive learning for time series that was accepted into the ECAI 2025 main track. The conference, held in Bologna, Italy, in October, is regarded as the largest European gathering for prominent AI researchers to exchange knowledge.
Time series analysis examines time-dependent patterns (trend, seasonality, autocorrelation) to model and interpret data observed over time. It lets practitioners examine data more thoroughly and uncover meaningful patterns and likely outcomes. Choosing a method that yields the right model is crucial for gaining useful insights and filtering out redundant or repetitive data points.
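To make the notions of trend, seasonality and autocorrelation concrete, here is a minimal Python sketch on invented monthly data. It is a generic illustration, not code from Shamba's paper: the series, window lengths and lags are all assumptions chosen for readability.

```python
# Illustration only: decompose a synthetic monthly series into trend and
# seasonal parts, and measure autocorrelation with pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.date_range("2020-01", periods=48, freq="MS")
trend = np.linspace(10, 20, len(months))             # slow upward drift
season = 3 * np.sin(2 * np.pi * months.month / 12)   # yearly cycle
noise = rng.normal(0, 0.5, len(months))
series = pd.Series(trend + season + noise, index=months)

# Trend estimate: centred 12-month moving average
trend_est = series.rolling(window=12, center=True).mean()

# Seasonal estimate: average deviation from trend per calendar month
seasonal_est = (series - trend_est).groupby(series.index.month).mean()

# Autocorrelation at a 12-month lag reveals the yearly pattern
print(f"lag-12 autocorrelation: {series.autocorr(lag=12):.2f}")
```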
Kazeem’s “Contrast All The Time” method is designed to provide better downstream forecasts, and it exploits the dynamics between temporally similar moments more efficiently and effectively than existing methods.
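For readers unfamiliar with the general idea, the sketch below shows a standard contrastive setup for time series: temporally adjacent windows are treated as positive pairs and pulled together in embedding space with an InfoNCE-style loss. This is a generic textbook-style example with made-up data and an arbitrary encoder, not the "Contrast All The Time" implementation.

```python
# Generic contrastive learning on time series (illustration only; NOT the paper's method).
# Windows starting at t and t+1 form positive pairs; other windows in the batch are negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a univariate window of length 64 to a normalized 32-dim embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss: each anchor should be most similar to its own positive."""
    logits = anchor @ positive.T / temperature   # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0))       # diagonal entries are the true pairs
    return F.cross_entropy(logits, targets)

# Toy batch: windows at t and their immediate neighbours at t+1 from a random series
series = torch.randn(4096)
starts = torch.randint(0, len(series) - 65, (32,)).tolist()
anchors = torch.stack([series[s:s + 64] for s in starts]).unsqueeze(1)
positives = torch.stack([series[s + 1:s + 65] for s in starts]).unsqueeze(1)

encoder = Encoder()
loss = info_nce(encoder(anchors), encoder(positives))
loss.backward()  # in practice this gradient would drive an optimizer step
print(f"contrastive loss: {loss.item():.3f}")
```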

2025-12-16



