Niclas Flehmig
About
I completed my bachelor's degree in Mechanical Engineering at the Technical University of Munich (TUM) and continued my academic journey at TUM with a master's degree in Mechatronics and Robotics. During this period, I spent a semester abroad on Svalbard, where I studied Arctic Engineering at the University Centre in Svalbard (UNIS). Afterwards, I returned to Norway for a joint project between my home university (TUM) and the Norwegian University of Science and Technology (NTNU) to write my master's thesis.
During my master's, I worked with applied machine learning for technical processes, such as using Gaussian processes for quality assurance in the automotive industry and investigating the applicability of Gaussian processes for predictive maintenance in the Norwegian fish farming industry.
Currently, I am part of the SUBPRO-Zero project at NTNU as a PhD candidate. My research topic is incorporating AI in safety-critical systems.
Competencies
Research
My research focuses on how we can incorporate AI systems into safety-critical systems. For me, the question is not whether we can use AI systems or not. There are plenty of applications where AI systems can be helpful, such as medical image segmentation, predictive maintenance for technical assets, or enhancing safety in the process industry by predicting leakage of liquid hydrogen. My research aims more at how we can ensure that an AI system operates in a safe and reliable manner during deployment. We address this complex challenge of AI safety and reliability through several key measures, such as monitoring and system design.
The objectives for this research are:
- Investigating the current state: Based on ISO/IEC TR 5469, we identify challenges for AI in safety-critical systems and take a first look at possible solutions.
- Monitoring of AI systems: Just as for parts of a machine, we want to know what is going on in and around the AI system. We therefore want a monitoring tool that tells us something about the inputs, the model itself, and the outputs. This can help the operator control the AI system and avoid accidents (see the sketch after this list).
- Building a safer system design: One way to increase safety is to set up an architecture around the AI that enhances its safety through fault-tolerant design and fail-safe mechanisms; even a reliable AI system does not by itself result in a safe and reliable overall system.
- Impacts and benefits: In addition to these technical aspects, we want to evaluate both the potential benefits and the potential downsides of AI in this field.
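To illustrate the monitoring and fail-safe ideas above, here is a minimal, hypothetical Python sketch (not taken from any of our publications): an input plausibility check, a model-confidence check, and a fallback to a conservative safe action when either check fails. The class name, thresholds, and safe action are placeholder assumptions for illustration only.

```python
import numpy as np

class MonitoredModel:
    """Illustrative wrapper: monitor the inputs and outputs of an AI model
    and fall back to a conservative safe action when a check fails.
    Thresholds and the safe action are hypothetical placeholders."""

    def __init__(self, model, input_low, input_high, conf_threshold=0.8, safe_action=0.0):
        self.model = model                       # any object with predict(x) -> (value, confidence)
        self.input_low = np.asarray(input_low)   # plausible lower bounds per input feature
        self.input_high = np.asarray(input_high) # plausible upper bounds per input feature
        self.conf_threshold = conf_threshold     # minimum acceptable model confidence
        self.safe_action = safe_action           # conservative fallback output

    def input_ok(self, x):
        # Input monitoring: reject inputs outside the plausible operating range
        x = np.asarray(x)
        return bool(np.all(x >= self.input_low) and np.all(x <= self.input_high))

    def predict(self, x):
        if not self.input_ok(x):
            return self.safe_action, "fallback: input out of range"
        value, confidence = self.model.predict(x)
        # Output monitoring: reject low-confidence predictions
        if confidence < self.conf_threshold:
            return self.safe_action, "fallback: low confidence"
        return value, "ok"


class DummyModel:
    """Stand-in for a trained model, used only to run the sketch."""
    def predict(self, x):
        return float(np.mean(x)), 0.9

wrapped = MonitoredModel(DummyModel(), input_low=[0, 0], input_high=[1, 1])
print(wrapped.predict([0.2, 0.4]))  # normal operation: model output is passed through
print(wrapped.predict([5.0, 0.4]))  # out-of-range input: safe action is returned instead
```

The design choice mirrors the fail-safe idea in the list above: the AI model is never trusted unconditionally, and the surrounding architecture decides whether its output may be used.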
Publications
2024
- Flehmig, Niclas; Lundteigen, Mary Ann; Yin, Shen. (2024) Implementing Artificial Intelligence in Safety-Critical Systems during Operation: Challenges and Extended Framework for a Quality Assurance Process. Academic chapter/conference paper, part of book/report.
Outreach
2024
-
Lecture: Lundteigen, Mary Ann; Myklebust, Thor; Flehmig, Niclas. (2024) AI and Functional safety – Pain or gain or both? ISA SAFESEC event, online, 2024-10-09.