News & Events

Events

Friday AI Webinar October 29: Towards smart and autonomous industry

Welcome to Friday AI Webinar, hosted by Norwegian Open AI Lab & NorwAI: Towards smart and autonomous industry

 

One week after the conference and hackathon days, we’re happy to invite you to an open webinar on Friday, October 29th. In this webinar, you’ll learn about the hackathon challenge, and the winning team will pitch their solution to it. After that, Stein H. Danielsen from Cognite will give a talk about Cognite’s mission to work towards smart and autonomous industry.

  • Date: October 29, 2021
  • Time: 13:00-14:00

Information and registration


Trustworthy Complex and Intelligent Systems Webinar Series


This series is a collaboration between the European Safety, Reliability & Data Association (ESReDA), the ETH Zürich Chair of Intelligent Maintenance Systems, the ETH Risk Center, ETH Zürich-SUSTech Institute of Risk Analysis, Prediction and Management (Risks-X), the Norwegian Research Center for AI Innovation (NorwAI) and DNV.

Webinars will run monthly throughout 2021, exploring the themes of trust, ethics and applications of AI and novel technology in complex and safety-critical intelligent systems.

All webinars are FREE online meetings via Zoom.

Upcoming webinar

Date: TBA

Speaker: TBA

Topic: TBA


Previous events

Previous webinars in the Trustworthy Complex and Intelligent Systems Webinar Series

 

Speaker: Maziar Raissi, University of Colorado Boulder

Maziar Raissi is an Assistant Professor of Applied Mathematics at the University of Colorado Boulder. Dr. Raissi received a Ph.D. in Applied Mathematics & Statistics, and Scientific Computations from the University of Maryland, College Park, and then moved to Brown University to carry out postdoctoral research in the Division of Applied Mathematics. Before moving to Boulder, he worked at NVIDIA in Silicon Valley for a little more than a year as a Senior Software Engineer. His expertise lies at the intersection of probabilistic machine learning, deep learning, and data-driven scientific computing. In particular, he has been actively involved in the design of learning machines that leverage underlying physical laws and/or governing equations to extract patterns from high-dimensional data generated from experiments.

Topic: Data-Efficient Deep Learning using Physics-Informed Neural Networks

A grand challenge with great opportunities is to develop a coherent framework for blending conservation laws, physical principles, and/or phenomenological behaviors expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computing, this work pursues the overall vision of harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines that can operate in complex domains without requiring large quantities of data. To materialize this vision, the work explores two complementary directions:

  1. designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and non-linear differential equations, to extract patterns from high-dimensional data generated from experiments, and
  2. designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.
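The first direction combines a data-misfit term with a differential-equation residual in a single objective. As a rough, hypothetical illustration of that idea (my own sketch, not the speaker's code: the polynomial surrogate, the decay equation u' = -λu, and all weights are invented for the example), the snippet below fits a few noisy observations while also penalising the ODE residual at collocation points:

```python
import numpy as np

# Toy "physics-informed" regression: fit u(t) = sum_k c_k t^k to sparse,
# noisy data from u' = -lam * u while penalising the ODE residual
# u'(t) + lam*u(t) at dense collocation points. Both terms are linear in
# the coefficients c, so one least-squares solve replaces training.

lam = 1.0                      # known decay rate in u' = -lam * u
deg = 6                        # degree of the polynomial surrogate
rng = np.random.default_rng(0)

# Sparse, noisy data: u(t) = exp(-lam * t) observed at 4 points.
t_data = np.array([0.0, 0.3, 0.6, 1.0])
u_data = np.exp(-lam * t_data) + 0.01 * rng.standard_normal(4)

# Dense collocation points where the physics residual is enforced.
t_col = np.linspace(0.0, 1.0, 50)

def design(t):
    """Rows [1, t, t^2, ...] evaluating the polynomial."""
    return np.vander(t, deg + 1, increasing=True)

def d_design(t):
    """Rows [0, 1, 2t, 3t^2, ...] evaluating its derivative."""
    powers = np.arange(deg + 1)
    V = np.vander(t, deg + 1, increasing=True)
    return np.hstack([np.zeros((len(t), 1)), V[:, :-1] * powers[1:]])

# Stack data equations (u(t_i) ~ u_i) with physics equations
# (u'(t_j) + lam*u(t_j) ~ 0); w weights the physics block.
w = 1.0
A = np.vstack([design(t_data), w * (d_design(t_col) + lam * design(t_col))])
b = np.concatenate([u_data, np.zeros(len(t_col))])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u_fit = design(np.array([1.0])) @ coef
print(float(u_fit[0]), np.exp(-1.0))  # fitted vs exact u(1)
```

Because both loss terms are linear in the coefficients here, ordinary least squares stands in for the gradient-based training a real physics-informed neural network would use; the blending of data and equations is the same.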
Speaker: Enrico Zio, CRC MINES ParisTech, France & Politecnico di Milano, Italy

Enrico Zio is a full professor at the Centre for Research on Risk and Crises (CRC) of MINES ParisTech, PSL University, France, and a full professor and President of the Alumni Association at Politecnico di Milano, Italy.

His research focuses on modeling the failure-repair-maintenance behavior of components and complex systems for the analysis of their reliability, maintainability, prognostics, safety, vulnerability, resilience and security characteristics, and on the development and use of Monte Carlo simulation methods, artificial intelligence techniques and optimization heuristics.

In 2020, he was awarded the prestigious Humboldt Research Award from the Alexander von Humboldt Foundation in Germany.

Topic: Prognostics and Health Management for Condition-based and Predictive Maintenance: A Look In and a Look Out

A number of Prognostics and Health Management (PHM) methods have been developed (and more are being developed) for use in diverse engineering applications. Yet, a number of critical problems still impede the full deployment of PHM and its benefits in practice. In this lecture, we look in on some of these PHM challenges and look out to advancements for PHM deployment.

 

Speaker: Øyvind Smogeli, CTO Zeabuz

Øyvind Smogeli is the CTO and co-founder of Zeabuz and an Adjunct Professor at NTNU. Øyvind received his PhD from NTNU in 2006 and has spent his career working on modeling, simulation, testing and verification of complex cyber-physical systems, and on the assurance of digital technologies. He has previously held positions as CTO, COO and CEO of Marine Cybernetics and as Research Program Director for Digital Assurance at DNV.

Topic: Zeabuz: Providing trust in a zero emission autonomous passenger ferry

Zeabuz is developing a new urban mobility system based on zero emission, autonomous passenger ferries. This endeavour comes with a huge trust challenge: how can trustworthiness be demonstrated to passengers, authorities, municipalities, and mobility system operators alike? This trust challenge has many facets and many stakeholders. There is a need to balance safety and usefulness, technical safety and perceived safety, and the needs of the various stakeholders. To address this, an assurance case is being established that can capture a wide range of claims and evidence in a structured way. This talk introduces the Zeabuz mobility concept and the autonomy architecture, then focuses on the many layers of trust and how to achieve them. The various components of the autonomy system and the simulation technology used to build trust in the autonomy are explained, and an approach to building trust in the simulators through field experiments and regular operation will be presented. Finally, it will be shown how this all fits into the larger assurance case.

Speaker: Martin Vechev, ETH Zürich

Martin Vechev is an Associate Professor at the Department of Computer Science, ETH Zürich. His work spans the intersection of machine learning and symbolic methods, with applications to topics such as safety of artificial intelligence, quantum programming and security. He has co-founded three start-ups in the space of AI and security, the latest of which, LatticeFlow, aims to build and deploy trustworthy AI models.

Topic: Certified Deep Learning

In this talk, I will discuss some of the latest progress we have made in the space of certifying AI systems, ranging from the certification of deep neural networks to entire deep learning pipelines. In the process, I will also discuss new neural architectures that are more amenable to certification, as well as mathematical impossibility and complexity results that help guide new kinds of certified training methods.
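As a toy illustration of what certifying a neural network can mean (my own sketch using interval bound propagation, one simple certification technique, and not the speaker's tools; the tiny fixed network and its weights are invented for the example):

```python
import numpy as np

# Interval bound propagation (IBP): push a box around the input through
# each layer with sound interval arithmetic, then check that every point
# in the box keeps the same predicted class as the centre point.

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds for x -> W @ x + b when lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(x, eps, layers):
    """True if every input within L-inf distance eps keeps x's class."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:                  # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # forward pass on the centre point to get the predicted class
    y = x
    for i, (W, b) in enumerate(layers):
        y = W @ y + b
        if i < len(layers) - 1:
            y = np.maximum(y, 0)
    c = int(np.argmax(y))
    # certified iff class c's lower bound beats every other upper bound
    return all(lo[c] > hi[j] for j in range(len(y)) if j != c)

# Tiny fixed 2-2-2 ReLU network for demonstration.
layers = [(np.array([[1.0, -1.0], [0.5, 1.0]]), np.array([0.0, 0.0])),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([0.0, 0.0]))]
x = np.array([1.0, 0.2])
print(certify(x, 0.05, layers))  # True: small box, prediction stable
print(certify(x, 5.0, layers))   # False: bounds too loose to certify
```

IBP is deliberately coarse; the tighter relaxations and certified-training methods the talk covers exist precisely because plain interval bounds blow up quickly with depth.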

 

Speakers: Asun Lera St.Clair, DNV & André Ødegårdstuen, DNV

Dr. Asun Lera St.Clair, philosopher and sociologist, is Director of the Digital Assurance Program in DNV Group Research and Development and Senior Advisor for the Earth Sciences unit of the Barcelona Supercomputing Center (BSC). She has over 30 years of experience designing and directing interdisciplinary, user-driven and solutions-oriented research on global challenges at the interface of sustainable development and climate change, and more recently on establishing trust in digital technologies and leveraging them for sustainable development.

André Ødegårdstuen works as a Senior Researcher at DNV, where he focuses on the assurance of machine learning. André is active in the area of computer vision for drone surveys of industrial assets and for the monitoring of animal welfare. He has a background in physics and experience from the point-of-care diagnostics industry.

Topic: Trustworthy Industrial AI Systems

Trust in AI is a major concern of many societal stakeholders. These concerns relate to the delegation of decisions to technologies we do not fully understand, to the misuse of those technologies for illegal, unethical or rights-violating purposes, and to the actual technical limitations of these cognitive technologies as we rush to deploy them into society. There is a fast-emerging debate around these questions, often framed as responsible AI, AI ethics, or explainable AI. However, there is less discussion of what should be considered a trustworthy AI system in industrial contexts. AI introduces complexity and creates digital risks.

While complexity in traditional mechanical systems is naturally limited by physical constraints and the laws of nature, complexity in integrated, software-driven systems – which do not necessarily follow well-established engineering principles – seems to easily exceed human comprehension.

In this presentation, we will unpack the idea that the trustworthiness of an AI system is not very different from that of a leader or expert to whom, or an organization to which, we delegate the authority to make decisions or provide recommendations to reach a particular goal. Similarly, we argue that AI systems should be subject to the same quality assurance methods and principles we use for any other technology.

 

Speaker: Peter Battaglia, DeepMind

Peter Battaglia is a research scientist at DeepMind, working on approaches for reasoning about and interacting with complex systems.

Topic: Structured models of physics, objects, and scenes

This talk will describe various ways of using structured machine learning models for predicting complex physical dynamics, generating realistic objects, and constructing physical scenes. The key insight is that many systems can be represented as graphs with nodes connected by edges, which can be processed by graph neural networks and transformer-based models. By considering the underlying structure of the problem and imposing inductive biases within our models that reflect it, we can often achieve more accurate, efficient, and generalizable performance than if we avoided such principled assumptions.
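The graphs-with-nodes-and-edges representation in the abstract can be made concrete with a small sketch (my own illustration, not DeepMind's code; the fixed matrices stand in for the learned message and update functions of a real graph network):

```python
import numpy as np

# One round of message passing on a tiny directed graph: each receiver
# aggregates transformed sender features, then updates its own features.

nodes = np.array([[1.0, 0.0],     # node 0 features
                  [0.0, 1.0],     # node 1 features
                  [1.0, 1.0]])    # node 2 features
edges = [(0, 1), (1, 2), (2, 0)]  # directed (sender, receiver) pairs

# Hypothetical fixed weights standing in for learned message/update MLPs.
W_msg = np.array([[0.5, 0.0], [0.0, 0.5]])
W_upd = np.array([[1.0, 0.5], [0.5, 1.0]])

def message_passing_step(h, edges):
    """One GNN step: aggregate messages at receivers, update all nodes."""
    agg = np.zeros_like(h)
    for sender, receiver in edges:
        agg[receiver] += W_msg @ h[sender]    # message along the edge
    return np.tanh((h + agg) @ W_upd.T)       # update with residual input

h1 = message_passing_step(nodes, edges)
print(h1.shape)  # (3, 2): same graph topology, updated node features
```

The inductive bias is visible in the code itself: information flows only along declared edges, and the same message and update functions are shared by every node, so the model generalizes across graphs of different sizes.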

22 January 2021

Speaker: Joseph Sifakis

Hear 2007 Turing Award winner Joseph Sifakis explain the challenges raised by the vision of trustworthy autonomous systems for the autonomous vehicle case, and outline his hybrid design approach, which combines model-based and data-based techniques and seeks trade-offs between performance and trustworthiness.

Topic: Why is it so hard to make self-driving cars? (Trustworthy autonomous systems)

Why is the problem of self-driving autonomous control so hard? Despite the enthusiastic involvement of big technology companies and the investment of billions of dollars, optimistic predictions about the realization of autonomous vehicles have yet to materialize.

Previous webinars in the NorwAI & NAIL Friday Webinar Series



Welcome to Friday AI Webinar, hosted by Norwegian Open AI Lab & NorwAI: Introduction to change detection, by Martin Tveten from Norwegian Computing Center (Norsk regnesentral)

Abstract
In this seminar, I will introduce some basic ideas underlying statistical change detection methods. Such methods are important for answering questions of the form "has some statistical property of the data changed over time?" and, if so, "when did the change(s) occur?". An important AI-related application of change detection is anomaly detection in streaming data, for example from sensor networks.

Both the offline and online versions of the change detection problem will be considered. In the offline problem, the aim is to retrospectively estimate the points in time where some statistical properties of a time series change. In the online problem, streaming data is processed in real time with the aim of detecting a change as quickly as possible. A few simple examples on real and simulated data will guide the presentation throughout, as the focus is on giving an intuition for the general methodology. I will also briefly present my own research.
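As a concrete, hypothetical example of the online setting described above (a textbook one-sided CUSUM detector of my own choosing, not necessarily among the methods covered in the talk), run on a noise-free toy stream so the alarm time is exact:

```python
# Online change detection with a one-sided CUSUM statistic: detect an
# upward shift in the mean of a stream. "drift" sets the smallest shift
# we care about; "threshold" trades detection delay against false alarms.

def cusum_online(stream, mean0=0.0, drift=0.5, threshold=5.0):
    """Return the first index at which a change is declared, or None."""
    s = 0.0
    for t, x in enumerate(stream):
        # accumulate evidence of an upward mean shift, resetting at zero
        s = max(0.0, s + (x - mean0 - drift))
        if s > threshold:
            return t
    return None

# Toy stream: mean 0 for 100 samples, then the mean jumps to 2.
stream = [0.0] * 100 + [2.0] * 100
print(cusum_online(stream))  # 103: alarm a few samples after the change
```

With noisy data the same trade-off appears as a distribution: a lower threshold detects changes sooner but raises more false alarms before the change point.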

About the speaker

Martin Tveten is a research scientist at the Norwegian Computing Center and a former PhD student at the Department of Mathematics, University of Oslo, specialising in methods and algorithms for change and anomaly detection. His current applied interests include real-time monitoring of IT and industrial systems.

AI Workshop at womENcourage Pre-Event


About: The workshop will bring together researchers interested in Artificial Intelligence. It is co-located with the womENcourage satellite pre-event organised by the Better Balance in Informatics (BBI) and IDUN projects promoting Women in Computer Science.

When: Tuesday, 21st of September 2021

Where: Radisson Blu Hotel, Tromsø

More information 



The NorwAI Innovate conference will be held for the very first time on 20-21 October 2021 in Trondheim.

Learn more on the conference website