REMODEL conference
This conference is open to researchers with an association to the EU REMODEL project.
Date: 11-13 May 2026 (arrival recommended on Sunday, 10 May; the conference ends in the early afternoon on Wednesday).
Location: Scandic Nidelven
Arrival information:
From the airport, there are three ways to reach the hotel:
The cheapest: You can take the train or local buses between the airport and Scandic Nidelven. Public transport can run irregularly on Sundays, so please check AtB to plan your route; tickets can be purchased in the AtB app. All of your travel will be in Zone A, and the journey costs 47 NOK. If possible, we recommend taking the R70 train and sitting on the right-hand side (for the view).
The easiest: You can take the airport express bus directly from the airport to Scandic Nidelven (alighting at Solsiden). Tickets can be purchased in advance or when boarding the bus. The bus runs regularly and costs 225-275 NOK, depending on the ticket type; booking in advance is cheaper.
The fastest: You can take a taxi from the airport. This typically costs 600-900 NOK, but can be more expensive in the evening. If travelling by taxi, we recommend using a kiosk to get a fixed quote for your journey. You may also wish to use a shared taxi, which is significantly cheaper but must be prebooked.
Places to eat:
Grano / Una (good pizza), Osteria Moderna (Italian food), Mat fra Hagen (nice vegan buffet, closes at 8pm), Burger.no (burger restaurant), Trønderburger (smash burgers), Brasilia (buffet for meat lovers, a little expensive), Himalayan Oven (Nepalese), EGON (Norwegian chain restaurant), Gola (ice cream and occasional Argentinian empanadas to take away).
Schedule
The following schedule is preliminary and may be subject to minor changes.
Day 0 (Sunday)
Registration
Day 1
08.15-08.45 | Registration
08.50-09.00 | Opening
Recent Developments in Scientific Machine Learning: Bridging Mathematical Structure and Data-Driven Models
In this talk, we present an overview of recent developments in SciML from a mathematical perspective. We highlight key methodological advances, including physics-informed neural networks, operator learning, hybrid modeling, and reduced-order approaches, and discuss their respective strengths and limitations. Particular attention is given to emerging alternatives to classical end-to-end training, such as structure-preserving and continuous-time neural models, as well as recent approaches that avoid gradient-based optimization altogether and enable fast and accurate learning of dynamical systems.
We further discuss recent work on the systematic construction of neural network architectures for nonlinear dynamical systems, based on operator-theoretic ideas such as Koopman representations and model reduction, leading to modular, interpretable, and theoretically grounded approaches.
Finally, we outline key open problems and future directions, emphasizing the need for deeper theoretical understanding, scalable algorithms, and robust validation in real-world applications. In line with the goals of the REMODEL network, we argue that mathematical analysis will play a central role in ensuring reliability, interpretability, and practical impact of scientific machine learning.
Neural differential equations for pathwise inference and mixed precision training
Incorporating machine learning into conservative discretisations
Sparse identification of port-Hamiltonian systems from noisy data
Learning discrete forced Euler-Lagrange dynamics from data
The dynamics are decomposed into conservative and non-conservative components, which are learned separately using neural networks. In the absence of external forces, the method reduces to a variational discretization of the action principle, naturally preserving the symplectic structure of the underlying system.
In addition, we extend this framework to account for systems whose configurations evolve on Lie groups, such as SO(n) and SE(n). By incorporating the geometric structure of these manifolds directly into the learning formulation, we obtain models that respect the underlying group constraints and provide improved physical consistency for rotational and rigid body dynamics.
Greedy Learning to Optimize with Convergence Guarantees
Exact conservation laws for neural network integrators of dynamical systems
Reliable DRM Training for PDEs
In this talk we analyse the optimisation structure of PINN and DRM losses and identify two fundamental issues: ill-conditioning induced by activation function choices and the non-convex energy landscape generated by the loss functional. By decomposing the network into simpler mathematical components, we show how these issues can be addressed through activation function specific preconditioning and a separation of the training of weights and biases. Specifically, we interpret the biases as knot locations in a piecewise linear basis, and make use of equidistribution theory to derive theoretically justified bias placement whilst training weights independently. We demonstrate this approach for shallow, fully connected networks with ReLU style activation functions applied to 1D and 1+1D PDEs, and benchmark performance on challenging test problems including convection-dominated equations and exponential boundary layer problems.
Equidistribution-based training of Univariate Free Knot Splines and ReLU Neural Networks
However, the FKS representation both remains well-conditioned as the number of knots increases and can be highly accurate if the knots are correctly placed. We leverage the theory of optimal piecewise linear interpolants to improve the training procedure for both an FKS and a ReLU NN. For the FKS we propose a novel two-level training procedure: we first solve the nonlinear problem of finding the optimal knot locations of the interpolating FKS using an equidistribution approach, and then solve the nearly linear, well-conditioned problem of finding the optimal weights and knots of the FKS.
The training of the FKS gives insights into how we can train a ReLU NN effectively to give an equally accurate approximation. To do this we combine the training of the ReLU NN with an equidistribution-based loss to find the breakpoints of the ReLU functions; this is then combined with preconditioning the ReLU NN approximation (to take an FKS form) to find the scalings of the ReLU functions. This procedure leads to a fast, well-conditioned and reliable method of finding an accurate shallow ReLU NN approximation to a univariate target function. The method avoids spectral bias and is highly effective for a wide variety of functions. We test it on a series of regular, singular, and rapidly varying target functions and obtain good results, realising the expressivity of the shallow ReLU network in all cases. We conclude that in the shallow case, to gain full expressivity for the ReLU NN we must both find the optimal breakpoints (by equidistribution) and precondition the problem of finding the optimal coefficients. We then extend our results to more general activation functions and to deeper networks.
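As a rough illustration of the equidistribution idea described in this abstract (not the speakers' actual code), knots of a piecewise linear interpolant can be placed by equidistributing a monitor function and inverting its cumulative integral. The monitor choice and the tanh test function below are assumptions made for illustration only.

```python
import numpy as np

def equidistribute_knots(f, n_knots, a=0.0, b=1.0, n_fine=2001, eps=1e-8):
    """Place knots so a curvature-based monitor m = (eps + |f''|)^(1/3)
    is equidistributed (a standard choice for piecewise linear interpolation)."""
    x = np.linspace(a, b, n_fine)
    fx = f(x)
    # approximate second derivative with finite differences
    d2 = np.gradient(np.gradient(fx, x), x)
    m = (eps + np.abs(d2)) ** (1.0 / 3.0)
    # cumulative integral of the monitor (trapezoidal rule)
    M = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    # invert M at equally spaced levels: equal monitor "mass" per interval
    levels = np.linspace(0.0, M[-1], n_knots)
    return np.interp(levels, M, x)

# Hypothetical target with a sharp interior layer at x = 0.5;
# the equidistributed knots cluster inside the layer.
f = lambda x: np.tanh(50.0 * (x - 0.5))
knots = equidistribute_knots(f, 16)
```

Interpolating `f` linearly on these knots typically gives a much smaller maximum error than interpolating on 16 uniform knots, which is the effect the two-level FKS training exploits.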
Applications, funding opportunities and industrial collaboration
- Introductions by Natalie Søyseth, Asgeir Sørensen and Wil Schilders
Day 2
Geometric integration of the Brinkman penalization method for Hamiltonian PDEs
Some topics in structure-preserving deep learning
Conformal Symplectic Neural Flows for Learning Multiple Energy-Dissipative Systems
Learned regularisation methods for light-sheet fluorescence microscopy
Image quality in light-sheet fluorescence microscopy (LSFM) is often hampered by spatially varying blur and mixed Poisson-Gaussian noise. While physics-based frameworks [Toader et al., 2022] account for these statistics, they often fail to capture the complex structural variability of biological specimens. Conversely, "unrolled" deep learning methods such as the Richardson-Lucy network (RLN) of Li et al. [2022] offer high-quality image restoration but lack flexibility, requiring costly retraining whenever imaging modalities or system characteristics change.
To address these challenges, we propose a modular reconstruction framework that decouples the physical imaging model from the learned image prior. Built upon structured primal-dual optimisation schemes such as PD3O [Yan, 2018] and non-linear PDHG [Valkonen, 2014], our method integrates a learned gradient-step denoiser [Hurault et al., 2022] to enforce structural consistency without being "baked into" the forward model. This architecture ensures the learned component remains hardware-agnostic; the same denoiser can be reused across LSFM, widefield, and confocal imaging by simply updating the forward operator. We demonstrate superior performance on both synthetic and experimental biological datasets, showcasing a robust, versatile approach to high-fidelity microscopy reconstruction.
Learning Nambu Dynamical Systems
Gradient flows for template-based reconstruction
Approximation theory for 1-Lipschitz ResNets
When is a System Discoverable from Data? Discovery Requires Chaos
More concretely, we show that systems chaotic on their entire domain are discoverable from a single trajectory within the space of continuous functions, and systems chaotic on a strange attractor are analytically discoverable under a geometric condition on the attractor. As a consequence, we demonstrate for the first time that the classical Lorenz system is analytically discoverable. Moreover, we establish that analytic discoverability is impossible in the presence of first integrals, common in real-world systems. These findings help explain the success of data-driven methods in inherently chaotic domains like weather forecasting, while revealing a significant challenge for engineering applications like digital twins, where stable, predictable behavior is desired. For these non-chaotic systems, we find that while trajectory data alone is insufficient, certain prior physical knowledge can help ensure discoverability. These findings warrant a critical re-evaluation of the fundamental assumptions underpinning purely data-driven discovery.
How many measurements are enough? Bayesian recovery in inverse problems with generative priors
Panel discussion
In the panel:
- Chris Budd
- Takaharu Yaguchi
- Carola Schönlieb
- Karen Veroy-Grepl
- Ben Adcock
Conference dinner
Address: Brattørkaia 13B, 7010 Trondheim
Details:
- 10-minute walk from the hotel.
- 5-course dinner including meat and fish.
- Dietary restrictions given in the registration form are known to the restaurant, and alternatives will be provided.
- Drinks are not included, but can be ordered from the bar.
Day 3
Efficient Greedy Sampling in Model Order Reduction
Image Analysis and Deep Learning via PDEs on Lie groups
We present image analysis applications showing the benefits of PDE-G-CNNs over G-CNNs: increased performance along with a vast reduction in network parameters and training data. This is beneficial in small and mid-size network regimes, but the feature maps require too much memory in the big-data regime. To tackle this memory bottleneck we are now exploring several approaches, including Gaussian mixture models (EM-GMM), splatting, geometric transformers, and neural fields on the Lie group.
Connected Components on Lie Groups with Examples in Multi-Orientation Image Analysis
During this talk, we will discuss how one can identify connected components in images that allow for small interruptions within the same component. The presented method works on any Lie group with a left-invariant distance, but we use examples of the lifted space of positions and orientations SE(2), allowing us to differentiate between crossings and bifurcations.
Nonexpansive Neural Networks on Hyperbolic Manifolds
Learning Dynamical Symmetries from Poorly Conditioned Data
Organisers
- Jacob Goodman, Researcher, Department of Mathematical Sciences, jacob.goodman@ntnu.no
- James Jackaman, Postdoctoral Fellow, Department of Mathematical Sciences, james.jackaman@ntnu.no, +47-73412396
- Håkon Noren Myhr, PhD Candidate, Department of Mathematical Sciences, hakon.noren@ntnu.no
- Brynjulf Owren, Professor, Department of Mathematical Sciences, brynjulf.owren@ntnu.no, +47-73593518 / +4793021641
Related events
If you are attending this conference, you may also be interested in the related meeting: SCML2026.
Organised in synergy with the broader REMODEL network, this meeting focuses on Scientific Computing and Machine Learning.
- Dates: 14-17 September 2026
- Location: University of Bath, UK
- Read more and register: https://scml.jp