REMODEL conference
This conference is open to researchers with an association to the EU REMODEL project.
Date: 11-13 May 2026 (arrival recommended on Sunday 10 May; the conference will end in the early afternoon on Wednesday 13 May).
Location: Scandic Nidelven, Trondheim, Norway
Schedule
The following schedule is preliminary and may be subject to minor changes.
Day 1
09.00-09.45 | Wil Schilders
Recent Developments in Scientific Machine Learning: Bridging Mathematical Structure and Data-Driven Models
Abstract: Scientific Machine Learning (SciML) has emerged as a new paradigm at the interface of scientific computing and machine learning, combining the rigor of physics-based modeling with the flexibility of data-driven approaches. While this synthesis has led to impressive progress in modeling, simulation, and inference, it also raises fundamental challenges related to data scarcity, physical consistency, high dimensionality, and computational cost.
In this talk, we present an overview of recent developments in SciML from a mathematical perspective. We highlight key methodological advances, including physics-informed neural networks, operator learning, hybrid modeling, and reduced-order approaches, and discuss their respective strengths and limitations. Particular attention is given to emerging alternatives to classical end-to-end training, such as structure-preserving and continuous-time neural models, as well as recent approaches that avoid gradient-based optimization altogether and enable fast and accurate learning of dynamical systems.
We further discuss recent work on the systematic construction of neural network architectures for nonlinear dynamical systems, based on operator-theoretic ideas such as Koopman representations and model reduction, leading to modular, interpretable, and theoretically grounded approaches.
Finally, we outline key open problems and future directions, emphasizing the need for deeper theoretical understanding, scalable algorithms, and robust validation in real-world applications. In line with the goals of the REMODEL network, we argue that mathematical analysis will play a central role in ensuring reliability, interpretability, and practical impact of scientific machine learning.
09.45-10.30 | Nicole Yang
Neural differential equations for pathwise inference and mixed precision training
Abstract: In this talk, we consider continuous-time learning schemes through neural differential equations. We first discuss the inference of stochastic dynamical systems through only (nonlinear) noisy measurements. Building on a stochastic control formulation, we construct a generative model that maps the reference measure to the posterior measure through variational inference of a controlled diffusion process. This enables efficient generation of data-assimilated trajectories with applications in filtering and system identification. In the second part of the talk, we discuss a mixed-precision explicit ODE solver and a custom backpropagation scheme and show their effectiveness in a range of learning tasks. Our scheme uses low-precision computations for evaluating the velocity, parameterized by the neural network, while stability is provided by a custom dynamic adjoint scaling and by accumulating the solution and gradients in higher precision.
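A minimal sketch of the mixed-precision idea from the second part, assuming a NumPy setting and a hypothetical velocity function (an illustration of the principle, not the speaker's solver): the velocity is evaluated in float16 while the solution is accumulated in float64.

import numpy as np

def mixed_precision_euler(velocity, y0, t0, t1, n_steps):
    # Integrate y' = velocity(t, y) with explicit Euler: the velocity
    # (in practice a neural network) is evaluated in low precision,
    # while the solution is accumulated in high precision.
    h = (t1 - t0) / n_steps
    y = np.asarray(y0, dtype=np.float64)  # high-precision accumulator
    t = t0
    for _ in range(n_steps):
        v = velocity(np.float16(t), y.astype(np.float16)).astype(np.float64)
        y = y + h * v  # accumulate the update in float64
        t += h
    return y

# Toy usage: harmonic oscillator with state y = (q, p)
f = lambda t, y: np.array([y[1], -y[0]], dtype=np.float16)
print(mixed_precision_euler(f, [1.0, 0.0], 0.0, 2 * np.pi, 1000))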
10.30-10.50 | Coffee break
10.50-11.10 | Zhongtian Sun
TBA
Abstract: TBA
11.10-11.30 | Håkon Noren Myhr
Sparse identification of port-Hamiltonian systems from noisy data
Abstract: We propose sparse identification of port-Hamiltonian systems (SIPHy), enabling structure-preserving symbolic regression from noisy trajectory observations. Port-Hamiltonian systems provide a general framework for describing dynamical systems in terms of energy exchange, dissipation and control. Here, we present an algorithm to jointly identify the Hamiltonian as well as the dissipation and input magnitudes. One of the main challenges of system identification for differential equations is to approximate derivatives of trajectory data that are corrupted by noise or have missing points in time. To achieve this, we introduce Hamiltonian flow splines, which assemble flows of piecewise polynomial Hamiltonians to produce the smooth, differentiable trajectory needed for sparse regression. Combining flow splines with SIPHy provides a robust method for system identification that handles both high levels of noise and missing data for a range of dynamical systems with dissipation and control.
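For readers unfamiliar with sparse system identification, a generic sequentially thresholded least-squares step in the spirit of SINDy is sketched below (the SIPHy algorithm itself additionally enforces port-Hamiltonian structure and uses Hamiltonian flow splines; all names here are illustrative):

import numpy as np

def thresholded_least_squares(Theta, dX, lam=0.1, n_iter=10):
    # Sparse regression: find a sparse coefficient matrix Xi with
    # Theta @ Xi ~ dX, where Theta holds candidate basis functions
    # evaluated on the data and dX holds estimated time derivatives.
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < lam        # prune small coefficients
        Xi[small] = 0.0
        for k in range(dX.shape[1]):    # refit only the active terms
            active = ~small[:, k]
            if active.any():
                Xi[active, k], *_ = np.linalg.lstsq(
                    Theta[:, active], dX[:, k], rcond=None)
    return Xi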
11.30-11.50 | Martine Dyring Hansen
Learning discrete forced Euler-Lagrange dynamics from data
Abstract: We introduce a data-driven framework for learning the equations of motion of mechanical systems directly from position measurements. This setting is particularly relevant in system identification tasks where only positional information is available, such as motion capture, pixel observations, or low-resolution tracking. Our approach is based on the discrete Lagrange–d’Alembert principle and the forced discrete Euler–Lagrange equations, enabling the construction of physically grounded models from discrete data without requiring velocity measurements.
The dynamics are decomposed into conservative and non-conservative components, which are learned separately using neural networks. In the absence of external forces, the method reduces to a variational discretization of the action principle, naturally preserving the symplectic structure of the underlying system.
In addition, we extend this framework to account for systems whose configurations evolve on Lie groups, such as SO(n) and SE(n). By incorporating the geometric structure of these manifolds directly into the learning formulation, we obtain models that respect the underlying group constraints and provide improved physical consistency for rotational and rigid body dynamics.
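For reference, the forced discrete Euler–Lagrange equations underlying this framework take the standard Marsden–West form: given a discrete Lagrangian L_d and discrete forces f_d^{\pm}, the position sequence (q_k) satisfies

D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) + f_d^{+}(q_{k-1}, q_k) + f_d^{-}(q_k, q_{k+1}) = 0, \qquad k = 1, \dots, N-1,

where D_1 and D_2 denote derivatives with respect to the first and second argument; with the forces set to zero this reduces to the discrete Euler–Lagrange equations of a variational (symplectic) integrator.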
11.50-13.30 | Lunch
13.30-13.50 | Patrick Fahy
Greedy Learning to Optimize with Convergence Guarantees
Abstract: Learning to optimize (L2O) leverages training data to accelerate solving optimization problems. Many existing methods use unrolling to parameterize update steps, but this often leads to memory limitations and a lack of convergence guarantees. We introduce a novel greedy strategy that learns iteration-specific parameters by minimizing the function value at the next step. This approach enables training over significantly more iterations while keeping GPU memory usage constant. We focus on a preconditioned Heavy Ball algorithm with multiple parameterizations, including a novel convolutional preconditioner. Our method ensures that parameter learning is no harder than solving the initial optimization problem and provides convergence guarantees under certain conditions. We test our approach on a Computed Tomography inverse problem, where our learned convolutional preconditioners outperform classical methods like Nesterov’s Accelerated Gradient and L-BFGS.
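As a rough illustration (not the authors' code), a preconditioned Heavy Ball iteration with iteration-specific parameters alpha_k, beta_k and preconditioners P_k could look as follows; in the greedy strategy described above, each triple (P_k, alpha_k, beta_k) would be trained by minimizing the function value at the next iterate.

import numpy as np

def preconditioned_heavy_ball(grad, x0, preconds, alphas, betas):
    # x_{k+1} = x_k - alpha_k * P_k(grad(x_k)) + beta_k * (x_k - x_{k-1})
    x_prev, x = x0.copy(), x0.copy()
    for P, a, b in zip(preconds, alphas, betas):
        x_next = x - a * P(grad(x)) + b * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Toy usage on the quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
K = 50
x = preconditioned_heavy_ball(grad, np.ones(2),
                              [lambda g: g] * K,   # identity preconditioner
                              [0.018] * K, [0.8] * K)
print(x)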
13.50-14.10 | Eike Mueller
Exact conservation laws for neural network integrators of dynamical systems
Abstract: We consider the construction of neural network surrogates for the solution of differential equations that describe the time evolution of physical systems. In contrast to other problems that are tackled by machine learning, in this case usually a lot is known about the system at hand: for many dynamical systems physical quantities such as (angular) momentum and energy are conserved. Learning these fundamental conservation laws from data is inefficient and will only lead to the approximate conservation of these quantities. We describe an alternative approach for incorporating inductive biases into the surrogate model. For this we use Noether's Theorem which relates conservation laws to continuous symmetries of the system and we incorporate the relevant symmetries into the architecture of the neural network Hamiltonian. We demonstrate that this leads to the exact conservation of (angular) momentum for a range of model systems that include the motion of a particle under Newtonian gravity, orbits in the Schwarzschild metric and two interacting particles in four dimensions. Our numerical results show that the solution conserves the relevant quantities exactly, is more accurate and does not suffer from instabilities that arise when using naive neural network surrogates.
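As a reminder of the mechanism (the standard Noether argument, in our notation): if the Hamiltonian is rotationally invariant, i.e. H depends on q and p only through |q|, |p| and q \cdot p, then the angular momentum L = q \times p is conserved along the flow, since

\frac{d}{dt}(q \times p) = \dot{q} \times p + q \times \dot{p} = \nabla_p H \times p - q \times \nabla_q H = 0.

Building such a symmetry into the network Hamiltonian therefore yields exact, rather than approximately learned, conservation.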
14.10-14.30 | Aengus Roberts
Reliable DRM Training for PDEs
Abstract: Neural Network (NN) based methods for solving differential equations, such as Physics Informed Neural Networks (PINNs) and the Deep Ritz Method (DRM), have in recent years presented an interesting and promising approach to solution approximation. However, despite their rapid growth in popularity, they lack the theoretical error bounds that guarantee accuracy in classical methods, such as Céa's lemma in the case of the Finite Element Method. In particular, a bound for the training error of these NN methods remains elusive, and the networks often fail to train with little to no indication as to why.
In this talk we analyse the optimisation structure of PINN and DRM losses and identify two fundamental issues: ill-conditioning induced by the choice of activation function, and the non-convex energy landscape generated by the loss functional. By decomposing the network into simpler mathematical components, we show how these issues can be addressed through activation-function-specific preconditioning and a separation of the training of weights and biases. Specifically, we interpret the biases as knot locations in a piecewise linear basis and use equidistribution theory to derive theoretically justified bias placement while training the weights independently. We demonstrate this approach for shallow, fully connected networks with ReLU-style activation functions applied to 1D and 1+1D PDEs, and benchmark performance on challenging test problems including convection-dominated equations and exponential boundary layer problems.
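For reference, the equidistribution principle used for bias placement requires the knots a = x_0 < x_1 < \dots < x_N = b to split the integral of a monitor function M equally:

\int_{x_i}^{x_{i+1}} M(x)\, dx = \frac{1}{N} \int_a^b M(x)\, dx, \qquad i = 0, \dots, N-1,

so that each cell carries an equal share of an estimated error density; a common choice for piecewise linear approximation is a curvature-based monitor such as M = |u''|^{1/2}.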
14.30-15.00 | Break
15.00-15.45 | Chris Budd
Equidistribution-based training of Univariate Free Knot Splines and ReLU Neural Networks
Abstract: We consider the problem of improving the accuracy, convergence, and conditioning of univariate nonlinear function approximations using (mainly) shallow neural networks (NNs) with a rectified linear unit (ReLU) activation function. The standard L2-based approximation problem is ill-conditioned, and the behaviour of the optimisation algorithms used in training these networks degrades rapidly as the width of the network increases. This can lead to significantly poorer approximation in practice than we would expect from the theoretical expressivity of the ReLU NN architecture. Univariate shallow ReLU NNs and traditional approximation methods, such as univariate Free Knot Splines (FKS), span the same function space and thus have the same theoretical expressivity.
However, the FKS representation both remains well-conditioned as the number of knots increases and can be highly accurate if the knots are correctly placed. We leverage the theory of optimal piecewise linear interpolants to improve the training procedure for both an FKS and a ReLU NN. For the FKS we propose a novel two-level training procedure: first we solve the nonlinear problem of finding the optimal knot locations of the interpolating FKS using an equidistribution approach, and then we solve the nearly linear, well-conditioned problem of finding the optimal weights of the FKS.
The training of the FKS gives insights into how we can train a ReLU NN effectively to give an equally accurate approximation. To do this we combine the training of the ReLU NN with an equidistribution-based loss to find the breakpoints of the ReLU functions; this is then combined with preconditioning the ReLU NN approximation (to take an FKS form) to find the scalings of the ReLU functions. This procedure leads to a fast, well-conditioned and reliable method of finding an accurate shallow ReLU NN approximation to a univariate target function. The method avoids spectral bias and is highly effective for a wide variety of functions. We test it on a series of regular, singular, and rapidly varying target functions and obtain good results, realising the expressivity of the shallow ReLU network in all cases. We conclude that in the shallow case, to gain full expressivity for the ReLU NN we must both find the optimal breakpoints (by equidistribution) and precondition the problem of finding the optimal coefficients. We then extend our results to more general activation functions and to deeper networks.
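A minimal sketch of equidistribution-based knot placement for a piecewise linear approximation, assuming the curvature monitor M = |u''|^{1/2} (the two-level FKS training described above is considerably more elaborate):

import numpy as np

def equidistributed_knots(u, a, b, n_knots, n_fine=2000, eps=1e-8):
    # Place knots so that each interval carries an equal share of the
    # integral of the monitor M = sqrt(|u''|), by inverting its CDF.
    x = np.linspace(a, b, n_fine)
    d2u = np.gradient(np.gradient(u(x), x), x)   # finite-difference u''
    M = np.sqrt(np.abs(d2u)) + eps               # regularised monitor
    cdf = np.concatenate(
        [[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    return np.interp(np.linspace(0.0, 1.0, n_knots), cdf, x)

# Usage: knots cluster near the sharp transition of u(x) = tanh(50 x)
print(equidistributed_knots(lambda x: np.tanh(50 * x), -1.0, 1.0, 21))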
15.45-16.00 | Break
16.00-17.00 | Discussion
Day 2
09.00-09.45 | Takaharu Yaguchi
Geometric integration of the Brinkman penalization method for Hamiltonian PDEs
Abstract: The Brinkman penalization method is a technique designed to simplify the numerical calculation of wave phenomena in complex domains. This method avoids the need for complex meshing by adding a damping term to the equations in subdomains where obstacles exist. Due to this damping term, the resulting equations are conservative in some regions and dissipative in others. In this talk, we will explain a geometric property of such equations and numerical methods that preserve it.
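Schematically (in our notation, which may differ from the speaker's), penalizing a wave equation on a domain containing an obstacle region \Omega_o adds a mask-weighted damping term:

u_{tt} = c^2 \Delta u - \frac{\chi_{\Omega_o}(x)}{\eta}\, u_t, \qquad 0 < \eta \ll 1,

where \chi_{\Omega_o} is the indicator function of the obstacle and \eta the penalization parameter: the equation is conservative where the mask vanishes and dissipative inside the obstacle, which is exactly the mixed conservative/dissipative structure addressed in the talk.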
09.45-10.30 | Carola-Bibiane Schönlieb
Some topics in structure-preserving deep learning
Abstract: I will discuss some of our recent works on structure-preserving deep learning for the design of neural networks with specific properties - such as non-expansiveness and 1-Lipschitz regularity - and their application to imaging and to the solution of partial differential equations.
10.30-10.50 | Coffee break
10.50-11.10 | Baige Xu
Conformal Symplectic Neural Flows for Learning Multiple Energy-Dissipative Systems
Abstract: Many Hamiltonian systems with energy-dissipative terms possess a conformal symplectic structure, wherein the symplectic form is proportionally preserved. In this study, we propose a method based on Conformal Symplectic Neural Networks that learns the mapping to continuous-time flows for different energy-dissipative systems by treating the Hamiltonian as input information.
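For concreteness, the prototypical conformal symplectic system is a Hamiltonian system with linear damping,

\dot{q} = \nabla_p H(q, p), \qquad \dot{p} = -\nabla_q H(q, p) - c\, p, \qquad c > 0,

whose flow contracts the symplectic form at the uniform rate e^{-ct}; preserving this proportional contraction is the structural property targeted by conformal symplectic integrators and, here, by the proposed networks.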
11.10-11.30 | Amin Sabir
Learned regularisation methods for light-sheet fluorescence microscopy
Abstract: Light-sheet fluorescence microscopy (LSFM) provides high-speed volumetric imaging with reduced phototoxicity, yet image quality is often hampered by spatially varying blur and mixed Poisson–Gaussian noise. While physics-based frameworks [Toader et al., 2022] account for these statistics, they often fail to capture the complex structural variability of biological specimens. Conversely, "unrolled" deep learning methods such as the Richardson–Lucy network (RLN) of Li et al. [2022] offer high-quality image restoration but lack flexibility, requiring costly retraining whenever imaging modalities or system characteristics change.
To address these challenges, we propose a modular reconstruction framework that decouples the physical imaging model from the learned image prior. Built upon structured primal-dual optimisation schemes such as PD3O [Yan, 2018] and non-linear PDHG [Valkonen, 2014], our method integrates a learned gradient-step denoiser [Hurault et al., 2022] to enforce structural consistency without being "baked into" the forward model. This architecture ensures the learned component remains hardware-agnostic; the same denoiser can be reused across LSFM, widefield, and confocal imaging by simply updating the forward operator. We demonstrate superior performance on both synthetic and experimental biological datasets, showcasing a robust, versatile approach to high-fidelity microscopy reconstruction.
11.30-11.50 | Jay Dhesi
Learning Nambu Dynamical Systems
Abstract: Nambu dynamics generalises Hamiltonian dynamics by evolving states according to an n-ary bracket and n-1 Hamiltonians, thus encoding multiple conserved quantities in its algebraic structure. We introduce Nambu Neural Networks (NNNs) to learn such systems from data. We make use of local canonical form results for Nambu-Poisson manifolds to justify learning a coordinate transform via an invertible network, while we parameterise conserved quantities explicitly with MLPs. We show that NNNs accurately recover both dynamics and invariants of classical Nambu systems. We also show that in the Poisson regime, our approach recovers Poisson Neural Networks (PNNs), but with improved expressivity of the learned invariants. In particular, global trajectories of Poisson systems which have compact symplectic leaf topology cannot be represented by PNNs as these are parameterised by diffeomorphisms to non-compact canonical forms. In our method, however, these symplectic leaves are learned explicitly via MLPs.
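In the canonical three-dimensional case (n = 3), a Nambu system with Hamiltonians H_1 and H_2 evolves as

\dot{x} = \nabla H_1(x) \times \nabla H_2(x), \qquad x \in \mathbb{R}^3,

so that both H_1 and H_2 are automatically conserved, since \dot{H}_i = \nabla H_i \cdot (\nabla H_1 \times \nabla H_2) = 0.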
11.50-13.30 | Lunch
13.30-13.50 | Erik Jansson
Gradient flows for template-based reconstruction
Abstract: Template-based reconstruction, also known as indirect shape matching, can be formulated as a matching problem in which a template is deformed to explain indirectly observed data through a given forward model. We study this problem by formulating a gradient flow directly on a Lie group which acts on a shape space. Equipping the deformation group with a right-invariant metric produces evolution equations: in finite-dimensional groups these are ordinary differential equations, while for infinite-dimensional groups they are partial differential equations. The talk focuses on the geometric derivation of these equations in a general setting, highlighting the roles of the group action, the choice of metric, and the induced momentum map. The framework applies both to finite-dimensional groups, such as direct products of SO(3), and to infinite-dimensional groups, such as diffeomorphism groups. We provide examples illustrating how the general construction specializes in concrete cases.
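In schematic form (our notation), with template q_0, group action g \cdot q_0, forward model F and data d, the matching energy and its steepest-descent flow with respect to the chosen right-invariant metric read

E(g) = \tfrac{1}{2} \| F(g \cdot q_0) - d \|^2, \qquad \dot{g}(t) = -\operatorname{grad}_g E(g(t)),

and the structure of \operatorname{grad}_g E, mediated by the momentum map, is what distinguishes the finite- and infinite-dimensional cases discussed above.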
13.50-14.10 | Davide Murari
Approximation theory for 1-Lipschitz ResNets
Abstract: 1-Lipschitz neural networks are important in robust learning, generative modelling, and inverse problems, but enforcing Lipschitz constraints often comes at the price of reduced expressiveness. In this talk, I will discuss approximation results for a class of 1-Lipschitz ResNets built from explicit Euler steps of negative gradient flows. After briefly recalling why these architectures are useful and where related models have already appeared in previous work, I will outline the main idea behind our approximation theorem. The key observation is that these networks can realise max/min operations and thereby represent scalar piecewise affine 1-Lipschitz functions. This constructive viewpoint leads to universal approximation results for scalar 1-Lipschitz maps. I will conclude by discussing open directions, including extensions to vector-valued functions.
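A minimal NumPy sketch of one such layer, using a standard construction and step-size bound from the literature (details in the talk may differ): the layer is an explicit Euler step x - h W^T sigma(W x + b) of a negative gradient flow, and it is 1-Lipschitz whenever sigma is monotone and 1-Lipschitz and h <= 2 / ||W||_2^2.

import numpy as np

def gradient_step_layer(x, W, b, h=None):
    # One explicit Euler step of the negative gradient flow of
    # V(x) = sum(antiderivative_of_relu(W x + b)); nonexpansive for
    # h <= 2 / ||W||_2^2 since relu is monotone and 1-Lipschitz.
    s = np.linalg.norm(W, 2)          # spectral norm of W
    if h is None:
        h = 2.0 / (s * s + 1e-12)     # largest provably safe step
    return x - h * (W.T @ np.maximum(W @ x + b, 0.0))

# Quick numerical check of nonexpansiveness on random inputs
rng = np.random.default_rng(0)
W, b = rng.standard_normal((8, 5)), rng.standard_normal(8)
x, y = rng.standard_normal(5), rng.standard_normal(5)
d_out = np.linalg.norm(gradient_step_layer(x, W, b) - gradient_step_layer(y, W, b))
print(d_out <= np.linalg.norm(x - y))  # True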
14.10-14.30 | Zak Shumaylov
When is a System Discoverable from Data? Discovery Requires Chaos
Abstract: The deep learning revolution has spurred rapid advances in the use of AI across the sciences. Within the physical sciences, the main focus has been the discovery of dynamical systems from observational data. Yet the reliability of learned surrogates and symbolic models is often undermined by the fundamental problem of non-uniqueness: the resulting models may fit the available data perfectly, yet lack genuine predictive power. This raises the question: under what conditions can a system's governing equations be uniquely identified from a finite set of observations? We show, counter-intuitively, that chaos, typically associated with unpredictability, is crucial for ensuring a system is discoverable in the space of continuous or analytic functions. The prevalence of chaotic systems in benchmark datasets may have inadvertently obscured this fundamental limitation.
More concretely, we show that systems chaotic on their entire domain are discoverable from a single trajectory within the space of continuous functions, and systems chaotic on a strange attractor are analytically discoverable under a geometric condition on the attractor. As a consequence, we demonstrate for the first time that the classical Lorenz system is analytically discoverable. Moreover, we establish that analytic discoverability is impossible in the presence of first integrals, common in real-world systems. These findings help explain the success of data-driven methods in inherently chaotic domains like weather forecasting, while revealing a significant challenge for engineering applications like digital twins, where stable, predictable behavior is desired. For these non-chaotic systems, we find that while trajectory data alone is insufficient, certain prior physical knowledge can help ensure discoverability. These findings warrant a critical re-evaluation of the fundamental assumptions underpinning purely data-driven discovery.
14.30-15.00 | Break
15.00-15.45 | Ben Adcock
How many measurements are enough? Bayesian recovery in inverse problems with generative priors
Abstract: Deep learning is currently transforming how inverse problems arising in imaging reconstruction are solved. Yet many methods lack substantial theory, especially guarantees that precisely describe how many measurements suffice for accurate and stable recovery. One of the most promising methodologies in recent years follows a Bayesian approach, where a generative model is trained and then used as a prior. In this talk, I will present recent work on recovery guarantees for Bayesian inverse problems with general priors. The main results are new, nonasymptotic guarantees ensuring accurate and stable recovery that relate the number of measurements (or sensors) to intrinsic properties of the prior and the sensing mechanism. This theory not only describes when recovery succeeds, but it can also be leveraged to improve recovery, as it leads to novel, theoretically optimal measurement design protocols. After presenting the theory, I will conclude by discussing applications to medical imaging and PDE-based inverse problems.
15.45-16.00 | Break
16.00-17.00 | Discussion
Day 3
09.00-09.45 | Karen Veroy-Grepl
Efficient Greedy Sampling in Model Order Reduction
Abstract: This talk presents the Polytope Division Method (PDM), a greedy algorithm for solving high-dimensional configuration optimization problems -- such as those arising in model reduction and optimal experimental design -- where one seeks an optimal sampling of parameter spaces. Classical approaches like standard greedy sampling rely on fixed training sets and quickly suffer from the curse of dimensionality. PDM replaces global sampling with an adaptive, geometry-driven strategy based on recursive polytope subdivision. At each step, the method evaluates the objective only at samples in dynamically refined regions. This yields a sampling complexity that scales linearly with dimension, avoiding exponential growth. The approach requires no a priori choice of training set size and focuses computational effort where it matters most. Applications to reduced basis methods and empirical interpolation demonstrate strong performance gains. Numerical results show that PDM achieves accuracy comparable to classical methods at significantly lower offline cost.
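A toy caricature of the adaptive, geometry-driven idea (PDM subdivides polytopes and comes with complexity guarantees; the sketch below merely bisects boxes greedily and is purely illustrative):

import heapq
import numpy as np

def adaptive_box_maximise(f, lo, hi, n_splits=200):
    # Greedy subdivision: repeatedly split the most promising box along
    # its longest edge, evaluating f only at centres of new boxes.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    c = 0.5 * (lo + hi)
    best_val, best_x = f(c), c
    heap, counter = [(-best_val, 0, lo.tolist(), hi.tolist())], 1
    for _ in range(n_splits):
        _, _, blo, bhi = heapq.heappop(heap)
        blo, bhi = np.array(blo), np.array(bhi)
        k = int(np.argmax(bhi - blo))           # longest edge
        mid = 0.5 * (blo[k] + bhi[k])
        dims = np.arange(len(blo))
        for clo, chi in ((blo, np.where(dims == k, mid, bhi)),
                         (np.where(dims == k, mid, blo), bhi)):
            cc = 0.5 * (clo + chi)
            v = f(cc)
            if v > best_val:
                best_val, best_x = v, cc
            heapq.heappush(heap, (-v, counter, clo.tolist(), chi.tolist()))
            counter += 1
    return best_val, best_x

# Usage: locate the peak of a bump in 5 dimensions
f = lambda x: np.exp(-np.sum((x - 0.3) ** 2))
print(adaptive_box_maximise(f, np.zeros(5), np.ones(5)))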
09.45-10.30 | Remco Duits
Image Analysis and Deep Learning via PDEs on Lie groups
Abstract: We consider image processing and deep learning by solving PDEs on the roto-translation Lie group SE(d). An overview of our (exact) analytic and numerical solutions of these PDEs will be given, with emphasis on their integration in PDE-based group-equivariant networks (PDE-G-CNNs). Such networks consist of morphological convolutions with kernels solving nonlinear PDEs (HJB equations for max-pooling over Riemannian balls), and linear convolutions solving linear PDEs (convection, fractional diffusion). Common mystifying (ReLU) nonlinearities become obsolete and are excluded. We achieve network interpretability as we train sparse association fields from neurogeometry.
We present image analysis applications showing the benefits of PDE-G-CNNs compared to G-CNNs: increased performance along with a vast reduction in network parameters and training data. This is beneficial in small/mid-size network regimes, but the feature maps require too much memory in the big-data regime. To tackle this memory bottleneck we are exploring several approaches, among them Gaussian mixture models (EM-GMM), splatting, geometric transformers, and neural fields on the Lie group.
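Schematically, the morphological convolutions mentioned above replace the usual (+, x) algebra of linear convolution by (max, +): on a group G, a dilation of a feature map f by a kernel k reads (in one common sign convention)

(f \boxplus k)(g) = \sup_{h \in G} \big[ f(h) - k(h^{-1} g) \big],

which solves an HJB equation when k is built from a (sub-)Riemannian distance, while the linear layers are ordinary group convolutions (f * k)(g) = \int_G f(h)\, k(h^{-1} g)\, dh.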
10.30-10.50 | Coffee break
10.50-11.10 | Nicky Van den Bergh
Connected Components on Lie Groups with Examples in Multi-Orientation Image Analysis
Abstract: Retinal images are often used to examine the vascular system in a non-invasive way. Studying the behavior of the vasculature on the retina allows for non-invasive diagnosis of several diseases, as these vessels are representative of vessels throughout the human body. For early diagnosis and analysis of diseases, it is important to compare and analyze the complex vasculature in retinal images automatically.
During this talk, we will discuss how one can identify connected components in images while allowing for small interruptions within the same component. The presented method works on any Lie group with a left-invariant distance, but we use examples in the lifted space of positions and orientations SE(2), allowing us to differentiate between crossings and bifurcations.
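In the flat Euclidean special case, gap-tolerant components can be mimicked by dilating a binary mask before labelling; the sketch below is a crude analogue of the idea (the talk instead uses left-invariant distances on Lie groups such as SE(2) to distinguish crossings from bifurcations):

import numpy as np
from scipy import ndimage

def tolerant_components(mask, gap=1):
    # Label connected components of a binary image while allowing
    # interruptions of up to `gap` pixels: dilate, label the dilated
    # mask, then restrict the labels to the original foreground.
    structure = np.ones((2 * gap + 1, 2 * gap + 1), dtype=bool)
    labels, n = ndimage.label(ndimage.binary_dilation(mask, structure))
    return np.where(mask, labels, 0), n

# Two segments separated by a 2-pixel gap land in the same component
m = np.zeros((1, 10), dtype=bool)
m[0, :4], m[0, 6:] = True, True
print(tolerant_components(m, gap=1))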
11.10-11.30 | Marta Ghirardelli
Nonexpansive Neural Networks on Hyperbolic Manifolds
Abstract: In this work we investigate ResNet-style neural network architectures defined on hyperbolic manifolds. We introduce a general class of gradient-type layers that implement quasi $\alpha$-firmly non-expansive operators, and we show that these layers are non-expansive (1-Lipschitz). We then specialize our framework to the Poincaré ball model, where we provide explicit constructions and present three alternative formulations of quasi $\alpha$-firmly non-expansive layers.
11.30-11.50 | Jacob Goodman
TBA
Abstract: TBA
11.50-13.30 | Lunch
14.00 | Departure
Organisers
- Jacob Goodman, Researcher, Department of Mathematical Sciences, jacob.goodman@ntnu.no
- James Jackaman, Assistant Professor, Department of Mathematical Sciences, james.jackaman@ntnu.no, +47-73412396
- Håkon Noren Myhr, PhD Candidate, Department of Mathematical Sciences, hakon.noren@ntnu.no
- Brynjulf Owren, Professor, Department of Mathematical Sciences, brynjulf.owren@ntnu.no, +47-73593518 / +4793021641
Related events
If you are attending this conference, you may also be interested in the related meeting: SCML2026.
Organised in synergy with the broader REMODEL network, this meeting focuses on Scientific Computing and Machine Learning.
- Dates: September 14-17, 2026
- Location: University of Bath, UK
- Read more and register: https://scml.jp