Tutorials
Tuesday 23 June
Tutorial 1: An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter
Tutorial duration: 3 hours
Presenter:
Felix Govaers
Abstract: The increasing trend towards connected sensors (“internet of things” and “ubiquitous computing”) creates a demand for powerful distributed estimation methodologies. In tracking applications, the “Distributed Kalman Filter” (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm which receives either all associated measurements or so-called “tracklets”. However, this scheme needs the result of each update step for the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on “Accumulated State Densities” (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed. First, the tutorial will explain the origin of the challenges in distributed tracking. Then, possible solutions are derived and illustrated; in particular, algorithms will be provided for each presented solution. The list of topics includes: a short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross covariances, Naïve Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman Filter), Distributed Kalman Filter (DKF), Debiasing for the DKF, Distributed ASD Fusion, and Augmented State Tracklet Fusion.
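To give a flavour of one of the listed topics, naïve fusion of two local Gaussian track estimates can be sketched in a few lines: the information matrices are simply added, ignoring any cross-covariance between the local errors (an illustrative sketch of ours with made-up numbers, not material from the tutorial):

```python
import numpy as np

# Two local track estimates of the same 2-D state, as Gaussians (mean, covariance).
x1 = np.array([1.0, 2.0]); P1 = np.diag([4.0, 1.0])   # track from sensor A
x2 = np.array([1.4, 1.8]); P2 = np.diag([1.0, 4.0])   # track from sensor B

# Naive fusion: add information matrices, ignoring cross-covariances.
I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(I1 + I2)
x_fused = P_fused @ (I1 @ x1 + I2 @ x2)
```

When the local errors are in fact correlated, e.g. through common process noise, this result is overconfident, which is exactly what motivates exact fusion with cross-covariances and the DKF discussed in the tutorial.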
Tutorial 2: Quantum Computing and Quantum Physics-Inspired Algorithms: Introduction and Data Fusion Examples
Tutorial duration: 3 hours
Presenters:
Felix Govaers
Martin Ulmke
Wolfgang Koch
Abstract: Quantum algorithms for data fusion may become game changers as soon as quantum processing kernels, embedded in hybrid processing architectures alongside classical processors, become available. While emerging quantum technologies directly apply quantum physics, quantum algorithms do not exploit quantum physical phenomena as such, but rather use the sophisticated framework of quantum physics to deal with “uncertainty”. Although the link between mathematical statistics and quantum physics has long been known, the potential of physics-inspired algorithms for data fusion has only begun to be realized. While the implementation of quantum algorithms is to be considered on classical as well as on quantum computers, the latter are anticipated as well-adapted “analog computers” for solving data fusion and resource management problems unprecedentedly fast. While the development of quantum computers cannot be taken for granted, their potential is nonetheless real and has to be considered by the international information fusion community.
Tutorial 3: Data Fusion for EdgeAI
Tutorial duration: 3 hours
Presenter:
Claudio M. de Farias
Abstract: The Internet of Things (IoT) is a novel paradigm grounded in Information and Communication Technologies (ICT). Recently, IoT has been gaining traction in areas such as logistics, manufacturing, retailing, and pharmaceuticals, transforming typical industrial spaces into Smart Spaces. Traditional machine learning algorithms may not be suited to resource-constrained scenarios. EdgeAI emerges as a viable solution: by optimizing ML algorithms and models for efficiency and deploying them directly on microcontrollers, edge devices, or other low-power processors, EdgeAI enables on-device inference without relying on cloud-based servers. For EdgeAI, data fusion techniques are useful to further compress models, combine data sources, and clean data, reducing decision response times and enabling more intelligent and immediate situation awareness, especially in a multimodal data fusion context. This tutorial aims to present EdgeAI algorithms in the multisensor data fusion context, both theoretically and in practice.
Tutorial 4: Distributed multitarget tracking with multiagent systems
Tutorial duration: 3 hours
Presenters:
Luigi Chisci
Alfonso Farina
Lin Gao
Giorgio Battistelli
Abstract: The tutorial will provide an overview of advanced research in information fusion, specifically concerning distributed multitarget tracking with a multiagent surveillance system. A multiagent surveillance system consists of a network of agents with sensing, processing and communication capabilities that aim to cooperatively monitor a given area of interest for situational awareness purposes. Multitarget tracking aims to detect an unknown number of objects (targets) present in the surveillance area and estimate their states. Special attention will be devoted to the fusion of possibly correlated information from multiple agents and to the random-finite-set paradigm for the statistical representation of multiple targets. Event-triggered communication for enhanced efficiency, resilience to cyber-attacks, and the exploitation of AI for target tracking will also be investigated. Applications to distributed cooperative surveillance, monitoring and navigation tasks will be discussed.
Tutorial 5: Poisson multi-Bernoulli mixtures for multiple target tracking and SLAM
Tutorial duration: 3 hours
Presenters:
Ángel García-Fernández
Yuxuan Xia
Yu Ge
Abstract: In this tutorial, attendees will learn the foundations of the Poisson multi-Bernoulli mixture (PMBM) filter, a state-of-the-art multiple target tracking (MTT) algorithm that has been applied to data from lidars, radars, cameras, integrated search-and-track sensor management and 5G simultaneous localisation and mapping (SLAM). In addition, attendees will learn the relations of the PMBM filter to other MTT algorithms such as the multi-Bernoulli mixture (MBM) filter, the probability hypothesis density (PHD) filter, the Poisson multi-Bernoulli (PMB) filter, the δ-generalised labelled multi-Bernoulli (δ-GLMB) filter, multiple hypothesis tracking (MHT), and the joint integrated probabilistic data association (JIPDA) filter. Then, the tutorial will cover the extension of the PMBM filter to sets of trajectories to include full trajectory information. Finally, the tutorial will explain PMBM/PMB filters for SLAM.
Tutorial 6: Introduction to Machine Learning Generalization Theory and Information Fusion Applications
Tutorial duration: 3 hours
Presenter:
Nageswara Rao
Abstract: The overall theme of the tutorial is to provide an introduction to rigorous foundations for developing, analyzing, and applying ML methods. It is based on generalization theory, which rigorously captures performance beyond training, where models are often subject to over-fitting and hallucinations. The concept of ML-solvability is introduced, developed, and used to provide a rigorous characterization of hallucinations in terms of generalization errors. The theory is applied and illustrated by developing generalization equations for problems in the information fusion area involving multiple information sources, including sensors and estimators. Application-specific properties, such as the smoothness of thermal-hydraulic equations and the bounded variation of data transfer throughput profiles, are used to develop ML solutions together with their generalization equations.
Tutorial 7: Analytic Combinatorics for Multiple Object Tracking
Tutorial duration: 3 hours
Presenters:
Roy Streit
Murat Efe
Abstract: This tutorial is designed to facilitate understanding of the classical theory of Analytic Combinatorics (AC) and how to apply it to problems in multi-object tracking. AC is an economical technique for encoding combinatorial problems—without information loss—into the derivatives of a generating function (GF). Exact Bayesian filters derived from the GF avoid the heavy accounting burden required by traditional enumeration methods. Although AC is an established mathematical field, it is not widely known in either the academic engineering community or the practicing data fusion/tracking community. This tutorial lays the groundwork for understanding the methods of AC, starting with the GF for the classical Bayes-Markov filter. From this cornerstone, we derive many established filters (e.g., PDA, JPDA, JIPDA, PHD, CPHD, multi-Bernoulli, MHT) with simplicity, economy, and insight. We also show how to use the saddle point method (method of stationary phase) to find low-complexity approximations of probability distributions and summary statistics.
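As a toy illustration of the generating-function idea (our own example, not material from the tutorial): the probability generating function of a sum of independent counts is the product of the individual generating functions, so multiplying per-target detection GFs by polynomial convolution yields the distribution of the total number of detections without any explicit enumeration of detection patterns:

```python
def gf_product(a, b):
    """Multiply two generating functions given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Each target is detected with probability p: its GF is (1 - p) + p*z.
p = 0.5
per_target = [1 - p, p]

# Total number of detections from 3 independent targets:
# the product of the three per-target GFs.
total = [1.0]
for _ in range(3):
    total = gf_product(total, per_target)
# total[n] is P(N = n); here it recovers Binomial(3, 0.5).
```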
Tutorial 8: Practical multi-target tracking and sensor management with Stone Soup
Tutorial duration: 3 hours
Presenters:
James Wright
Chris Sherman
Abstract: The Stone Soup framework is designed to provide a flexible and unified software platform for researchers and engineers to develop, test and benchmark a variety of existing multi-sensor, multi-object estimation algorithms and sensor management approaches. It benefits from the object-oriented principles of abstraction, encapsulation, and modularity, allowing users (beginners, practitioners, or experts) to focus only on the most critical aspects of their problem. Stone Soup is endorsed by ISIF’s working group on Open Source Tracking and Estimation (OSTEWG). The tutorial will introduce participants to Stone Soup’s basic components and how they fit together. These are delivered by way of demonstrations, set tasks and interactive tutorials in which participants will be encouraged to write and modify algorithms. The tasks are written up as interactive browser-based applications which combine the ability to run code with a presentation environment suitable for instruction. The tutorial will cover examples using linear/non-linear models, filtering, data association, and track management, aimed at briefly introducing these topics and familiarising attendees with Stone Soup. This will include practical sessions to aid familiarisation with tracking concepts and how to apply them in Stone Soup.
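To give a flavour of the predict/update cycle that frameworks like Stone Soup wrap into predictor and updater components, here is a bare numpy Kalman filter step for a 1-D constant-velocity model (a generic sketch of ours, not Stone Soup code; all numbers are made up):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])                     # constant-velocity transition
Q = np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]]) * 0.01  # process noise covariance
H = np.array([[1.0, 0.0]])                                # position-only measurement
R = np.array([[0.25]])                                    # measurement noise covariance

x = np.array([0.0, 1.0]); P = np.eye(2)                   # prior state and covariance

# Predict step (the role of a predictor component).
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Update step with measurement z (the role of an updater component).
z = np.array([1.2])
S = H @ P_pred @ H.T + R                                  # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)                       # Kalman gain
x = x_pred + K @ (z - H @ x_pred)
P = (np.eye(2) - K @ H) @ P_pred
```

A framework's value lies in swapping these pieces modularly, e.g. replacing the linear model with a non-linear one or the updater with an unscented variant, without rewriting the surrounding pipeline.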
Tutorial 9: Advances in Context-enhanced Information Fusion
Tutorial duration: 3 hours
Presenters:
Lauro Snidaro
Erik Blasch
Abstract: Contextual Information (CI) can be understood as the information that “surrounds” an observable of interest. Even if not directly part of the problem variables being estimated by the system, CI can influence object states or even the sensing and estimation processes themselves. Therefore, understanding and exploiting CI can improve the performance of Information Fusion algorithms and automatic systems that have to deal with varying operating conditions. There is growing interest in context research that should be considered for developing next-generation Information Fusion systems, including Generative Adversarial Networks (GANs), Large Language Models (LLMs), and Digital Twins. Context can have static or dynamic structure, and be represented in many different ways such as maps, knowledge-bases, ontologies, etc. It can constitute a powerful tool to favour adaptability and boost system performance. Application examples include: context-aided surveillance systems (security/defence), traffic control, autonomous navigation, cyber security, ambient intelligence, ambient assistance, etc. The purpose of this tutorial is to survey existing approaches for context-enhanced information fusion, covering the design and development of information fusion solutions integrating sensory data with contextual knowledge. After discussing CI in other domains, the tutorial will focus on context representation and exploitation aspects for Information Fusion systems. The applicability of the presented approaches will be illustrated with real-world context-aware Information Fusion applications.
Tutorial 10: Learning the Noise: Identifying Uncertainty in State-Space Models
Tutorial duration: 3 hours
Presenters:
Ondrej Straka
Jindřich Duník
Abstract: Knowledge of a state-space model of a system is a crucial prerequisite for many state estimation, signal processing, fault detection, and optimal control problems. The model is often designed to be consistent with the random behavior of the system’s quantities and the measurement properties. The deterministic aspect of the model typically arises from mathematical modeling based on the physical, chemical, or biological laws governing the system’s behavior. In contrast, statistics related to the stochastic part are often difficult to derive solely from modeling and must be identified from measured data. Incorrect descriptions of noise statistics may result in significant deterioration in estimation, signal processing, detection, and control quality, or even the failure of the underlying algorithms. The tutorial introduces the history spanning more than six decades, recent advances, and state-of-the-art methods for estimating the properties of the stochastic part of the state-space model. In particular, the estimation of state-space model noise means, covariance matrices, and other parameters is treated. Applications of the proposed methods to real-world problems, such as atomic clock model identification, GNSS receiver measurement noise model identification, and parameter identification of uncertainties in multi-object tracking from image measurements, are demonstrated and discussed.
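As a flavour of the data-driven noise identification the tutorial covers, the classical correlation idea can be shown on a scalar random walk observed in noise: the measurement differences satisfy d_k = w_k + v_k − v_{k−1}, so Var(d) = q + 2r and the lag-1 autocovariance equals −r, which lets both noise variances be read off from sample moments (an illustrative sketch of ours, with simulated data and hypothetical parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q_true, r_true = 200_000, 0.5, 2.0

# Simulate x_k = x_{k-1} + w_k (process noise q), z_k = x_k + v_k (measurement noise r).
w = rng.normal(0.0, np.sqrt(q_true), n)
v = rng.normal(0.0, np.sqrt(r_true), n)
z = np.cumsum(w) + v

# Measurement differences: d_k = w_k + v_k - v_{k-1}.
d = np.diff(z)
d = d - d.mean()

# Moment matching: Cov(d_k, d_{k+1}) = -r and Var(d) = q + 2r.
r_hat = -np.mean(d[:-1] * d[1:])
q_hat = np.var(d) - 2.0 * r_hat
```

The methods treated in the tutorial generalise this simple moment-matching idea to full state-space models with vector states and unknown noise covariance matrices.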
Tutorial 11: Multiple Extended Object Tracking for Automotive Applications
Tutorial duration: 3 hours
Presenters:
Jens Honer
Hauke Kaulbersch
Marcus Baum
Abstract: In order to safely navigate through traffic, an automated vehicle needs to be aware of the trajectories and dimensions of all dynamic objects (e.g., traffic participants) as well as the locations and dimensions of all stationary objects (e.g., road infrastructure). For this purpose, automated vehicles are equipped with modern high-resolution sensors like LIDAR, RADAR or cameras that allow them to detect objects in the vicinity. Typically, the sensors generate multiple detections for each object, where the detections are unlabeled, i.e. it is unknown which of the objects was detected. Furthermore, the detections are corrupted by sensor noise, e.g., some detections might be clutter, and some detections might be missing. The task of detecting and tracking an unknown number of moving spatially extended objects (e.g., traffic participants) based on noise-corrupted unlabeled measurements is called multiple extended object tracking. This tutorial will introduce state-of-the-art theory for multiple extended object tracking together with relevant real-world automotive applications. In particular, we will demonstrate applications for different object types, e.g., pedestrians, bicyclists, and cars, using different sensors such as LIDAR, RADAR, and camera. We will consider both classical model-based approaches and AI-based approaches. In particular, we will discuss the benefits, challenges and common aspects of both approaches.
Tutorial 12: Causal Inference for Heterogeneous and Multi-Source Data Fusion: Principles, Transportability, and Applications
Tutorial duration: 3 hours
Presenters:
Alessandro Leite
Louis Hernandez
Matthieu Boussard
Abstract: This tutorial presents a rigorous yet accessible introduction to causal inference for heterogeneous and multi-source data fusion. We focus on methodological foundations and practical tools for integrating observational and experimental data collected under varying conditions. Key topics include structural causal models, identification of causal effects, correction for selection bias, transportability across domains, dataset shift, and robustness under intervention. We also discuss how modern AI tools, including large language models (LLMs), can assist in causal discovery, knowledge extraction, and structured information integration. Through a combination of theory, practical demonstrations, and hands-on Python exercises, participants will learn to build robust causal models that enhance reliability, interpretability, and decision-making in multi-sensor and safety-critical systems. The tutorial bridges the gap between traditional data fusion and causal reasoning, equipping Fusion researchers with tools to move from correlation-based integration toward causal-aware fusion strategies.
Tutorial 13: Signal Processing for IoT – Decision Fusion in Sensor Networks
Tutorial duration: 3 hours
Presenters:
Pramod K. Varshney
Pierluigi Salvo Rossi
Domenico Ciuonzo
Abstract: The digital transformation is reshaping healthcare, industry, communications, and security. The Internet-of-Things (IoT) paradigm plays a central role through distributed sensing, communication, processing, and control. This tutorial adopts a statistical signal processing perspective and focuses on distributed binary hypothesis testing in wireless sensor networks with a fusion center equipped with multiple antennas. The framework resembles a MIMO system and enables energy-efficient, robust detection of phenomena such as environmental hazards and industrial leaks. The objective of this tutorial is to cover both the design and the analysis of fusion approaches for this futuristic IoT setup. The tutorial consists of three main sections. The first section introduces classical decision fusion over parallel-access and multiple-access channels. The second section moves to the recent paradigm of MIMO decision fusion, based on array processing techniques. Finally, the third section explores the massive MIMO regime, i.e. the case in which the fusion center is equipped with a large-scale antenna array.
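The classical parallel-topology decision fusion covered in the first section can be sketched with the Chair-Varshney rule: the fusion center sums log-likelihood ratios of the local binary decisions, weighted by each sensor's detection and false-alarm probabilities (a minimal sketch of ours with made-up sensor characteristics):

```python
import math

def chair_varshney(decisions, pd, pf, threshold=0.0):
    """Fuse local binary decisions u_i in {0, 1}, given each sensor's
    detection probability pd[i] and false-alarm probability pf[i]."""
    llr = 0.0
    for u, p_d, p_f in zip(decisions, pd, pf):
        if u == 1:
            llr += math.log(p_d / p_f)          # sensor declared "target"
        else:
            llr += math.log((1.0 - p_d) / (1.0 - p_f))  # sensor declared "no target"
    return (1 if llr > threshold else 0), llr

# Three sensors; the reliable first sensor says "target", the two weaker ones disagree.
decision, llr = chair_varshney([1, 0, 0], pd=[0.95, 0.6, 0.6], pf=[0.01, 0.3, 0.3])
```

Note how the reliable sensor (high pd, very low pf) outweighs the two weak dissenting sensors, which is precisely what a simple majority vote would get wrong here.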
Tutorial 14: Visual Object Tracking with Images: From Theory to Practice
Tutorial duration: 6 hours
Presenters:
Claudio Miceli de Farias
Pablo Rangel
Abstract: The proposed tutorial introduces the fundamental principles and practical methodologies underlying visual object tracking, providing participants with the essential first steps required to understand, design, and implement tracking systems based on image data. Emphasis is placed on the integration of computer vision measurements with probabilistic state-estimation and data fusion frameworks, enabling robust perception in real-world environments.
Tutorial 15: Bayesian Estimation with Learned Models: A Graphical-Model Perspective
Tutorial duration: 6 hours
Presenters:
Erik Leitinger
Florian Meyer
Abstract: Localization, tracking, and mapping are increasingly important in emerging applications, including autonomous navigation, applied ocean sciences, asset tracking, future communication networks, and the Internet of Things. These applications pose new theoretical and methodological challenges to information fusion due to heterogeneous sensors. Processing measurements is often complicated by uncertainties beyond Gaussian noise, such as missed detections and clutter, uncertain measurement origins, and an unknown and time-varying number of objects to be localized or tracked.
Methodologically, these challenges can be effectively addressed by Bayesian inference methods that integrate physical models with data-driven learning. Model-based approaches provide principled ways to incorporate prior knowledge, uncertainty quantification, and structure, while learning-based components enable adaptation to complex, unknown, or time-varying environments. This combination offers important advantages in terms of robustness, scalability, and performance, particularly in scenarios where classical assumptions are violated, or incomplete models limit achievable accuracy. Recent advances at the intersection of signal processing, probabilistic inference, and artificial intelligence have enabled the seamless integration of trainable components into Bayesian estimation pipelines. Rather than replacing principled statistical models, neural networks can be used to learn and enhance unknown system dynamics, measurement models, or data association mechanisms, while preserving interpretability and uncertainty propagation. Structured representations such as factor graphs and message passing provide a natural interface for embedding these learned components in a modular and scalable manner. These AI-enhanced Bayesian methods have demonstrated strong performance gains in challenging localization, tracking, and mapping problems.