Two accepted papers at ECAI 2025 for Kazeem
For his work on contrastive learning for time series: two papers and presentations at ECAI for NorwAI’s Abdul Kazeem Shamba
NorwAI’s PhD student Abdul Kazeem Shamba attended ECAI and gave two presentations on his work on contrastive learning for time series. ECAI, the European Conference on Artificial Intelligence, is the largest European gathering for AI researchers to meet and exchange ideas. ECAI 2025 took place October 25–30 in Bologna, Italy.
On Saturday he presented at the workshop on “Learning from Difficult Data”, and on Monday he gave a main-track talk titled “Contrast All The Time: Learning Time Series Representation from Temporal Consistency”.
Abstract no. 1
"eMargin: Revisiting Contrastive Learning with Margin-Based Separation"
We revisit previous contrastive learning frameworks to investigate the effect of introducing an adaptive margin into the contrastive loss function for time series representation learning.
Specifically, we explore whether an adaptive margin (eMargin), adjusted based on a predefined similarity threshold, can improve the separation between adjacent but dissimilar time steps and subsequently lead to better performance in downstream tasks.
Our study evaluates the impact of this modification on clustering and classification performance on three benchmark datasets. Our findings, however, indicate that achieving high scores on unsupervised clustering metrics does not necessarily imply that the learned embeddings are meaningful or effective in downstream tasks.
To be specific, eMargin added to InfoNCE consistently outperforms state-of-the-art baselines in unsupervised clustering metrics, but struggles to achieve competitive results in downstream classification with linear probing.
The source code is publicly available at https://github.com/sfi-norwai/eMargin
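To make the idea concrete, here is a minimal sketch of how an adaptive margin could be folded into an InfoNCE loss over per-time-step embeddings. It is one plausible reading of the abstract, not the paper's exact formulation: positives are taken to be adjacent time steps, and any negative whose cosine similarity already exceeds the threshold gets its logit inflated by the margin, forcing extra separation between adjacent but dissimilar steps. The function name and hyperparameter values are illustrative; the repository above contains the actual eMargin loss.

```python
import torch
import torch.nn.functional as F

def emargin_infonce(z, tau=0.1, threshold=0.5, margin=0.2):
    """Hypothetical margin-augmented InfoNCE for time series.

    z: (B, T, D) per-time-step embeddings from a time-series encoder.
    Positive for anchor t is its neighbour t+1; all other steps in the
    same sequence are negatives.
    """
    z = F.normalize(z, dim=-1)
    cos = torch.einsum('btd,bsd->bts', z, z)         # (B, T, T) cosine sims
    B, T, _ = cos.shape

    # Anchors run 0..T-2; the positive of anchor t is step t+1.
    pos_idx = torch.arange(1, T, device=z.device)     # (T-1,)
    pos_mask = F.one_hot(pos_idx, T).bool().expand(B, T - 1, T)

    logits = cos[:, :-1] / tau                        # (B, T-1, T)

    # Assumed adaptive-margin rule: negatives whose raw similarity already
    # exceeds the threshold get their logit inflated by the margin, so the
    # encoder is pushed to separate adjacent-but-dissimilar steps.
    hard_neg = (cos[:, :-1] > threshold) & ~pos_mask
    logits = logits + hard_neg.float() * (margin / tau)

    # An anchor must never match itself.
    self_mask = torch.eye(T, dtype=torch.bool, device=z.device)[:-1]
    logits = logits.masked_fill(self_mask, float('-inf'))

    targets = pos_idx.expand(B, T - 1)                # (B, T-1)
    return F.cross_entropy(logits.reshape(-1, T), targets.reshape(-1))
```

A typical call would be loss = emargin_infonce(encoder(x)), with an encoder mapping (batch, time, features) to (batch, time, dim).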
Abstract no. 2
“Contrast All The Time: Learning Time Series Representation from Temporal Consistency”
Representation learning for time series using contrastive learning has emerged as a critical technique for improving the performance of downstream tasks. To advance this effective approach, we introduce CaTT (Contrast All The Time), a new approach to unsupervised contrastive learning for time series, which takes advantage of dynamics between temporally similar moments more efficiently and effectively than existing methods.
CaTT departs from conventional time-series contrastive approaches that rely on data augmentations or selected views. Instead, it uses the full temporal dimension by contrasting all time steps in parallel.
This is made possible by a scalable NT-pair formulation, which extends the classic N-pair loss across both batch and temporal dimensions, making the learning process end-to-end and more efficient.
CaTT learns directly from the natural structure of temporal data, using repeated or adjacent time steps as implicit supervision, without the need for pair selection heuristics.
We demonstrate that this approach produces superior embeddings which allow better performance in downstream tasks.
Additionally, training is faster than other contrastive learning approaches, making it suitable for large-scale and real-world time series applications.
The source code is publicly available at https://github.com/sfi-norwai/CaTT
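The phrase “contrasting all time steps in parallel” suggests a single softmax over the flattened batch-and-time dimension. The sketch below follows that reading: every embedding in the (batch, time) grid is an anchor at once, its temporal neighbour serves as the implicit positive, and all other embeddings, across both batch and time, act as negatives. The adjacency rule and the function name are assumptions; consult the repository above for CaTT's actual NT-pair loss.

```python
import torch
import torch.nn.functional as F

def nt_pair_all_time(z, tau=0.1):
    """Sketch of an NT-pair-style loss over batch and time jointly.

    z: (B, T, D) per-time-step embeddings. Flattening batch and time
    makes every time step in the batch an anchor simultaneously.
    """
    B, T, D = z.shape
    z = F.normalize(z, dim=-1).reshape(B * T, D)

    logits = (z @ z.t()) / tau                    # (B*T, B*T) similarity grid

    # An anchor cannot match itself.
    eye = torch.eye(B * T, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, float('-inf'))

    # Assumed positive rule: the next step of the same sequence; the final
    # step of each sequence pairs backwards instead.
    idx = torch.arange(B * T, device=z.device)
    pos = idx + 1
    last = (idx % T) == (T - 1)
    pos[last] = idx[last] - 1

    # Every other embedding, across both batch and time, is a negative.
    return F.cross_entropy(logits, pos)
```

Because all B*T anchors share one similarity matrix, a single forward pass covers every contrast, which is consistent with the abstract's claim that the formulation makes training end-to-end and more efficient.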
2025-10-30
By Rolf D. Svendsen

