Rhythm and Intentionality in Computer Assisted Music Making

About the project

The project focuses on computer-assisted music creation using lean datasets in custom-made intelligent and generative models. We investigate whether this allows more nuanced control and better affordances for intentionality compared with deep learning models trained on large datasets.

The project output is a software tool for real-time analysis of rhythm in improvised music performance, a series of recordings, and artistic process documentation. These are complemented by two scientific papers, one on the details of the rhythm analysis method and one on the generative algorithm based on probabilistic logic. A further publication addresses the aesthetics of automatic music generation by means of AI and algorithmic techniques. Questions of machine creativity, human/computer creative interaction, and ethical and environmental concerns are treated in the publication on aesthetics, in the form of a round-table discussion between Carvalhais, Magnusson, Formo and Brandtsegg.

The master's thesis of Steinar Bolstad Skålid connects to the project through its exploration of AI models for audio generation and transformation.

Background

In recent years, a plethora of AI-based tools for music generation has become available (a curated collection can be found in "AI/ML Music Tools", a web survey by Andreas Bergsland, 2024). In general, current machine learning models are often unsatisfactory for realizing a creative intention, in part because they are based on a statistical average over a large set of training data. Furthermore, the environmental impact of computing with large datasets is significant and ethically questionable. Within the computing disciplines, too, there is a growing interest in intelligent generative models that rely less on huge datasets and focus instead on smaller, more compact ones. For our purposes, such compact models could potentially allow for personalization, parametrization and precise creative control.

Any generative model requires a source dataset, and for music this is usually based on pitch, frequency and rhythm. Pitch and frequency can be analyzed and represented with relative ease, but rhythm is a harder problem. Expressive rhythm analysis remains an unsolved challenge, although research has been ongoing since the early 1980s (a good survey can be found in ). Existing methods are usually based on assumptions from Western (art) music, such as metrical hierarchy, pulse and meter. As part of this project, we have developed a prototype of a new model for rhythm analysis and representation. The new method requires only a small set of events to produce meaningful results. It is based on maintaining several simultaneous/parallel theories of integer relationships between the delta times of events in a time series. The prototype was developed in 2024-2025, based on previous research at NTNU Music Technology in the period 2017-2024. It shows promising results, with parametric control over representation preferences, adaptability, and intentional control.
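
To illustrate the core idea, the following is a minimal sketch in Python of what maintaining parallel integer-ratio theories over delta times could look like. The names, the seeding strategy and the error measure are our illustrative assumptions for this sketch, not the project's actual implementation:

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    reference: float            # hypothesised base duration in seconds
    error: float = 0.0          # accumulated relative deviation from integer fit
    ratios: list = field(default_factory=list)  # fitted integer ratio per delta

def best_integer_ratio(delta, reference, max_ratio=8):
    """Return the small integer n (1..max_ratio) minimising |delta - n*reference|."""
    deviation, n = min((abs(delta - n * reference), n)
                       for n in range(1, max_ratio + 1))
    return n, deviation

def analyse(onsets, max_ratio=8):
    """Seed several parallel hypotheses from the observed delta times,
    score each against the full event series, and return them best-first."""
    deltas = [b - a for a, b in zip(onsets, onsets[1:])]
    # Each observed delta, and simple subdivisions of it, is a candidate
    # reference duration (an assumed seeding heuristic for this sketch).
    seeds = {round(d / k, 6) for d in deltas for k in (1, 2, 3)}
    hypotheses = [Hypothesis(reference=s) for s in seeds if s > 0.05]
    for hyp in hypotheses:
        for delta in deltas:
            n, deviation = best_integer_ratio(delta, hyp.reference, max_ratio)
            hyp.ratios.append(n)
            hyp.error += deviation / delta  # relative deviation per event
    return sorted(hypotheses, key=lambda h: h.error)

if __name__ == "__main__":
    # A dotted rhythm played slightly loosely: deltas of roughly 0.5, 0.25, 0.25, 0.5
    onsets = [0.0, 0.51, 0.74, 1.0, 1.52]
    for hyp in analyse(onsets)[:3]:
        print(f"ref={hyp.reference:.3f}s ratios={hyp.ratios} error={hyp.error:.3f}")

Run on the example onsets above, the best-ranked hypothesis settles on a reference duration near 0.25 seconds, representing the series as integer ratios roughly 2:1:1:2, even though only a handful of events are available and none of the deltas is exact.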