Publications
Efficient Representation Learning for Music Via Likelihood Factorisation of a Variational Autoencoder
Ningzhi Wang, Daniel Stoller, Simon Dixon
IEEE International Workshop on Machine Learning for Signal Processing (MLSP) • 2025
LLark: A Multimodal Instruction-Following Language Model for Music
Joshua P. Gardner, Simon Durand, Daniel Stoller, Rachel M. Bittner
International Conference on Machine Learning (ICML) • 2024
Music has a unique and complex structure which is challenging for both expert humans and existing AI systems to understand, and presents unique challenges relative to other forms of audio. We present LLark, an instruction-tuned multimodal model for music understanding. We detail our process for dataset creation, which involves augmenting the annotations of diverse open-source music datasets and converting them to a unified instruction-tuning format. We propose a multimodal architecture for LLark, integrating a pretrained generative model for music with a pretrained language model. In evaluations on three types of tasks (music understanding, captioning, reasoning), we show that LLark matches or outperforms existing baselines in music understanding, and that humans show a high degree of agreement with its responses in captioning and reasoning tasks. LLark is trained entirely from open-source music data and models, and we make our training code available along with the release of this paper. Additional results and audio examples are at https://bit.ly/llark, and our source code is available at https://github.com/spotify-research/llark.
Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages
Simon Durand, Daniel Stoller, Sebastian Ewert
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) • 2023
Few-Shot Musical Source Separation
Yu Wang, Daniel Stoller, Rachel M. Bittner, Juan Pablo Bello
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) • 2022
A Deep Learning Approach to Intelligent Drum Mixing With the Wave-U-Net
Marco A. Martínez Ramírez, Daniel Stoller, David Moffat
Journal of the Audio Engineering Society • 2021
Deep Learning for Music Information Retrieval in Limited Data Scenarios
Daniel Stoller
PhD Thesis, Queen Mary University of London • 2020
While deep learning (DL) models have achieved impressive results in settings where large amounts of annotated training data are available, overfitting often degrades performance when data is more limited. To improve the generalisation of DL models, we investigate "data-driven priors" that exploit additional unlabelled data or labelled data from related tasks. Unlike techniques such as data augmentation, these priors are applicable across a range of machine listening tasks, since their design does not rely on problem-specific knowledge.
Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling
Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon
International Joint Conference on Artificial Intelligence (IJCAI-PRICAI) • 2020
Convolutional neural networks (CNNs) with dilated filters such as the Wavenet or the Temporal Convolutional Network (TCN) have shown good results in a variety of sequence modelling tasks. However, efficiently modelling long-term dependencies in these sequences is still challenging. Although the receptive field of these models grows exponentially with the number of layers, computing the convolutions over very long sequences of features in each layer is time and memory-intensive, prohibiting the use of longer receptive fields in practice. To increase efficiency, we make use of the "slow feature" hypothesis stating that many features of interest are slowly varying over time. For this, we use a U-Net architecture that computes features at multiple time-scales and adapt it to our auto-regressive scenario by making convolutions causal. We apply our model ("Seq-U-Net") to a variety of tasks including language and audio generation. In comparison to TCN and Wavenet, our network consistently saves memory and computation time, with speed-ups for training and inference of over 4x in the audio generation experiment in particular, while achieving a comparable performance in all tasks.
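The causal convolutions mentioned in the abstract can be illustrated with a minimal sketch: left-padding the input so each output step depends only on present and past samples. This is a generic illustration of causality in 1-D convolution, not the paper's actual Seq-U-Net implementation, and the function name is hypothetical.

```python
def causal_conv1d(x, kernel):
    """Causal 1-D convolution: output[t] depends only on x[0..t].

    Causality is enforced by left-padding the input with
    len(kernel) - 1 zeros, so no future samples are used.
    """
    pad = len(kernel) - 1
    padded = [0.0] * pad + list(x)
    # output[t] = sum_j kernel[j] * x[t - j]
    return [
        sum(kernel[j] * padded[t + pad - j] for j in range(len(kernel)))
        for t in range(len(x))
    ]

# Example: a two-tap kernel sums each sample with its predecessor.
print(causal_conv1d([1.0, 2.0, 3.0], [1.0, 1.0]))  # → [1.0, 3.0, 5.0]
```

In an auto-regressive model such as Seq-U-Net, this constraint is what allows the network to generate sequences one step at a time without peeking ahead.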
Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators
Daniel Stoller, Sebastian Ewert, Simon Dixon
International Conference on Learning Representations (ICLR) • 2020
A comparative study of neural models for polyphonic music sequence transduction
Adrien Ycart, Daniel Stoller, Emmanouil Benetos
International Society for Music Information Retrieval Conference (ISMIR) • 2019
Evolutionary multi-objective training set selection of data instances and augmentations for vocal detection
Igor Vatolkin, Daniel Stoller
International Conference on Computational Intelligence in Music, Sound, Art and Design (EvoMUSART) • 2019