A Unified Neural Architecture for Instrumental Audio Tasks
arXiv:1903.00142 [cs]
Computer Science - Machine Learning; Computer Science - Sound; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Retrieval
- Steven Spratley
- Daniel Beck
- Trevor Cohn
Within Music Information Retrieval (MIR), prominent tasks – including pitch-tracking, source-separation, super-resolution, and synthesis – typically call for specialised methods, despite their similarities. Conditional Generative Adversarial Networks (cGANs) have been shown to be highly versatile in learning general image-to-image translations, but have not yet been adapted across MIR. In this work, we present an end-to-end supervisable architecture to perform all aforementioned audio tasks, consisting of a WaveNet synthesiser conditioned on the output of a jointly-trained cGAN spectrogram translator. In doing so, we demonstrate the potential of such flexible techniques to unify MIR tasks, promote efficient transfer learning, and converge research to the improvement of powerful, general methods. Finally, to the best of our knowledge, we present the first application of GANs to guided instrument synthesis.
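The WaveNet synthesiser mentioned above is built around causal dilated convolutions, where each output sample depends only on past inputs and the dilation factor widens the receptive field. As a rough illustration of that core operation (a toy sketch in pure Python, not the authors' implementation; the function name and signature are invented for this example):

```python
def causal_dilated_conv(x, weights, dilation):
    """Toy 1-D causal dilated convolution.

    output[t] is a weighted sum of x[t], x[t - dilation],
    x[t - 2*dilation], ... so no future samples leak in.
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation  # step back by the dilation factor
            if idx >= 0:            # samples before the signal start are ignored
                acc += w * x[idx]
        out.append(acc)
    return out


# With dilation 1 each output mixes adjacent samples;
# with dilation 2 the same two-tap filter reaches twice as far back.
print(causal_dilated_conv([1.0, 2.0, 3.0, 4.0], [1.0, 1.0], 1))  # [1.0, 3.0, 5.0, 7.0]
print(causal_dilated_conv([1.0, 2.0, 3.0, 4.0], [1.0, 1.0], 2))  # [1.0, 2.0, 4.0, 6.0]
```

Stacking such layers with exponentially increasing dilation is what lets WaveNet model long-range structure in raw audio sample by sample.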