Neuralogram: A Deep Neural Network Based Representation for Audio Signals
arXiv:1904.05073 [cs, eess]
Computer Science - Machine Learning; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing; Computer Science - Multimedia
- Prateek Verma
- Chris Chafe
- Jonathan Berger
We propose the Neuralogram – a deep neural network-based representation for understanding audio signals which, as the name suggests, transforms an audio signal into a dense, compact representation based upon embeddings learned via a neural architecture. Through a series of probing signals, we show how our representation can encapsulate pitch, timbre, rhythm-based information, and other attributes. This representation suggests a method for revealing meaningful relationships in arbitrarily long audio signals that are not readily represented by existing algorithms. It has potential for numerous applications, including audio understanding, music recommendation, and metadata extraction, to name a few.
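The abstract describes stacking neural-network embeddings of an audio signal over time into a dense 2-D representation. The paper does not publish code, so the following is only a minimal sketch of that idea: the names `embed`, `neuralogram`, the chunk size, and the embedding dimension are all assumptions, and the learned network is simulated with a fixed random linear projection of per-chunk magnitude spectra rather than trained weights.

```python
import numpy as np

RATE = 16000    # assumed sample rate (Hz)
CHUNK = 1024    # samples per analysis chunk (assumption)
EMB_DIM = 128   # embedding size (assumption)

rng = np.random.default_rng(0)
# Stand-in for a trained embedding network: a random projection of the
# one-sided magnitude spectrum (CHUNK // 2 + 1 bins) to EMB_DIM values.
W = rng.standard_normal((EMB_DIM, CHUNK // 2 + 1))

def embed(chunk: np.ndarray) -> np.ndarray:
    """Map one audio chunk to a dense embedding vector (placeholder model)."""
    spectrum = np.abs(np.fft.rfft(chunk))   # magnitude spectrum of the chunk
    return np.tanh(W @ spectrum)            # squashing nonlinearity -> dense code

def neuralogram(audio: np.ndarray) -> np.ndarray:
    """Stack per-chunk embeddings along time: shape (EMB_DIM, n_chunks)."""
    n = len(audio) // CHUNK
    cols = [embed(audio[i * CHUNK:(i + 1) * CHUNK]) for i in range(n)]
    return np.stack(cols, axis=1)

# Toy input: one second of a 440 Hz sine tone.
t = np.arange(RATE) / RATE
audio = np.sin(2 * np.pi * 440 * t)
N = neuralogram(audio)
print(N.shape)  # (128, 15)
```

Like a spectrogram, the result is an image-like array whose columns index time; unlike a spectrogram, each column is a learned embedding rather than a frequency spectrum, which is what lets it capture attributes such as timbre and rhythm.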