PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition

Venue

arXiv:1912.10211 [cs, eess]

Publication Year

2020

Keywords

Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing

Authors

  • Qiuqiang Kong
  • Yin Cao
  • Turab Iqbal
  • Yuxuan Wang
  • Wenwu Wang
  • Mark D. Plumbley

Abstract

Audio pattern recognition is an important research topic in machine learning, and includes tasks such as audio tagging, acoustic scene classification, and sound event detection. Recently, neural networks have been applied to audio pattern recognition problems. However, previous systems have focused on small datasets, which limits their performance. In computer vision and natural language processing, systems pretrained on large datasets have generalized well to several tasks; there has been limited research, however, on pretraining neural networks on large datasets for audio pattern recognition. In this paper, we propose large-scale pretrained audio neural networks (PANNs) trained on AudioSet. We propose to use both the Wavegram, a feature learned directly from the waveform, and the mel spectrogram as input. We investigate the performance and computational complexity of a variety of convolutional neural networks. Our proposed AudioSet tagging system achieves a state-of-the-art mean average precision (mAP) of 0.439, outperforming the previous best system's 0.392. We transfer a PANN to six audio pattern recognition tasks and achieve state-of-the-art performance on many of them. Source code and pretrained models have been released.
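The abstract mentions that PANNs take a log-mel spectrogram as one input branch. A minimal NumPy-only sketch of that feature extraction is shown below; the 32 kHz sample rate matches the paper's AudioSet setup, but the window, hop, and frequency-range parameters here are illustrative assumptions, not necessarily the authors' exact configuration.

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels, fmin=50.0, fmax=14000.0):
    """Triangular mel filterbank of shape (n_mels, n_fft // 2 + 1)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Band edges equally spaced on the mel scale, mapped back to FFT bins.
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, center, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, center):          # rising slope
            fb[i - 1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):          # falling slope
            fb[i - 1, k] = (hi - k) / max(hi - center, 1)
    return fb

def log_mel_spectrogram(wav, sr=32000, n_fft=1024, hop=320, n_mels=64):
    """Frame the waveform, take the power spectrum, apply the mel filterbank."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-10)               # shape: (n_frames, n_mels)

# Example: one second of a 1 kHz tone sampled at 32 kHz.
t = np.arange(32000) / 32000.0
feat = log_mel_spectrogram(np.sin(2 * np.pi * 1000.0 * t))
```

The Wavegram branch described in the paper, by contrast, replaces this fixed filterbank with one-dimensional convolutions learned end to end from the raw waveform; the two representations are then combined as the network input.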