ICLR 2020 impressions and paper highlights
Having just "visited" my first virtual conference, ICLR 2020, I wanted to talk about my general impression and highlight some papers that stood out to me from a variety of subfields.
From June to September 2019, I took a break from my ongoing PhD and worked as a Research Intern at Spotify in London.
When neural networks need bounded outputs (such as audio in [-1, 1]), applying tanh or clipping during training can cause vanishing gradients. Instead, use a linear output during training and clip only at test time.
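The idea above can be sketched minimally as follows; the model here is a toy linear layer I made up for illustration, not any particular architecture from the posts:

```python
import numpy as np

def model_output(x, w, training=True):
    """Toy output layer: raw affine value during training,
    clipped to [-1, 1] only at test time."""
    y = x @ w  # no tanh/clip in the training path, so gradients don't saturate
    if training:
        return y                      # unbounded during training
    return np.clip(y, -1.0, 1.0)      # bound to the audio range at inference

x = np.array([[0.5, 2.0]])
w = np.array([[1.0], [1.0]])
print(model_output(x, w, training=True))   # raw value, may exceed [-1, 1]
print(model_output(x, w, training=False))  # clipped to [-1, 1]
```

The point is that the loss sees the unbounded output, so out-of-range predictions still receive a useful gradient pushing them back toward the target, whereas tanh would saturate and clipping would zero the gradient entirely.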
This year's ISMIR was as great as ever, this time featuring lots of deep learning, new datasets, and a fantastic boat tour through Paris!
In this post, I want to talk about magnitude spectrograms as inputs and outputs of neural networks, and how to normalise them to help the training process.