Monday 2 March 2020, 12.00PM
Speaker(s): Dr Sander Dieleman, Research Scientist at DeepMind, London
Realistic music generation is a challenging task. When machine learning is used to build generative models of music, typically high-level representations such as scores, piano rolls or MIDI sequences are used, that abstract away the idiosyncrasies of a particular performance.
But these nuances are essential to our perception of musicality and realism, so we instead model music directly in the raw audio domain. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.
Dr Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals.
During his PhD he also developed the Theano-based deep learning library Lasagne, won a solo gold medal in Kaggle's "Galaxy Zoo" competition, and won a team gold medal in the first National Data Science Bowl.
In the summer of 2014, he interned at Spotify in New York, where he worked on implementing audio-based music recommendation with deep learning at industrial scale.
Location: Rymer Auditorium, Music Research Centre, Campus West