Using Data and Machine Learning to Support Human Musical Practices

Wednesday 1 May 2019, 4.00PM

Speaker(s): Rebecca Fiebrink (Goldsmiths, University of London)

It’s 2019, and machine learning seems to suddenly be everywhere: playing Go, driving cars, serving us targeted advertising. Machine learning can compose new folk tunes and synthesise new sounds. What does this mean for those of us who compose or perform new music, or who create new interactions with sound? What does our future hold, besides sitting at home all day listening to algorithmically generated music after robots take our jobs?

In this talk, I’ll invite you to consider what I believe to be a more important and interesting question: How can we instead use machine learning to better support human creative activities? I’ll describe some highlights from research my students and I have done, including using machine learning and related techniques to support new approaches to musical instrument design, to enable latency-free networked musical performance and personalised audience experiences, and to enable a much broader range of people—from software developers to children to music therapists—to build new musical and sonic interactions.

I’ll discuss some of our most exciting findings about how machine learning can support human creative practices, for instance by enabling faster prototyping and exploration of new technologies (including by non-programmers!), by supporting greater embodied engagement in design, and by changing the ways that creators are able to think about the design process and about themselves.

Finally, I’ll discuss how these findings inform new ways of thinking about what machine learning is good for, how to make more useful and usable creative machine learning tools, how to teach creative practitioners about machine learning, and what the future of human creative practice might look like.


Dr Rebecca Fiebrink is a Senior Lecturer at Goldsmiths, University of London. Her research focuses on designing new ways for humans to interact with computers in creative practice.

As both a computer scientist and a musician, she focuses much of her work on applications of machine learning to music: for example, how can machine learning algorithms help people to create new musical instruments and interactions? How does machine learning change the type of musical systems that can be created, the creative relationships between people and technology, and the set of people who can create new technologies?

Fiebrink is the developer of the Wekinator, open-source software for real-time interactive machine learning, whose current version has been downloaded over 25,000 times. She is also the creator of the MOOC “Machine Learning for Musicians and Artists,” which launched in 2016 on the Kadenze platform.

She was previously an Assistant Professor at Princeton University, where she co-directed the Princeton Laptop Orchestra. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule.

She has performed with a variety of musical ensembles, including as a laptopist in Sideband and Squirrel in the Mirror, as principal flutist in the Timmins Symphony Orchestra, and as keyboardist in the University of Washington computer science rock band “The Parody Bits”. She holds a PhD in Computer Science from Princeton University.

Location: Sally Baldwin D Block | D003