Research Projects

Interactive forms of screen media present a challenge to the status quo of linear content (Ursu et al., 2008; Cesar & Geerts, 2017). From a music/soundtrack perspective, audio for nonlinear content tends to be generated by selecting among pre-recorded samples, which can lead to poor matching between the visual and audio streams and, in turn, to suboptimal audience experiences. The premise of the proposed project is that exploratory research on music creation and emotional content prediction is needed to lay part of the groundwork for moving away from pre-recorded samples in interactive screen media soundtracks, and towards systems where music is generated automatically, grounded in psychologically validated models of the perceived emotionally expressive content of music.
There are such models of the perceived emotional content of music based on symbolic music features (e.g., Eerola et al., 2009; Eerola et al., 2013), but to the best of our knowledge none has been incorporated into music creation software. There is potential, therefore, in developing a music creation system that displays – in real time – the predicted perceived emotional content of music, whether composed by humans or generated algorithmically (Burnard, 2012; Hickey, 2002).

This project has two main objectives:
Train an algorithm that predicts emotional content in real time, based on features of an in-progress melodic composition;
Explore the feasibility of implementing this algorithm in a music creation interface.

Work plan
M1: We will first integrate a note sequencer into a web-based data collection instrument, building on our existing, well-encapsulated code for a simple, web-based note sequencer. We will then recruit participants, who will be asked to create several short melodies conveying ten different expressions typically employed in composition for screen media.
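For illustration, a minimal sketch (in TypeScript) of the kind of data model such a sequencer might pass to the data collection instrument is given below; all type, field and class names are hypothetical and do not refer to the project's existing code.

```typescript
// Hypothetical sketch of the data a web-based note sequencer might hand to a
// data collection instrument. Names and fields are illustrative only.

/** A single note placed on the sequencer grid. */
interface NoteEvent {
  midiPitch: number;      // e.g. 60 = middle C
  onsetBeats: number;     // position in beats from the start of the melody
  durationBeats: number;
}

/** One participant-created melody, tagged with the expression they aimed for. */
interface MelodySubmission {
  participantId: string;
  targetExpression: string; // one of the ten expression prompts, e.g. "tension"
  tempoBpm: number;
  notes: NoteEvent[];
}

/** Minimal in-memory sequencer model; a real interface would sit on top of this. */
class SequencerModel {
  private notes: NoteEvent[] = [];

  addNote(note: NoteEvent): void {
    this.notes.push(note);
  }

  /** Package the current melody for upload to the data collection instrument. */
  toSubmission(participantId: string, targetExpression: string, tempoBpm: number): MelodySubmission {
    return { participantId, targetExpression, tempoBpm, notes: [...this.notes] };
  }
}

// Example usage
const seq = new SequencerModel();
seq.addNote({ midiPitch: 60, onsetBeats: 0, durationBeats: 1 });
seq.addNote({ midiPitch: 64, onsetBeats: 1, durationBeats: 1 });
console.log(JSON.stringify(seq.toSubmission("p001", "tension", 100), null, 2));
```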

M2: The resulting melodies will then be used as stimuli in a listening experiment where participants will rate their perceived expression.
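The output of this experiment might be organised along the following lines; the record shape and field names below are purely illustrative, and the aggregation shown (mean rating per melody and expression) is only one plausible way of preparing ground truth for M3.

```typescript
// Illustrative shape for one rating trial; field names are hypothetical.
interface ExpressionRating {
  melodyId: string;   // which stimulus from M1 was heard
  raterId: string;
  expression: string; // the expression scale being rated, e.g. "sadness"
  rating: number;     // e.g. a 1-7 Likert response
}

// One plausible ground truth for M3: the mean rating per melody and expression.
function meanRatings(ratings: ExpressionRating[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const r of ratings) {
    const key = `${r.melodyId}|${r.expression}`;
    const entry = sums.get(key) ?? { total: 0, n: 0 };
    entry.total += r.rating;
    entry.n += 1;
    sums.set(key, entry);
  }
  const means = new Map<string, number>();
  for (const [key, { total, n }] of sums) {
    means.set(key, total / n);
  }
  return means;
}
```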

M3: These ratings will then serve as ground-truth training data for an emotion recognition algorithm, which will learn systematic relationships between musical features and each of the perceived expression ratings. To quantify melodic properties, we will employ existing libraries for extracting musical features from written notes (e.g., tonality, pitch height, pitch range, regularity and tempo).
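As an illustration of the kind of feature extraction and prediction involved, the sketch below computes a few simple melodic features and applies a linear model; the feature set, model form and weights are placeholders assumed for illustration, not project results or the libraries we will actually use.

```typescript
// Sketch of symbolic feature extraction from a melody, covering a few of the
// quantities named in the work plan (pitch height, pitch range, activity).

interface NoteEvent {
  midiPitch: number;
  onsetBeats: number;
  durationBeats: number;
}

interface MelodyFeatures {
  meanPitch: number;   // average MIDI pitch (pitch height)
  pitchRange: number;  // highest minus lowest pitch
  noteDensity: number; // notes per beat, a crude activity/tempo proxy
}

function extractFeatures(notes: NoteEvent[]): MelodyFeatures {
  const pitches = notes.map(n => n.midiPitch);
  const meanPitch = pitches.reduce((a, b) => a + b, 0) / pitches.length;
  const pitchRange = Math.max(...pitches) - Math.min(...pitches);
  const lastOffset = Math.max(...notes.map(n => n.onsetBeats + n.durationBeats));
  const noteDensity = notes.length / lastOffset;
  return { meanPitch, pitchRange, noteDensity };
}

// A linear model is one of the simplest candidates for mapping features to a
// perceived-expression rating; the weights passed in would be fitted to the
// M2 ratings, and the values used below are placeholders.
function predictExpression(f: MelodyFeatures, weights: number[], bias: number): number {
  return bias + weights[0] * f.meanPitch + weights[1] * f.pitchRange + weights[2] * f.noteDensity;
}

// Example: features for a three-note melody, scored with placeholder weights.
const features = extractFeatures([
  { midiPitch: 60, onsetBeats: 0, durationBeats: 1 },
  { midiPitch: 64, onsetBeats: 1, durationBeats: 1 },
  { midiPitch: 67, onsetBeats: 2, durationBeats: 2 },
]);
console.log(predictExpression(features, [0.01, 0.02, -0.1], 0.5));
```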

M4: Subsequently, we will integrate the emotion recognition algorithm into the web-based note sequencer from M1, adding a real-time visualisation of the predicted emotional content of the notes entered into the sequencer.
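A minimal sketch of how such a real-time prediction loop could be wired into the sequencer is given below; the predictor and visualiser shown are stand-ins assumed for illustration, not the components we will build.

```typescript
// Hypothetical glue between the sequencer model and a prediction display:
// each time a note is added, re-run the (assumed) predictor and hand the
// result to whatever visualisation the interface uses.

interface NoteEvent { midiPitch: number; onsetBeats: number; durationBeats: number; }

type Predictor = (notes: NoteEvent[]) => number;         // e.g. the model from M3
type Visualiser = (predictedExpression: number) => void; // e.g. updates an on-screen meter

class LiveSequencer {
  private notes: NoteEvent[] = [];

  constructor(private predict: Predictor, private show: Visualiser) {}

  addNote(note: NoteEvent): void {
    this.notes.push(note);
    // Re-predict on every edit so the display tracks the in-progress melody.
    this.show(this.predict(this.notes));
  }
}

// Example usage with a stand-in predictor (mean pitch scaled to 0-1).
const demo = new LiveSequencer(
  notes => notes.reduce((s, n) => s + n.midiPitch, 0) / (notes.length * 127),
  value => console.log(`predicted expression level: ${value.toFixed(2)}`)
);
demo.addNote({ midiPitch: 60, onsetBeats: 0, durationBeats: 1 });
demo.addNote({ midiPitch: 67, onsetBeats: 1, durationBeats: 1 });
```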

Team
Dr Tom Collins’ (CoI) research focuses on web-based interfaces for music creation and computational music research. He will be the main point of support for M1–4.
Dr Hauke Egermann’s (CoI) research focuses on modelling emotional responses to music. He will contribute to M1, 2 and 4.
Dr Federico Reuben’s research focuses on software development for music creation. He will contribute to M1 and 4.
Dr Jez Wells’ (CoI) research focuses on signal processing and music production. He will contribute to M1 and 4.
Dr (cand) Liam Maloney (PDRA) is primarily responsible for programming the note sequencer and data collection instrument, for conducting the study and processing the data, and for the modelling work and paper drafting.

Project run time: January–December 2020.

This project receives funding from the Digital Creativity Labs.


Contact us

Music, Science and Technology Research Cluster

Email: music@york.ac.uk

Call: +44 (0)1904 322 446