Interactive Audio Systems Symposium

September 23rd, 2016, University of York, United Kingdom.

Recent advances in low-cost motion-tracking and sensing technologies, coupled with increased computing power, have paved the way for new methods of audio production and reproduction. Interactive audio systems can now readily utilise non-tactile data, such as listener location, orientation and gestural control, and even biometric feedback such as heart rate, to intelligently adjust sound output. Such systems offer new creative possibilities for a diverse range of applications, including virtual reality, mobile technologies, in-car audio, gaming, social media, and film and TV production.

On 23rd September 2016, the University of York held the first symposium dedicated to the topic of interactive audio systems. The symposium explored the perceptual, signal processing and creative challenges and opportunities arising from audio systems effected through enhanced human-computer interaction.

Proceedings

The proceedings for the 2016 Interactive Audio Systems Symposium can be found here.

Photos

Photos of the talks and demos that happened during the event can be found here.

Programme

08:45 Registration Opens
09:15 Symposium Opening
09:20 Keynote 1 - Marcus Noisternig, IRCAM - Sound in space and space in sound
Paper Session 1: Interactive Spatial Audio
10:00 An Ambisonics audio library for interactive spatial sound processing on the web - Archontis Politis
10:20 A simple algorithm for real-time decomposition of first order Ambisonics signals into sound objects controlled by eye gestures - Giso Grimm, Joanna Luberadzka, Jana Müller and Volker Hohmann
10:40 Spatial sound via cranial tissue conduction - Peter Lennox and Ian McKenzie
11:00 Coffee Break
11:30 Tutorial 1 - Chris Pike, BBC - Interactive Aspects of Object Based Audio in Broadcasting
Paper Session 2: Real-Time Interactive Sonification
12:00 Using Real-Time Sonification of Heart Rate Data to Provide a Mobile Based Training Aid for Runners - John Sanderson and Andy Hunt
12:20 Sonification playback rates during matching tasks of visualised and sonified EEG data - Michael Gavin, Rokaia Jedir and Flaithrí Neff
12:40 Lunch, Posters and Demos (studio demos begin at 13:10)
14:10 Keynote 2 - Stefania Serafin, Aalborg - Sonic Interactions in Virtual Reality
Paper Session 3: Interactive Sound Design
14:50 An Acoustic Wind Machine and its Digital Counterpart: Initial Audio Analysis and Comparison - Fiona Keenan and Sandra Pauletto
15:10 A Semantically Motivated Gestural Interface for the Control of Audio Dynamic Range - Thomas Wilson and Steven Fenton
15:30 Safe and Sound Drive: Design of interactive sounds supporting energy efficient behaviour - Arne Nykänen, Mariana Lopez and Rob Toulson
15:50 Coffee Break
16:20 Tutorial 2 - Lauri Savioja, Aalto University - Introduction to Room Acoustics Modeling and Auralization for Interactive Systems
Paper Session 4: Interactive Music Production Systems
16:50 Preliminary Investigations into Virtual Reality Ensemble Singing - Gavin Kearney, Helena Daffern, Calum Armstrong, Lewis Thresh, Haroom Omodudu and Jude Brereton
17:10 The Interactive Music Producer - Tracy Redhead
17:30 Symposium Closing

Posters

A comparison of subjective evaluation of soundscapes with physiological responses - Francis Stevens, Damian Murphy and Stephen Smith
A Filter Based Approach to Simulating Acoustic Radiation Patterns Through a Spherical Array of Loudspeakers - Calum Armstrong
Boundary element modelling of KEMAR for binaural rendering: Mesh production and validation - Kat Young, Gavin Kearney and Tony Tew
Echolocation in virtual reality - Darren Robinson and Gavin Kearney
In 3D Space, Everyone Can Hear You Scream... From All Directions - Sam Hughes
Vertical Amplitude Panning for Various Types of Sound Sources - Maksims Mironovs and Hyunkook Lee
Virtual Headphone Testing for Spatial Audio - Hugh O'Dwyer, Enda Bates and Francis Boland
Taking advantage of geometrical acoustic modeling using metadata - Dale Johnson and Hyunkook Lee
The effects of decreasing the magnitude of elevation-dependent notches in HRTFs on median plane localisation - Jade Clarke and Hyunkook Lee

Demonstrations

Spatial Audio for Domestic Interactive Entertainment - Gavin Kearney
Multi-User Virtual Acoustic Environments - Calum Armstrong and Jude Brereton
Listener-adaptive object-based stereo - Dylan Menzies
Demo of an array for adaptive personal audio and adaptive transaural reproduction - Marcos Simón
Spatial sound via cranial tissue conduction - Peter Lennox and Ian McKenzie
High spatial-resolution parametric strategies for spatial sound recording and reproduction - Archontis Politis and Ville Pulkki
Sonicules - Jude Brereton
Preliminary Investigations into Virtual Reality Ensemble Singing - Gavin Kearney, Helena Daffern, Calum Armstrong, Lewis Thresh, Haroom Omodudu and Jude Brereton
Object-based reverberation for interactive spatial audio - Philip Coleman, Philip Jackson, Andreas Franck, Chris Pike
Bela - an open-source embedded platform for low-latency interactive audio - Giulio Moro
A simple algorithm for real-time decomposition of first order Ambisonics signals into sound objects controlled by eye gestures - Giso Grimm
Spatial Audio in Video Games for Improved Player Quality of Experience - Joseph Rees-Jones