Ken is an electronic hardware, firmware and software developer, specialising in audiovisual and therapeutic research. He gained an MA with distinction in Music Technology at the Sonic Arts Research Centre in Belfast in 2004, and has since held Research Associate posts at several UK universities.
His current post is KTP Research Associate in Audio Modelling Development, a partnership between the University of York and RPPtv London, supervised by Dr Jez Wells. The project aims to develop cutting-edge audio tools for sound designers in media and games: realistic, computationally efficient real-time modelling of a wide range of environmental sonic events and soundscapes, with intuitive user-centric interface controls, accessible as a cloud service over the internet.
New computational methods for high-accuracy spectral modelling of audio (MMPAG research group, School of Arts and Creative Technologies, University of York)
Interactive Multisensory Environments [iMUSE]; contactless control of aesthetically pleasing audio/visual/tactile feedback to increase wellbeing and agency in people with disabilities (University of Sunderland)
Wearable Devices for neuro-physiological interventions (Newcastle University: Institute of Neuroscience: Human Movement Lab)
Signal-processing, machine-learning algorithms and an iOS application for classification and feedback from utterances by pre-speech infants [BabblePlay] (University of York, AudioLab / Linguistics)
Web Audio Virtual Environment Rendering [WEAVER] (University of York AudioLab and secondment to BBC Audio Research, Salford)
VR capture, rendering workflow and experimental procedures for assessing wellbeing differences between real and virtual community choirs [SINGSVR] (University of York: AudioLab / School of Arts and Creative Technologies / Stockholm Environment Institute)