This research extends Computational Auditory Scene Analysis (CASA) techniques to include spatial audio capture and rendering methods for environmental soundscapes. The information gained from this analysis will inform a procedurally driven soundscape auralisation engine capable of synthesising complex, parametrically controlled spatial sound environments.
3D CASA records spatial information (direction of arrival, trajectory and spread) for the various sound sources in a typical auditory scene and applies this information to assist with sound identification and classification (footsteps likely come from low down, aeroplane sounds from up high, and so on). This information helps to build a parametric spatial description of each auditory scene. The resulting auralisation engine will be capable of generating ecologically valid, evolving soundscapes of arbitrary length that fully encapsulate the perceptual characteristics of a given auditory scene and contain all the information a human listener needs to identify that scene.
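As a minimal sketch of the idea that spatial parameters can act as classification priors, the following Python fragment is illustrative only: the `SourceDescriptor` fields, the `ELEVATION_PRIOR` table and its values, and `rank_labels_by_elevation` are all hypothetical names and numbers invented here, not part of any existing CASA system described above.

```python
from dataclasses import dataclass

@dataclass
class SourceDescriptor:
    """Hypothetical parametric description of one source in an auditory scene."""
    label: str            # e.g. "footsteps", "aeroplane"
    azimuth_deg: float    # direction of arrival, horizontal plane
    elevation_deg: float  # direction of arrival, vertical plane
    spread_deg: float     # apparent angular width of the source

# Toy elevation prior: a typical elevation (in degrees) associated with
# each source class. The values are purely illustrative assumptions.
ELEVATION_PRIOR = {
    "footsteps": -30.0,  # low down
    "birdsong": 30.0,
    "aeroplane": 60.0,   # up high
}

def rank_labels_by_elevation(elevation_deg: float) -> list:
    """Rank candidate labels by how close their prior elevation is to
    the observed direction of arrival of a detected source."""
    return sorted(ELEVATION_PRIOR,
                  key=lambda lbl: abs(ELEVATION_PRIOR[lbl] - elevation_deg))

# A source arriving from well below the horizon is more plausibly footsteps.
print(rank_labels_by_elevation(-25.0)[0])  # -> footsteps
```

In a full system this prior would be one cue among many (spectral features, trajectory, spread), combined probabilistically rather than by nearest-neighbour lookup; the sketch only shows how a stored spatial parameter can bias identification.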