Tel: +44 (0)1904 32 2358
Fax: +44 (0)1904 32 2335
Areas of Expertise: Acoustic Modelling, Binaural Audio, Signal Processing: Audio, Spatial Audio
Tony spent the first 11 years of his career working in industry. During this period he held research and development posts in the telecommunications sector, where he designed components of the public switched telephone network. For six years he was chief electronics designer in a small company manufacturing assistive technology for people with severe disabilities. From there he moved to the Nuffield Orthopaedic Centre in Oxford where, as part of the University's Engineering Department, he became a design engineer in a multidisciplinary research team whose activities included gait analysis, orthotic device development, compliance monitoring and fracture-healing measurement. He also designed assistive technology to provide independent mobility for people with very limited voluntary movement.
Tony's interest in binaural signal processing research began with his move to York, where he has also developed a love of teaching at all levels. His experience in communications and assistive device technologies first led him to investigate algorithms for hearing aids using binaural cues, and this has developed into a co-ordinated programme of binaural research thanks to contributions over the years from many excellent students and colleagues. He has extensively investigated the measurement and use of head-related transfer functions (HRTFs) for the individualisation of binaural spatial audio signals, including techniques for their interpolation and their rapid measurement from head shape. The work on HRTF synthesis from morphological measurements continues in the form of an international collaboration with the University of Sydney. Some fascinating perceptual aspects of spatial audio are being explored in a project jointly funded by the University and a collaborating industrial partner, in which we are investigating auditory and other influences on the realism of binaural audio.
Binaural audio signal processing, binaural psychoacoustics, acoustic modelling, applied spatial hearing, perceptual listening tests, binaurally-informed digital hearing aids.
These mainly revolve around my interests in learning and teaching. For example, I am a member of the University Forum for the Enhancement of Learning and Teaching, University Teaching Committee, the University Public Lectures Committee, the Science and Society working group, and Chair of the University Learning and Teaching Projects Committee.
Publications information is available via the York Research Database
I enjoy teaching at all levels. Modules I have taught include:
I serve/have served on the following teaching-related committees
and serve/have served on the following University committees and groups
Research opportunities in binaural audio and related areas
in the Audio Lab Research Group, School of Physics, Engineering and Technology, University of York
Tony Tew, Jingbo Gao, Chris Pike, Alistair Hinde, Laurence Hobden
Our research within the Audio Lab Research Group in the School of Physics, Engineering and Technology explores and exploits the acoustics and psychoacoustics of human hearing, chiefly to deliver advances in audio technology. In particular, we are working in the areas summarised below.
1 The morphoacoustics of human hearing
Practically all external sounds reach the human auditory system via the left and right ear canals. Despite being limited to only two channels of auditory information, we are capable of determining the direction of a sound source typically to within a few degrees. Acoustically, this impressive performance can largely be accounted for by the auditory spatial cues of inter-aural time difference, inter-aural level difference and the pinna (outer ear) cues. The hearing system combines these cues to create in us a sense of the sound's direction. The cues are embedded in a family of acoustic filters known as head-related transfer functions (HRTFs). When the pressure variations from a sound source are input to a left HRTF, the output approximates the pressure variations at the left eardrum; a right HRTF produces the corresponding pressure variations at the right eardrum.
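In the time domain this filtering amounts to convolving the source signal with a pair of head-related impulse responses (HRIRs), the inverse transforms of the HRTFs. The sketch below illustrates the idea; the three-tap HRIRs are purely illustrative placeholders, not measured data, which in practice would come from a database such as SYMARE:

```python
# Illustrative sketch: binaural rendering by convolving a mono source with a
# left/right head-related impulse response (HRIR) pair. The HRIRs below are
# dummy placeholder arrays, not measured data.
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left and right HRIRs to approximate
    the pressure signals arriving at the two eardrums."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

# Toy example: an impulse "source" simply reproduces each HRIR at its ear.
mono = np.array([1.0, 0.0, 0.0])
out = render_binaural(mono, np.array([0.5, 0.3, 0.1]), np.array([0.2, 0.6, 0.2]))
```

With an impulse as input, each output channel is just the corresponding HRIR, which is a quick sanity check that the convolution is wired up correctly.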
HRTFs are created as a result of the complex shape (or morphology) of the ear flaps, the head and upper torso. With sufficient knowledge of an individual’s morphology it is possible to calculate the associated unique set of HRTFs. This is currently a difficult and computationally intensive process, but nevertheless may ultimately be easier than measuring them acoustically. Finding an efficient way to estimate individualised HRTFs is viewed by many as the key to achieving widespread exploitation of 3D personal audio. Our research is contributing to this goal in several ways.
Morphoacoustic perturbation analysis
Pioneered at York and still under development, morphoacoustic perturbation analysis (MPA) is a powerful technique for probing the relationship between human morphology and the acoustic auditory cues present in HRTFs [1]. The figure on the left, for example, shows the regions of the outer ear chiefly responsible for controlling the position of the so-called pinna notch. There are exciting research questions relating to human spatial hearing, and to the introduction of high-quality binaural audio into the consumer market, which MPA has the potential to answer.
The Sydney-York Morphological and Recording-of-Ears database
The Sydney-York Morphological and Recording-of-Ears (SYMARE) database [2, 3] is the result of more than ten years' collaboration between the Universities of York and Sydney. The database consists of high-resolution meshes describing the morphology of over 60 subjects (see the figure, above), together with measurements of their HRTFs. The SYMARE database is an unrivalled source of physiological and acoustical data for informing and validating research in spatial audio.
2 The synthesis of binaural audio
The equivalence of live and virtual spatial audio
Methods for improving the performance of consumer-oriented sound reproduction are being researched and developed internationally. In a laboratory environment it is possible to create sound reproduction systems which make it difficult to distinguish between a live listening experience and a virtual one. Through headphones, spatial sound may be created using binaural audio methods based on HRTFs. Rigorously demonstrating the equivalence of a live auditory experience and a virtual one through headphones is not trivial, because blind tests in which the listener does not know whether they are listening to the virtual or to the real experience cannot be achieved directly. We have developed several methods for tackling this problem [4, 5, 6] and these provide us with ways of performing comparisons in different situations, including those described below.
Perceptually robust simplifications
Extending high performance binaural audio from the laboratory into consumer technology has proved to be very challenging and many factors are impeding progress. A fertile area of research is identifying simplifications which can be applied to the problem space without affecting the perceptual integrity of the resulting audio. We are investigating simplifications in the measurement of morphology, the computation of HRTFs and in the HRTFs themselves. Finding suitable simplifications could finally lead to 3D audio which is effective, practical and viable outside the laboratory.
Assessing quality of experience and plausibility
An exact equivalence between real and virtual auditory experiences is hard to achieve, but often it is not required and may even be undesirable. It may be sufficient to communicate a plausible sound scene that portrays the spatial impression intended by its creator without it necessarily matching reality precisely. Indeed, virtual auditory scenes may deliberately set out to violate physical reality for artistic reasons, and in such situations it makes no sense to aim for complete realism. Creating a plausible sound scene rather than an exact one relaxes the technical constraints which need to be met. This greatly aids reproducing binaural audio in an uncontrolled environment where, for example, relatively little is known about the listener (e.g. their HRTFs) and their situation (e.g. the acoustic properties of their listening space). Through our research partnership with BBC R&D [7] we are involved in identifying the key processes necessary for achieving plausible binaural audio in such circumstances.
3 Spatially informed hearing aid algorithms
The healthy human hearing system is capable of performing well in a variety of adverse acoustic conditions. A listener who has a hearing deficit, however, even if it affects only one ear, typically finds it much more difficult to follow a conversation in the presence of competing sounds. Binaural hearing provides the auditory system with a means of distinguishing one sound from another based on their different locations. It also plays an important role in increasing intelligibility in the presence of room reverberation. We are investigating a wide variety of spatial cues and evaluating their potential for improving the intelligibility of speech in challenging acoustic environments. Our goal is to develop a binaural audio algorithm suitable for implementation in a binaural hearing aid.
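One of the spatial cues mentioned above, the inter-aural time difference (ITD), can be estimated from the lag at the peak of the cross-correlation between the two ear signals. The sketch below is illustrative only and is not the algorithm under development here; a practical hearing-aid front end would operate on band-limited signals frame by frame:

```python
# Illustrative sketch: estimating the inter-aural time difference (ITD) from
# the peak of the cross-correlation between the left- and right-ear signals.
import numpy as np

def estimate_itd(left, right, fs):
    """Return the delay (in seconds) of the right-ear signal relative to
    the left-ear signal, found at the peak of their cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

# Simulated test: delay a noise burst by 12 samples at the "right ear".
fs = 48000
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
delay = 12  # samples
left = sig
right = np.concatenate([np.zeros(delay), sig])[: len(sig)]
itd = estimate_itd(left, right, fs)  # recovers 12 / 48000 s
```

For a broadband signal like this, the cross-correlation peak sits exactly at the imposed lag, so the simulated 12-sample delay is recovered precisely.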
4 Non-visual displays for connected television
Over recent years the face of television has altered dramatically, from the provision of a small number of broadcast streams to the availability of hundreds of channels with interactive content and internet accessibility. This explosion of content has required the development of increasingly complex user interfaces. Particularly for people with visual impairments, navigating and using this greatly increased functionality is challenging. In this research, based in BBC Research and Development, we are exploring auditory methods for presenting companion content in ways that minimise disruption to additive content such as audio description.
5 Research Projects
Postgraduate research projects are available, subject to funding, in the areas outlined above. If you have a particular research topic in mind which lies somewhat outside these areas, please feel free to contact me to discuss it (firstname.lastname@example.org).
1. Tew, A. I., Hetherington, C. T., & Thorpe, J. B. (2012). Morphoacoustic perturbation analysis: principles and validation. Paper presented at Acoustics 2012, 23-27 April 2012, Nantes, France.
2. The Sydney-York Morphological and Recording-of-Ears (SYMARE) database.
3. Jin, C., Guillon, P., Epain, N., Zolfaghari, R., van Schaik, A., Tew, A. I., & Thorpe, J. (2014). Creating the Sydney York morphological and acoustic recordings of ears database. IEEE Transactions on Multimedia, 16(1), 37-46.
4. Moore, A. H., Tew, A. I., & Nicol, R. (2007). Headphone transparification: a novel method for investigating the externalisation of binaural sounds. Poster presented at the 123rd AES Convention, New York, United States.
5. Moore, A. H., Tew, A. I., & Nicol, R. (2010). An initial validation of individualized crosstalk cancellation filters for binaural perceptual experiments. Journal of the Audio Engineering Society, 58(1/2), 36-45.
6. Satongar, D., Pike, C., Lam, Y., & Tew, A. I. (2013). On the influence of headphones on localisation of loudspeaker sources. Paper presented at the 135th AES Convention, New York, United States.
7. The BBC R&D Academic Research Partnership.
Tony Tew research summary/2015-02-15