Building a novel tool to reinforce babble in pre-babbling typically developing hearing and deaf infants

Overview

Infant babble (sequences of consonants and vowels, e.g., /bababa/) is thought to underpin the development of accurate consonant production. The age at which babble begins and the extent of babble can reliably predict later progress in speech development. Can infant babble be increased through positive reinforcement and can this enhance speech development?

The core of this proposal is the development of a real-time analysis algorithm to discriminate babble from the other sounds that infants make, based on sound-pressure input from a microphone. The algorithm will respond to babble, but not to other vocalizations, visually reinforcing naturally occurring babble. We are asking for funds to develop a prototype device and to run a small pilot study to test whether hearing infants (in the first instance) can learn the connection between their babble and the visual reinforcement.

If successful, we plan to develop this as a clinical device for infant populations whose babble and first words are delayed, particularly deaf infants who receive no auditory feedback. If their babble is visually rewarded, deaf infants may produce a wider range of speech sounds and may start producing words earlier.

Results

We have created a library of files of infant vocalizations, some of which we define as 'babble' (a voiced vocalization containing a consonant) and others as 'non-babble'. These utterances were consolidated from previous studies and were re-transcribed and analyzed to ensure they were suitable for training our algorithm to detect babble. We then created distinct files in which they were classified as 'babble' or 'non-babble'.
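
The working definition above lends itself to a simple labeling rule. The sketch below is a hypothetical illustration, not the project's actual pipeline: the `label_utterance` helper and the simplified vowel set are our own assumptions, and voicing is assumed to have been established separately.

```python
# Simplified placeholder symbol set, not the real transcription alphabet.
VOWELS = set("aeiou")

def label_utterance(transcription):
    """Tag an utterance 'babble' if its transcription contains a consonant.

    Illustrates the working definition of babble as a voiced vocalization
    containing a consonant; any non-vowel speech symbol counts as a consonant
    here for simplicity.
    """
    symbols = [c for c in transcription.lower() if c.isalpha()]
    if symbols and any(c not in VOWELS for c in symbols):
        return "babble"
    return "non-babble"

print(label_utterance("bababa"))  # babble
print(label_utterance("aaa"))     # non-babble
```

A rule like this only sorts transcriptions that already exist; the real difficulty, addressed below, is deciding from the audio signal alone whether an utterance is voiced and contains a consonant.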

We began by using Global Opportunities, a self-teaching algorithm, to distinguish babble from non-babble. The Global Opportunities method reached 60% accuracy, which was promising but could not be improved upon with this method alone. We therefore applied traditional acoustic analysis techniques to the infant vocalizations, with a view to integrating the two methods to improve the accuracy of Global Opportunities in the future.

We then succeeded in writing an algorithm that identifies voiced infant vocalizations, as distinct from non-voiced (whispered) vocalizations. This also allows us to identify (and rule out as non-babble) sounds that are not produced by a human vocal tract, such as banging or other environmental noise. We have begun to develop the algorithm further to identify individual consonant types, so that, among the voiced vocalizations, only those containing consonants, and therefore qualifying as babble, are selected. We started with the identification of stops and nasals, with some success; this part of the algorithm is still under development because of the highly variable nature of infant babble.

We have now secured funding through the Early Stage Commercialisation Fund. With this funding, an iPad app is being developed that will use the voicing-detection algorithm to give infants a visual response to their voiced vocalizations.

We have also made contact with representatives of SureStart in York and Save the Children. Both have been impressed by our plans to use the app to encourage babbling in infants from low socio-economic status families, and have offered to help us with recruitment (SureStart) and with endorsement letters (Save the Children).

Outputs

Grants

  • Tamar Keren-Portnoy, University of York Early Stage Commercialisation Award: Building ‘Babble Lite’: developing a commercial game for iPad which responds to a baby’s vocalisations with stimulating visuals, £19,957

Principal Investigator

Dr Tamar Keren-Portnoy
Department of Language and Linguistic Science

Co-Investigators

Professor David Howard
Department of Electronics
david.howard@york.ac.uk

Dr Helena Daffern
Department of Electronics
helena.daffern@york.ac.uk