Does it make sense to sonify information that refers to an already audible phenomenon, such as prosodic data? To be useful, a sonification of prosody should contribute to the comprehension of paralinguistic features that might otherwise escape the listener's attention.
Within this context, this paper illustrates a modular and flexible framework for the reduction and processing of prosodic data, used to enhance the perception of a speaker’s intentions, attitudes and emotions. The model takes speech audio as input and produces MIDI and MusicXML data as output, thus allowing samplers and notation software to auralize and display the information.
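To make the pipeline concrete, the following minimal sketch illustrates one plausible core step of such a framework: mapping an extracted pitch contour (frequency in Hz per analysis frame) to MIDI note numbers and collapsing runs of identical quantized pitches into discrete notes. This is an illustrative assumption, not the author's implementation; the function names, the frame representation and the crude run-length reduction are all hypothetical stand-ins for the paper's adaptive data-reduction strategy.

```python
# Hypothetical sketch: NOT the paper's implementation. Assumes prosodic
# pitch has already been extracted as (time_seconds, frequency_hz) frames.
import math

def hz_to_midi(freq_hz: float) -> int:
    """Map a frequency in Hz to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def reduce_contour(frames):
    """Collapse consecutive frames sharing the same quantized pitch into
    single notes -- a crude stand-in for adaptive data reduction."""
    notes = []
    for t, f in frames:
        pitch = hz_to_midi(f)
        if notes and notes[-1][0] == pitch:
            notes[-1][2] = t  # extend the running note's end time
        else:
            notes.append([pitch, t, t])  # [midi_pitch, start, end]
    return notes

# Four frames reduce to two notes: A3 (57) then B3 (59).
frames = [(0.00, 220.0), (0.05, 222.0), (0.10, 247.0), (0.15, 246.0)]
print(reduce_contour(frames))  # → [[57, 0.0, 0.05], [59, 0.1, 0.15]]
```

The resulting note list could then be serialized to a Standard MIDI File or to MusicXML by a downstream module, which is where samplers and notation software would pick it up.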
The described architecture has been subjectively tested by the author over many years in compositions for solo instruments, ensembles and orchestra. Two outcomes of the research are discussed: the advantages of an adaptive strategy for data reduction, and the auditory display of the deep pitch and temporal structures underlying prosodic processing.