
Background

Below I discuss how data may be transformed from tape-recordings into summary files containing the phonological and linguistic categories and phonetic measurements that form the empirical basis of the present studies of phonetic implementation. Some readers may find the description of this transformation overly detailed, but I include it in the belief that some will find it helpful.

Some background for this discussion is necessary. An important stimulus for this work is technology. The study of the sounds and sound systems of language has benefited repeatedly from advances in speech technology. Convenient field recording and reproduction of sounds was impossible until the tape-recorder; rapid broad-band spectral analysis was impossible until the Sound Spectrograph; and consistent and precise automatic estimation of vowel formant frequencies was not attained until the development of Linear Predictive Coding.

Software advances have enabled easy, interactive data collection, analysis, and display. With mouse-driven interaction with graphic displays of data, tasks like the extraction of formant frequencies from vowel nuclei can be done at a rate of many vowels per minute, compared with the minutes required for each vowel token using earlier technologies. The continuing efforts of a community of electrical engineers have made possible relatively accurate, automatic extraction of formant and pitch contours.
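To make the formant-extraction step concrete, here is a minimal sketch of the standard LPC technique just alluded to: autocorrelation-method linear prediction via the Levinson-Durbin recursion, followed by root-solving of the predictor polynomial, whose complex root angles give candidate formant frequencies. The function names, the order-12 default, and the 90 Hz / 400 Hz pruning thresholds are illustrative assumptions, not the particular analysis software used in this work.

```python
import numpy as np

def lpc(frame, order):
    """Autocorrelation-method LPC coefficients via Levinson-Durbin."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

def formants(frame, rate, order=12):
    """Estimate formant frequencies (Hz) from one frame of speech samples."""
    a = lpc(frame * np.hamming(len(frame)), order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]              # one root per conjugate pair
    freqs = np.angle(roots) * rate / (2 * np.pi)   # pole angle -> frequency
    bands = -np.log(np.abs(roots)) * rate / np.pi  # pole radius -> 3 dB bandwidth
    keep = (freqs > 90) & (bands < 400)            # drop DC-ish and over-damped poles
    return sorted(freqs[keep])
```

Applied frame-by-frame down a recording, this kind of routine yields the automatic formant contours the text describes; real systems add pre-emphasis, continuity constraints across frames, and hand-correction.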

Hardware advances have come on at least two fronts: processing speed, and mass storage size. Computer processing speeds have reached the point where spectrograms can be calculated in near real-time on relatively affordable workstations. Thus if the scientist wishes to examine waveforms or spectrograms or formant tracks for any particular segment of speech, this is no longer a time-consuming project requiring thought and dedication to the process of data analysis itself, but just a few seconds of work. Research now becomes more a matter of immediately testing one's ideas rather than painstakingly manipulating low-level details.

The computer's capability to store and access large quantities of data has grown qualitatively, changing the kinds of work that may be done. Hours of speech can be stored on-line and accessed randomly. This allows phonetic studies to examine whole conversations, which previously were too large to be stored on a single computer.

Consider an example central to the phonological discussion above: if one phonologist could effortlessly listen to, and examine spectrograms of, numerous instances of /yr/ in natural speech for different speakers and different dialects, then the objective reality of the intuition-based claim of another phonologist (in this case, me) that this sequence is bisyllabic could be assessed with ease and certainty. Or if I wonder about the phonemic content of the word ``get'' for Jim from Chicago, I can issue a short command to a computer to locate all twelve tokens in the transcription, and in moments listen to each of the utterances twice: the [ɪ] quality common to them all suggests that it is /gɪt/, not /gɛt/. Checking the hypothesis with /ɪ/ and /ɛ/ in other environments, with other speakers, and so on, takes only moments more. Examples could be multiplied further. Phonological and phonetic research practices may be revolutionized by these technological advances.
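The ``short command'' for locating tokens could be as simple as the sketch below, which assumes a hypothetical time-aligned transcript format of one tab-separated start-time, end-time, word line per token; the format, file contents, and function name are all illustrative, not the actual tools used in this study.

```python
import csv

def find_tokens(transcript_lines, word):
    """Collect the (start, end) times of every token of `word` in a
    time-aligned transcript of start<TAB>end<TAB>word lines."""
    hits = []
    for start, end, w in csv.reader(transcript_lines, delimiter="\t"):
        if w.lower() == word.lower():
            hits.append((float(start), float(end)))
    return hits

# A tiny made-up transcript fragment:
demo = ["12.40\t12.61\tget", "15.02\t15.20\tthe", "33.75\t33.96\tget"]
print(find_tokens(demo, "get"))  # [(12.4, 12.61), (33.75, 33.96)]
```

Each (start, end) pair can then be handed to an audio player to audition the token, which is all the random-access listening described above requires.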


Thomas Veatch 2005-01-25