Visualization of Big Data Phonetics

Long paper
Authorship
  1. William Kretzschmar

    Department of English - University of Georgia

  2. Joey Stanley

    Department of English - University of Georgia

Work text

We received funding from the US National Science Foundation (NSF) to conduct automated formant extraction of stressed vowels as they occur in sixty-four audio interviews conducted in the Southern US, about 370 hours of speech. The resulting data set contains about 2 million vowel tokens. After extraction, we processed the formants in order to reveal their distributional patterns, both overall in the South and in a number of regional and social categories of speakers. Our Big Data phonetics thus represents a challenge to generalizations made on the basis of smaller data sets with few tokens per speaker. Now, with automated tools, we can record complete sets of formants from a data set instead of just a few examples, and our observation of how phonetic realizations actually occur for individuals and in populations can reform traditional understandings of phonetic systems.

We have found that the distributional patterns in our phonetic data follow the predictions of complexity science: realizations of each vowel in each group will occur in a nonlinear pattern where a few types of realizations are very common, some types are moderately common, and most types are rare (see Kretzschmar 2009, 2015). These distributions, call them A-curves, are scale-free, in that the same A-curve pattern will appear in every subset of the data, whether for the overall set, or for any regional or social subset, or for any individual. Our very large data set, Big Data in a humanities setting, shows that the complex system of human speech, generated by massive numbers of interactions between speakers, creates reliable distributional patterns that linguists can use to make better predictions about how language is produced and perceived in populations.
In this paper, we describe a new tool for visualization of all of our Big Data phonetic results, called the Gazetteer of Southern Vowels (GSV), available to the public at http://lap3.libs.uga.edu/u/jstanley/vowelcharts/. The site was built in Shiny (https://shiny.rstudio.com/), a web application framework for R. With Shiny, users can draw on the computational power of the R programming language without having to learn R or install it on their computers. Shiny is designed for building interactive web apps, and GSV exploits the framework to supply traditional F1/F2 plots of our phonetic data, as well as point-pattern F1/F2 plots that take advantage of a standard tool of spatial analysis (GIS). A key feature of GSV is its range of user-selected display options, applied to user-selected vowel types in specific environments and used to display results from user-selected groups of speakers. GSV works with the 2 million vowel measurements from the forced alignment and automatic formant extraction of our NSF grant. Our modifications of the forced alignment and automatic formant extraction processes (see Evanini 2009, Rosenfelder et al. 2011, Reddy and Stanford 2015) are described in Olsen et al. 2017, but our methods are not the subject of this paper.
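To make the framework concrete, the following is a minimal sketch, not the GSV source, of how a Shiny app can expose user-selected vowels and speaker groups over a token table; the file name and the columns vowel, state, F1, and F2 are assumptions for illustration:

    # Minimal Shiny sketch (not the actual GSV code). The CSV file and
    # its column names (vowel, state, F1, F2) are assumptions.
    library(shiny)
    library(ggplot2)

    tokens <- read.csv("vowel_tokens.csv")  # hypothetical token extract

    ui <- fluidPage(
      selectInput("vowel", "Vowel class", choices = unique(tokens$vowel)),
      selectInput("state", "Speaker state", choices = unique(tokens$state)),
      plotOutput("f1f2")
    )

    server <- function(input, output) {
      output$f1f2 <- renderPlot({
        d <- subset(tokens, vowel == input$vowel & state == input$state)
        # F1/F2 plots are conventionally drawn with both axes reversed
        ggplot(d, aes(x = F2, y = F1)) +
          geom_point(alpha = 0.3) +
          scale_x_reverse() +
          scale_y_reverse()
      })
    }

    shinyApp(ui, server)

Because the filtering and plotting run in R on the server, the user manipulates the data through form controls alone, which is what lets GSV offer R's computational power without requiring R itself.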

The traditional F1/F2 plots of GSV are user-adjustable and come with associated counts and statistical information. Fig 1 shows a plot of the FLEECE and KIT vowels from eight speakers in Georgia, with ellipses for the 95% confidence intervals and means marked with labels for the vowels. Fig 2 shows the same data, now with the individual tokens displayed as points. The statistical information is shown in Fig 3.

Figure 1

Figure 2

Figure 3
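Plots of this kind are straightforward to reproduce with standard R tools. The sketch below, in which the data frame ga and its columns vowel, F1, and F2 are assumed names rather than GSV internals, layers tokens, 95% confidence ellipses, and labelled means in the manner of Figs 1 and 2:

    # Sketch of a traditional F1/F2 plot; the data frame `ga` and its
    # columns (vowel, F1, F2) are assumed names, not GSV internals.
    library(ggplot2)
    library(dplyr)

    means <- ga %>%
      group_by(vowel) %>%
      summarise(F1 = mean(F1), F2 = mean(F2))

    ggplot(ga, aes(x = F2, y = F1, colour = vowel)) +
      geom_point(alpha = 0.2) +                       # individual tokens (Fig 2)
      stat_ellipse(level = 0.95) +                    # 95% confidence ellipses
      geom_label(data = means, aes(label = vowel)) +  # vowel label at the mean
      scale_x_reverse() +
      scale_y_reverse()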

These figures show the difference that Big Data can make for phonetics. Before automatic formant extraction, it was normal for phoneticians to make perhaps 10 measurements of a vowel and then to take the mean of those measurements as the target location for realization of that vowel. With Big Data we have hundreds of tokens instead of 10, their distributional patterns are far more widely dispersed, and the mean location, while we can still calculate it, no longer appears to represent a very useful generalization about the realization of a vowel.
Figure 4

Figure 5

With GSV, we can drill down into such distributions at will. Fig 4 shows the tokens from just one of the Georgia speakers. Fig 5 shows the tokens from the same speaker, now only in environments before a stop consonant. Unstressed tokens can be included at will. As Figs 4 and 5 illustrate, the means differ for the different environments for this speaker, and similar differences occur for each speaker and between all the speakers in the database. The flexibility of the visualization calls into question easy assumptions about targets and the amount of variation for phonetic segments.
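Computationally, the drill-down amounts to filtering the token table before plotting. A sketch follows, in which the speaker ID and the columns speaker and fol_seg (following segment) are assumptions standing in for whatever GSV uses internally:

    # Sketch of the drill-down as a filtering operation; the speaker ID
    # and the column names speaker and fol_seg are assumptions.
    library(dplyr)

    stops <- c("P", "B", "T", "D", "K", "G")  # ARPABET stop consonants

    one_speaker <- filter(ga, speaker == "GA_01")        # one Georgia speaker (Fig 4)
    pre_stop <- filter(one_speaker, fol_seg %in% stops)  # pre-stop tokens only (Fig 5)

    # Compare means across the two environments for this speaker
    one_speaker %>%
      group_by(is_prestop = fol_seg %in% stops) %>%
      summarise(mean_F1 = mean(F1), mean_F2 = mean(F2), n = n())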
The point-pattern displays of GSV address the usefulness of statistical means and confidence intervals. Fig 6 shows the Georgia tokens for FLEECE, Fig 7 the tokens for KIT.

Figure 6

Figure 7

The ellipses have been retained, but the new elements consist of a square grid applied to the F1/F2 space, with shading by density. The darkest grid units have the densest occurrence of tokens, down through unshaded units with the lightest density (units with no tokens are not included in the display). Figs 6 and 7 show that there are several units with high density, not a single target unit, and that the means of the ellipses do a poor job of representing how people usually realize their vowels. As with the traditional plots, GSV makes it possible to drill down to single speakers and environments with the point-pattern plots.
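The grid-and-shading display corresponds to two-dimensional binning of the F1/F2 plane. A sketch with ggplot2's geom_bin2d for tokens of a single vowel class, where the 50 Hz grid size is our assumption rather than the GSV setting:

    # Sketch of a point-pattern density grid for one vowel class; the
    # 50 Hz grid size is an assumed value, not the GSV setting.
    library(ggplot2)

    ggplot(ga, aes(x = F2, y = F1)) +
      geom_bin2d(binwidth = c(50, 50)) +  # square grid over F1/F2 space;
                                          # empty bins are simply not drawn
      scale_fill_gradient(low = "grey90", high = "black") +  # darker = denser
      stat_ellipse(level = 0.95, colour = "red") +  # retain the ellipse
      scale_x_reverse() +
      scale_y_reverse()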
In addition to the speaker and vowel summaries provided with the traditional plots, the point-pattern visualization provides charts of the frequency rankings of the grid units. Fig 8 shows the ranking for the Georgia FLEECE tokens, and Fig 9 for the KIT tokens. The rankings both clearly form A-curves, and this status is confirmed by the Gini Coefficient at top right in each figure. As Kretzschmar (2009) discussed, Gini Coefficients for normal distributions are always below 0.2, while those for A-curve, nonlinear distributions are always much higher. The coefficients here, near 0.5, match what we expect to find in our Big Data phonetics results. The point-pattern visualization has thus proven, with every chart at every level of scale in the dataset, that the phonetic realizations of human speech match the prediction of complexity science for nonlinear A-curves.

Figure 8

Figure 9
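The ranking and the Gini Coefficient can be reproduced directly from the grid counts. In the sketch below, the bin ranges, the 50 Hz grid size, and the hand-rolled Gini function are our assumptions, standing in for however GSV computes them:

    # Sketch of ranking grid units and computing a Gini coefficient over
    # their token counts; bin ranges and grid size are assumed values.
    gini <- function(x) {
      x <- sort(x)
      n <- length(x)
      # ratio form of the standard Gini formula over sorted values
      sum((2 * seq_len(n) - n - 1) * x) / (n^2 * mean(x))
    }

    # Count tokens per 50 Hz x 50 Hz grid unit
    counts <- as.vector(table(cut(ga$F1, seq(200, 1200, 50)),
                              cut(ga$F2, seq(500, 3500, 50))))
    counts <- counts[counts > 0]  # units with no tokens are excluded

    # Ranked frequencies: an A-curve has a few very common units, some
    # moderately common ones, and a long tail of rare ones
    plot(sort(counts, decreasing = TRUE), type = "h",
         xlab = "Grid units ranked by frequency", ylab = "Tokens per unit")
    gini(counts)  # near 0.5 for our data; below 0.2 for a normal distribution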

We look forward to more and better digital visualizations for our Big Data phonetics, such as a new visualization of 3D phonetic space over time, now in development and shown in Figure 10.
Figure 10

The visualization looks something like a sheaf (say, of wheat stems), in which each stem shows either a relatively straight or a curvilinear movement over time. We use five measurements for this purpose (at 20%, 35%, 50%, 65%, and 80% of the duration of the vowel, as automatically extracted) and plot a line between the measurements. Here, the visualization shows 25 tokens of /i/ in the word "three" and 25 tokens of /ɪ/ in the word "six", in blue and green respectively. The base of the visualization is a normal F1/F2 plot, so it shows that the origins of the tokens are relatively dispersed and that the blue origins overlap with the green. Over time (the five measurements appear clearly in the leftmost green vowel) many of the tokens are straight, but some are quite bent, representing either diphthongal realizations or perhaps measurement errors. When we have fully developed the sheaf method, we will be able to see and evaluate how duration affects the realization of vowels. The visualization of phonetic duration in 3D will be an essentially new way of looking at phonetic data.
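As a sketch of how such a sheaf can be drawn with existing R tools (the plotly package here, with a long-format frame traj and its columns token_id, vowel, pct, F1, and F2 all being assumed names, not our production code):

    # Sketch of a 3D sheaf of formant trajectories; `traj` holds one row
    # per token per timepoint (pct = 20, 35, 50, 65, 80), all assumed names.
    library(plotly)

    plot_ly(traj, x = ~F2, y = ~F1, z = ~pct,
            color = ~vowel, colors = c("blue", "green"),
            split = ~token_id,          # one line per vowel token
            type = "scatter3d", mode = "lines")

In such a display, straight stems correspond to monophthongal realizations, and bent stems to the diphthongal tokens (or measurement errors) described above.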
This challenge to traditional notions in phonetics would not be possible without effective digital methods. Part of the story is forced alignment and automatic formant extraction, which make it possible to generate Big Data in phonetics. An equally important form of digital assistance is the visualization tool, GSV, as prepared with Shiny, which allows the analyst to see how Big Data is distributed in F1/F2 space. Traditional F1/F2 plots can begin the analysis, and the innovative implementation of a spatial analysis technique in phonetic space offers a striking new interpretation of the way that vowels are realized.

Bibliography

Evanini, K. (2009). The permeability of dialect boundaries: A case study of the region surrounding Erie, Pennsylvania. Dissertation, University of Pennsylvania.

Kretzschmar, W. A., Jr. (2009). The Linguistics of Speech. Cambridge: Cambridge University Press.

Kretzschmar, W. A., Jr. (2015). Language and Complex Systems. Cambridge: Cambridge University Press.

Kretzschmar, W. A., Jr., Kretzschmar, B. and Brockman, I. (2013). Scaled measurement of geographic and social speech data. Literary and Linguistic Computing 28: 173-187.

Olsen, R. M., Olsen, M., Stanley, J. A., Renwick, M. and Kretzschmar, W. A., Jr. (2017). Methods for transcription and forced alignment of a legacy speech corpus. Proceedings of Meetings on Acoustics 30, 060001. doi: http://dx.doi.org/10.1121/2.0000559.

Reddy, S. and Stanford, J. (2015). Toward completely automated vowel extraction: Introducing DARLA. Linguistics Vanguard.

Rosenfelder, I., Fruehwald, J., Evanini, K. and Yuan, J. (2011). FAVE (Forced Alignment and Vowel Extraction) program suite. http://fave.ling.upenn.edu.
