Cultivating the Public Philosophy Journal

poster / demo / art installation
Authorship
  1. Christopher P. Long

    Pennsylvania State University

  2. Mark Fisher

    Pennsylvania State University

  3. Dean Rehberger

    Michigan State University

Work text

The orthography/identity hypothesis proposes that a speaker's motivation for selecting between available orthographic variants in a language (e.g. between British colour and American color in English) is to some extent informed by the speaker's desire to express a certain identity [1, 2, 3]. In the Canadian context, where a mixture of American and British variants is used, often with non-categorical preference [4, 5], Heffernan et al. [6] developed a method to quantify the orthography/identity connection in terms of ideology, showing that during periods of increased "anti-Americanism", specifically during unpopular American-led wars, American variant use declines relative to British. Heffernan et al.'s data cover the years 1921 to 2004 and are derived from the student newspaper The Gateway at the University of Alberta in Alberta, Canada. Their method involved locating expressions of national sentiment for each year of the data, rating "anti-American sentiment" on a 7-point Likert scale (255 ratings over 85 years, performed by each author), and correlating these ratings with the relative frequencies of 15 orthographic variables (Table 2, though color / colour is my addition). The negative correlation obtained was quite high: Pearson's r = -0.715, p = 0.001.
However, follow-up work by the present author, using data from the same timeframe drawn from the archive of the University of British Columbia's student newspaper The Ubyssey (~50 million words) in the neighbouring province of British Columbia, failed to find similar short-term diachronic changes in variant use correlated with periods of increased "anti-American sentiment" [7]. Following Heffernan et al.'s method (a sketch of the correlation test follows), an insignificant correlation was obtained: Pearson's r = -0.434, p = 0.064. Historical relative orthographies do differ between Canada's provinces [8, 9, 10] but, assuming the strong connection between orthography and identity, no clear explanation remains for the lack of correlation in other Canadian data. Without dispensing with the orthography/identity hypothesis, I hypothesize that proximal linguistic contexts are also motivating factors in variant selection, and propose to integrate a more context-sensitive model into this top-down, language-external theory of linguistic identity performance.
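For concreteness, the correlation test shared by both studies can be sketched as follows. The aggregation into yearly sentiment means and variant shares is my assumption about the pipeline, and scipy.stats.pearsonr is used only as a standard implementation of Pearson's r; none of the names below come from either study.

    # A minimal sketch of the sentiment/orthography correlation described
    # above, assuming ratings and variant counts are already aggregated by
    # year. All names are illustrative.
    from scipy.stats import pearsonr

    def american_share(american_count, british_count):
        """Relative frequency of the American variant in one year's data."""
        total = american_count + british_count
        return american_count / total if total else float('nan')

    def sentiment_orthography_correlation(sentiment_by_year, share_by_year):
        """Correlate mean yearly anti-American sentiment (7-point Likert)
        with the yearly share of American variants. Returns (r, p)."""
        return pearsonr(sentiment_by_year, share_by_year)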
To test for contextual differences, I treated the problem as one of word sense disambiguation, where the goal is to distinguish lexemes using a set of features and a computational language model, a technique most often used to distinguish between ambiguous meanings of homonyms (such as judge, bank, bow, etc.) [11]. The features used were a window of words surrounding each variant (8 words on either side of the target was found optimal, excluding other instances of the variables if present; see the sketch below), and the model was Naïve Bayes. If orthographic variants can be discriminated based on surrounding context, we can assume that those words are in some way unique, with the interesting implication, in the extreme case, that spelling variants might diverge not just orthographically but semantically as well [12]: maybe they mean different things. My experiments attempted to disambiguate the variants in each variable from one another using both unsupervised and supervised classification, in both cases with a Naïve Bayes model.
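A minimal sketch of the context-window features just described, assuming a flat token list; the function name and tokenization are illustrative, not drawn from the study's code.

    # Bag-of-words features from `window` tokens on either side of position
    # i, skipping other instances of the variable's variants, as described
    # above.
    def context_features(tokens, i, window=8, variants=frozenset()):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        return {w.lower(): True for w in context if w.lower() not in variants}

    # e.g. context_features('a jet black colour monitor on sale'.split(), 3)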
Though Naïve Bayes makes the linguistically improbable assumption of feature independence, it has been noted for its precision in classification problems in spite of this simplification (i.e. its 'naiveté') [13]. For unsupervised classification I used my own Python implementation of a Naïve Bayes classifier whose parameter estimates are learned through Expectation Maximization (EM), as described in e.g. Manning and Schütze [14]; a sketch of this approach appears after Table 1. As Pedersen [15] observes, evaluating the results of unsupervised classification is complicated by the fact that the algorithm does not assign labels to inputs but instead clusters them; accuracy can, however, be represented as the proportion of the dominant variant in each cluster. My classifier outputs two 'sense groups', which would ideally correspond to the American and Canadian variants. After performing 10 trials and averaging the results, I found that only three variables out of 16 (Heffernan et al. exclude color / colour; I include it) produced significant results, in that their prediction accuracies departed from, or improved upon, their 'lower bound' accuracies, where the lower bound for a variant is its relative frequency, i.e. the accuracy one would achieve simply by assigning tokens to categories according to each variant's rate of occurrence. For brevity, only these three variables are presented in Table 1.
Table 1. Unsupervised classification accuracy for the three significant variables.

American variant   accuracy   lower bound   Canadian variant   accuracy   lower bound
color              55.5%      54%           colour             65.6%      46%
gray               23.2%      17%           grey               86.2%      83%
jewelry            51.3%      49%           jewellery          55.4%      51%
Unsupervised classification for colour improves accuracy by 19.4 percentage points over its lower bound, but the increases for the other variables are marginal and, like colour, generally apply to one variant only.
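The unsupervised set-up can be sketched roughly as follows: a two-cluster multinomial Naïve Bayes mixture fit by EM, with add-one smoothing, plus Pedersen-style evaluation by the dominant variant per cluster. This is a minimal illustration under my own assumptions, not the author's implementation.

    import math
    import random
    from collections import Counter, defaultdict

    def em_naive_bayes(docs, n_senses=2, n_iter=30, seed=0):
        """Cluster bags of context words into n_senses 'sense groups' by EM
        over a multinomial Naive Bayes mixture (add-one smoothing)."""
        rng = random.Random(seed)
        vocab_size = len({w for d in docs for w in d})
        # Start from random soft assignments (responsibilities).
        resp = []
        for _ in docs:
            row = [rng.random() + 0.1 for _ in range(n_senses)]
            resp.append([x / sum(row) for x in row])
        for _ in range(n_iter):
            # M-step: re-estimate priors and word weights from soft counts.
            priors = [sum(r[k] for r in resp) / len(docs)
                      for k in range(n_senses)]
            word_w = [defaultdict(float) for _ in range(n_senses)]
            totals = [0.0] * n_senses
            for r, d in zip(resp, docs):
                for w in d:
                    for k in range(n_senses):
                        word_w[k][w] += r[k]
                        totals[k] += r[k]
            # E-step: recompute responsibilities under the new parameters.
            for i, d in enumerate(docs):
                log_post = []
                for k in range(n_senses):
                    lp = math.log(priors[k])
                    for w in d:
                        lp += math.log((word_w[k][w] + 1.0) /
                                       (totals[k] + vocab_size))
                    log_post.append(lp)
                peak = max(log_post)
                post = [math.exp(lp - peak) for lp in log_post]
                resp[i] = [p / sum(post) for p in post]
        return [r.index(max(r)) for r in resp]  # hard cluster ids

    def cluster_accuracy(cluster_ids, variant_labels):
        """Accuracy as the share of the dominant variant in each cluster."""
        groups = defaultdict(list)
        for c, lab in zip(cluster_ids, variant_labels):
            groups[c].append(lab)
        return {c: Counter(g).most_common(1)[0][1] / len(g)
                for c, g in groups.items()}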
For supervised classification I used the Naïve Bayes classifier in the Python library Natural Language Toolkit (NLTK) [16], a method similar to Mahowald's [17] recent study in which y- and th- pronouns were disambiguated in a corpus of Shakespeare's plays based on context (a sketch of this set-up follows Table 2). Whereas unsupervised classification performed poorly, supervised classification obtained surprising accuracy for multiple variables after 10 validation trials. These results are summarized in Table 2, ranked by accuracy, with an asterisk denoting significance at the p = 0.001 level. In this experiment, the lower bound for each variable is 50%, because a random subset of the tokens was evaluated and the counts were set equal for each variant during testing.
Table 2. Supervised classification accuracy by variable (* significant at p = 0.001).

variable                    accuracy   total count
jewelry / jewellery         81.6%*     564
gray / grey                 79.3%*     3112
color / colour              74.5%*     5312
program / programme         70.1%*     5704
honor / honour              62.0%*     2538
enrollment / enrolment      61.2%*     2086
humor / humour              61.1%*     2862
neighbor / neighbour        60.7%*     494
defense / defence           58.6%*     5640
judgment / judgement        56.8%      928
offense / offence           56.3%*     1568
centered / centred          55.7%      488
marvelous / marvellous      55.6%      316
fulfill / fulfil            54.7%      312
labeled / labelled          54.4%      270
kilometers / kilometres     40.0%      72
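As referenced above, the supervised set-up can be sketched like this. NaiveBayesClassifier.train, nltk.classify.accuracy, and show_most_informative_features are NLTK's actual interfaces; the balancing and 90/10 splitting are my assumptions for illustration.

    import random
    import nltk

    def run_trials(instances, n_trials=10, train_frac=0.9, seed=0):
        """Average NLTK Naive Bayes accuracy over random train/test splits.

        `instances` is a list of (feature_dict, variant_label) pairs, e.g.
        built with a context-window extractor like the one sketched earlier,
        balanced across variants so the chance baseline is 50%."""
        rng = random.Random(seed)
        scores = []
        for _ in range(n_trials):
            rng.shuffle(instances)
            cut = int(train_frac * len(instances))
            classifier = nltk.NaiveBayesClassifier.train(instances[:cut])
            scores.append(nltk.classify.accuracy(classifier, instances[cut:]))
        # The trained model can also report its most useful features, as
        # discussed below: classifier.show_most_informative_features(10)
        return sum(scores) / n_trials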
It would seem that we are able to predict, sometimes with high accuracy, whether certain variables will realize their American or British variant based on context. But why? If orthographic variants are simply different graphic forms of the same lexeme, decided rather capriciously by an American lexicographer in the nineteenth century [18], why should this be possible?
The Naïve Bayes module in NLTK can report the features most useful in making its decisions, which helps answer this question. For gray / grey the case is clear: the terms most likely to indicate British grey are Pt. and Point (Point Grey is the name of the land on which the University of British Columbia lies), while terms indicating American gray are proper nouns like Bob, John, and Stuart (Gray is a common surname). This is a revealing result, but only insofar as it reveals a highly restrictive context in non-compositional forms, and it might suggest that this variable be excluded from further testing. Contexts for color / colour are more interesting, however, and fall into two broad subjective categories, 'cultural' and 'technological' (Table 3).
Table 3. Most informative features for color / colour by category.

variant   category        informative features
colour    cultural        diversity, women, people, racism, queer
color     cultural        people
colour    technological   connected, jet, print, modem, monitor
color     technological   cartoons, TV
As a collocation analysis reveals (a sketch of the count follows), in the 'cultural' category phrases such as women of colour, people of colour, and queers of colour occur often with colour (225 total instances, its most frequent collocate) but hardly ever with color (11 instances). In the 'technological' category, computer terms appear with colour and entertainment terms with color. These terms are often found in advertisements, and the site of the transactions tends to be local for colour and global for color (a local transaction for a colour monitor, at least prior to the expansion of current global markets, but the international consumption of color television). However, these phrases are easily recognizable as historically specific (to roughly the post-1980 period). Indeed, the unsupervised classification of colour backs up this historical selectivity: significantly more of the items grouped at 65.6% accuracy are from this period. Context and history are intertwined, of course, and it exceeds my scope to disambiguate them here. But the more ahistorical distribution of terms predicting jewelry / jewellery suggests that historical clustering is not inevitably the rule: local activities like piercing and repairs, and localities denoted by West and Point (i.e. the location of a shop in West Point Grey), predict British jewellery, while generic sales terms like accessories, fine, place, and giftware predict American jewelry. In sum, advertisements, or more generically 'solicitations', are the dominant vehicle of these variants, and they prefer the British variant when the activity is local (both economically and socially; these categories will be further described) and the American variant generally. I will also further discuss how accounting for genre affects classification accuracy. Overall, British variants are more uniquely contextualized, and therefore more easily discriminated, than American ones.
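A minimal sketch of that collocation count, assuming a flat token list; the head-word inventory and function names are illustrative, not the author's code.

    from collections import Counter

    def of_variant_counts(tokens, heads=('women', 'people', 'queers'),
                          variants=('color', 'colour')):
        """Count HEAD + 'of' + VARIANT trigrams such as 'people of colour'."""
        toks = [t.lower() for t in tokens]
        counts = Counter()
        for a, b, c in zip(toks, toks[1:], toks[2:]):
            if a in heads and b == 'of' and c in variants:
                counts[(a, c)] += 1
        return counts

    # e.g. of_variant_counts('artists and people of colour in film'.split())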
Qualitative sociolinguistic approaches like Heffernan et al.'s (2010) locate identity as an exterior motivating condition for language, with the necessary assumption that orthography is selected independently of linguistic context. This paper finds that the assumption does not hold; yet the ability to disambiguate orthographic variants based on context, while interesting, is not explanatory in its own right. These contexts are themselves motivated, and computational techniques take us full circle, back to considering ideological, but more interactional, motivations for linguistic context.
References

1. Lipski, J. (1975). Orthographic variation and linguistic nationalism. La Monda Lingvo-Problemo 6. 37–48.
2. Schieffelin, B. B., and Doucet, R. C. (1994). The 'real' Haitian Creole: Ideology, metalinguistics and orthographic choice. American Ethnologist 21. 176–200.
3. Sebba, M. (2000). Orthography and ideology: Issues in Sranan spelling. Linguistics 38. 925–948.
4. Chambers, J. K. (2011). 'Canadian dainty': The rise and decline of Briticisms in Canada. In Legacies of Colonial English: Studies in Transported Dialects, 224–241. Cambridge: Cambridge UP.
5. Pratt, T. K. (1993). The hobgoblin of Canadian English spelling. In S. Clarke (ed.), Focus on Canada, 45–64. Amsterdam: Benjamins.
6. Heffernan, K., Borden, A., Erath, A. C., and Yang, J.-L. (2010). Preserving Canada's 'honour': Ideology and diachronic change in Canadian spelling variants. Written Language and Literacy 13(1). 1–23.
7. Grue, D. (forthcoming). Testing Canada's 'honour': Does orthography index ideology? Strathy Student Working Papers on Canadian English.
8. Brinton, L. and Fee, M. (2001). Canadian English. In John Algeo (ed.), The Cambridge History of the English Language, vol. 6, 422–439. Cambridge: Cambridge UP.
9. Ireland, R. J. (1979). Canadian spelling: An empirical and historical survey of selected words. Ph.D. dissertation, York University, Toronto.
10. Ireland, R. J. (1980). Canadian spelling: How much British? How much American? English Quarterly 12(4). 64–80.
11. Pedersen, T. (2002). A baseline methodology for word sense disambiguation. Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics. 126–135.
12. Miller, G. A. and Charles, W. G. (1991). Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1). 1–28.
13. Abney, S. (2008). Semisupervised Learning for Computational Linguistics. New York: Chapman & Hall.
14. Manning, C. D. and Schütze, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, Mass.: MIT Press.
15. Pedersen, T. (1998). Learning Probabilistic Models of Word Sense Disambiguation. Ph.D. dissertation, Southern Methodist University, University Park, Texas.
16. Bird, S., Klein, E., and Loper, E. (2009). Natural Language Processing with Python. Sebastopol, CA: O'Reilly Media.
17. Mahowald, K. (2012). A Naïve Bayes classifier for Shakespeare's second-person pronoun. Literary and Linguistic Computing 27(1). 17–23.
18. Webster, N. (1846). A Dictionary of the English Language; Abridged from the American Dictionary. New York: Huntington and Savage.


Conference Info


ADHO - 2014
"Digital Cultural Empowerment"

Hosted at École Polytechnique Fédérale de Lausanne (EPFL), Université de Lausanne

Lausanne, Switzerland

July 7, 2014 - July 12, 2014

377 works by 898 authors indexed

XML available from https://github.com/elliewix/DHAnalysis

Conference website: https://web.archive.org/web/20161227182033/https://dh2014.org/program/

Attendance: 750 delegates according to Nyhan 2016

Series: ADHO (9)

Organizers: ADHO