On phonologisation

Linguistics
Author

Stefano Coretta

Published

April 24, 2021

After the post on the definition of random effects, I thought about writing another one on the definition of phonologisation.

As part of my PhD thesis on the effect of consonant voicing on vowel duration, I briefly reviewed the five definitions of phonologisation I could find in the literature. This post includes text from my thesis and expands on a few points.

The main take-away of the sections to follow is that what one means by phonologisation is (unsurprisingly) dependent on the set of assumptions of the framework within which the term is used. Moreover, these definitions are mutually exclusive, to the extent that an argument based on one definition might be inappropriate under another. It is thus important to always contextualise the term when employing it, even when its meaning might seem self-evident from the context.

1 Five definitions of phonologisation

I could identify at least five different definitions of phonologisation (and there are surely more). As mentioned above, these differ substantially and are for the most part incompatible with one another. The five definitions are found within the following general phonological frameworks:

  • Structuralism (i.e. classical/traditional phonology).
  • Lexical Phonology.
  • Stratal Optimality Theory.
  • Life-Cycle of Phonological Processes (an extension of Stratal OT).
  • Exemplar-based models.

I will discuss each of these in turn.

2 Structuralism

The classical or structuralist definition states that phonologisation occurs when a contextual allophone becomes contrastive, or in other words it becomes a phoneme (Kiparsky 2015), generally after the disappearance or replacement of the conditioning context.

A classic example of phonologisation concerns the development of a contrast between velar and palatal consonants from velar consonants in Sanskrit (Hock 1991: 149). At some point in the history of Sanskrit, the velar stops /k/ and /g/ (which derived from PIE plain and labialised velars) were palatalised when followed by /i/ and /e/, creating an allophonic distinction between velars proper and palatal consonants.

The subsequent change of /e/ (and /o/) to /a/ removed the context conditioning palatalisation (the front vowel /e/), thus creating minimal pairs opposing /ka, ga/ and /tʃa, dʒa/. At this stage, the palatal allophones were phonologised.

sound change     phonemic     phonetic
—                /ka, ke/     [ka, ke]
palatalisation   /ka, ke/     [ka, tʃe]
/e/ > /a/        /ka, tʃa/    [ka, tʃa]

The IE roots for ‘what’ and ‘and’ illustrate the phonologisation of the palatal consonants.

  • PIE *kʷod > Skt. /kad/ kád ‘what’ (cf. Lat. quod).
  • PIE *-kʷe > Skt. /ke/ [tʃe] > /tʃa/ -ca ‘and’ (cf. Lat. -que).
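The two-stage development above can be sketched as a pair of ordered string-rewrite rules (a toy illustration: the forms are simplified and the rules are stated only for the segments at hand):

```python
def palatalise(form):
    """Allophonic rule: palatalise velars before the front vowels /e, i/."""
    for velar, palatal in [("ke", "tʃe"), ("ki", "tʃi"), ("ge", "dʒe"), ("gi", "dʒi")]:
        form = form.replace(velar, palatal)
    return form

def merge_e_to_a(form):
    """Later merger of /e/ (and /o/) with /a/, which removes the trigger."""
    return form.replace("e", "a").replace("o", "a")

for phonemic in ["ka", "ke"]:
    phonetic = palatalise(phonemic)   # allophonic stage
    merged = merge_e_to_a(phonetic)   # after the merger
    print(phonemic, "->", phonetic, "->", merged)
# "ka" surfaces as [ka] throughout, while "ke" surfaces first as [tʃe]
# and then as [tʃa]: the /ka/ vs /tʃa/ contrast is no longer predictable.
```

Note that rule ordering matters here: palatalisation must apply before the merger, otherwise the conditioning front vowel is gone before it can trigger the change.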

This conceptualisation of phonologisation amounts to saying that phonetic features which were previously computed procedurally (during the phonological/phonetic derivation) from an underlying lexical representation are now part of the lexical representation itself (which is, in structuralist terms, a string of phonemes/features/elements).

3 Lexical Phonology

Phonologisation assumes a different meaning within the framework of Lexical Phonology (Kiparsky 1988). Lexical Phonology argues that there exist two types of phonological processes: processes that apply at the lexical level (stem and prosodic word), and processes that are post-lexical and apply across the board.1 According to the view of Lexical Phonology, a process is phonologised when it goes from being post-lexical to being lexical.

To carry on with the Sanskrit example, palatalisation was initially post-lexical; in other words, it applied across the board during the phonological derivation, after all lexical processes had been applied to the stem and then to the word. At some point in the history of Sanskrit, velar palatalisation started being applied also at the lexical level (with the original “copy” of the process possibly still applying post-lexically). At that point, velar palatalisation had been phonologised, creating so-called “quasi-phonemes”: categorical, distinctive units not yet able to create lexical contrast (Janda 1999).

4 Stratal Optimality Theory

Kiparsky (2000) borrows the definition of phonologisation from Lexical Phonology and applies it to Stratal Optimality Theory (see also Bermúdez-Otero 2017).

Stratal OT assumes that the phonological module of the grammar is divided into three levels (called strata, or domains) as in Lexical Phonology: the stem, the word, and the phrasal level.

In Stratal OT, constraints are ranked independently at each level, so that different rankings can select different outputs at different levels. Stratal OT also stipulates that phonological evaluation applies iteratively (cyclically) from the narrowest domain, namely the stem, through the word domain, to the phrasal domain. Under cyclicity, the output of one domain is passed on as the input to the next, and so on.

For Kiparsky (2000), phonologisation occurs when the constraint ordering of the phrasal domain (the post-lexical level of Lexical Phonology) is copied over to the word and stem domains (the lexical level of Lexical Phonology).
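This view of phonologisation as ranking-copying across strata can be sketched as follows (a toy illustration, NOT a real OT solver: the constraints, candidate set, and forms are invented, reusing the Sanskrit palatalisation example):

```python
# A ranking is a list of constraint functions, highest-ranked first;
# each returns a violation count for a candidate given the input.

def star_ke(cand, inp):
    return cand.count("ke")          # markedness: penalise velar + front vowel

def faith(cand, inp):
    return 0 if cand == inp else 1   # faithfulness: penalise any change

def evaluate(inp, ranking):
    candidates = [inp, inp.replace("ke", "tʃe")]  # toy GEN: faithful vs palatalised
    return min(candidates, key=lambda c: tuple(con(c, inp) for con in ranking))

def derive(underlying, strata):
    """Cyclic evaluation: the winner at each stratum (stem -> word ->
    phrase) is the input to the next. Returns the form after each cycle."""
    forms = [underlying]
    for ranking in strata:
        forms.append(evaluate(forms[-1], ranking))
    return forms

# Before phonologisation: *ke outranks faithfulness only at the phrase
# level, so palatalisation applies only post-lexically.
print(derive("ke", [[faith, star_ke], [faith, star_ke], [star_ke, faith]]))
# -> ['ke', 'ke', 'ke', 'tʃe']

# Phonologisation, for Kiparsky (2000): the phrase-level ranking is
# copied into the word and stem strata, so the process applies lexically.
print(derive("ke", [[star_ke, faith]] * 3))
# -> ['ke', 'tʃe', 'tʃe', 'tʃe']
```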

5 Life-Cycle of Phonological Processes

An extension of Stratal OT, the Life-Cycle of Phonological Processes (Bermúdez-Otero 2007; Bermúdez-Otero 2015), offers yet another definition of phonologisation and a more fine-grained terminological set. Bermúdez-Otero (2015) reserves the term phonologisation for when a physical or physiological (mechanical) phenomenon comes under the control of the speaker/hearer and becomes part of their grammar (more specifically, of the phonetic module of the grammar).

The process, once it has entered the grammar, can further ascend through increasingly deeper grammatical modules. A (gradient) phonologised process is said to be stabilised (and thus categorical) once it is generated by a categorical phonological process, which applies at the phrase level. At this stage, a stabilised process has entered the phonological module of the speaker/hearer.

A stabilised process further undergoes domain narrowing when it starts being applied at the word level and then at the stem level. In the final step in the ascent of a sound pattern through the grammar, a phonological process comes under morphological and lexical control, until “it may die altogether, leaving behind no more than inert traces in underlying representations” (Bermúdez-Otero 2015: 12).

6 Exemplar Theory

A further definition of phonologisation comes from exemplar models of speech perception and production (Johnson 1997; Pierrehumbert 2001; Sóskuthy et al. 2018; Ambridge 2018; Todd, Pierrehumbert & Hay 2019).

A core tenet of these models is that speech tokens are stored in memory as so-called exemplars after having been experienced. Depending on the specifics of the particular model, exemplars are stored at varying degrees of granularity and richness of detail.

Each exemplar consists of a (more or less) faithful representation of the experienced token that generated it, and it thus contains information from multiple levels and factors (phonetic, lexical, syntactic, sociolinguistic, contextual, and so on). Lexical and other linguistic units are represented as sets of exemplars, or exemplar clouds. The representational space of exemplar clouds is multi-dimensional and can be operationalised as a multivariate distribution (i.e. a joint distribution of multiple variables).

In modular approaches to grammar as briefly expounded above, sound alternations can be encoded (in terms of derivational rules and/or constraints) either at the phonological level or at the phonetic level of representation.

On the other hand, as Sóskuthy (2013: 183) illustrates, in exemplar-based models all sound alternations are directly encoded by exemplars within the exemplar cloud, at a single level of representation. As soon as an exemplar with new phonetic characteristics is experienced and stored, the representation of that lexical item already contains information about the sound alternation. In this sense, every type of variation is phonologised (i.e. represented) from the outset, as soon as it is experienced by the speaker/hearer and stored in memory.
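The idea of an exemplar cloud as a multivariate collection of stored tokens can be sketched as follows (a toy illustration: the word, dimensions, and values are all invented):

```python
import statistics as stats

# Each exemplar is a rich record of one experienced token, spanning
# phonetic, sociolinguistic, and contextual information, all stored at
# a single level of representation.
exemplars = {
    "cab": [
        {"vowel_dur_ms": 142, "f0_hz": 118, "speaker": "A", "style": "casual"},
        {"vowel_dur_ms": 155, "f0_hz": 190, "speaker": "B", "style": "careful"},
        {"vowel_dur_ms": 149, "f0_hz": 121, "speaker": "A", "style": "casual"},
    ],
}

# The lexical item's "representation" is simply its cloud: systematic
# variation (e.g. vowel duration before a voiced stop) is encoded
# directly by the stored tokens, with no separate phonological vs
# phonetic level of encoding.
durations = [e["vowel_dur_ms"] for e in exemplars["cab"]]
print(stats.mean(durations), stats.stdev(durations))
```

Summary statistics like the mean and spread of a dimension are one way to operationalise the cloud as a multivariate distribution, as described above.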

6.1 References

Ambridge, Ben. 2018. Against stored abstractions: A radical exemplar model of language acquisition. Pre-print available at PsyArXiv. https://doi.org/10.2139/ssrn.3219847.
Bermúdez-Otero, Ricardo. 2007. Diachronic phonology. In The Cambridge handbook of phonology, 497–517. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486371.022.
Bermúdez-Otero, Ricardo. 2015. Amphichronic explanation and the life cycle of phonological processes. In The Oxford handbook of historical phonology, 374–399. Oxford: Oxford University Press.
Bermúdez-Otero, Ricardo. 2017. Stratal phonology. In S. J. Hannahs & Anna R. K. Bosch (eds.), The Routledge handbook of phonological theory, 100–134. Routledge. https://doi.org/10.4324/9781315675428-5.
Hock, Hans Henrich. 1991. Principles of historical linguistics. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9783110871975.
Janda, Richard D. 1999. Accounts of phonemic split have been greatly exaggerated—but not enough. In Proceedings of the 14th international congress of phonetic sciences, vol. 14, 329–332.
Johnson, Keith. 1997. Speech perception without speaker normalization: An exemplar model. In Keith Johnson & John W. Mullenix (eds.), Talker variability in speech processing, 145–165. San Diego, CA: Academic Press.
Kiparsky, Paul. 1988. Phonological change. In Frederick J. Newmeyer (ed.), Linguistics: The Cambridge survey, vol. 1, Linguistic theory: Foundations, 363–415. Cambridge: Cambridge University Press.
Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17(2-4). 351–366. https://doi.org/10.1515/tlir.2000.17.2-4.351.
Kiparsky, Paul. 2015. Phonologization. In The Oxford handbook of historical phonology, 563–579. Oxford: Oxford University Press.
Pierrehumbert, Janet B. 2001. Exemplar dynamics: Word frequency, lenition and contrast. In Joan L. Bybee & Paul J. Hopper (eds.), Frequency and the emergence of linguistic structure, 137–157. Amsterdam/Philadelphia: John Benjamins Publishing Company. https://doi.org/10.1075/tsl.45.08pie.
Sóskuthy, Márton. 2013. Phonetic biases and systemic effects in the actuation of sound change. Edinburgh: University of Edinburgh PhD thesis.
Sóskuthy, Márton, Paul Foulkes, Vincent Hughes & Bill Haddican. 2018. Changing words and sounds: The roles of different cognitive units in sound change. Topics in Cognitive Science 10(4). 1–16. https://doi.org/10.1111/tops.12346.
Todd, Simon, Janet B. Pierrehumbert & Jennifer Hay. 2019. Word frequency effects in sound change as a consequence of perceptual asymmetries: An exemplar-based model. Cognition 185. 1–20. https://doi.org/10.1016/j.cognition.2019.01.004.

Footnotes

  1. Note that in generative phonology, of which Lexical Phonology is a strand, speakers are assumed to store abstract, underlying phonemic forms, or representations, in memory. To be produced, these underlying forms go through a series of neuro-cognitive processes, or derivation, that generate a surface representation, which is then sent to the motor system for the execution of the corresponding motor plan. The details vary widely depending on the model or framework.↩︎