Defending The Motor Theory Of Speech Perception

The Motor Theory of Speech Perception seeks to explain the remarkable fact that people are markedly better at perceiving speech than at perceiving non-speech sounds.  The theory postulates that people use their ability to produce speech when they perceive speech as well, through micro-mimicry.  In other words, when we see someone speaking, we make micro-replicas of the mouth movements we see, and this helps us to understand what is being said.  A major objection to this explanation has been put forward by Mole (2010), who denies that there is anything special about speech perception as opposed to the perception of non-speech sound.  In this article I will defend the Motor Theory against Mole’s (2010) objection by arguing the contrary: there is something special about speech perception.

Introduction

Our speech perception functions very well even in conditions where the signal is of poor quality, and it functions markedly better than our perception of non-speech sounds.  For example, consider how you can fairly easily pick out the words being uttered, even against a background of intense, and louder, traffic noise.  This makes it seem that there is a special nature to speech perception as compared to the perception of non-speech sounds.

The Motor Theory of Speech Perception (Liberman and Mattingly 1985) seeks to explain this special nature of speech perception.  It postulates that the mechanical and neural elements involved in the production of speech are also involved in its perception.  On this view, speech perception is the offline running of the systems that, when online, actually produce speech.  According to the Motor Theory, motor activation – i.e. micro-movements of mouth and tongue muscles, or preparations for them – also occurs when speech is perceived.  The idea is that if you make subliminal movements of the type you would make to produce an ‘S’ sound, you are thereby well-placed to understand that someone else whom you see making such movements overtly is likely to be producing an ‘S’ sound.  This is how we understand one another’s speech so well.  And so it is key to the Motor Theory of Speech Perception that speech perception is special.

In some ways, the position of the Motor Theory in explaining speech perception is analogous to the position of Simulation Theory (see Short, 2015) in explaining how we are often able to predict and explain the behaviour of other people (so-called Theory of Mind). In both cases, the account seeks to generate a maximally powerful explanation of the phenomenon using the minimum of additional “moving parts”.  The Motor Theory notes that we already have complicated machinery to allow us to produce speech and suggests that that machinery may also be used to perceive and understand speech.  The Simulation Theory account of Theory of Mind notes that we already have an immensely complex piece of machinery – a mind – and postulates that we may also use that mind to simulate others and thus understand them.  I see value in these parsimonious and economical simulation approaches in both areas.

Mole (Ch. 10, 2010) challenges the Motor Theory.  He agrees that speech perception is special, but not that it is special in such a way as to support the Motor Theory.  In this article, I will offer responses on behalf of the Motor Theory to Mole’s (2010) challenge in five ways, as outlined below.

  1. Mole (2010) denies that speech perception is special in the way the Motor Theory requires.  If that is true, then the Motor Theory cannot succeed, because it proceeds from that assumption.  I will first deny Mole’s (2010) claim that other perception also involves mapping from multiple percepts to the same meaning, so that this feature is not unique to speech perception.  Taking an example from speech, we understand the name “Sherlock” to refer to that detective even though it may be pronounced in a myriad of different ways.  This phenomenon is known as invariance.  Mole (2010) claims that there is nothing special about speech perception here, because other types of perception (such as colour perception) also involve mapping from multiple external sources of perceptual data to the same single percept.  I will show that the example from visual perception invoked by Mole (2010) is not of the type that would dismiss the need for the special explanation of speech perception provided by the Motor Theory.
  2. Mole (2010) makes another claim intended to challenge the idea underpinning the Motor Theory that there is a special invariance in speech perception.  This special invariance is the way that we always understand “Sherlock” to refer to the detective whichever accent the name is spoken in, and whatever the background noise level is (provided of course that we can actually hear the name).  Mole (2010) claims that invariances in speech perception are not special, as similar invariances also occur in face recognition.  Mole (2010) seeks to make out his face recognition point by discussing how computers perform face recognition; I will show that he does not succeed here.
  3. In the famous McGurk experiment, so-called “cross-talk” effects are seen. These occur where visual and aural stimuli interact with each other and change how one of them is perceived.  For example, subjects seeing a video of someone saying “ga” but hearing a recording of someone saying “ba” report that they heard “da.”  Since the Motor Theory postulates that speech perception is special, such cross-talk effects will support the Motor Theory if they are in fact special to speech perception.  Mole (2010) uses cross-modal data from two experiments with the aim of showing that such cross-talk also exists in non-speech perception.  I will suggest that the experiments Mole (2010) cites do not provide evidence for the sort of cross-talk phenomenon that Mole (2010) needs to support his position.
  4. I will respond to Mole’s (2010) claim that the Motor Theory cannot account for how persons who cannot speak can nevertheless understand speech, by outlining how the theory can accommodate such persons.
  5. Finally, I will briefly consider a range of additional data that support the Motor Theory and which therefore challenge the position espoused by Mole (2010).  These are that the Motor Theory explains all three of cerebellar involvement in dyslexia, observed links between speech production and perception in infants, and why neural stimulation of speech production areas enhances speech perception.

Challenges To Mole (2010)

Mole’s (2010) Counterexample From Visual Perception Is Disanalogous To Speech Perception

A phoneme is a single unit of speech.  It can be thought of, roughly, as the spoken counterpart of a letter.  Any single phoneme will be understood by the listener despite the fact that there will be many different sound patterns associated with it.  It is clearly very useful that people are able to ignore details of pitch, intensity and accent in order to focus purely on the phonemes which convey meaning.  This invariance is a feature of speech perception but not of non-speech sound perception, and it is this situation that motivated the proposal of the Motor Theory.

It is important to be clear on where there is invariance and where there is lack of invariance in perception.  There is invariance in the item which the perceiver perceives (for example, Sherlock) even though there is a lack of invariance in the perceptual data that allows the perceiver to have the perception.  So we can see that it is Sherlock’s face (an invariance in what is understood) even though the face may be seen from different angles (a lack of invariance in perceptual input).  Similarly, we may hear that it is Sherlock’s name that is spoken (an invariance in what is understood) even though the name may be spoken in different accents (a lack of invariance in perceptual input).  Lack of invariance is of course simply variance; the discussion, however, tends to be couched in terms of invariance and its absence.

For supporters of the Motor Theory, this invariance in what the listener reports that they have heard is evidence that the perceptual object in speech perception is a single gesture – the one phoneme that the speaker intended to pronounce.  This single object is always reportable despite the fact that the phoneme could have been pronounced in a wide variety of accents.  The accents can vary a great deal but there is still invariance in what the listener hears, because most accents can be understood.
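
The many-to-one structure at issue can be made concrete with a short sketch.  The following Python fragment is purely illustrative – the token names are invented and carry no empirical weight – but it shows the shape of the mapping: many variable acoustic inputs, one invariant perceptual category.

```python
# Illustrative only: invented tokens standing for variable pronunciations.
acoustic_to_phoneme = {
    "s_high_pitch_rp_accent": "s",
    "s_low_pitch_scots_accent": "s",
    "s_whispered": "s",
    "sh_rp_accent": "sh",
}

def perceived_phoneme(token: str) -> str:
    """Map a variable acoustic token to its invariant phoneme category."""
    return acoustic_to_phoneme[token]

# Three distinct inputs, one invariant percept:
s_tokens = [t for t in acoustic_to_phoneme if t.startswith("s_")]
assert {perceived_phoneme(t) for t in s_tokens} == {"s"}
```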

Mole (2010) denies that this invariance is evidence for the special nature of speech.  Mole (p.217, 2010) writes: “[e]ven if speech were processed in an entirely non-special way, one would not expect there to be an invariant relationship between […] properties of speech sounds […] and phonemes heard for we do not […] expect perceptual categories to map onto simple features of stimuli in a one-to-one fashion.”

Mole’s (2010) argument is as follows.  He allows that there is not a one-to-one mapping between stimulus and perceived phoneme in speech perception.  I will also concede this.  Mole (2010) then denies that this means that speech perception is special, on the grounds that there is not in general a one-to-one mapping between stimulus and percept in perception (other than in speech).  He produces a putative example from vision by noting the existence of ‘metamers’.  A metamer is one of a pair of colour stimuli that differ physically – for example, in their component wavelengths – and yet are perceived as the same colour.  Note that colour is defined here by wavelength rather than by phenomenology.
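
A toy calculation shows how metamerism works.  The sensitivity numbers below are invented, and I use only two receptor channels for simplicity (human vision has three cone types); the point is purely structural: two physically different light mixtures can produce identical receptor responses and hence the same percept.

```python
# Invented sensitivities: each wavelength's response in two receptor channels.
SENSITIVITY = {
    "red_660nm":    (0.90, 0.10),
    "green_530nm":  (0.20, 0.80),
    "yellow_580nm": (0.55, 0.45),
}

def receptor_response(spectrum):
    """Sum the channel responses over all wavelengths in a spectrum."""
    r1 = sum(i * SENSITIVITY[w][0] for w, i in spectrum.items())
    r2 = sum(i * SENSITIVITY[w][1] for w, i in spectrum.items())
    return (round(r1, 6), round(r2, 6))

mixture    = {"red_660nm": 1.0, "green_530nm": 1.0}  # red plus green light
pure_light = {"yellow_580nm": 2.0}                   # a single wavelength

# Physically different stimuli, identical receptor responses: a metamer.
assert receptor_response(mixture) == receptor_response(pure_light)
```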

Mole (2010) has indeed produced a further example of a situation where there is not a one-to-one mapping between stimulus and percept.  However, this lack of one-to-one mapping is not exactly what is cited as the cause of the special nature of speech perception under the Motor Theory. Rather the relevant phenomenon is ‘co-articulation’ – i.e., the way in which we are generally articulating more than one phoneme at a time. As Liberman and Mattingly write (1985, p. 4), “coarticulation means that the changing shape of the vocal tract, and hence the resulting signal, is influenced by several gestures at the same time” so the “relation between gesture and signal […] is systematic in a way that is peculiar to speech”.  So while it is indeed the case that there are multiple stimuli being presented which result in a single percept, it is the temporal overlap between those stimuli that is the key factor, not the mere fact of their multiplicity.  In other words, the Motor Theory argument relies on the fact that a speaker is pronouncing more than one phoneme at a time during overlap periods.

This means that Mole’s (2010) metamer example is disanalogous, because it only deals with the multiplicity of the stimuli in the mapping and not with their temporal overlap.  This is the case because there cannot in fact be a temporal overlap between two colour stimuli.  We can see this using a thought experiment.  Let us imagine a lighting rig that is capable of projecting any number of arbitrary colours and also of projecting more than one colour at the same time.

In that case, we could not say that the perception of a colour being projected at a particular time was changed by the other colours being projected with it.  That situation would simply be the projection of a different colour.  So a projection of red light together with green light does not produce a modified red; it produces yellow light.  It is not possible to have a “modified red”: the rig would not be projecting a different sort of red, but a different colour that was no longer red.
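
The point can be stated in a couple of lines of Python using additive mixing of (R, G, B) intensity triples – a simplified model of the rig, assumed here purely for illustration:

```python
def project_together(*lights):
    """Additively mix light sources given as (R, G, B) intensity triples."""
    return tuple(min(255, sum(light[i] for light in lights)) for i in range(3))

RED, GREEN, YELLOW = (255, 0, 0), (0, 255, 0), (255, 255, 0)

# Red projected with green is simply a different colour, not a kind of red:
assert project_together(RED, GREEN) == YELLOW
```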

I will illustrate this further with an example from a different sensory modality: hearing.  The position I am taking about red (more exactly, about a precise shade of red) is essentialist.  On essentialist accounts, there are certain properties of an item which can be changed and will result in a modified version of that item.  There are other properties, the essential ones, which cannot be modified consistent with the original item retaining its identity.

For example, some properties of an opera are essential to its being an opera.  By definition, it is symphonic music with singing, whereas a symphony requires only the musical instruments.  Some properties of an opera can be changed, and this will result in a modified opera.  One could replace the glass harmonica scored for the Mad Scene in Lucia di Lammermoor with a flute.  One would then have a performance of a modified version of Lucia, which would be a modified opera and would still be an opera.

What one could not do is change an opera into a symphony, strictly speaking.  There could be a performance of the first act of Lucia as normal and one would be watching a performance of an opera.  If in the second act the musicians came out and played without the singers, one would not have converted an opera into a symphony.  One would have ceased to perform an opera and begun to perform a symphony, albeit one musically identical to the non-vocal parts of Lucia.

Returning to the lighting rig, we cannot say that yellow is a modified red without abandoning any meaning for separate colour terms altogether – every colour would be a modified version of every other colour.  Yet it is this impossible case that Mole (2010) needs to cite in order to have a genuine example, because it would be a case of multiple stimuli being projected at the same time and resulting in activation of the same perceptual category.

In sum, a metamer is an example where there is no one-to-one mapping between stimulus and perceptual category, but also where the different stimuli are not simultaneous.  This is the case because we cannot be looking at both colours involved in a metamer at the same time.  A co-articulation by contrast is an example of where there is no one-to-one mapping between stimulus and perceptual category, but where the different stimuli are indeed simultaneous. As it is that very simultaneity that is the key to the special nature of the systematic relation between gesture and signal under the Motor Theory, Mole (2010) does not have an example here that demonstrates that speech perception is not special.

Face Recognition Does Not Show A Similar Sort Of Invariance Of Perception As Speech Recognition

Mole (2010) claims that face recognition is another example of invariance – for example, we can recognise that we are looking at Sherlock’s face from various angles and under different lighting conditions – thereby challenging the idea that invariance in speech perception is evidence for the special nature of speech perception.  His claim is that the invariance in the way we can always report that we are looking at Sherlock’s face despite variance in input visual data is similar to the invariance in the way that we can always report we have heard Sherlock’s name despite variance in input aural data.  If that is true, then Mole (2010) has succeeded in showing that speech perception is not special as the Motor Theory claims.

Mole (2010) allows that we use invariances in face recognition, but denies this could ever be understood by examination of retinal data.  He writes: “[t]he invariances which one exploits in face recognition are at such a high level of description that if one were trying to work out how it was done given a moment-by-moment mathematical description of the retinal array, it might well appear impossible” (Nudds and O’Callaghan 2010, p. 216).  What this means is that it would be difficult to get from the retinal array (displaying a great deal of lack of invariance) to the features we use in recognising Sherlock such as our idea of the shape of his nose (which is quite invariant).

However, this can be questioned as follows.  Since the only thing that computers can do in terms of accepting data is to read in a mathematical array, Mole’s (2010) claim is in fact equivalent to the claim that it cannot be understood how computers perform face recognition.  That claim is false.  To be fair to Mole (2010), his precise claim is that the task might appear impossible, but I shall now show that since it is widely understood to be possible, it should not appear impossible either.

Fraser et al. (2003) describe an algorithm that performs the face recognition task better than the best algorithm in a ‘reference suite’ of such algorithms.  Their computer is supplied with a gallery of pictures of faces and a target face and instructed to sort the gallery such that the target face is near the top.  The authors report that their algorithm is highly successful at performing this task.  Fraser et al. write (2003, p. 836): “[w]e tested our techniques by applying them to a face recognition task and found that they reduce the error rate by more than 20% (from an error rate of 26.7% to an error rate of 20.6%)”.  So the computer recognized the target face around 80% of the time.
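
As a check on the quoted figures, the arithmetic can be verified in a few lines; the percentages come directly from Fraser et al. (2003), and the code merely restates them:

```python
# Error rates quoted by Fraser et al. (2003, p. 836).
error_before, error_after = 0.267, 0.206

relative_reduction = (error_before - error_after) / error_before
print(f"relative error reduction: {relative_reduction:.1%}")  # 22.8%, i.e. more than 20%
print(f"recognition success rate: {1 - error_after:.1%}")     # 79.4%, i.e. around 80%
```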

So we see firstly that the computer can recognize a face.  [It is not an objection here to claim that, strictly speaking, computers cannot ‘recognise’ anything.  All that we require here is that computers can be programmed so as to distinguish faces from one another merely by processing visual input.  It is this task which Mole (2010) claims appears impossible.]  Then we turn to the claim that how the computer does this cannot be understood.  That is refuted by the entire paper, which is an extended discussion of exactly that.  Since this is an active area of research, we can take it that such understanding is widely to hand in computational circles, and should become more widespread.

It may be true in one sense that we could not efficiently perform the same feat as the computer – in the sense of physically taking the mathematical data representing the retinal array and explicitly manipulating it in a sequence of complex ways in order to perform the face recognition task.  In another sense, we could, of course. It is what we do every time we actually recognize a face.  The mechanics of our eyes and the functioning of our perceptual processing system have the effect of performing those same mathematical manipulations.  We know this because we do in fact perform face recognition using only the retinal array as input data.

Mole (2010) has indeed provided an example of invariance (i.e., in face recognition), but the example does not remove the need for a special explanation of the speech perception invariances, because the face perception example can in fact readily be explained.  Therefore Mole (2010) has not here provided a further example of an unexplained invariance, and he has not thereby impugned the specialness of speech perception.  Speech perception continues to exhibit a unique invariance which continues to appear in need of unique explanation.

Experimental Data Do Not Show Cross-Modal Fusion 

Cello Experiment

Mole (2010) argues that an experiment on judgments made as to whether a cello was being bowed or plucked shows the same illusory optical/acoustic combinations as are seen in the McGurk effect.  The McGurk effect (McGurk and MacDonald 1976) is observed in subjects hearing a /ba/ stimulus and seeing a /ga/ stimulus.  The subjects report that they have perceived a /da/ stimulus.  It is important to note that this is not one of the stimuli presented; it is a fusion or averaging of the two stimuli.  So an optical stimulus and an acoustical stimulus have combined to produce an illusory result which is neither of them.
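
The logical form of the finding can be set out schematically.  The sketch below encodes only the classic case just described; it is an illustration of the fusion pattern, not a model of perception.

```python
def reported_percept(audio: str, visual: str) -> str:
    """Schematic McGurk pattern: discrepant inputs can yield a fused percept."""
    if audio == visual:
        return audio                      # congruent stimuli: veridical report
    if (audio, visual) == ("ba", "ga"):
        return "da"                       # the classic fusion response
    return audio                          # placeholder default for other pairs

# The reported percept is neither the acoustic nor the optical stimulus:
assert reported_percept("ba", "ga") not in ("ba", "ga")
```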

If Mole’s (2010) claim that the cello experiment shows McGurk-like effects is true, this would show that these illusory effects are not special to speech, thus challenging the claim that there is anything special about speech for the Motor Theory to explain.  Mole (p. 221, 2010) writes: “judgments of whether a cello sounds like it is being plucked or bowed are subject to McGurk-like interference from visual stimuli”.  However, the data Mole (2010) cites do not show the same type of illusory combination, and so Mole (2010) is unable to dismiss the specialness of speech perception as he intends.

The Motor Theory postulates that the gesture intended by the speaker is the object of the perception, and not the acoustical signal produced.  The theory explains this by also postulating a psychological gesture recognition module which makes use of the speech production capacities in performing speech perception tasks.  Thus the McGurk effect constitutes strong evidence for the Motor Theory, which explains it by holding that the module has considered both optical and acoustical inputs in deciding what gesture was intended by the speaker.  This strong evidence would be weakened if Mole (2010) could show that McGurk-like effects occur other than in speech perception, because the proponents of the Motor Theory would then be committed to the existence of multiple such modules, and their original motivation – the observed specialness of speech – would be put in question.

More specifically, the paper Mole (2010) cites, Saldaña and Rosenblum (1993), describes an experimental attempt to find non-speech cross-modal interference effects using a cello as the source of acoustic and optical stimuli.  Remarkably, Saldaña and Rosenblum (1993) state prominently in their abstract that their work suggests “the nonspeech visual influence was not a true McGurk effect” in direct contradiction of Mole’s (2010) stated reason for citing them.

There are two ways to make a cello produce sound: it can be plucked or it can be bowed.  The experimenters proceed by presenting subjects with discrepant stimuli – for example, an optical stimulus of a bow accompanied by an acoustical stimulus of a pluck.  Saldaña and Rosenblum (1993) found that the reported percepts were adjusted slightly by a discrepant stimulus in the direction of that stimulus.

However, to see a McGurk effect, we need the subjects to report that the gesture they perceive is a fusion of a pluck and a bow.  Naturally enough, this did not occur, and indeed it is unclear what exactly such a fusion might be.  Therefore, Mole (2010) has not here produced evidence that there are McGurk effects outside the domain of speech perception.

Mole’s (2010) response is to dismiss this as a merely quantitative difference between the effects observed by the two experiments.  Mole (p. 221, 2010) writes:  “[t]he McGurk effect does reveal an aspect of speech that is in need of a special explanation because the McGurk effect is of a much greater magnitude than analogous cross-modal context effects for non-speech sounds”.  As we have seen, Mole (2010) is wrong to claim there is only a quantitative difference between the McGurk effect observed in speech perception and the cross-modal effects observed in the cello experiment because only in the former were fusion effects observed.  That is most certainly a major qualitative difference.

Mole’s (2010) claim that the cello results are only quantitatively different to the results seen in the McGurk effect experiment produces further severe difficulties when we consider in detail the experimental results obtained.  The cello experimenters describe a true McGurk effect as being one where there is a complete shift to a different entity – the syllable is reported as clearly heard and is entirely different to the one in the acoustic stimulus.  Saldaña and Rosenblum (1993, p. 409) describe these McGurk data as meaning: “continuum endpoints can be visually influenced to sound like their opposite endpoints”.

The cello data were not able to make a pluck sound exactly like a bow and in fact the discrepant optical stimuli were only able to slightly shift the responses in their direction, by less than a standard deviation, and in some cases not at all.  This is not the McGurk effect at all and so Mole (2010) cannot say it is only quantitatively different.  Indeed, Saldaña and Rosenblum (1993, p. 410) specifically note that: “[t]his would seem quite different from the speech McGurk effect”.

In sum, the cross-modal fusion effect that Mole (2010) needs is physically impossible in the cello case and the data actually found do not even represent a non-speech analog of the McGurk effect, as is confirmed by the authors.  Once again, speech perception remains special and the special Motor Theory is needed to explain it.

Sound Localisation Experiment

The other experiment relied on by Mole (2010) was conducted by Lewald and Guski (2003) and considered the ventriloquism effect, in which the perceived location of a sound is drawn towards a simultaneous visual stimulus.  As above, the result that Mole (2010) needs to support his position is an effect that is a good analogy to the McGurk effect in a non-speech domain.  As I will show below, the data from the Sound Localisation Experiment also fail to bear out his claim that there are McGurk-like effects outside the domain of speech perception.

The Sound Localisation Experiment uses tones and lights as its acoustic and optical stimuli.  It investigates the ventriloquism effect quantitatively in both the spatial and temporal domains.  The idea is that separate optical and acoustic events will tend to be perceived as a unified single event with optical and acoustical effects.  This will only occur if the spatial or temporal separation of the component events is below certain thresholds.

Lewald and Guski (2003, p. 469) propose a “spatio-temporal window for audio-visual integration” within which separate events will be perceived as unified.  They suggest maximum values of 3° for angular or spatial separation and 100 ms for temporal separation.  Thus a scenario in which a light flash occurs less than 3° away from the source of a tone burst will produce a unified percept of a single optical/acoustical event, as will a scenario in which a light flash occurs within 100 ms of a tone burst.  Since the two stimuli in fact occurred at slightly different times or locations, this effect entails that at least one of the stimuli is perceived to have occurred at a different time or location than it actually did.
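
The proposed window can be encoded directly.  The joint fusion rule below – both separations must fall inside the window – is my reading of Lewald and Guski’s (2003) proposal, and the code is a sketch of that reading, not anything taken from their paper:

```python
MAX_ANGLE_DEG = 3.0   # spatial separation threshold (Lewald and Guski 2003)
MAX_DELTA_MS = 100.0  # temporal separation threshold

def perceived_as_one_event(angle_deg: float, delta_ms: float) -> bool:
    """True when a flash and a tone fall inside the integration window."""
    return angle_deg <= MAX_ANGLE_DEG and delta_ms <= MAX_DELTA_MS

print(perceived_as_one_event(2.0, 50.0))   # True: reported as one audio-visual event
print(perceived_as_one_event(10.0, 50.0))  # False: reported as separate events
```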

To recap, in the McGurk effect, discrepant optical and acoustic stimuli result in a percept that is different to either of the two stimuli and is a fusion of them.  We may allow to Mole (2010) that Lewald and Guski (2003) do indeed report that subjects perceive a single event comprising a light flash and a tone burst.  However, that is insufficient to constitute an analogy to the McGurk effect.  Subjects do not report that their percept is some fusion of a light flash and a tone burst – as with the cello experiment, it is unclear what such a fusion could be – they merely report that one event has resulted in these two observable effects.  [We may note that Lewald and Guski (2003) do not take themselves to be searching for non-speech analogs of the McGurk effect; the term does not appear in their paper or in the title of any of their 88 citations, throwing doubt on the claim that they are working in that field at all.]

Indeed, the subjects were not even asked whether they perceived some fused event.  They were asked whether the sound and the light had a common cause, were co-located, or were synchronous.  As Lewald and Guski write (p. 470, 2003): “[i]n Experiment 1, participants were instructed to judge the likelihood that sound and light had a common cause.  In Experiment 2, participants had to judge the likelihood that sound and light sources were in the same position.  In Experiment 3, participants judged the synchrony of sound and light pulses”.  A ‘common cause’ might have been some particular event, but it is not the sound and the light themselves; and since those were the only things perceived, the instructions do not even admit the possibility that a fused event was perceived.

Since Lewald and Guski (2003) are measuring the extent to which participants agree that a light and a tone had a common cause, were co-located or were synchronous, it is puzzling that Mole (p. 221, 2010) cites them to support his claim that perceived flash count can be influenced by perceived tone count.  We see this when Mole writes (p. 221, 2010):  “[t]he number of flashes that a subject seems to see can be influenced by the number of concurrent tones that he hears (Lewald and Guski 2003)”.

Moreover, neither the Sound Localisation Experiment nor the cello experiment supports Mole’s (p. 221, 2010) summation that “[i]t is not special to speech that sound and vision can interact to produce hybrid perceptions influenced by both modalities” in the way he needs.  Unlike with the McGurk effect, there are no hybrid perceptions in either case, where “hybrid” is understood to mean ‘a perception of an event which is neither of the stimulus events’.

There are cross-modal effects between non-speech sound stimuli and optical stimuli but that is inadequate to support Mole’s (2010) claim that speech is not special.  We still need the special explanatory power of the Motor Theory.

Mute Perceivers Can Be Accommodated

One of Mole’s (2010) challenges is that the Motor Theory cannot explain how some people have the capacity to perceive speech that they lack the capacity to produce.  Mole writes (p. 226, 2010) that “[a]ny move that links our ability to perceive speech to our ability to speak is an unappealing move, since it ought to be possible to hear speech without being able to speak oneself”.  There is an equivocation here, though, on what is meant by ‘capacity to produce’.  Mole (2010) reads that term so that the claim is that someone who is unable to use their mouth to produce speech lacks the capacity to perceive speech.  Since such mute people can indeed, as he claims, understand speech, he takes his objection to be made out.

However, in the article cited by Mole (2010), it is clear that this is not what is understood by ‘capacity to produce’.  In the study described by Fadiga et al. (2002), the neuronal activation related to the tongue muscles is not sufficient to generate movement.  This activation is a result of the micro-mimicry that takes place when people are perceiving speech.  Fadiga et al. (2002) call this mimicry “motor facilitation”.

Fadiga et al. (p. 400, 2002) write: “The observed motor facilitation is under-threshold for overt movement generation, as assessed by high sensitivity electromyography showing that during the task the participants’ tongue muscles were absolutely relaxed”.   Thus the question is whether the subject has the capacity to produce such a sub-threshold activation, and not the capacity to produce speech via a super-threshold activation.   Naturally, since all the subjects had normal speech, they could produce both a sub-threshold and a super-threshold activation, with the latter resulting in speech.

However, someone could be able to activate their tongue muscles below the threshold needed to generate overt movement but not be able to activate those muscles above that threshold.  That would mean that they lacked ‘capacity to produce’ in Mole’s (2010) sense, but retained it in Fadiga et al.’s (2002) sense.  This would be a good categorization of the mute people who can understand speech they cannot utter.  Those people would retain the ability to produce the neural activity that Fadiga et al. (2002) observe, which does not result in tongue muscle movement.  This is a testable empirical claim to which my account is committed.  It is possible that such people cannot produce even the sub-threshold neural signals; if that turns out to be correct, it would be a problem for the Motor Theory and the defence I have offered for it here.
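
The distinction can be summarized in a toy threshold model.  All the numbers below are invented; the sketch simply fixes the logic of the claim that sub-threshold activation suffices for perception while super-threshold activation is needed for overt speech.

```python
MOVEMENT_THRESHOLD = 1.0  # invented: activation needed for overt articulation

def can_speak(max_activation: float) -> bool:
    """Overt speech requires super-threshold activation."""
    return max_activation >= MOVEMENT_THRESHOLD

def can_perceive_speech(max_activation: float) -> bool:
    """Motor-theoretic claim: any non-zero sub-threshold activation suffices."""
    return max_activation > 0.0

typical_speaker = 1.5  # can generate both sub- and super-threshold activation
mute_perceiver = 0.4   # can generate sub-threshold activation only

assert can_speak(typical_speaker) and can_perceive_speech(typical_speaker)
assert not can_speak(mute_perceiver) and can_perceive_speech(mute_perceiver)
```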

Similarly, we can resolve Mole’s (2010) puzzle about how one can understand regional accents that one cannot mimic; i.e. I can understand people who speak with an accent that is different to mine.  The capacity to understand a particular accent could result from our ability to generate the necessary sub-threshold activations, but not the super-threshold ones.  If we go on to acquire that regional accent, our super-threshold muscle activation capacities would be of the required form.  This again is an empirical prediction which makes my account subject to falsification by data.

This hypothesis could have interesting implications in the field of developmental psychology.  Mole (p. 216, 2010) outlines how infants can perceive all speech sound category distinctions, but eventually lose the ability to discriminate the ones that do not represent a phoneme distinction in their language.  So it may be the case that all infants are born with the neural capacity to learn to generate super-threshold activations of all regional accents, but eventually retain that capacity only at the sub-threshold level – because they can later understand a wide range of regional accents – and lose the capacity at the super-threshold level – for those regional accents they cannot mimic.

Another implication here of the Motor Theory is to say that a listener’s vocal tract can function as a model of itself, just as a listener’s vocal tract can function as a model of a speaker’s vocal tract.  This means that the sub-threshold activation functions as a model of the super-threshold activation. So, perceptual capacities involve the former modelling the latter exactly as the Motor Theory predicts.  Such an approach does not commit the Motor Theory to the modelling/perception neurons controlling the sub-threshold activations being the same as the production neurons controlling speech production, so the account is not susceptible to falsification on that precise point.

Further Brief Challenges To Mole (2010)

The Motor Theory Explains Cerebellar Involvement In Dyslexia

Mole (2010) challenges the Motor Theory and in doing so, challenges the idea that speech production capacities are involved in speech recognition.  For this reason, any data showing links between speech production capacities and speech recognition capacities will be a problem for him.

Ivry and Justus (2001) refer to a target article that shows that 80% of dyslexia cases are associated with cerebellar impairments.  Since the cerebellum is generally regarded as a motor area, and dyslexia is most definitely a language disorder, we have clear evidence for a link between language and motor areas.  That is naturally a result that can be clearly accommodated by the Motor Theory which links speech production and speech recognition.

It is not open to Mole (2010) to respond that the link is only between motor control areas and writing control areas, because although writing skills are the primary area of deficit for dyslexic subjects, the authors also found impairments in reading ability to be strongly associated with the cerebellar impairments.  This can be explained on the Motor Theory because it says that motor deficits will result in speech recognition deficits.  Mole (2010) needs to provide an explanation of this which does not rely on the Motor Theory.

The Motor Theory Explains Links Between Speech Production And Perception In Infants

Mole (2010) does not address some important results supplied by Liberman and Mattingly (1985, p. 18) that link perception and production of speech.  These data show that infants preferred to look at a face producing the vowel they were hearing rather than the same face with the mouth shaped to produce a different vowel.  That effect is not seen when the vowel sounds are replaced with non-speech tones matched for amplitude and duration with the spoken vowels.  What this means is that the infants are able to match the acoustic signal to the optical one.  In a separate study, the same extended looking effect was seen in infants when a disyllable was the test speech sound.  These data cannot be understood without postulating a link between speech production and speech perception abilities, because differentiating between mouth shapes is a production-linked task – albeit one mediated by perception – and differentiating between speech percepts is a perceptual task.

The Motor Theory Explains Why Neural Stimulation Of Speech Production Areas Enhances Speech Perception

D’Ausilio et al. (2009) conducted an experiment in which Transcranial Magnetic Stimulation (“TMS”) was applied to areas of the brain known to be involved in motor control of articulators.  Articulators are the physical elements that produce speech, such as the tongue and lips.  After the TMS, the subjects were tested on their abilities to perceive speech sounds.  It was found that the stimulation of speech production areas improved the ability of the subjects to perceive speech.  The authors suggest that the effect is due to the TMS causing priming of the relevant neural areas such that they are more liable to be activated subsequently.

Even more remarkably, the experimenters found finer-grained effects, such that stimulation of the exact area involved in the production of a sound enhanced perceptual abilities in relation to that sound.  D’Ausilio et al. (2009, p. 383) report: “the perception of a given speech sound was facilitated by magnetically stimulating the motor representation controlling the articulator producing that sound, just before the auditory presentation”.  This constitutes powerful evidence for the Motor Theory’s claim that the neural areas responsible for speech production are also involved in speech perception.
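
The somatotopic pattern can be shown schematically.  In the sketch below, the sounds, articulator assignments and accuracy numbers are placeholders of my own; only the structure – a perception boost when the stimulated site matches the articulator for the sound – reflects the reported finding.

```python
# Placeholder mapping from speech sounds to their primary articulator.
ARTICULATOR_FOR_SOUND = {"b": "lips", "p": "lips", "d": "tongue", "t": "tongue"}

def perception_accuracy(sound: str, tms_site: str, baseline: float = 0.70) -> float:
    """Accuracy rises only when TMS targets the sound's own articulator area."""
    boost = 0.10 if ARTICULATOR_FOR_SOUND[sound] == tms_site else 0.0
    return baseline + boost

# Stimulating the tongue area helps tongue-produced sounds, not lip sounds:
assert perception_accuracy("d", "tongue") > perception_accuracy("d", "lips")
```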

Conclusion

Special situations require special explanations.  The Motor Theory of Speech Perception is a special explanation of speech perception which, as the failure of Mole’s (2010) objections shows, continues to be needed.  One might say that such “specialness” leaves the Motor Theory in a vulnerable and isolated position, since it seeks to explain speech perception in a way that is very different to how we understand other forms of perception.  Here, I would revert to my brief opening remarks about the similarities between the Motor Theory and Simulation Theory.  Whilst the Motor Theory is indeed a special way to explain speech perception, it is at the same time parsimonious and explanatorily powerful because, like Simulation Theory, it does not require any machinery which we do not already know we possess.  This is perhaps what underlies the continued attractiveness of the Motor Theory as a convincing account of how people perceive speech so successfully.

References 

D’Ausilio, A et al. 2009  The Motor Somatotopy of Speech Perception.  Current Biology 19: pp. 381–385.  DOI: 10.1016/j.cub.2009.01.017

Fadiga, L et al. 2002  Speech Listening Specifically Modulates the Excitability of Tongue Muscles: a TMS study.  European Journal of Neuroscience, 15: pp. 399–402.  DOI: 10.1046/j.0953-816x.2001.01874.x

Fraser, A M et al. 2003  Classification modulo invariance, with application to face recognition.  Journal of Computational and Graphical Statistics, 12 (4): pp. 829–852.  DOI: 10.1198/1061860032634

Ivry, R B and T C Justus 2001  A neural instantiation of the motor theory of speech perception.  Trends in Neurosciences, 24 (9): pp. 513–515.  DOI: 10.1016/S0166-2236(00)01897-X

Lewald, J and R Guski 2003  Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli.  Cognitive Brain Research, 16: pp. 468–478.  DOI: 10.1016/S0926-6410(03)00074-0

Liberman, A and I G Mattingly 1985  The Motor Theory of Speech Perception Revised.  Cognition, 21: pp. 1–36.  DOI: 10.1016/0010-0277(85)90021-6

McGurk, H and J MacDonald 1976  Hearing lips and seeing voices.  Nature, 264 (5588): pp. 746–748.  DOI: 10.1038/264746a0

Mole, C 2010  The motor theory of speech perception.  In: Nudds, M and C O’Callaghan (eds.) Sounds and Perception: New Philosophical Essays.  Oxford: Oxford University Press.  DOI: 10.1093/acprof:oso/9780199282968.001.0001

Saldaña, H M and L D Rosenblum 1993  Visual influences on auditory pluck and bow judgments.  Perception & Psychophysics, 54 (3): pp. 406–416.  DOI: 10.3758/BF03205276

Short, T L 2015  Simulation Theory: a Psychological and Philosophical Consideration.  Abingdon: Routledge.  URL: https://www.routledge.com/Simulation-Theory-A-psychological-and-philosophical-consideration/Short/p/book/9781138294349

