Why women are better traders and investors than men — a psychological explanation
Warwick University Business School (“WUBS”) has conducted a fascinating study of the investment performance of men and women. It shows that women perform significantly better, across a good-sized sample and a reasonable time period, and it makes some interesting remarks on why this might be. I think I can add some extra psychological depth to this, so that we can see that female traders appear to have some quite deep natural advantages and should feel encouraged about managing their own investments.
WUBS collaborated with the share dealing service offered by Barclays Bank, looking at 2,800 investors over three years. There are various ways of measuring stock market performance, but one of the most common is to compare the performance of a portfolio with a relevant stock market index. (I explain what a stock market index is here: What Is A #Bear #Market?)
It is quite hard to outperform an index consistently, which is what lies behind the recent strong growth of tracker funds: you may as well buy the index if you can’t beat it. The WUBS study showed that women consistently outperformed the FTSE-100 index and men did not. The male investors returned 0.14% above the index, which is statistically indistinguishable from simply matching it. I suspect these investors would have been better off just buying the index rather than paying substantial trading fees to obtain the same performance.
The female investors outperformed the FTSE-100 by a massive 1.80%. This may not sound like much, but it is actually huge. Compounded over a lengthy period, it leads to significantly improved results. Let us assume that the FTSE-100 returns 5% a year. If you started with £10,000 and performed as the male investors did, you would end up with roughly £45,000 after 30 years. (It is always important to think long term in the stock market; to prefigure part of the answer I will discuss below, the women seem to understand this.) The female investors would turn £10,000 into roughly £72,000 over the same 30-year period. That is a huge improvement, and bear in mind that the female investors took on the same risk, which makes it even more impressive. (One caveat is in order here: no one performs this consistently over the long term; anyone who says they do is raising a huge red flag. Remember Madoff? But the point stands.)
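The compounding arithmetic above can be checked with a short sketch. The 5% index return and the 1.80% outperformance are the figures assumed in the text; the function name is mine.

```python
# Compound growth of a £10,000 starting portfolio over 30 years.
def future_value(principal, annual_return, years):
    """Value of `principal` compounded at `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

index_like = future_value(10_000, 0.05, 30)            # roughly the male investors' outcome
outperformer = future_value(10_000, 0.05 + 0.018, 30)  # the female investors' outcome

print(f"At 5.0% a year: £{index_like:,.0f}")   # about £43,000
print(f"At 6.8% a year: £{outperformer:,.0f}")  # about £72,000
```

Exact compounding at 5% gives about £43,000 rather than the rounded £45,000 quoted above, but the size of the gap between the two outcomes is the point, not the precise figures.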
How are female investors outperforming?
WUBS and Barclays set out a few reasons which could explain the outperformance. One of them is one we already know about: women are less over-confident than men. I explain how that works here: Women Are Better Traders Than Men. In summary, women are less prone to deciding that their new idea is brilliant and abandoning their previous idea before it has had time to work. Men, on the other hand, become extremely convinced by their new sure-fire idea and go with it. Interestingly, women’s lack of over-confidence is not manifested in what they say about their beliefs; they just act on them less often. We could discuss philosophically what that means for our account of belief, but the key point is that women are less likely to trade in deleterious ways!
But there are new reasons suggested. There are three that I think are especially interesting.
Women stay away from terrible ideas like #Bitcoin (this explanation is proposed in a Guardian commentary by Patrick Collinson; see links below)
I have not seen any data on how many women bought into Bitcoin, but it is certainly consistent with my claim in the second post above that female investors have stayed away. We know that women did not vote for Trump very often, and much less so if they had college degrees. In addition, all of the online hysteria (!) from Bitcoin boosters appeared to come from deluded male market participants.
Women avoid “lottery style” trading
It has always struck me as insanity to own a lot of penny stocks which are supposed to return ten times your investment within a year, because this almost never happens. As I discuss in my book, The Psychology of Successful Trading, traders can be seduced by vivid stories and so massively over-estimate the likelihood of those stories coming about. A far better approach is just to sit still in major stocks for a long time, with perhaps some spicy options for fun in a minor section of the portfolio. The problem with picking the next Amazon (or Bitcoin, for that matter) is that you can’t: you would have to own a million penny stocks to catch each Amazon or Apple. So this strategy is exciting but completely unsuccessful.
Men hold on to their losers
It seems that women are better at getting out of something which hasn’t worked. This comes very close to home for me. Infamously, I am still holding Deutsche Bank stock, partly because I recommended it in my book as a contrarian trade. Banks are supposed to trade at at least book value (in fact, at 2.0x before the crisis). So if you buy a bank at 0.25x book value, you can’t lose, right? You are, after all, buying something for a quarter of its value. That hasn’t worked for me yet; maybe a female trader would have got out of this position a long time ago.
In conclusion, we have seen some deep-seated psychological advantages which female traders will have over male ones. This should encourage women in their investing.
I will argue that Proust’s picture of how we get into the minds of others is simulationist, thus following the account that I favour rather than the mainstream one.
The term in psychology for the way in which we predict and explain the behaviour of others is “Theory of Mind.” This is, I suggest, something of a placeholder, because it is in fact deeply unclear how we do this. Or even if we get it right. It certainly looks like we do, but that’s just because we confirm our results using the same method. (This is sometimes known as the “dipstick problem” in philosophy. I can’t tell whether my fuel gauge is accurate if I only look at the fuel gauge.)
There are two accounts of Theory of Mind in academic psychology. One is called Theory Theory. This is the claim that we have a theory of other people that we learn when young. This is the mainstream account. The other account, which I support, is called Simulation Theory:
Simulation Theory suggests that instead of using a theory of others, what we do when we predict and explain their behaviour is simulate them. Metaphorically, we place ourselves in what we think is their position, with the information and desires we think they have, and then work out what we would do.
Anyone who has read Proust knows that he has an exceptionally deep and unusual set of insights into our psychology. His insights are not paralleled elsewhere in my view, with the possible exception of Henry James. For this reason, it is unsurprising to me that he also favours Simulation Theory. Moreover, Proust even seems to suggest the defence of Simulation Theory using cognitive biases which I have proposed.*
There are two key quotations I will use to back up this claim.** The character Swann is discussing “fellow-feeling,” and remarks to himself as below:
“he could not, in the last resort, answer for any but men whose natures were analogous to his own, as was, so far as the heart went, that of M. de Charlus. The mere thought of causing Swann so much distress would have been revolting to him. But with a man who was insensible, of another order of humanity, as was the Prince des Laumes, how was one to foresee the actions to which he might be led by the promptings of a different nature?”
This tells us that Swann has observed that it is easier for him to predict or explain the behaviour of others when those others are similar to him. In this particular case, Swann is wondering which of his friends might have sent him a distressing anonymous letter. Swann believes that Charlus is similar to Swann himself, that Swann himself would not have sent such a letter, and therefore Swann concludes that Charlus did not send the letter.
On the other hand, Swann believes that des Laumes is a very different individual, one who is “insensible.” (I suspect that a more modern translation would use “insensitive” here.) Note that Swann, in a very simulationist vein, does not say “des Laumes is insensitive, so he might have sent the letter.” Instead, he says, in effect, “des Laumes is insensitive, so I cannot tell what he would do.”
This is a very simulationist line. It says, in effect, that Swann is unable, he believes, to simulate des Laumes, because des Laumes is very different to Swann. Note this is not consistent with the mainstream Theory Theory view. There is no reason why Swann, an intelligent and perceptive man, could not have a good theory of insensitive behaviour. There is by contrast every reason why Swann could struggle to simulate insensitive behaviour, lacking as he does the experience “from the inside” of such behaviour.
A further simulationist point is suggested later, where Proust observes that someone might be a genius:
“or, although a brilliant psychologist, [not believe] in the infidelity of a mistress or of a friend whose treachery persons far less gifted would have foreseen.”
This is a claim that people may be extremely intelligent, and even specially gifted in academic psychology, but still make Theory of Mind errors in relation to other people not so gifted. Note how uncongenial this is to Theory Theory: intelligent people who are brilliant psychologists should have an excellent theory of others and so be able to make very good predictions of their behaviour. Simulation Theory, by contrast, predicts exactly what Proust is describing here: brilliant, intelligent (highly moral?) individuals will fail to predict the behaviour of others who do not possess those characteristics. Similarly, more ordinary mortals will be able to simulate, and thus predict, much better when the person to be predicted is more like the person doing the predicting.
The major objection to Simulation Theory is that it does not account for surprising results in social psychology, such as the infamous Stanford prison experiment. Here, people behave amazingly harshly, for no apparent reason. This behaviour is not predicted by anyone. Theory Theorists claim that Simulation Theory cannot explain this, because we should just be able to simulate being a guard in a fake prison and then predict the harsh behaviour.
I provide a response to this objection on behalf of Simulation Theory. I suggest that what is missing from the simulation is a cognitive bias. In the case of the Stanford Prison Experiment, the bias I propose is Conformity Bias. Simply put, this is our tendency to do what we are told, and it is a lot stronger than we suppose when reflecting in comfortable repose.
It is gratifying to find Swann also gesturing in the direction of this Bias Mismatch Defence, as I call it. Swann further observes that he:
“knew quite well as a general truth, that human life is full of contrasts, but in the case of any one human being he imagined all that part of his or her life with which he was not familiar as being identical with the part with which he was.”
This, if Swann is accurate in his self-perception here, is a description of a systematic Theory of Mind error. It is a form of synecdoche, if you like. Swann takes the part of the person he knows and assumes that all of the rest of that person is the same.
I have suggested that one of the biases which can throw off our simulations is the Halo Effect. This means we know one thing about a person or item which has a certain positive or negative perceived value, and we then assume that all of the attributes of the person or item have the same value. For instance, someone who is a good speaker is probably also honest etc. There is of course no strong reason to think this, rationally speaking.
I have discussed the implications of the Halo Effect on predicting behaviour in financial markets previously:
In that case, I called the Bitcoin bubble just before it burst by employing the Halo Effect and positing that it was affecting the judgement of buyers. It is encouraging to see that Swann is also on the same page as me here!
Note that I do not claim to be a Proust expert, or even to have completed my reading yet! I do not therefore suggest that the above represents a radical new reading of the whole of Proust. I make only the modest claim that in this one paragraph, Proust describes a version of Theory of Mind which is more congenial to simulation than to theory. Since these are the only two developed candidate explanations of Theory of Mind, that is already interesting. (There is also a hybrid account which employs both simulation and theory, but that is a mess in my view, and there is no evidence for any theory in the above quotation and therefore no evidence for a hybrid account.)
* “In Search of Lost Time – Complete 7 Book Collection (Modern Classics Series): The Masterpiece of 20th Century Literature (Swann’s Way, Within a Budding … The Sweet Cheat Gone & Time Regained)” by Marcel Proust, trans. C. K. Scott Moncrieff and Stephen Hudson.
** It might be argued that this view is not that favoured by Proust himself but by Swann, who is a character created by Proust. I will not pursue this sort of Plato/Socrates point, but merely observe that it is at the very least true that Proust considers the position worth discussing. Moreover, I think it is very clear that Swann is to be considered an intelligent, discerning individual, if perhaps somewhat afflicted by propensities for self-deception, so the fact that this view is at least Swann’s is sufficient to make it interesting. (I am informed by someone who knows Proust better than I do that I am likely to revise my view of Swann in a negative direction as my reading progresses.)
One common feature shared by both groups is distrust of experts
We know that if you voted for Trump, you are more likely to be less intelligent, less educated, poorer and more rural. I will argue that this leads to a further feature, distrust of experts, which is required to be a supporter of either Trump or Bitcoin. This suggests that when Bitcoin crashes, Trump voters will experience most of the losses. In this post, I will consider only the distrust-of-experts feature.
Note that I said “more likely to be…” We are talking about two overlapping distributions here; it is not certain that any individual Trump voter is less intelligent, poorer and so on. It would not be an objection to say “I have a PhD and I am rich and I voted for Trump.” To say that would be to commit the Anecdotal Fallacy, against which I argued yesterday:
One of the notable points about Bitcoin is that there are no professional, experienced or institutional investors who have invested in it. (If that changes, we should all become seriously concerned.) Everyone who holds Bitcoin is an inexperienced amateur. I put this to a Bitcoin enthusiast, and received the following reply.
Mark Cuban invested big into Unikorn. Peter Thiel invested into bitpay which is a wallet company. Mike Novogratz (former president of fortress investments and partner at Goldman Sachs) runs Galaxy Investments (almost exclusively crypto). Tim Draper bought 30,000 btc in 2014. And Bill Gates: there are no definitive articles on how much BTC he holds but he has plenty of quotes talking about how it’s the future
I will now show why none of that works.
Mark Cuban and Unikorn
The first point to make is that it is odd to cite Cuban, since he is on record as saying that Bitcoin is a bubble. The other problem is that Unikoin, the token involved in this ICO, is not Bitcoin. (I also believe that almost all of the other ICOs are fraudulent, but I would need a lot more space and time to show that.) Finally, Unikoin will apparently permit sports betting, so while I do not recommend that, it at least has a theoretical source of value. Bitcoin does not.
Novogratz and Galaxy Investments
Novogratz and Galaxy Investment Partners have invested in the huge and under-the-radar Worldwide Asset eXchange (WAX). This is like selling shovels to miners in the Klondike gold rush. (Reportedly, Trump’s grandfather ran a Klondike brothel.) Selling shovels is a great business to be in, irrespective of how many of the miners, or Bitcoin holders, go bust. So this again is not an example of a major investor holding Bitcoin.
Tim Draper and 30,000 btc
This is the only one of the examples which approaches being serious. We must take it seriously because Draper reportedly invested serious money: $18m. And he is actually holding Bitcoin, as opposed to backing exchanges. The caveats, though, are manifold. First, he lost 40,000 Bitcoin in the Mt Gox fraud, and the fact that this did not give him pause makes me think he is an esoteric thinker. Secondly, a lot of his remarks concern enthusiasm “for the technology,” and it is very important to keep a clear distinction between Bitcoin, a Ponzi scheme, and the blockchain, a very interesting technology. Thirdly, this is one man against every investment bank, hedge fund, regulator and all the other expert investors in the world.
I have in fact been told that my 20 year experience of successful investing is a disadvantage, because it means I am unable to understand the “glorious opportunity” allegedly represented by Bitcoin. There are in fact some advantages to disadvantages, as I argue in my new book:
— but that isn’t one of them.
Bill Gates and the future
This is an excellent example of muddled analysis, and of a poor understanding of the importance of precision and of sourcing one’s quotes from reputable outlets. (It is no coincidence that Bitcoin supporters and Trump voters alike disparage proper news sources like the New York Times and prefer websites with manufactured quotes.) The first problem is that we are not actually given a quote from Gates. Secondly, it is highly likely that Gates thinks the blockchain is (part of) the future, and is not holding any sizeable number of Bitcoin. A distributed transparent ledger, which is what the blockchain is, is indeed a highly interesting piece of technology with many very useful applications. As just one example, imagine replacing property registers with a blockchain: myriad opportunities for money laundering and corruption would disappear, replaced by an efficient technology. The fact that Bitcoin is also built on a blockchain is irrelevant.
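The “distributed transparent ledger” idea can be illustrated with a toy hash-chained ledger. This is a minimal sketch only: real blockchains add consensus, signatures and distribution across many machines, and the property-register entries here are invented examples.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def verify(chain):
    """True only if every block still points at the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, "Plot 1 registered to Alice")
add_block(ledger, "Plot 1 transferred to Bob")
print(verify(ledger))  # True

ledger[0]["record"] = "Plot 1 registered to Mallory"  # attempt to rewrite history
print(verify(ledger))  # False: tampering breaks the chain
```

The point of the sketch is that quietly editing an old entry, the kind of manipulation a corrupt land registry might attempt, is immediately detectable, because every later block commits to the hash of what came before.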
People in this country have had enough of experts
This is actually a quotation from a pro-Brexit politician, but we see the same pattern across the Brexit “debate,” in Trump vs Clinton, in global warming and in the MMR vaccine/autism controversy. In each case, you need to believe that you are right and that anyone educated or with specialist knowledge is wrong. You also need to believe that those people are lying to you, for no obvious reason.
The quality of the arguments raised by Bitcoin proponents can be seen to be extremely poor. I discussed here:
— some really bad arguments. What is remarkable, though, is not the quality of the arguments (they are all very poor) but that their author has somehow managed to publish a book on Bitcoin while clearly not understanding it at all.
So now you can decide. If you invest in Bitcoin, you are lining up with the people who mistrust experts. If you voted for Trump, you did the same thing, because you are probably a climate change denier. So I think there is a very strong likelihood that many Trump voters are also holding Bitcoin, and they are going to pay a heavy price for both decisions.
The Anecdotal Fallacy, wherein people privilege their own experience over statistics, is one of the many cognitive biases that infect our thinking. It is particularly dangerous in financial markets, as illustrated by the current bubble in Bitcoin.
The Anecdotal Fallacy occurs when people ignore statistics and instead quote a story of events that happened to them. Often, it will turn out not to have happened to them at all, but to “someone they know.” While this latter step moves even further away from constituting useful data, it is not the most malign effect of this bias. The main problem is that assessing probabilities on the basis of personal experience is almost completely useless even when those personal experiences actually occurred.
There is only one way to assess probabilities, and that is to use statistics of similar event frequencies. This is extremely hard. In fact, even understanding it when it has been competently done by scientists or statisticians is extremely hard. It needs a lot of training and it seems as though our psychology is almost designed to trip us up.
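The gap between a single anecdote and a frequency-based estimate can be made concrete with a small simulation. This is a sketch with an assumed true event probability of 30%; the numbers are illustrative only.

```python
import random

random.seed(42)  # reproducible illustration
TRUE_P = 0.30    # assumed true probability of the event

def trials(n):
    """Simulate n independent observations of the event."""
    return [random.random() < TRUE_P for _ in range(n)]

anecdote = trials(1)       # one personal experience
sample = trials(10_000)    # many observations of similar events

print(sum(anecdote) / 1)                    # either 0.0 or 1.0: useless as an estimate
print(round(sum(sample) / len(sample), 2))  # close to 0.30
```

A sample of one can only ever say the event is certain or impossible, which is exactly the distortion the Anecdotal Fallacy produces; the frequency estimate converges on the true figure.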
The Anecdotal Fallacy is extraordinarily widespread; its use seems in many circumstances to be almost automatic. Give people data on almost anything, and they will generally respond with what they think is a counterargument drawn from their own experience. Apparently intelligent and successful people fall into this error, so those qualities are no prophylactic here. For example, Rupert Murdoch recently tweeted a photo accompanied by the text: “Just flying over N Atlantic 300 miles of ice. Global warming!”
This is a fairly extreme example which may have been deliberately provocative, but in my view it is just quite stupid. The ideas that global warming has to have happened already in all locations and that it would eliminate all ice on the planet betray a non-existent understanding of the problem. The only way to assess the probability that global warming is a genuine threat is to look at graphs showing correlations between greenhouse gas concentrations and temperature rises over several decades.* A personal experience is simply irrelevant to that task.
We also tend to over-estimate the probability of vivid events. I see this as an aspect of the Availability Heuristic, which I think is related to the Anecdotal Fallacy. We use the Availability Heuristic when we assess the probability of events by considering how difficult it is to think of an example of that type of event. Obviously we will make systematic errors in probability judgment if some events are easier to recall than others, and more vivid events are more easy to recall. I discuss this aspect of our psychology in the context of financial markets in my new book:
Why is the Anecdotal Fallacy relevant to the Bitcoin bubble?
Because everyone who is buying Bitcoin is doing so based on one of two events. Either they themselves have recently made a large amount of money from buying it or someone they know has. Twitter is full of stories of people doing so. This is extremely vivid and alluring. It draws more people in, which of course is what helps to sustain the bubble and indeed any Ponzi scheme.
Note that the problem is not that these stories are false. A lot of people have indeed made a lot of money out of Bitcoin. However, it is still a terrible investment (in fact, I don’t think we can even call it an investment) because it has no fundamental value and can crash to zero at any moment. It will definitely do so; we just don’t know when. So the problem is rather that people are using the Anecdotal Fallacy to assess the probability that Bitcoin will rise forever. They do not consider the statistics on bubbles, which have occurred widely throughout financial history. Any “asset” which rises this quickly has been a bubble which eventually crashed, as quickly as it ascended, to zero value.
So the statistics are diametrically opposed to our psychology here. Stay away from Bitcoin at all costs.
*The reason I say “several decades” is that we have only been taking detailed measurements for about 150 years. However, we have enough data from ice cores and the like going back much, much further, just with bigger error bars.
The US had vast superiority in all assets that were thought to matter but was still defeated in the Vietnam War — why?
It is clear that the US possessed much more in the way of conventional military assets in the conflict with North Vietnam than the opposing forces. This point is widely accepted so I will not spend much time arguing for it. For example, the US had tanks while the
Viet Cong had no anti-tank weapons.* US forces had “superb artillery and air support” (Sheehan, p. 447, 1988), which enabled any US troops facing locally superior odds to prevail. The entire US army fought under a doctrine of “superior firepower” (Sheehan, p. 243, 1988). The financial resources that the US was able to apply also hugely outweighed those available to its opponent, a largely peasant guerrilla army. Sheehan (p. 624, 1988) writes that commodity aid to South Vietnam reached the staggering figure of $650m in 1966.
This last point is decisive. It has been wisely observed that:
“Most wars have been wars of attrition, settled by which side had more staying power through the ability to apply men and materiel.” **
The GDP of North Vietnam in 1965 was $6.0bn in 2015 dollars; the GDP of the US in 1965 was $4.1tn in 2009 dollars. Even allowing for the slightly different constant-dollar bases, that is roughly 683x larger.
So why did the US lose? Consider the following highly insightful quotation.
“When McNamara wants to know what Ho Chi Minh is thinking, he interviews himself.” ***
Robert McNamara was the Secretary of Defense at the time, and so crucial to managing the war effort. It is clearly important to know what the enemy is thinking. McNamara’s error was to do this in the way that most people do. This is where we come to Theory of Mind.
Theory of Mind is the label in psychology for the way we predict and explain the behaviour of others. We all do this all the time, and there is a vibrant debate in psychology as to how we do it. The mainstream view is called “Theory Theory.” This holds that children as young as five, who already have a serviceable Theory of Mind, have formed it by learning a theory of other people. Most psychologists suppose that they have done this in a scientific fashion: they propose hypotheses and then confirm or disconfirm them empirically.
I support the opposing view, which is known as Simulation Theory.**** This suggests that we run our Theory of Mind by putting ourselves in the position of others and seeing what we would do. This, according to the quotations, is exactly what McNamara did. And it is why he was wrong and why the US lost.
We can see this same factor in action with another quote from a significant protagonist in Vietnam: the Green Beret Colonel Kurtz (a fictional character, from Apocalypse Now), who makes the following observation on realising that the Viet Cong have removed the arms of all the children in a village who were vaccinated against polio by US forces.
“And then I realized… like I was shot… like I was shot with a diamond… a diamond bullet right through my forehead. And I thought, my God… the genius of that! The genius! The will to do that!”
The surprise of the Colonel is again an illustration of Theory of Mind error. If his simulation of the Viet Cong had been more accurate, he would have been able to predict their action here. Still, that he could see how effective, if inhuman, this strategy was shows that he was perhaps able to adjust and improve his Theory of Mind more than McNamara was.
It also illustrates the type of Theory of Mind error we should expect. McNamara was a company man, who was experienced from his time running Ford in systems analysis and data handling. So when he simulated Ho Chi Minh, he would draw conclusions along the lines of “I am faced with overwhelming odds; all of the analysis says that overwhelming odds always win; I therefore cannot win.”
What this misses out is the “Blut und Boden” point hinted at by Kurtz. It misses out the will to fight on one’s own soil irrespective of the prospects of success. It misses out the will to enlist the entire male and female population in the war effort, with many women driving supplies down the Ho Chi Minh trail at night without lights under largely ineffective yet heavy US bombing. It misses out what the French missed at Dien Bien Phu: the will to disassemble artillery pieces and carry them up jungled mountains by hand.
So this is why the US lost. It is also presumably why my book is held by the following library:
You can also buy a copy at the link below if you want to know more about Theory of Mind. ****
* Sheehan, N. (1988) A Bright Shining Lie: John Paul Vann and America in Vietnam. Vintage Books
** “The other side has a vote”, The Economist, Oct 14 2017
*** This quotation is from James Willbanks, an army strategist. It is written up in The Economist, “Buried Ordnance,” in the issue of Sep 14 2017. The piece is a review of “The Vietnam War,” a TV documentary by Burns and Novick.
The Motor Theory of Speech Perception seeks to explain the remarkable fact that people have superior abilities to perceive speech as opposed to non-speech sounds. The theory postulates that people use their ability to produce speech when they perceive it as well, through micro-mimicry. In other words, when we see someone speaking, we make micro replicas of the mouth movements we see, thus helping us to understand what is being said. A major objection to this explanation has been put forward by Mole (2010), who denies that speech perception is special in a way that supports the Motor Theory. In this article I will defend the Motor Theory against Mole’s (2010) objection by arguing that there is indeed something relevantly special about speech perception.
Our speech perception functions very well even in conditions where the signal is of poor quality. These abilities are markedly better than our perception of non-speech sounds. For example, consider how you can fairly easily pick out the words being uttered, even against a background of intense, and louder, traffic noise. This fact makes it seem that there is a special nature to speech perception as compared to perception of non-speech sounds.
The Motor Theory of Speech Perception (Liberman and Mattingly 1985) seeks to explain this special nature of speech perception. It postulates that the mechanical and neural elements involved in the production of speech are also involved in the perception of speech. On this view, speech perception is the offline running of the systems that when online, actually produce speech. According to the Motor Theory, motor activation – i.e. micro-movements of mouth and tongue muscles or preparations thereto – are also occurring when perception of speech takes place. The idea is that if you make subliminal movements of the type you would make to produce an `S’ sound, you are thereby well-placed to understand that someone else whom you see making such movements overtly is likely to be producing an `S’ sound. This is how we understand one another’s speech so well. And so it is key to the Motor Theory of Speech Perception that speech perception is special.
In some ways, the position of the Motor Theory in explaining speech perception is analogous to the position of Simulation Theory (see Short, 2015) in explaining how we are often able to predict and explain the behaviour of other people (so-called Theory of Mind). In both cases, the account seeks to generate a maximally powerful explanation of the phenomenon using the minimum of additional “moving parts”. The Motor Theory notes that we already have complicated machinery to allow us to produce speech and suggests that that machinery may also be used to perceive and understand speech. The Simulation Theory account of Theory of Mind notes that we already have an immensely complex piece of machinery – a mind – and postulates that we may also use that mind to simulate others and thus understand them. I see value in these parsimonious and economical simulation approaches in both areas.
Mole (Ch. 10, 2010) challenges the Motor Theory. He agrees that speech perception is special, but not that it is special in such a way as to support the Motor Theory. In this article, I will offer responses on behalf of the Motor Theory to Mole’s (2010) challenge in five ways, as outlined below.
Mole (2010) claims that speech perception is not special. If that is true, then the Motor Theory cannot succeed because it proceeds from that assumption. I will first deny Mole’s (2010) claim that other perception also involves mapping from multiple percepts to the same meaning, and that such mapping is therefore not unique to speech perception. Taking an example from speech, we understand the name “Sherlock” to refer to that detective even though it may be pronounced in a myriad of different ways. This phenomenon is known as invariance. Mole (2010) claims that there is nothing special about speech perception, because other types of perception (such as colour perception) also involve mapping from multiple external sources of perceptual data to the same single percept. I will show that the example from visual perception invoked by Mole (2010) is not of the type that would dismiss the need for the special explanation of speech perception provided by the Motor Theory.
Mole (2010) makes another claim which is also intended to challenge the idea that underpins the Motor Theory that there is a special invariance in speech perception. This special invariance is the way that we always understand “Sherlock” to refer to the detective whichever accent the name is spoken in, or whatever the background noise level is (provided of course that we can actually hear the name). Mole (2010) claims that invariances in speech perception are not special as similar invariances also occur in face recognition. Mole (2010) seeks to make out his face recognition point by discussing how computers perform face recognition; I will show that he does not succeed here.
In the famous McGurk experiment, so-called “cross-talk” effects are seen. These occur where visual and aural stimuli interact with each other and change how one of them is perceived. For example, subjects seeing a video of someone saying “ga” but hearing a recording of someone saying “ba” report that they heard “da.” Since the Motor Theory postulates that speech perception is special, such cross-talk effects will support the Motor Theory if they are in fact special to speech perception. Mole (2010) uses cross-modal data from two experiments with the aim of showing that such cross-talk also exists in non-speech perception. I will suggest that the experiments Mole (2010) cites do not provide evidence for the sort of cross-talk phenomenon that Mole (2010) needs to support his position.
I will refute Mole’s (2010) claim that Motor Theory cannot account for how persons who cannot speak can nevertheless understand speech by outlining how that could occur.
Finally, I will briefly consider a range of additional data that support the Motor Theory and therefore challenge the position espoused by Mole (2010). These are that the Motor Theory explains all three of cerebellar involvement in dyslexia, observed links between speech production and perception in infants, and why neural stimulation of speech production areas enhances speech perception.
Challenges To Mole (2010)
Mole’s (2010) Counterexample From Visual Perception Is Disanalogous To Speech Perception
A phoneme is a single unit of speech. It can be thought of, roughly, as the aural equivalent of a syllable. Any single phoneme will be understood by the listener despite the fact that there will be many different sound patterns associated with it. It is clearly a very useful ability of people to be able to ignore details about pitch, intensity and accent in order to focus purely on the phonemes which convey meaning. This invariance is a feature of speech perception but not of sound perception in general, and it was this situation that motivated the proposal of the Motor Theory.
It is important to be clear on where there is invariance and where there is lack of invariance in perception. There is invariance in the item which the perceiver perceives (for example, Sherlock) even though there is a lack of invariance in the perceptual data that allows the perceiver to have the perception. So we can see that it is Sherlock’s face (an invariance in what is understood) even though the face may be seen from different angles (a lack of invariance in perceptual input). Similarly, we may hear that it is Sherlock’s name that is spoken (an invariance in what is understood) even though the name may be spoken in different accents (a lack of invariance in perceptual input). Lack of invariance is of course the same as variance; this discussion however tends to be couched in terms of invariance and its absence.
For supporters of the Motor Theory, this invariance in what the listener reports that they have heard is evidence that the perceptual object in speech perception is a single gesture – the one phoneme that the speaker intended to pronounce. This single object is always reportable despite the fact that the phoneme could have been pronounced in a wide variety of accents. The accents can vary a great deal but there is still invariance in what the listener hears because most accents can be understood.
Mole (2010) denies that this invariance is evidence for the special nature of speech. Mole (p.217, 2010) writes: “[e]ven if speech were processed in an entirely non-special way, one would not expect there to be an invariant relationship between […] properties of speech sounds […] and phonemes heard for we do not […] expect perceptual categories to map onto simple features of stimuli in a one-to-one fashion.”
Mole’s (2010) argument is as follows. He allows that there is not a one-to-one mapping between stimulus and perceived phoneme in speech perception. I will also concede this. Mole (2010) then denies that this means that speech perception is special on the grounds that there is not in general a one-to-one mapping between stimulus and percept in perception (other than in speech). He produces a putative example in vision by noting the existence of ‘metamers’. A metamer is one of two colours of slightly different wavelengths that are nevertheless perceived to be the same colour. Note that colour is defined here by wavelength rather than phenomenology.
Mole (2010) has indeed produced a further example of a situation where there is not a one-to-one mapping between stimulus and percept. However, this lack of one-to-one mapping is not exactly what is cited as the cause of the special nature of speech perception under the Motor Theory. Rather the relevant phenomenon is ‘co-articulation’ – i.e., the way in which we are generally articulating more than one phoneme at a time. As Liberman and Mattingly write (1985, p. 4), “coarticulation means that the changing shape of the vocal tract, and hence the resulting signal, is influenced by several gestures at the same time” so the “relation between gesture and signal […] is systematic in a way that is peculiar to speech”. So while it is indeed the case that there are multiple stimuli being presented which result in a single percept, it is the temporal overlap between those stimuli that is the key factor, not the mere fact of their multiplicity. In other words, the Motor Theory argument relies on the fact that a speaker is pronouncing more than one phoneme at a time during overlap periods.
This means that Mole’s (2010) metamer example is disanalogous, because it only deals with the multiplicity of the stimuli in the mapping and not with their temporal overlap. This is the case because there cannot in fact be a temporal overlap between two colour stimuli. We can see this using a thought experiment. Let us imagine a lighting rig that is capable of projecting any number of arbitrary colours and also of projecting more than one colour at the same time.
In that case, we could not say that the perception of a colour being projected at a particular time was changed by the other colours being projected with it. That situation would simply be the projection of a different colour. So a projection of red light with green light does not produce a modified red; it produces yellow light. It is not possible to have a “modified red,” because such a thing is not red any more. The rig would not be projecting a different sort of red; it would be projecting a different colour that was no longer red.
I will illustrate this further with an example from a different sensory modality: hearing. The position I am taking about red (more exactly, a precise shade of red) is essentialist. On essentialist accounts, there are certain properties of an item which can be changed and will result in a modified version of that item. There are other properties, the essential ones, which cannot be modified consistent with the original item retaining its identity.
For example, some properties of an opera are essential to it being an opera. By definition, an opera is symphonic music with singing; a symphony requires only the instruments. Some properties of an opera can be changed and this will result in a modified opera. One could replace the glass harmonica scored for the Mad Scene in Lucia di Lammermoor with a flute. One would then have a performance of a modified version of Lucia which would be a modified opera and would still be an opera.
What one could not do is change an opera into a symphony, strictly speaking. There could be a performance of the first act of Lucia as normal and one would be watching a performance of an opera. If in the second act the musicians came out and played without the singers, one would not have converted an opera into a symphony. One would have ceased to perform an opera and begun to perform a symphony, albeit one musically identical to the non-vocal parts of Lucia.
Returning to the lighting rig, we cannot say here that yellow is a modified red without abandoning any meaning for separate colour terms altogether – every colour would be a modified version of every other colour. This impossible lighting rig is what Mole (2010) needs to cite to have a genuine example, because it would be a case of multiple stimuli being projected at the same time and resulting in activation of the same perceptual category.
In sum, a metamer is an example where there is no one-to-one mapping between stimulus and perceptual category, but also where the different stimuli are not simultaneous. This is the case because we cannot be looking at both colours involved in a metamer at the same time. A co-articulation by contrast is an example of where there is no one-to-one mapping between stimulus and perceptual category, but where the different stimuli are indeed simultaneous. As it is that very simultaneity that is the key to the special nature of the systematic relation between gesture and signal under the Motor Theory, Mole (2010) does not have an example here that demonstrates that speech perception is not special.
Face Recognition Does Not Show A Similar Sort of Invariance Of Perception As Speech Recognition
Mole (2010) claims that face recognition is another example of invariance – for example, we can recognise that we are looking at Sherlock’s face from various angles and under different lighting conditions – thereby challenging the idea that invariance in speech perception is evidence for the special nature of speech perception. His claim is that the invariance in the way we can always report that we are looking at Sherlock’s face despite variance in input visual data is similar to the invariance in the way that we can always report we have heard Sherlock’s name despite variance in input aural data. If that is true, then Mole (2010) has succeeded in showing that speech perception is not special as the Motor Theory claims.
Mole (2010) allows that we use invariances in face recognition, but denies this could ever be understood by examination of retinal data. He writes: “[t]he invariances which one exploits in face recognition are at such a high level of description that if one were trying to work out how it was done given a moment-by-moment mathematical description of the retinal array, it might well appear impossible” (Nudds and O’Callaghan 2010, p. 216). What this means is that it would be difficult to get from the retinal array (displaying a great deal of lack of invariance) to the features we use in recognising Sherlock such as our idea of the shape of his nose (which is quite invariant).
However, this can be questioned as follows. Since the only thing that computers can do in terms of accepting data is to read in a mathematical array, Mole’s (2010) claim is in fact equivalent to the claim that it cannot be understood how computers can perform face recognition. That claim is false. To be very fair to Mole (2010), his precise claim is that the task might appear impossible, but I shall now show that since it is widely understood to be possible, it should not appear impossible either.
Fraser et al. (2003) describe an algorithm that performs the face recognition task better than the best algorithm in a ‘reference suite’ of such algorithms. Their computer is supplied with a gallery of pictures of faces and a target face and instructed to sort the gallery such that the target face is near the top. The authors report that their algorithm is highly successful at performing this task. Fraser et al. write (2003, p. 836): “[w]e tested our techniques by applying them to a face recognition task and found that they reduce the error rate by more than 20% (from an error rate of 26.7% to an error rate of 20.6%)”. So the computer recognized the target face around 80% of the time.
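The gallery-sorting task described above can be made concrete with a toy sketch. This is not Fraser et al.’s (2003) actual algorithm; it merely assumes, for illustration, that each face image has already been reduced to a small numeric feature vector, and ranks the gallery by distance to the target. The names and feature values are hypothetical. The point is simply that mapping a mathematical array to an identity is an ordinary, inspectable computation, not something that should “appear impossible”:

```python
import math

def rank_gallery(target, gallery):
    """Rank gallery faces by Euclidean distance between feature
    vectors; a smaller distance means a better match, so the best
    candidate for the target face appears first."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(gallery, key=lambda item: dist(item[1], target))

# Hypothetical feature vectors (e.g. nose width, eye spacing, jaw angle)
gallery = [("Watson",   [0.9, 0.2, 0.5]),
           ("Sherlock", [0.1, 0.8, 0.3]),
           ("Moriarty", [0.4, 0.4, 0.9])]

# A new image of Sherlock: similar features, slightly varied by
# lighting and viewing angle (the lack of invariance in the input).
target = [0.12, 0.79, 0.31]

ranked = rank_gallery(target, gallery)
print([name for name, _ in ranked])  # Sherlock ranks first
```

Real systems differ in how the feature vectors are extracted from the retinal-array-like pixel data, but the overall pipeline – variant input, invariant identity out – is exactly what the published algorithms describe and explain.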
So we see firstly that the computer can recognize a face. [It is not an objection here to claim that strictly speaking, computers cannot ‘recognise’ anything. All that we require here is that computers can be programmed so as to distinguish faces from one another merely by processing visual input. It is this task which Mole (2010) claims appears impossible.] Then we turn to the claim that how the computer does this cannot be understood. That is refuted by the entire paper, which is an extended discussion of exactly that. Since this is an active area of research, we can take it that such understanding is widely to hand in computational circles, and should become more widespread.
It may be true in one sense that we could not efficiently perform the same feat as the computer – in the sense of physically taking the mathematical data representing the retinal array and explicitly manipulating it in a sequence of complex ways in order to perform the face recognition task. In another sense, we could, of course. It is what we do every time we actually recognize a face. The mechanics of our eyes and the functioning of our perceptual processing system have the effect of performing those same mathematical manipulations. We know this because we do in fact perform face recognition using only the retinal array as input data.
Mole (2010) has indeed provided an example of invariance (i.e., in face recognition) but the example does not damage the need for a special explanation of the speech perception invariances, because the face perception example can in fact easily be explained. Therefore Mole (2010) has not here provided a further example of an invariance and he has not thereby questioned the specialness of speech perception. Speech perception continues to exhibit a unique invariance which continues to appear in need of a unique explanation.
Experimental Data Do Not Show Cross-Modal Fusion
Mole (2010) argues that an experiment on judgments made as to whether a cello was being bowed or plucked shows the same illusory optical/acoustic combinations as are seen in the McGurk effect. The McGurk effect (McGurk and MacDonald 1976) is observed in subjects hearing a /ba/ stimulus and seeing a /ga/ stimulus. The subjects report that they have perceived a /da/ stimulus. It is important to note that this is not one of the stimuli presented; it is a fusion or averaging of the two stimuli. So an optical stimulus and an acoustical stimulus have combined to produce an illusory result which is neither of them.
If Mole’s (2010) claim that the cello experiment shows McGurk-like effects is true, this would show that these illusory effects are not special to speech, thus challenging the claim that there is anything special about speech that the Motor Theory can explain. Mole (p. 221, 2010) writes: “judgments of whether a cello sounds like it is being plucked or bowed are subject to McGurk-like interference from visual stimuli”. However, the data Mole (2010) cites do not show the same type of illusory combination and so Mole (2010) is unable to discharge the specialness of speech perception as he intends.
The Motor Theory postulates that the gesture intended by the speaker is the object of the perception, and not the acoustical signal produced. The theory explains this by also postulating a psychological gesture recognition module which will make use of the speech production capacities in performing speech perception tasks. Thus the McGurk effect constitutes strong evidence for the Motor Theory by showing that the module has considered both optical and acoustical inputs in deciding what gesture has been intended by the speaker. This strong evidence would be weakened if Mole (2010) can show that McGurk-like effects occur other than in speech perception, because the proponents of the Motor Theory would then be committed to the existence of multiple modules, and their original motivation by the observed specialness of speech would be put in question.
More specifically, the paper Mole (2010) cites, Saldaña and Rosenblum (1993), describes an experimental attempt to find non-speech cross-modal interference effects using a cello as the source of acoustic and optical stimuli. Remarkably, Saldaña and Rosenblum (1993) state prominently in their abstract that their work suggests “the nonspeech visual influence was not a true McGurk effect” in direct contradiction of Mole’s (2010) stated reason for citing them.
There are two ways to make a cello produce sound: it can be plucked or it can be bowed. The experimenters proceed by presenting subjects with discrepant stimuli – for example, an optical stimulus of a bow accompanied by an acoustical stimulus of a pluck. Saldaña and Rosenblum (1993) found that the reported percepts were adjusted slightly by a discrepant stimulus in the direction of that stimulus.
However, to see a McGurk effect, we need the subjects to report that the gesture they perceive is a fusion of a pluck and a bow. Naturally enough, this did not occur, and indeed it is unclear what exactly such a fusion might be. Therefore, Mole (2010) has not here produced evidence that there are McGurk effects outside the domain of speech perception.
Mole’s (2010) response is to dismiss this as a merely quantitative difference between the effects observed by the two experiments. Mole (p. 221, 2010) writes: “[t]he McGurk effect does reveal an aspect of speech that is in need of a special explanation because the McGurk effect is of a much greater magnitude than analogous cross-modal context effects for non-speech sounds”. As we have seen, Mole (2010) is wrong to claim there is only a quantitative difference between the McGurk effect observed in speech perception and the cross-modal effects observed in the cello experiment because only in the former were fusion effects observed. That is most certainly a major qualitative difference.
Mole’s (2010) claim that the cello results are only quantitatively different to the results seen in the McGurk effect experiment produces further severe difficulties when we consider in detail the experimental results obtained. The cello experimenters describe a true McGurk effect as being one where there is a complete shift to a different entity – the syllable is reported as clearly heard and is entirely different to the one in the acoustic stimulus. Saldaña and Rosenblum (1993, p. 409) describe these McGurk data as meaning: “continuum endpoints can be visually influenced to sound like their opposite endpoints”.
The cello data were not able to make a pluck sound exactly like a bow and in fact the discrepant optical stimuli were only able to slightly shift the responses in their direction, by less than a standard deviation, and in some cases not at all. This is not the McGurk effect at all and so Mole (2010) cannot say it is only quantitatively different. Indeed, Saldaña and Rosenblum (1993, p. 410) specifically note that: “[t]his would seem quite different from the speech McGurk effect”.
In sum, the cross-modal fusion effect that Mole (2010) needs is physically impossible in the cello case and the data actually found do not even represent a non-speech analog of the McGurk effect, as is confirmed by the authors. Once again, speech perception remains special and the special Motor Theory is needed to explain it.
Sound Localization Experiment
The other experiment relied on by Mole (2010) was conducted by Lewald and Guski (2003) and considered the ventriloquism effect, whereby discrepant optical and acoustic stimuli may be perceived as a single unified event. As above, the result that Mole (2010) needs to support his theory is an effect that is a good analogy to the McGurk effect in a non-speech domain. As I will show below, the data from the Sound Localisation Experiment also fail to bear out his claim that there are McGurk-like effects outside the domain of speech perception.
The Sound Localisation Experiment uses tones and lights as its acoustic and optical stimuli. It investigates the ventriloquism effect quantitatively in both the spatial and temporal domains. The idea is that separate optical and acoustic events will tend to be perceived as a unified single event with optical and acoustical effects. This will only occur if the spatial or temporal separation of the component events is below certain thresholds.
Lewald and Guski (2003, p. 469) propose a “spatio-temporal window for audio-visual integration” within which separate events will be perceived as unified. They suggest maximum values of 3° for angular or spatial separation and 100 ms for temporal separation. Thus a scenario in which a light flash occurs less than 3° away from the source of a tone burst will produce a unified percept of a single optical/acoustical event, as will a scenario in which a light flash occurs within 100 ms of a tone burst. Since the two stimuli in fact occurred at slightly different times or locations, this effect entails that at least one of the stimuli is perceived to have occurred at a different time or location than it actually did.
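On one natural reading of the window just described, the thresholds can be written as a simple predicate: two stimuli are perceived as a single event only when both their spatial and temporal separations fall inside the window. The sketch below is illustrative only; the threshold values are the ones quoted from Lewald and Guski (2003), but treating them as hard cut-offs in a conjunction is my simplification of their quantitative results:

```python
def perceived_as_unified(angle_deg, delta_t_ms,
                         max_angle_deg=3.0, max_delta_t_ms=100.0):
    """Illustrative 'spatio-temporal window for audio-visual
    integration': a light flash and a tone burst are treated as one
    event when their angular separation and temporal offset both
    fall within the window (thresholds per Lewald and Guski 2003)."""
    return angle_deg <= max_angle_deg and delta_t_ms <= max_delta_t_ms

print(perceived_as_unified(2.0, 50.0))   # within the window: unified
print(perceived_as_unified(2.0, 250.0))  # too far apart in time: separate
```

Note what the predicate returns: a judgment that two stimuli belong to one event. Nothing in it produces a third, fused percept distinct from both inputs, which is precisely the disanalogy with the McGurk effect argued for in what follows.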
To recap, in the McGurk effect, discrepant optical and acoustic stimuli result in a percept that is different to either of the two stimuli and is a fusion of them. We may allow to Mole (2010) that Lewald and Guski (2003) do indeed report that subjects perceive a single event comprising a light flash and a tone burst. However, that is insufficient to constitute an analogy to the McGurk effect. Subjects do not report that their percept is some fusion of a light flash and a tone burst – as with the cello experiment, it is unclear what such a fusion could be – they merely report that an event has resulted in these two observable effects. [We may note that Lewald and Guski (2003) do not take themselves to be searching for non-speech analogs of the McGurk effect; the term does not appear in their paper or in the title of any of their 88 citations, throwing doubt on the claim that they are working in the field at all.]
Indeed, the subjects were not even asked whether they perceived some fused event. They were asked whether the sound and the light had a common cause, were co-located, or were synchronous. As Lewald and Guski write (p. 470, 2003): “[i]n Experiment 1, participants were instructed to judge the likelihood that sound and light had a common cause. In Experiment 2, participants had to judge the likelihood that sound and light sources were in the same position. In Experiment 3, participants judged the synchrony of sound and light pulses”. A ‘common cause’ might have been some particular event, but it is not itself the sound or the light, and those were the only things perceived; therefore the instructions do not even admit the possibility that a fused event was perceived.
Since Lewald and Guski (2003) are measuring the extent to which participants agree that a light and a tone had a common cause, were co-located or were synchronous, it is puzzling that Mole (p. 221, 2010) cites them to support his claim that perceived flash count can be influenced by perceived tone count. We see this when Mole writes (p. 221, 2010): “[t]he number of flashes that a subject seems to see can be influenced by the number of concurrent tones that he hears (Lewald and Guski 2003)”.
Moreover, neither the Sound Localisation Experiment nor the cello experiment support Mole’s (p. 221, 2010) summation that “[i]t is not special to speech that sound and vision can interact to produce hybrid perceptions influenced by both modalities” in the way he needs. Unlike with the McGurk effect, there are no hybrid perceptions in either case, where “hybrid” is understood to be ‘a perception of an event which is neither of the stimulus events’.
There are cross-modal effects between non-speech sound stimuli and optical stimuli but that is inadequate to support Mole’s (2010) claim that speech is not special. We still need the special explanatory power of the Motor Theory.
Mute Perceivers Can Be Accommodated
One of Mole’s (2010) challenges is that the Motor Theory cannot explain how some people can have the capacity to perceive speech that they lack the capacity to produce. Mole writes (p. 226, 2010) that “[a]ny move that links our ability to perceive speech to our ability to speak is an unappealing move, since it ought to be possible to hear speech without being able to speak oneself”. There is an equivocation here though on what is meant by ‘capacity to produce’. Mole (2010) is reading that term so that the claim is that someone who is unable to use their mouth to produce speech lacks the capacity to perceive speech. Since such mute people can indeed as he claims understand speech, he takes his claim to be made out.
However, in the article cited by Mole (2010), it is clear that this is not what is understood by ‘capacity to produce’. In the study by Fadiga et al. (2002) described, the neuronal activation related to tongue muscles is not sufficient to generate movement. This activation is a result of the micro-mimicry that takes place when people are perceiving speech. Fadiga et al. (2002) call this mimicry “motor facilitation.”
Fadiga et al. (p. 400, 2002) write: “The observed motor facilitation is under-threshold for overt movement generation, as assessed by high sensitivity electromyography showing that during the task the participants’ tongue muscles were absolutely relaxed”. Thus the question is whether the subject has the capacity to produce such a sub-threshold activation, and not the capacity to produce speech via a super-threshold activation. Naturally, since all the subjects had normal speech, they could produce both a sub-threshold and a super-threshold activation, with the latter resulting in speech.
However, someone could be able to activate their tongue muscles below the threshold to generate overt movement but not be able to activate those muscles above the threshold. That would mean that they lacked ‘capacity to produce’ in Mole’s (2010) sense, but retained it in Fadiga et al.’s (2002) sense. This would be a good categorization of the mute people who can understand speech they cannot utter. Those people would retain the ability to produce the neural activity that Fadiga et al. observe, which does not result in tongue muscle movement. This is a testable empirical claim to which my account is committed. It is possible that they may not be able to even produce the sub-threshold neural signals. If that turns out to be correct, it would be a problem for the Motor Theory and the defence I have offered for it here.
Similarly, we can resolve Mole’s (2010) puzzle about how one can understand regional accents that one cannot mimic; i.e. I can understand people who speak with an accent that is different to mine. The capacity to understand a particular accent could result from our ability to generate the necessary sub-threshold activations, but not the super-threshold ones. If we go on to acquire that regional accent, our super-threshold muscle activation capacities would be of the required form. This again is an empirical prediction which makes my account subject to falsification by data.
This hypothesis could have interesting implications in the field of developmental psychology. Mole (p. 216, 2010) outlines how infants can perceive all speech sound category distinctions, but eventually lose the ability to discriminate the ones that do not represent a phoneme distinction in their language. So it may be the case that all infants are born with the neural capacity to learn to generate super-threshold activations of all regional accents, but eventually retain that capacity only at the sub-threshold level – because they can later understand a wide range of regional accents – and lose the capacity at the super-threshold level – for those regional accents they cannot mimic.
Another implication here of the Motor Theory is to say that a listener’s vocal tract can function as a model of itself, just as a listener’s vocal tract can function as a model of a speaker’s vocal tract. This means that the sub-threshold activation functions as a model of the super-threshold activation. So, perceptual capacities involve the former modelling the latter exactly as the Motor Theory predicts. Such an approach does not commit the Motor Theory to the modelling/perception neurons controlling the sub-threshold activations being the same as the production neurons controlling speech production, so the account is not susceptible to falsification on that precise point.
Further Brief Challenges To Mole (2010)
The Motor Theory Explains Cerebellar Involvement In Dyslexia
Mole (2010) challenges the Motor Theory and in doing so, challenges the idea that speech production capacities are involved in speech recognition. For this reason, any data showing links between speech production capacities and speech recognition capacities will be a problem for him.
Ivry and Justus (2001) refer to a target article that shows that 80% of dyslexia cases are associated with cerebellar impairments. Since the cerebellum is generally regarded as a motor area, and dyslexia is most definitely a language disorder, we have clear evidence for a link between language and motor areas. That is naturally a result that can be clearly accommodated by the Motor Theory which links speech production and speech recognition.
It is not open to Mole (2010) to respond that the link is only between motor control areas and writing control areas, because although writing skills are the primary area of deficit for dyslexic subjects, the authors also found impairments in reading ability to be strongly associated with the cerebellar impairments. This can be explained on the Motor Theory because it says that motor deficits will result in speech recognition deficits. Mole (2010) needs to provide an explanation of this which does not rely on the Motor Theory.
The Motor Theory Explains Links Between Speech Production And Perception In Infants
Mole (2010) does not address some important results supplied by Liberman and Mattingly (1985: p. 18) that link perception and production of speech. These data show that infants preferred to look at a face producing the vowel they were hearing rather than the same face with the mouth shaped to produce a different vowel. That effect was not seen when the vowel sounds were replaced with non-speech tones matched for amplitude and duration with the spoken vowels. What this means is that the infants are able to match the acoustic signal to the optical one. In a separate study, the same extended looking effect was seen in infants when a disyllable was the test speech sound. These data cannot be understood without postulating a link between speech production and speech perception abilities, because differentiating between mouth shapes is a production-linked task – albeit one mediated by perception – and differentiating between speech percepts is a perceptual task.
The Motor Theory Explains Why Neural Stimulation Of Speech Production Areas Enhances Speech Perception
D’Ausilio et al. (2009) conducted an experiment in which Transcranial Magnetic Stimulation (“TMS”) was applied to areas of the brain known to be involved in motor control of articulators. Articulators are the physical elements that produce speech, such as the tongue and lips. After the TMS, the subjects were tested on their abilities to perceive speech sounds. It was found that the stimulation of speech production areas improved the ability of the subjects to perceive speech. The authors suggest that the effect is due to the TMS causing priming of the relevant neural areas such that they are more liable to be activated subsequently.
Even more remarkably, the experimenters found finer-grained effects such that stimulation of the exact area involved in production of a sound enhanced perceptual abilities in relation to that sound. D’Ausilio et al. (2009: p. 383) report: “the perception of a given speech sound was facilitated by magnetically stimulating the motor representation controlling the articulator producing that sound, just before the auditory presentation”. This constitutes powerful evidence for the Motor Theory’s claim that the neural areas responsible for speech production are also involved in speech perception.
Special situations require special explanations. The Motor Theory of Speech Perception is a special explanation of speech perception which, as evidenced by the rejection of Mole’s objections, continues to be needed. One might say that such “specialness” leaves the Motor Theory in a vulnerable and isolated position, since it seeks to explain speech perception in a way that is very different from how we understand other forms of perception. Here, I would revert to my brief opening remarks about the similarities between the Motor Theory and Simulation Theory. Whilst the Motor Theory is indeed a special way to explain speech perception, it is at the same time parsimonious and explanatorily powerful because, like Simulation Theory, it does not require any machinery which we do not already know we possess. This is perhaps what underlies the continued attractiveness of the Motor Theory as a convincing account of how people perceive speech so successfully.
D’Ausilio, A et al. 2009 The Motor Somatotopy of Speech Perception. Current Biology, 19: pp. 381–385. DOI: 10.1016/j.cub.2009.01.017
Fadiga, L et al. 2002 Speech Listening Specifically Modulates the Excitability of Tongue Muscles: a TMS study. European Journal of Neuroscience, 15: pp. 399–402. DOI: 10.1046/j.0953-816x.2001.01874.x
Fraser, A M et al. 2003 Classification modulo invariance, with application to face recognition. Journal of Computational and Graphical Statistics, 12 (4): pp. 829–852. DOI: 10.1198/1061860032634
Ivry, R B and T C Justus 2001 A neural instantiation of the motor theory of speech perception. Trends in Neuroscience, 24 (9): pp. 513–515. DOI: 10.1016/S0166-2236(00)01897-X
Lewald, J and R Guski 2003 Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli. Brain Research. Cognitive Brain Research (Amsterdam), 16: pp. 468–478. DOI: 10.1016/S0926-6410(03)00074-0
Liberman, A and I G Mattingly 1985 The Motor Theory of Speech Perception Revised. Cognition, 21: pp. 1–36. DOI: 10.1016/0010-0277(85)90021-6
McGurk, H and J MacDonald 1976 Hearing lips and seeing voices. Nature, 264, (5588): pp. 746–748. DOI: 10.1038/264746a0
Mole, C 2010 The motor theory of speech perception. In: Sounds and Perception: New Philosophical Essays. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199282968.001.0001
Saldaña, H M and L D Rosenblum 1993 Visual influences on auditory pluck and bow judgments. Perception and Psychophysics, 54 (3): pp. 406–416. DOI: 10.3758/BF03205276
The Picture Superiority Effect is one of a large number of cognitive biases that affect how we think and act. It is important to know about these biases in the context of financial markets because they can impair our decision making, but they can also inform traders on how other market participants may react.
As in previous posts on this blog, I will first outline a cognitive bias, drawing on the relevant psychological literature, and then describe how it plays out in financial markets. My basic point throughout is that it is critical for market participants to know about these unavoidable biases for two reasons. Firstly, knowing about them is the first step to being able to recognise when they are operative and to assess whether they have resulted in an optimal decision, with specific relevance here to trading decisions. Secondly, since no one is free of these biases, traders can expect that other market players will be influenced by them, and can trade accordingly.
The Picture Superiority Effect is relatively straightforward: psychologists have found that people find it easier to remember images than words. There are different opinions in the literature as to why this might be. In my view, the effect is likely explained by our preference for the vivid and concrete over the dull and abstract; but in fact the cause is not that important for our purposes here. We just need to know that everyone remembers imagery better than text. This is probably no surprise in the age of social media, where pictures are shared more widely than text (and so we might surmise that there is also a Video Superiority Effect which is even stronger).
There is some discussion as to how age interacts with the Picture Superiority Effect. Early researchers found that younger people recalled more pictures than words while older subjects did not, suggesting that the effect exists only in younger people. More recent work, however, appears to find the exact opposite. Given the general improvement in experimental methodologies over time and the parallel increase in knowledge, I would say that the more recent studies are more likely to be correct, though that remains subject to further confirmation or disconfirmation.
As a result, there have been suggestions that images work as a compensation mechanism for older adults who are experiencing memory deficits. So the overall story may be that younger people are prone to the Picture Superiority Effect, middle-aged adults are less prone to it, and older people then embrace the effect for compensation purposes; that is, older people deliberately rely more on pictures to help them remember things. There is also advice from the intelligence community (!) to the effect that the way to remember a lot of items without writing them down is to modify a visual memory of a very familiar location, such as one’s home, adding strange and striking items which represent the data one wishes to remember.
All of this means that everyone who is involved in financial markets can expect that the Picture Superiority Effect will play a role in their thinking to a differing extent at various life stages. How would this work?
This type of point (how cognitive biases affect our performance in financial markets) is one I discuss at length in my book.
One example I give there is related to imagery, although I am actually discussing a different cognitive bias called the Availability Heuristic. The example is the familiar photos and video of people who had just been fired from Lehman Bros. after it collapsed in the crisis. These pictures, and ones like them, are extremely easy to remember; in fact, they are difficult to forget. This sort of thing might make you unreasonably averse to buying bank shares. Similarly, pictures of Elon Musk looking depressed might make you avoid TSLA stock. There may or may not be good reasons for avoiding such stocks (my view is the opposite at present), but what is clear is that if you read a story about banks or TSLA and recall only a picture of a fired banker or a sad Elon Musk, you have not retained much that is useful for making a market decision. Even if you give equal weight to the picture and the words, you are probably still weighting the evidential value of the information available to you wrongly.
It is probably wise to set aside the limited information value represented by imagery and focus on the data — which may of course be presented graphically without being just a photo.