The US Was Defeated In Vietnam By Systematic Theory Of Mind Error

The US had vast superiority in all assets that were thought to matter but was still defeated in the Vietnam War — why?

It is clear that the US possessed far more in the way of conventional military assets in the conflict with North Vietnam than the opposing forces.  This point is widely accepted, so I will not spend much time arguing for it.  For example, the US had tanks while the Viet Cong had no anti-tank weapons.*  US forces had “superb artillery and air support” (Sheehan 1988, p. 447), which meant that US troops facing locally superior numbers could still prevail.  The entire US army fought under the doctrine of “superior firepower” (Sheehan 1988, p. 243).  The financial resources the US could apply also hugely outweighed those of its opponent, a largely peasant guerrilla army.  Sheehan (1988, p. 624) writes that commodity aid to South Vietnam reached the staggering figure of $650m in 1966.

This last point is decisive.  It has been wisely observed that:

“Most wars have been wars of attrition, settled by which side had more staying power through the ability to apply men and materiel.” **

The GDP of North Vietnam in 1965 was around $6.0bn (in 2015 dollars); the GDP of the US in 1965 was around $4.1tn (in 2009 dollars).  The base years differ, so the comparison is only indicative, but on these figures the US economy was roughly 680 times larger.
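As a quick sanity check of that ratio (a minimal sketch in Python; the GDP figures are simply the ones quoted above):

```python
# Quick check of the GDP ratio quoted above.  The two figures are in
# different base-year dollars (2015 vs 2009), so this is an
# order-of-magnitude comparison rather than an exact one.

north_vietnam_gdp = 6.0e9   # North Vietnam, 1965, in 2015 dollars
us_gdp = 4.1e12             # US, 1965, in 2009 dollars

print(f"{us_gdp / north_vietnam_gdp:.0f}x")  # prints: 683x
```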

So why did the US lose?  Consider the following highly insightful quotation.

“When McNamara wants to know what Ho Chi Minh is thinking, he interviews himself.” ***

Robert McNamara was the Secretary of Defense at the time, and so crucial to managing the war effort.  It is clearly important to know what the enemy is thinking.  McNamara’s error was to do this in the way that most people do.  This is where we come to Theory of Mind.

Theory of Mind is the label in psychology for the way we predict and explain the behaviour of others.  We all do this all the time.  There is a vibrant debate in psychology as to how we do it.  The mainstream view is called “Theory Theory.”  This holds that children as young as five, who already have a serviceable Theory of Mind, have formed it by learning a theory of other people.  Most psychologists suppose that they have done this in a scientific fashion: they propose hypotheses about others and then confirm or disconfirm them empirically.

I support the opposing view, which is known as Simulation Theory.****  This suggests that we run our Theory of Mind by putting ourselves in the position of others and seeing what we would do.  This, according to the quotation above, is exactly what McNamara did.  And it is why he was wrong and why the US lost.

We can see this same factor in action with another quotation from a significant (if fictional) protagonist in Vietnam: Green Beret Colonel Kurtz in Apocalypse Now, who makes the following observation on realising that the Viet Cong have removed the arms of all the children in a village whom US forces had vaccinated against polio.

“And then I realized… like I was shot… like I was shot with a diamond… a diamond bullet right through my forehead. And I thought, my God… the genius of that! The genius! The will to do that!”

The Colonel’s surprise is again an illustration of Theory of Mind error.  If his simulation of the Viet Cong had been more accurate, he would have been able to predict their action here.  That he did not predict it, but could see how effective, if inhuman, the strategy was, suggests that he was perhaps better able to adjust and improve his Theory of Mind than McNamara was.

It also illustrates the type of Theory of Mind error we should expect.  McNamara was a company man whose experience running Ford lay in systems analysis and data handling.  So when he simulated Ho Chi Minh, he would draw conclusions along the lines of “I am faced with overwhelming odds; all of the analysis says that overwhelming odds always win; I therefore cannot win.”

What this misses is the “Blut und Boden” (“blood and soil”) point hinted at by Kurtz.  It misses the will to fight on one’s own soil irrespective of the prospects of success.  It misses the will to enlist the entire male and female population in the war effort, with many women driving supplies down the Ho Chi Minh trail at night without lights under heavy yet largely ineffective US bombing.  It misses what the French missed at Dien Bien Phu: the will to disassemble artillery pieces and carry them up jungle-covered mountains by hand.

So this is why the US lost.  It is also presumably why my book is held by the following library:

Institute for Defense Analyses Library, Alexandria, VA 22311, United States

You can also buy a copy at the link below if you want to know more about Theory of Mind. ****

* Sheehan, N. (1988)  A Bright Shining Lie: John Paul Vann and America in Vietnam.  Vintage Books.

** “The other side has a vote”, The Economist, Oct 14 2017

***  This quotation is from James Willbanks, an army strategist.  It is written up in The Economist, “Buried Ordnance,” in the issue of Sep 14 2017.  The piece is a review of “The Vietnam War,” a TV documentary by Burns and Novick.

**** Short, T L 2015  Simulation Theory: a Psychological and Philosophical Consideration.  Abingdon: Routledge.  URL: https://www.routledge.com/Simulation-Theory-A-psychological-and-philosophical-consideration/Short/p/book/9781138294349

Defending The Motor Theory Of Speech Perception

The Motor Theory of Speech Perception seeks to explain the remarkable fact that people have superior abilities to perceive speech as opposed to non-speech sounds. The theory postulates that people use their ability to produce speech when they perceive speech as well, through micro-mimicry. In other words, when we see someone speaking, we make micro replicas of the mouth movements we see, thus helping us to understand what is being said. A major objection to this explanation has been put forward by Mole (2010), who denies that there is anything special about speech perception as opposed to perception of non-speech sound. In this article I will defend the Motor Theory against Mole’s (2010) objection by arguing the contrary: there is something special about speech perception.

Introduction

Our speech perception functions very well even in conditions where the signal is of poor quality.  These abilities are markedly better than our perception of non-speech sounds.  For example, consider how you can fairly easily pick out the words being uttered even against a background of intense, and louder, traffic noise.  This makes it seem that there is something special about speech perception as compared to the perception of non-speech sounds.

The Motor Theory of Speech Perception (Liberman and Mattingly 1985) seeks to explain this special nature of speech perception.  It postulates that the mechanical and neural elements involved in the production of speech are also involved in its perception.  On this view, speech perception is the offline running of the systems that, when online, actually produce speech.  According to the Motor Theory, motor activation – i.e. micro-movements of mouth and tongue muscles, or preparations thereto – also occurs when perception of speech takes place.  The idea is that if you make subliminal movements of the type you would make to produce an ‘S’ sound, you are thereby well placed to understand that someone else whom you see making such movements overtly is likely to be producing an ‘S’ sound.  This is how we understand one another’s speech so well.  It is therefore key to the Motor Theory of Speech Perception that speech perception is special.

In some ways, the position of the Motor Theory in explaining speech perception is analogous to the position of Simulation Theory (see Short, 2015) in explaining how we are often able to predict and explain the behaviour of other people (so-called Theory of Mind).  In both cases, the account seeks to generate a maximally powerful explanation of the phenomenon using the minimum of additional “moving parts”.  The Motor Theory notes that we already have complicated machinery to allow us to produce speech and suggests that that machinery may also be used to perceive and understand speech.  The Simulation Theory account of Theory of Mind notes that we already have an immensely complex piece of machinery – a mind – and postulates that we may also use that mind to simulate others and thus understand them.  I see value in these parsimonious simulation approaches in both areas.

Mole (2010, Ch. 10) challenges the Motor Theory.  He agrees that speech perception is special, but not that it is special in such a way as to support the Motor Theory.  In this article, I will respond on behalf of the Motor Theory to Mole’s (2010) challenge in five ways, as outlined below.

  1. Mole (2010) claims that speech perception is not special.  If that is true, then the Motor Theory cannot succeed, because it proceeds from that assumption.  Taking an example from speech: we understand the name “Sherlock” to refer to that detective even though it may be pronounced in a myriad of different ways.  This phenomenon is known as invariance.  Mole (2010) claims that there is nothing special about speech perception, because other types of perception (such as colour perception) also involve mapping from multiple external sources of perceptual data to the same single percept.  I will first deny this claim, and show that the example from visual perception invoked by Mole (2010) is not of the type that would dismiss the need for the special explanation of speech perception provided by the Motor Theory.
  2. Mole (2010) makes another claim which is also intended to challenge the idea that underpins the Motor Theory that there is a special invariance in speech perception.  This special invariance is the way that we always understand “Sherlock” to refer to the detective whichever accent the name is spoken in, or whatever the background noise level is (provided of course that we can actually hear the name).  Mole (2010) claims that invariances in speech perception are not special as similar invariances also occur in face recognition.  Mole (2010) seeks to make out his face recognition point by discussing how computers perform face recognition;  I will show that he does not succeed here.
  3. In the famous McGurk experiment, so-called “cross-talk” effects are seen. These occur where visual and aural stimuli interact with each other and change how one of them is perceived.  For example, subjects seeing a video of someone saying “ga” but hearing a recording of someone saying “ba” report that they heard “da.”  Since the Motor Theory postulates that speech perception is special, such cross-talk effects will support the Motor Theory if they are in fact special to speech perception.  Mole (2010) uses cross-modal data from two experiments with the aim of showing that such cross-talk also exists in non-speech perception.  I will suggest that the experiments Mole (2010) cites do not provide evidence for the sort of cross-talk phenomenon that Mole (2010) needs to support his position.
  4. I will refute Mole’s (2010) claim that Motor Theory cannot account for how persons who cannot speak can nevertheless understand speech by outlining how that could occur.
  5. Finally, I will briefly consider a range of additional data that support the Motor Theory and therefore challenge the position espoused by Mole (2010).  These are that the Motor Theory explains all three of: cerebellar involvement in dyslexia, observed links between speech production and perception in infants, and why neural stimulation of speech production areas enhances speech perception.

Challenges To Mole (2010)

Mole’s (2010) Counterexample From Visual Perception Is Disanalogous To Speech Perception

A phoneme is a single unit of speech.  It can be thought of, roughly, as the smallest unit of sound that distinguishes one word from another.  Any single phoneme will be understood by the listener despite the fact that there will be many different sound patterns associated with it.  It is clearly a very useful ability of people to be able to ignore details of pitch, intensity and accent in order to focus purely on the phonemes which convey meaning.  This invariance is a feature of speech perception but not of non-speech sound perception, and it is this contrast that motivated the proposal of the Motor Theory.

It is important to be clear on where there is invariance and where there is lack of invariance in perception.  There is invariance in the item which the perceiver perceives (for example, Sherlock) even though there is a lack of invariance in the perceptual data that allows the perceiver to have the perception.  So we can see that it is Sherlock’s face (an invariance in what is understood) even though the face may be seen from different angles (a lack of invariance in perceptual input).  Similarly, we may hear that it is Sherlock’s name that is spoken (an invariance in what is understood) even though the name may be spoken in different accents (a lack of invariance in perceptual input).   Lack of invariance is of course the same as variance; this discussion however tends to be couched in terms of invariance and its absence.

For supporters of the Motor Theory, this invariance in what the listener reports that they have heard is evidence that the perceptual object in speech perception is a single gesture – the one phoneme that the speaker intended to pronounce.  This single object is always reportable despite the fact that the phoneme could have been pronounced in a wide variety of accents.  The accents can vary a great deal but there is still invariance in what the listener hears, because most accents can be understood.

Mole (2010) denies that this invariance is evidence for the special nature of speech.  Mole (p.217, 2010) writes: “[e]ven if speech were processed in an entirely non-special way, one would not expect there to be an invariant relationship between […] properties of speech sounds […] and phonemes heard for we do not […] expect perceptual categories to map onto simple features of stimuli in a one-to-one fashion.”

Mole’s (2010) argument is as follows.  He allows that there is not a one-to-one mapping between stimulus and perceived phoneme in speech perception.  I will also concede this.  Mole (2010) then denies that this means that speech perception is special, on the grounds that there is not in general a one-to-one mapping between stimulus and percept in perception (other than in speech).  He produces a putative example in vision, by noting the existence of ‘metamers’.  A metamer is one of two colours of slightly different wavelengths that are nevertheless perceived to be the same colour.  Note that colour is defined here by wavelength rather than phenomenology.

Mole (2010) has indeed produced a further example of a situation where there is not a one-to-one mapping between stimulus and percept.  However, this lack of one-to-one mapping is not exactly what is cited as the cause of the special nature of speech perception under the Motor Theory. Rather the relevant phenomenon is ‘co-articulation’ – i.e., the way in which we are generally articulating more than one phoneme at a time. As Liberman and Mattingly write (1985, p. 4), “coarticulation means that the changing shape of the vocal tract, and hence the resulting signal, is influenced by several gestures at the same time” so the “relation between gesture and signal […] is systematic in a way that is peculiar to speech”.  So while it is indeed the case that there are multiple stimuli being presented which result in a single percept, it is the temporal overlap between those stimuli that is the key factor, not the mere fact of their multiplicity.  In other words, the Motor Theory argument relies on the fact that a speaker is pronouncing more than one phoneme at a time during overlap periods.

This means that Mole’s (2010) metamer example is disanalogous, because it only deals with the multiplicity of the stimuli in the mapping and not with their temporal overlap.  This is the case because there cannot in fact be a temporal overlap between two colour stimuli.  We can see this using a thought experiment.  Let us imagine a lighting rig that is capable of projecting any number of arbitrary colours and also of projecting more than one colour at the same time.

In that case, we could not say that the perception of a colour being projected at a particular time was changed by the other colours being projected with it.  That situation would simply be the projection of a different colour.  So a projection of red light together with green light does not produce a modified red; it produces yellow light.  It is not possible to have a “modified red”, because such a thing is not red any more: the rig would not be projecting a different sort of red but a different colour altogether.
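To make the additive point concrete, here is a minimal sketch of additive light mixing in Python.  The RGB-triple representation is a standard simplification of light mixtures and is my illustration, not anything drawn from Mole (2010):

```python
# Additive light mixing with RGB triples (an illustrative
# simplification).  Adding green light to red light does not yield a
# "modified red" but a different colour altogether: yellow.

def add_light(c1, c2):
    """Additively mix two lights, clamping each RGB channel at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED = (255, 0, 0)
GREEN = (0, 255, 0)

print(add_light(RED, GREEN))  # (255, 255, 0): yellow, not any sort of red
```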

I will illustrate this further with an example from a different sensory modality: hearing.  The position I am taking about red (more exactly, about a precise shade of red) is essentialist.  On essentialist accounts, there are certain properties of an item which can be changed and will result in a modified version of that item.  There are other properties, the essential ones, which cannot be modified consistent with the original item retaining its identity.

For example, some properties of an opera are essential to it being an opera.  By definition, it is symphonic music with singing.  A symphony requires only the musical instruments.  Some properties of an opera can be changed and this will result in a modified opera.  One could replace the glass harmonica scored for the Mad Scene in Lucia di Lammermoor with flute.  One would then have a performance of a modified version of Lucia which would be a modified opera and would still be an opera.

What one could not do is change an opera into a symphony, strictly speaking.  There could be a performance of the first act of Lucia as normal and one would be watching a performance of an opera.  If in the second act the musicians came out and played without the singers, one would not have converted an opera into a symphony.  One would have ceased to perform an opera and begun to perform a symphony, albeit one musically identical to the non-vocal parts of Lucia.

Returning to the lighting rig, we cannot say that yellow is a modified red without abandoning any meaning for separate colour terms altogether – every colour would be a modified version of every other colour.  It is this impossible scenario – multiple colour stimuli projected at the same time and resulting in activation of the same perceptual category – that Mole (2010) would need to cite in order to have a genuine example.

In sum, a metamer is an example where there is no one-to-one mapping between stimulus and perceptual category, but also where the different stimuli are not simultaneous: the two colours involved in a metamer are alternative stimuli for the same percept, not stimuli presented at the same time.  A co-articulation, by contrast, is an example where there is no one-to-one mapping between stimulus and perceptual category, but where the different stimuli are indeed simultaneous.  As it is that very simultaneity that is the key to the special nature of the systematic relation between gesture and signal under the Motor Theory, Mole (2010) does not have an example here that demonstrates that speech perception is not special.

Face Recognition Does Not Show The Same Sort Of Invariance As Speech Perception

Mole (2010) claims that face recognition is another example of invariance – for example, we can recognise that we are looking at Sherlock’s face from various angles and under different lighting conditions – thereby challenging the idea that invariance in speech perception is evidence for the special nature of speech perception.  His claim is that the invariance in the way we can always report that we are looking at Sherlock’s face despite variance in input visual data is similar to the invariance in the way that we can always report we have heard Sherlock’s name despite variance in input aural data.  If that is true, then Mole (2010) has succeeded in showing that speech perception is not special as the Motor Theory claims.

Mole (2010) allows that we use invariances in face recognition, but denies this could ever be understood by examination of retinal data.  He writes: “[t]he invariances which one exploits in face recognition are at such a high level of description that if one were trying to work out how it was done given a moment-by-moment mathematical description of the retinal array, it might well appear impossible” (Nudds and O’Callaghan 2010, p. 216).  What this means is that it would be difficult to get from the retinal array (displaying a great deal of lack of invariance) to the features we use in recognising Sherlock such as our idea of the shape of his nose (which is quite invariant).

However, this can be questioned as follows.  Since the only thing that computers can do in terms of accepting data is to read in a mathematical array, Mole’s (2010) claim is in fact equivalent to the claim that it cannot be understood how computers can perform face recognition.  That claim is false.  To be very fair to Mole (2010), his precise claim is that the task might appear impossible, but I shall now show that since it is widely understood to be possible, it should not appear impossible either.

Fraser et al. (2003) describe an algorithm that performs the face recognition task better than the best algorithm in a ‘reference suite’ of such algorithms.  Their computer is supplied with a gallery of pictures of faces and a target face and instructed to sort the gallery such that the target face is near the top.  The authors report that their algorithm is highly successful at performing this task.  Fraser et al. write (2003, p. 836): “[w]e tested our techniques by applying them to a face recognition task and found that they reduce the error rate by more than 20% (from an error rate of 26.7% to an error rate of 20.6%)”.  So the computer recognized the target face around 80% of the time.
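As a quick check on those quoted figures (a minimal sketch; the error rates are the ones Fraser et al. (2003) report):

```python
# Verify the arithmetic behind the quoted Fraser et al. (2003) figures.

old_error = 0.267   # error rate of the best reference algorithm
new_error = 0.206   # error rate of Fraser et al.'s algorithm

relative_reduction = (old_error - new_error) / old_error
print(f"{relative_reduction:.1%} relative error reduction")  # 22.8%, i.e. "more than 20%"
print(f"{1 - new_error:.1%} of targets recognized")          # 79.4%, i.e. around 80%
```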

So we see firstly that the computer can recognize a face.  [It is not an objection here to claim that, strictly speaking, computers cannot ‘recognise’ anything.  All that we require here is that computers can be programmed so as to distinguish faces from one another merely by processing visual input.  It is this task which Mole (2010) claims appears impossible.]  Then we turn to the claim that how the computer does this cannot be understood.  That is refuted by the entire paper, which is an extended discussion of exactly that.  Since this is an active area of research, we can take it that such understanding is widely to hand in computational circles, and should become more widespread.

It may be true in one sense that we could not efficiently perform the same feat as the computer – in the sense of physically taking the mathematical data representing the retinal array and explicitly manipulating it in a sequence of complex ways in order to perform the face recognition task.  In another sense, we could, of course. It is what we do every time we actually recognize a face.  The mechanics of our eyes and the functioning of our perceptual processing system have the effect of performing those same mathematical manipulations.  We know this because we do in fact perform face recognition using only the retinal array as input data.

Mole (2010) has indeed provided an example of invariance (i.e., in face recognition), but the example does not remove the need for a special explanation of the speech perception invariances, because the face perception example can in fact readily be explained.  Mole (2010) has therefore not provided a further example of an invariance that resists explanation, and he has not thereby questioned the specialness of speech perception.  Speech perception continues to exhibit a unique invariance which continues to appear in need of unique explanation.

Experimental Data Do Not Show Cross-Modal Fusion 

Cello Experiment

Mole (2010) argues that an experiment on judgments made as to whether a cello was being bowed or plucked shows the same illusory optical/acoustic combinations as are seen in the McGurk effect.  The McGurk effect (McGurk and MacDonald 1976) is observed in subjects hearing a /ba/ stimulus and seeing a /ga/ stimulus.  The subjects report that they have perceived a /da/ stimulus.  It is important to note that this is not one of the stimuli presented; it is a fusion or averaging of the two stimuli.  So an optical stimulus and an acoustical stimulus have combined to produce an illusory result which is neither of them.

If Mole’s (2010) claim that the cello experiment shows McGurk-like effects is true, this would show that these illusory effects are not special to speech, thus challenging the claim that there is anything special about speech that the Motor Theory can explain.  Mole (p. 221, 2010) writes: “judgments of whether a cello sounds like it is being plucked or bowed are subject to McGurk-like interference from visual stimuli”.  However, the data Mole (2010) cites do not show the same type of illusory combination and so Mole (2010) is unable to discharge the specialness of speech perception as he intends.

The Motor Theory postulates that the gesture intended by the speaker is the object of the perception, and not the acoustical signal produced.  The theory explains this by also postulating a psychological gesture recognition module which makes use of the speech production capacities in performing speech perception tasks.  Thus the McGurk effect constitutes strong evidence for the Motor Theory, which can explain it as the module weighing both optical and acoustical inputs in deciding what gesture was intended by the speaker.  This strong evidence would be weakened if Mole (2010) could show that McGurk-like effects occur other than in speech perception, because the proponents of the Motor Theory would then be committed to the existence of multiple modules, and their original motivation – the observed specialness of speech – would be put in question.

More specifically, the paper Mole (2010) cites, Saldaña and Rosenblum (1993), describes an experimental attempt to find non-speech cross-modal interference effects using a cello as the source of acoustic and optical stimuli.  Remarkably, Saldaña and Rosenblum (1993) state prominently in their abstract that their work suggests “the nonspeech visual influence was not a true McGurk effect” in direct contradiction of Mole’s (2010) stated reason for citing them.

There are two ways to make a cello produce sound: it can be plucked or it can be bowed.  The experimenters proceed by presenting subjects with discrepant stimuli – for example, an optical stimulus of a bow accompanied by an acoustical stimulus of a pluck.  Saldaña and Rosenblum (1993) found that the reported percepts were adjusted slightly by a discrepant stimulus in the direction of that stimulus.

However, to see a McGurk effect, we need the subjects to report that the gesture they perceive is a fusion of a pluck and a bow.  Naturally enough, this did not occur, and indeed it is unclear what exactly such a fusion might be.  Therefore, Mole (2010) has not here produced evidence that there are McGurk effects outside the domain of speech perception.

Mole’s (2010) response is to dismiss this as a merely quantitative difference between the effects observed in the two experiments.  Mole (p. 221, 2010) writes:  “[t]he McGurk effect does reveal an aspect of speech that is in need of a special explanation because the McGurk effect is of a much greater magnitude than analogous cross-modal context effects for non-speech sounds”.  As we have seen, Mole (2010) is wrong to claim there is only a quantitative difference between the McGurk effect observed in speech perception and the cross-modal effects observed in the cello experiment, because only in the former were fusion effects observed.  That is most certainly a major qualitative difference.

Mole’s (2010) claim that the cello results are only quantitatively different to the results seen in the McGurk effect experiment produces further severe difficulties when we consider in detail the experimental results obtained.  The cello experimenters describe a true McGurk effect as being one where there is a complete shift to a different entity – the syllable is reported as clearly heard and is entirely different to the one in the acoustic stimulus.  Saldaña and Rosenblum (1993, p. 409) describe these McGurk data as meaning: “continuum endpoints can be visually influenced to sound like their opposite endpoints”.

The cello data were not able to make a pluck sound exactly like a bow and in fact the discrepant optical stimuli were only able to slightly shift the responses in their direction, by less than a standard deviation, and in some cases not at all.  This is not the McGurk effect at all and so Mole (2010) cannot say it is only quantitatively different.  Indeed, Saldaña and Rosenblum (1993, p. 410) specifically note that: “[t]his would seem quite different from the speech McGurk effect”.

In sum, the cross-modal fusion effect that Mole (2010) needs is physically impossible in the cello case and the data actually found do not even represent a non-speech analog of the McGurk effect, as is confirmed by the authors.  Once again, speech perception remains special and the special Motor Theory is needed to explain it.

Sound Localization Experiment

The other experiment relied on by Mole (2010) was conducted by Lewald and Guski (2003) and considered the ventriloquism effect, whereby the perceived location of a sound is shifted towards a simultaneous visual stimulus.  As above, the result that Mole (2010) needs to support his position is an effect that is a good analogy to the McGurk effect in a non-speech domain.  As I will show below, the data from the Sound Localisation Experiment also fail to bear out his claim that there are McGurk-like effects outside the domain of speech perception.

The Sound Localisation Experiment uses tones and lights as its acoustic and optical stimuli.  It investigates the ventriloquism effect quantitatively in both the spatial and temporal domains.  The idea is that separate optical and acoustic events will tend to be perceived as a unified single event with optical and acoustical effects.  This will only occur if the spatial or temporal separation of the component events is below certain thresholds.

Lewald and Guski (2003, p. 469) propose a “spatio-temporal window for audio-visual integration” within which separate events will be perceived as unified.  They suggest maximum values of 3° for angular or spatial separation and 100 ms for temporal separation.  Thus a scenario in which a light flash occurs less than 3° away from the source of a tone burst will produce a unified percept of a single optical/acoustical event, as will a scenario in which a light flash occurs within 100 ms of a tone burst.  Since the two stimuli in fact occurred at slightly different times or locations, this effect entails that at least one of the stimuli is perceived to have occurred at a different time or location than it actually did.
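A minimal sketch of that integration rule may make it concrete.  The thresholds are the ones quoted above; reading the rule as requiring both separations to fall inside the window at once is my simplification, since Lewald and Guski (2003) varied each dimension separately:

```python
# Sketch of the Lewald and Guski (2003) "spatio-temporal window":
# separate audio and visual events are perceived as one unified event
# only when their separations fall inside the window.  Threshold
# values are from the text; the conjunction of the two conditions is
# an illustrative simplification.

MAX_ANGLE_DEG = 3.0    # maximum angular separation for integration
MAX_DELAY_MS = 100.0   # maximum temporal separation for integration

def perceived_as_unified(angle_deg: float, delay_ms: float) -> bool:
    """Would a light flash and a tone burst this far apart in space
    and time fall inside the audio-visual integration window?"""
    return angle_deg <= MAX_ANGLE_DEG and delay_ms <= MAX_DELAY_MS

print(perceived_as_unified(2.0, 80.0))    # True: inside both windows
print(perceived_as_unified(5.0, 80.0))    # False: spatial window exceeded
```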

To recap, in the McGurk effect, discrepant optical and acoustic stimuli result in a percept that is different to either of the two stimuli and is a fusion of them.  We may allow to Mole (2010) that Lewald and Guski (2003) do indeed report that subjects perceive a single event comprising a light flash and a tone burst.  However, that is insufficient to constitute an analogy to the McGurk effect.  Subjects do not report that their percept is some fusion of a light flash and a tone burst – as with the cello experiment, it is unclear what such a fusion could be – they merely report that one event has resulted in these two observable effects.  [We may note that Lewald and Guski (2003) do not take themselves to be searching for non-speech analogs of the McGurk effect; the term does not appear in their paper or in the titles of any of their 88 citations, throwing doubt on the claim that they are working in that field at all.]

Indeed, the subjects were not even asked whether they perceived some fused event.  They were asked whether the sound and the light had a common cause, were co-located, or were synchronous.  As Lewald and Guski (2003, p. 470) write: “[i]n Experiment 1, participants were instructed to judge the likelihood that sound and light had a common cause.  In Experiment 2, participants had to judge the likelihood that sound and light sources were in the same position.  In Experiment 3, participants judged the synchrony of sound and light pulses”.  A ‘common cause’ might have been some particular event, but it is not the sound and the light, and they were the only things that were perceived; the instructions therefore do not even admit the possibility that a fused event was perceived.

Since Lewald and Guski (2003) are measuring the extent to which participants agree that a light and a tone had a common cause, were co-located or were synchronous, it is puzzling that Mole (p. 221, 2010) cites them to support his claim that perceived flash count can be influenced by perceived tone count.  We see this when Mole writes (p. 221, 2010):  “[t]he number of flashes that a subject seems to see can be influenced by the number of concurrent tones that he hears (Lewald and Guski 2003)”.

Moreover, neither the Sound Localisation Experiment nor the cello experiment supports Mole’s (p. 221, 2010) summation that “[i]t is not special to speech that sound and vision can interact to produce hybrid perceptions influenced by both modalities” in the way he needs.  Unlike with the McGurk effect, there are no hybrid perceptions in either case, where “hybrid” is understood to mean ‘a perception of an event which is neither of the stimulus events’.

There are cross-modal effects between non-speech sound stimuli and optical stimuli but that is inadequate to support Mole’s (2010) claim that speech is not special.  We still need the special explanatory power of the Motor Theory.

Mute Perceivers Can Be Accommodated

One of Mole’s (2010) challenges is that the Motor Theory cannot explain how some people have the capacity to perceive speech that they lack the capacity to produce.  Mole writes (p. 226, 2010) that “[a]ny move that links our ability to perceive speech to our ability to speak is an unappealing move, since it ought to be possible to hear speech without being able to speak oneself”.  There is an equivocation here, though, on what is meant by ‘capacity to produce’.  Mole (2010) reads that term so that the claim is that someone who is unable to use their mouth to produce speech lacks the capacity to perceive speech.  Since such mute people can indeed, as he claims, understand speech, he takes his claim to be made out.

However, in the article cited by Mole (2010), it is clear that this is not what is understood by ‘capacity to produce’.  In the study described by Fadiga et al. (2002), the neuronal activation related to tongue muscles is not sufficient to generate movement.  This activation is a result of the micro-mimicry that takes place when people are perceiving speech.  Fadiga et al. (2002) call this mimicry “motor facilitation.”

Fadiga et al. (p. 400, 2002) write: “The observed motor facilitation is under-threshold for overt movement generation, as assessed by high sensitivity electromyography showing that during the task the participants’ tongue muscles were absolutely relaxed”.   Thus the question is whether the subject has the capacity to produce such a sub-threshold activation, and not the capacity to produce speech via a super-threshold activation.   Naturally, since all the subjects had normal speech, they could produce both a sub-threshold and a super-threshold activation, with the latter resulting in speech.

However, someone could be able to activate their tongue muscles below the threshold needed to generate overt movement but not be able to activate those muscles above the threshold.  That would mean that they lacked the ‘capacity to produce’ in Mole’s (2010) sense, but retained it in Fadiga et al.’s (2002) sense.  This would be a good categorization of the mute people who can understand speech they cannot utter.  Those people would retain the ability to produce the neural activity that Fadiga et al. observe, which does not result in tongue muscle movement.  This is a testable empirical claim to which my account is committed.  It is possible that such people cannot even produce the sub-threshold neural signals; if that turns out to be correct, it would be a problem for the Motor Theory and the defence I have offered for it here.
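A minimal sketch of the sub-/super-threshold distinction being relied on here; the numeric threshold and activation levels are purely illustrative assumptions, not quantities from Fadiga et al. (2002):

```python
# Toy model of the distinction between sub-threshold motor activation
# (enough, on the Motor Theory, for speech perception) and
# super-threshold activation (needed for overt speech production).
# The threshold and activation values are illustrative only.

MOVEMENT_THRESHOLD = 1.0  # activation needed for overt articulator movement

def can_perceive_speech(max_activation: float) -> bool:
    """Motor-Theory reading: perception needs only some sub-threshold activation."""
    return max_activation > 0.0

def can_produce_speech(max_activation: float) -> bool:
    """Overt speech requires activation at or above the movement threshold."""
    return max_activation >= MOVEMENT_THRESHOLD

# A mute perceiver on this account: some motor activation, but never
# enough to move the articulators.
mute_max_activation = 0.4
print(can_perceive_speech(mute_max_activation))  # True
print(can_produce_speech(mute_max_activation))   # False
```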

Similarly, we can resolve Mole’s (2010) puzzle about how one can understand regional accents that one cannot mimic; i.e. I can understand people who speak with an accent that is different to mine.  The capacity to understand a particular accent could result from our ability to generate the necessary sub-threshold activations, but not the super-threshold ones.  If we go on to acquire that regional accent, our super-threshold muscle activation capacities would be of the required form.  This again is an empirical prediction which makes my account subject to falsification by data.

This hypothesis could have interesting implications in the field of developmental psychology.  Mole (p. 216, 2010) outlines how infants can perceive all speech sound category distinctions, but eventually lose the ability to discriminate the ones that do not represent a phoneme distinction in their language.  So it may be the case that all infants are born with the neural capacity to learn to generate super-threshold activations of all regional accents, but eventually retain that capacity only at the sub-threshold level – because they can later understand a wide range of regional accents – and lose the capacity at the super-threshold level – for those regional accents they cannot mimic.

Another implication here of the Motor Theory is to say that a listener’s vocal tract can function as a model of itself, just as a listener’s vocal tract can function as a model of a speaker’s vocal tract.  This means that the sub-threshold activation functions as a model of the super-threshold activation. So, perceptual capacities involve the former modelling the latter exactly as the Motor Theory predicts.  Such an approach does not commit the Motor Theory to the modelling/perception neurons controlling the sub-threshold activations being the same as the production neurons controlling speech production, so the account is not susceptible to falsification on that precise point.

Further Brief Challenges To Mole (2010)

The Motor Theory Explains Cerebellar Involvement In Dyslexia

Mole (2010) challenges the Motor Theory and in doing so, challenges the idea that speech production capacities are involved in speech recognition.  For this reason, any data showing links between speech production capacities and speech recognition capacities will be a problem for him.

Ivry and Justus (2001) refer to a target article that shows that 80% of dyslexia cases are associated with cerebellar impairments.  Since the cerebellum is generally regarded as a motor area, and dyslexia is most definitely a language disorder, we have clear evidence for a link between language and motor areas.  That is naturally a result that can be clearly accommodated by the Motor Theory which links speech production and speech recognition.

It is not open to Mole (2010) to respond that the link is only between motor control areas and writing control areas, because although writing skills are the primary area of deficit for dyslexic subjects, the authors also found impairments in reading ability to be strongly associated with the cerebellar impairments.  This can be explained on the Motor Theory, because it says that motor deficits will result in speech recognition deficits.  Mole (2010) needs to provide an explanation of this which does not rely on the Motor Theory.

The Motor Theory Explains Links Between Speech Production And Perception In Infants

Mole (2010) does not address some important results supplied by Liberman and Mattingly (1985, p. 18) that link perception and production of speech.  These data show that infants preferred to look at a face producing the vowel they were hearing rather than the same face with the mouth shaped to produce a different vowel.  That effect is not seen when the vowel sounds are replaced with non-speech tones matched for amplitude and duration with the spoken vowels.  What this means is that the infants are able to match the acoustic signal to the optical one.  In a separate study, the same extended looking effect was seen in infants when a disyllable was the test speech sound.  These data cannot be understood without postulating a link between speech production and speech perception abilities, because differentiating between mouth shapes is a production-linked task – albeit one mediated by perception – and differentiating between speech percepts is a perceptual task.

The Motor Theory Explains Why Neural Stimulation Of Speech Production Areas Enhances Speech Perception

D’Ausilio et al. (2009) conducted an experiment in which Transcranial Magnetic Stimulation (“TMS”) was applied to areas of the brain known to be involved in motor control of articulators.  Articulators are the physical elements that produce speech, such as the tongue and lips.  After the TMS, the subjects were tested on their abilities to perceive speech sounds.  It was found that the stimulation of speech production areas improved the ability of the subjects to perceive speech.  The authors suggest that the effect is due to the TMS causing priming of the relevant neural areas such that they are more liable to be activated subsequently.

Even more remarkably, the experimenters find more fine-grained effects, such that stimulation of the exact area involved in production of a sound enhanced perceptual abilities in relation to that sound.  D’Ausilio et al. (2009, p. 383) report: “the perception of a given speech sound was facilitated by magnetically stimulating the motor representation controlling the articulator producing that sound, just before the auditory presentation”.  This constitutes powerful evidence for the Motor Theory’s claim that the neural areas responsible for speech production are also involved in speech perception.

Conclusion

Special situations require special explanations.  The Motor Theory of Speech Perception is a special explanation of speech perception which, as the failure of Mole’s (2010) objections shows, continues to be needed.  One might say that such “specialness” leaves the Motor Theory in a vulnerable and isolated position, as it seeks to explain speech perception in a way that is very different to how we understand other forms of perception.  Here, I would revert to my brief opening remarks about the similarities between the Motor Theory and Simulation Theory.  Whilst the Motor Theory is indeed a special way to explain speech perception, it is at the same time parsimonious and explanatorily powerful because, like Simulation Theory, it does not require any machinery which we do not already know we possess.  This is perhaps what underlies the continued attractiveness of the Motor Theory as a convincing account of how people perceive speech so successfully.

References 

D’Ausilio, A et al. 2009  The Motor Somatotopy of Speech Perception.  Current Biology 19: pp. 381–385.  DOI: 10.1016/j.cub.2009.01.017

Fadiga, L et al. 2002  Speech Listening Specifically Modulates the Excitability of Tongue Muscles: a TMS study.  European Journal of Neuroscience, 15: pp. 399–402.  DOI: 10.1046/j.0953-816x.2001.01874.x

Fraser, A M et al. 2003  Classification modulo invariance, with application to face recognition.  Journal of Computational and Graphical Statistics, 12 (4): pp. 829–852.  DOI: 10.1198/1061860032634

Ivry, R B and T C Justus 2001  A neural instantiation of the motor theory of speech perception.  Trends in Neurosciences, 24 (9): pp. 513–515.  DOI: 10.1016/S0166-2236(00)01897-X

Lewald, J and R Guski 2003  Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli.  Cognitive Brain Research, 16: pp. 468–478.  DOI: 10.1016/S0926-6410(03)00074-0

Liberman, A and I G Mattingly 1985  The Motor Theory of Speech Perception Revised.  Cognition, 21: pp. 1–36.  DOI: 10.1016/0010-0277(85)90021-6

McGurk, H and J MacDonald 1976  Hearing lips and seeing voices.  Nature, 264, (5588): pp. 746–748.  DOI: 10.1038/264746a0

Mole, C 2010  The motor theory of speech perception.  In: Nudds, M and C O’Callaghan (eds.) Sounds and Perception: New Philosophical Essays.  Oxford: Oxford University Press.  DOI: 10.1093/acprof:oso/9780199282968.001.0001

Saldaña, H M and L D Rosenblum 1993  Visual influences on auditory pluck and bow judgments.  Perception & Psychophysics, 54 (3): pp. 406–416.  DOI: 10.3758/BF03205276

Short, T L 2015  Simulation Theory: a Psychological and Philosophical Consideration.  Abingdon: Routledge.  URL: https://www.routledge.com/Simulation-Theory-A-psychological-and-philosophical-consideration/Short/p/book/9781138294349

The Picture Superiority Effect And Financial Markets

The Picture Superiority Effect is one of a large number of cognitive biases that affect how we think and act. It is important to know about these biases in the context of financial markets because they can impair our decision making, but they can also inform traders about how other market participants may react

As in previous posts featured on this blog (https://timlshort.com), I will first outline a cognitive bias, drawing on the relevant psychological literature, and then describe how it plays out in financial markets.  My basic point throughout is that it is critical for market participants to know about these unavoidable biases for two reasons.  Firstly, knowing about them is the first step to being able to recognise when they are operative and to assessing whether they have resulted in an optimal decision, with specific relevance here to trading decisions.  Secondly, since no one is free of these biases, traders can expect that other market players will be influenced by them, and can trade accordingly.

The Picture Superiority Effect is relatively straightforward.  What psychologists have found is that people find it easier to remember images than words.  There are different opinions in the literature as to why this might be.  In my view, the effect is likely to be explained by our preference for the vivid and concrete over the dull and abstract; but in fact, the causation is not that important for our purposes here.  We just need to know that everyone remembers imagery better than text.  This is probably no surprise, particularly in the age of social media, where pictures are shared more widely than text (and so we might surmise that there is also a Video Superiority Effect which is even stronger).

There is some discussion as to how age interacts with the Picture Superiority Effect.  Early researchers found that younger people recalled more pictures than words while older subjects did not, suggesting that the Picture Superiority Effect exists only in younger people.  More recent work, however, appears to find the exact opposite.  Given the general improvement in experimental methodologies that occurs over time and the parallel increase in knowledge, I would say that the more recent studies are more likely to be correct.  But that observation remains subject to further confirmation/disconfirmation.

As a result, there have been some suggestions that images work as a compensation mechanism for older adults who are experiencing memory deficits.  So the overall story may be that younger people are prone to the Picture Superiority Effect, middle-aged adults are less prone to it, and older people then embrace the effect for compensation purposes.  This would mean something like: older people deliberately rely more on pictures to assist them in remembering things.  There is also advice from the intelligence community (!) to the effect that the way to remember a lot of items without writing them down is to take a visual memory of a very familiar location, such as one’s home, and add to it strange and striking items which represent the data one wishes to remember.

All of this means that everyone who is involved in financial markets can expect that the Picture Superiority Effect will play a role in their thinking to a differing extent at various life stages.  How would this work?

This type of point — how do cognitive biases affect our performance in financial markets — is one I discuss at length in my book:

https://www.routledge.com/The-Psychology-of-Successful-Trading-Behavioral-Strategies-for-Profitability/Short/p/book/9781138096288

One example I give there is related to imagery, although I am actually discussing a different cognitive bias called the Availability Heuristic.  The example is the familiar photos and video of people who had just been fired from Lehman Bros. when it collapsed in the crisis.  These pictures and ones like them are extremely easy to remember.  In fact, they are difficult to forget.  This sort of thing might make you unreasonably averse to buying bank shares.  Similarly, pictures of Elon Musk looking depressed might make you avoid TSLA stock.  There may or may not be good reasons for avoiding such stocks (my view at present is the opposite) — but what is 100% clear is that if you read a story about banks or TSLA and recall only a picture of a fired banker or a sad Elon Musk, you have not retained very much that is useful for making a market decision.  Even if you give equal weight to the picture and the words, you are probably still weighting wrongly the evidential value of the total information available to you.

It is probably wise to set aside the limited information value represented by imagery and focus on the data — which may of course be presented graphically without being just a photo.  If you want to discuss these and other concepts mentioned in the book, or for more information about the book, you can send me mail.

Negativity Bias And Financial Markets

Negativity Bias is one of the powerful and ineradicable aspects of human psychology, which has important effects on the performance of stock market investors

Negativity Bias is perhaps the most expensive and dangerous item in our psychological repertoire insofar as it impacts on our performance in financial markets.  In this post, I will outline the bias and then discuss how its effects play out in markets.

Negativity Bias is reflected in the finding that negative events affect us much more strongly than positive ones.  I should immediately distinguish this effect from the bias I discussed in my previous post (https://timlshort.com/2017/11/05/attentional-bias-and-financial-markets/).  There, I discussed the subset of Attentional Biases that operate in people who are depressed or anxious, such that they pay more attention to mood-congruent information.  Negativity Bias differs in that it affects everyone, irrespective of mood and psychiatric diagnosis.  Some forms of Attentional Bias do that as well, but in the previous post I considered only the mood-related variants.

The bias can be seen as a mis-calibration, like many of our cognitive biases.  There is a “right size” for the amount of impact that an event should have on us, which is related to the “intensity” of that event.  Obviously, intensity is rather a slippery concept here, but we can give some meaning to it with illustrations.  Two negative events of different intensities would be stubbing one’s toe and breaking an arm.  Two positive events of different intensities would be receiving a birthday card and falling in love.

So without Negativity Bias — and with what we might regard as a purely rational response to events — there would be a link between the intensity of an event and its impact upon us.  There would not be a link between whether the event  was positive or negative and the size of the impact of the event on us.  This does not mean that it is strange that we react negatively to negative events and positively to positive events (in fact, it would be very strange were this not so!).  What it means is that it is odd that we react more strongly to negative events than we do to positive events of the same intensity.

This has been measured by experimental social psychologists in financial terms, using sums of money.  It was found that the mis-calibration is very strong: the factor is about 2.5.  In other words, we react 2.5x as strongly to losing $10 as we do to gaining $10.  Put another way, losing something is much, much worse than gaining the same amount.
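A minimal sketch of that asymmetry, assuming a simple linear value function (the 2.5 factor is the one quoted above; the function itself is only illustrative):

```python
# Toy value function exhibiting Negativity Bias / loss aversion:
# losses are weighted about 2.5x as heavily as equal-sized gains.

LOSS_AVERSION = 2.5

def felt_impact(amount: float) -> float:
    """Subjective impact of gaining (positive) or losing (negative) money."""
    return amount if amount >= 0 else LOSS_AVERSION * amount

print(felt_impact(10))    # 10: the felt impact of gaining $10
print(felt_impact(-10))   # -25.0: losing $10 feels ~2.5x as bad
```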

The Negativity Bias then will have huge impacts on our risk aversion, and that, it is well known, is a key driver of performance in financial markets.  Many people perform extremely badly as a result of excess risk aversion.  In the current environment, it is unwise to be holding substantial amounts of cash.  People should have some emergency funds of course.  But if CPI is running at 3% and interest rates paid by the banks are more like 1%, then anyone holding cash in the bank is basically prepared to pay 2% a year in order to avoid any risk, as they see it.
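The cash-drag arithmetic can be made precise (a small sketch using the illustrative 3% CPI and 1% deposit figures from above):

```python
# Real return on cash when inflation outruns deposit interest.

cpi = 0.03        # annual inflation (illustrative figure from the text)
deposit = 0.01    # annual interest on cash in the bank

real_return = (1 + deposit) / (1 + cpi) - 1
print(f"{real_return:.2%}")  # -1.94%: roughly 2% of purchasing power lost per year
```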

As I see it, paying to avoid risk like this just concretises the risk.  You don’t gain: you just get the loss in a form you can pretend doesn’t exist.  It would be much better — and in fact less risky, understood correctly — to invest in something.  There is an enormous spectrum of assets and geographies out there, from equities in the US, Japan, Emerging Markets and Frontier Markets to bonds issued by governments, investment grade corporates and junk corporates.  There are thousands of ETFs available offering the widest imaginable range of exposures.  Overcome your Negativity Bias and pick one.

I discuss in much more detail the important effects in financial markets of Cognitive Biases like Negativity Bias in my new book:

https://www.routledge.com/The-Psychology-of-Successful-Trading-Behavioral-Strategies-for-Profitability/Short/p/book/9781138096288

If you want to discuss these and other concepts mentioned in the book, or for more information about the book, you can send me mail.


Attentional Biases And Financial Markets

Attentional Biases are operative in everyone’s psychology; they can affect performance in financial markets because they control what information sources we consider

Are happy people better at picking up information that will make them happy?  Do sad people do the opposite?  Have you ever wondered how your mood can affect your behaviour in ways you don’t know about?  All of this is true and can be explained by considering a Cognitive Bias called Attentional Bias.

We are subject to approximately 150 Cognitive Biases, at the last count.  All of them affect our thinking without us necessarily knowing too much about when they are at work or what the results are.  My project initially is to list and describe these mental subroutines before critically examining them and assessing how they work in a market environment.  The objective is to allow market participants to look out for the operation of Cognitive Biases in their own thinking and trade on the expectation that they will also figure prominently in the thinking of other players.

One of the most important Cognitive Biases is known as Attentional Bias.  It comes in several forms, but all of them have in common that they systematically slant which information we pay attention to.  Obviously this can be expected to have dramatic effects on thinking and market outcomes.  In this post, I will first describe Attentional Bias and then outline how it might play out in a market setting.

Much of the psychological literature on Attentional Bias looks at what we can term mood congruency.  The basic idea here is that we are more likely to look at information which fits our mood.  So, anxious subjects are more likely to look at anxiety-inducing information, and depressed subjects are more likely to consider depressing information.  Clearly this is already rather unhelpful for such subjects, but my aim here is only to look at what this might do in markets.

This is broadly important because generalised anxiety affects a significant proportion of the population (estimated at between 5% and 30%).  These are people who are more or less anxious more or less all of the time.  Since this is a significant minority, it is likely that some of these subjects participate in financial markets, although it is possible that some anxious individuals will self-select out of stock markets.

Depression of sufficient gravity to merit a psychiatric diagnosis affects about 1% of the population; many more people will experience a less severe depression or a more episodic form.  Again, we can expect plenty of market participants to be depressed when trading.

Experimental investigations of Attentional Biases linked to mood disorders have focused on reaction-time studies.  A pair of words was briefly presented to experimental subjects on a computer screen.  Sometimes, one of the words was then replaced with a dot, which was the signal that a button should be pressed.  The time it took subjects to press the button was recorded; it would typically be in the range of several hundred milliseconds.

Sometimes the word presented on the opposite side of the screen from the dot was a threatening one.  The word could be socially threatening (‘humiliated’) or physically threatening (‘injury’).  The experimenters found what is known in psychology as an RT spike: a delay in reaction time.  People took longer to see and react to the dot if a threatening word appeared on the other side of the screen.  These effects were quite large.

Perhaps most interestingly, the RT spikes were larger for anxious or depressed subjects, especially if the threat word was specifically related to either anxiety or depression.
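
To illustrate what is being measured, here is a minimal sketch of the RT-spike calculation.  The reaction times below are hypothetical numbers of my own invention, not data from the studies described:

```python
from statistics import mean

# Hypothetical dot-probe data: reaction times in milliseconds, keyed by
# whether a threat word or a neutral word appeared opposite the dot.
trials = {
    "neutral": [412, 398, 430, 405, 421, 399],
    "threat":  [468, 455, 490, 472, 461, 483],
}

# The RT spike is the mean slowdown on threat trials relative to neutral ones.
rt_spike = mean(trials["threat"]) - mean(trials["neutral"])

print(f"Mean neutral RT: {mean(trials['neutral']):.0f} ms")
print(f"Mean threat RT:  {mean(trials['threat']):.0f} ms")
print(f"RT spike:        {rt_spike:.0f} ms")
```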

What Effects Of Attentional Bias Should Such Individuals Be Aware Of?

It is obvious that such effects could impair traders on a trading floor who are themselves making rapid trade decisions.  Threatening information near their field of vision, such as a negative Bloomberg headline, could grab the trader’s attention and cause a delay in response time even if it is unrelated to the trade under consideration at the time.

While this is a real issue, I want to consider non-professional traders as well.  In general, day-trading is best avoided, since around 85% of day traders lose money.  (Day-trading is popular among people new to investing.  It is so called because the aim is to minimise risk by not holding any positions overnight.  However, the necessarily short-term nature of this approach means that one can really only trade on ‘noise’ in stock movements, and there is no way to rationally forecast noise, as the toy simulation below illustrates.  Relying on luck is even worse in markets than elsewhere because the punishment is swift.)  It is better to be a buy-and-hold investor.
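
The following toy simulation makes the point about noise concrete.  It assumes, purely for illustration, that intraday moves are a symmetric random walk with 1% daily volatility and that trading costs 0.1% per day; none of these figures comes from a cited study:

```python
import random

# Toy model: each day a trader guesses direction on pure noise, so the
# expected gross P&L is zero, while costs are paid with certainty.
random.seed(42)

N_TRADERS = 10_000
N_DAYS = 250       # roughly one trading year
DAILY_VOL = 0.01   # assumed 1% standard deviation of daily moves
COST = 0.001       # assumed 0.1% round-trip cost per trading day

winners = 0
for _ in range(N_TRADERS):
    wealth = 1.0
    for _ in range(N_DAYS):
        move = random.gauss(0, DAILY_VOL)
        direction = random.choice([1, -1])   # a coin-flip guess
        wealth *= 1 + direction * move - COST
    if wealth > 1.0:
        winners += 1

print(f"Simulated traders ahead after one year: {100 * winners / N_TRADERS:.1f}%")
```

Under these assumptions, only a small minority of the simulated traders finish the year ahead: guessing direction confers no edge on noise, so the costs compound into a near-certain loss.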

If one is episodically depressed or anxious, then those episodes are not times to be trading.  Negative mood-congruent information will grab attentional resources and make traders much more likely to exit positions.  This may or may not be the right decision; what is clear is that such a decision should be made rationally and with a fair and open consideration of the relevant data.  Often this will not be what everyone else is doing, so my approach lends itself naturally to a contrarian investment stance.  There are other good reasons to be a contrarian investor, including that it fits with a long-term approach, which is why it is not something much engaged in by day-traders.

If someone is permanently depressed or anxious, then treatment should be sought and one should abstain from trading until an improvement is seen.  If no such improvement can be achieved, then I am sympathetic, but I would suggest hiring a financial adviser in that circumstance.  It would be one less thing to be concerned about and would likely produce better outcomes, despite the extra fees involved.

I discuss the important effects in financial markets of Cognitive Biases like Attentional Bias in much more detail in my new book:

https://www.routledge.com/The-Psychology-of-Successful-Trading-Behavioral-Strategies-for-Profitability/Short/p/book/9781138096288

Email me at shorttim1@gmail.com.

Opposition to Gun Control is not “Superstition” 

There have been suggestions recently that some voters are immune to evidence and argument.  They rely instead on gut feelings and instinct.  These individuals are described as being “intuitionists” or “superstitious.”  I will suggest that while this is the right direction of travel, we can in fact be more precise and locate the issue in a Cognitive Bias known as Status Quo Bias.

The Onion puts the strangeness of the gun control debate best with its satirical headline: “No Way to Prevent This, Says Only Country Where This Regularly Happens.” To rational observers, it seems obvious that if you have 310m guns in a country and 93 people a day are killed by them, you should reduce the second number by reducing the first. Almost every other country in the world does this and it works. Yet a majority of Republicans and even 25% of Democrats disagree. This is ignoring data on a massive and deadly scale.

I think this is not best explained by appealing to superstition, as The Economist did recently.  Superstitious beliefs, to be sure, are not based on data and do not often result in true claims.  When they do, it is a coincidence: superstition is not a reliable method of arriving at true claims.  There is no such thing as bad luck, and walking under a ladder will not bring it.  Opponents of gun control do not believe that firearms are lucky charms.

The appeal to intuition, by contrast, can, I think, throw light on the topic if it is precisified in the right way.  I would understand intuition as being a collection of Cognitive Biases.  These operate to slant and indeed direct our decision-making, largely unbeknownst to us.

The Bias I have in mind here is called Status Quo Bias. For my U.K. readers, I should immediately clarify that this has nothing to do with Francis Rossi. The Bias is also known as the familiarity effect. I will introduce it by asking you to make a quick choice.

Would you prefer to meet a friend for lunch somewhere you have been before or would you rather go to see a stranger to pursue a novel activity in an unknown location?  Most people most of the time would choose the first option.

As with all Biases, this one has its origins in being valuable.  Most of the time, it will produce the right result.  This is because of a very approximate risk-assessment heuristic: we assume that things we have done before which have not visibly harmed us are safe activities.  This is wrong, but better than nothing.  It is in fact, I believe, an outgrowth of another Bias called the Availability Heuristic, but I will set that aside for now.

Status Quo Bias, though, can be rephrased as the idea that any change is more risky.  This can produce conclusions which are as uncongenial to the left as to the right.  It is not true, for example, that because countries have borrowed heavily in the past, they can continue to do so indefinitely.  As to the topic at hand: for the “intuitionists,” changing to a scenario of tighter gun control is risky because it is new, rather than safe because all of the countries that adopt it are safer.

If you are wondering what this means practically, I suppose that depends on whether Cognitive Biases are hardwired into us.  That is unclear, but I think it is at least a start for me to list the mental subroutines we are running.  If they are hardwired, then they will be impervious to data, which would explain why the debate is sterile: proponents of gun control continue to say “if we changed this, fewer people would die, because that is what happens when you have gun control,” and opponents continue not to listen.  If they are not hardwired, then telling people they have these biases might be a start in the direction of changing them.

My new book focusses on these biases and their effects in financial markets:

https://www.routledge.com/The-Psychology-of-Successful-Trading-Behavioral-Strategies-for-Profitability/Short/p/book/9781138096288

I recommend it if you want to be aware of the subconscious processes which guide your behaviour in markets and elsewhere, and if you want to become less dependent on your own autopilot, which will not be optimising your outcomes.

In Financial Markets, Relying on the “Wisdom of Crowds” Can Be Very Risky

We all tend to do what everyone else does. This saves time and effort on many occasions, but it can cost you a lot of money in financial markets.

We all tend to do what everyone else does, even when we can see that everyone else is wrong.  In financial markets, this can lead to bubbles and herd behaviour.  It is important to be aware of this tendency in our psychology so that you can, at the appropriate times, avoid joining in bubbles.  This matters because you will lose a lot of money if you participate, or if, once in, you fail to exit before everyone else does.

In this post, I will briefly outline the relevant psychology so you can both look for the effects in your own thinking and expect those same effects in other market participants.  This will improve your trading.  I discuss this bias and many others in a financial markets context in my new book (see link below).

Conformity Bias is also known in the literature as the Asch Effect, after the pioneering experimenter Solomon Asch.  Asch obtained really surprising results, which will show you how strong this effect is.  He had a naive member of the public sit in a room in front of a blackboard with four other people.  The member of the public thought that the other four were also naive members of the public, but in reality they were actors instructed by Asch to behave in a specific way.

A reference line of a certain length was drawn on the left side of the blackboard.  Several comparison lines of different lengths were drawn on the right-hand side.  One of them was clearly the same length as the reference line, and all of the rest were clearly much shorter or much longer.  Asch then had the people say which of the comparison lines on the right was the same length as the reference line.

If the naive member of the public went last and heard all of the actors give a wrong answer, he tended to go along with them even though the answer was obviously wrong.  Amazingly, Asch found that most people gave an obviously wrong answer some of the time, and that some people gave wrong answers most of the time.

This is how strong Conformity Bias is: it works even when the answer is obvious.  Imagine how much more dangerous it is in financial markets, where the answers are much less clear cut and much ambiguous and conflicting data must be weighed.

I think this is one factor behind a lot of famous bubbles in financial history. Right now, it looks to me as though the cryptocurrencies, most notably Bitcoin, are exhibiting bubble characteristics.  One sign of this is the enthusiasm of a particular football manager, one noted for his lack of financial acumen, for Ethereum.  I do not say this is a scam; I merely suggest that one should look to more fundamental underpinnings for value than “everyone likes it and it has gone up a lot.”

Avoid Conformity Bias and trade better by trading the other way when you see it happening.  For much more on Conformity Bias and other Cognitive Biases in markets, see my new book:

https://www.routledge.com/The-Psychology-of-Successful-Trading-Behavioral-Strategies-for-Profitability/Short/p/book/9781138096288

Email me at shorttim1@gmail.com.