The Psychology of Successful Trading

Simulation Theory


Theory of Mind (ToM) is the label for the abilities we have to predict and explain the behaviour of others, whether by ascribing mental states such as beliefs and desires to them or otherwise. There are two major competing theories of ToM: the Theory Theory (TT) and Simulation Theory (ST). TT holds that we understand others by having a theory of them or their behaviour. ST holds that we understand others by putting ourselves in their place. There are also different types of ST. ST(Transformation) holds that I simulate you by becoming you. ST(Replication) holds that I simulate you by becoming like you. Below I briefly address three objections to ST.

ST(Transformation) is Incomprehensible

ST(Transformation) has been questioned. Stich and Nichols provide three possible interpretations of what Gordon’s position might mean, all of which they find unsatisfactory. They note that Gordon has characterised ST(Transformation) as meaning that we explain and predict behaviour by “imaginative identification”, “that is, [we] use our imagination to identify with others” (Stich and Nichols [p. 91]{Davies95}).

“Imaginative Identification”

They quickly dismiss the first interpretation, the idea that we experience conscious imagery when we simulate, on the grounds of phenomenological implausibility. The second interpretation concerns the scope of the explanation: is the intention to cover all, or merely some, cases of application of ToM? Stich and Nichols think that if the claim is weakened to `some’, then ST becomes “patently true [but] not very exciting, and […] not incompatible with TT” (Stich and Nichols [p. 92]{Davies95}).

However, since it seems that there are ambiguous cases of use of both TT and ST, the serious defence of either should lie in the claim that one of the theories explains many important cases of application of ToM, not all of them. So Gordon’s line should escape Stich and Nichols’s particular charge here.

Stich and Nichols conclude, though, that ST(Transformation) involves “imaginative identification with the other” and that this is a label for “a special sort of mental act or process which […] need not be accompanied by conscious imagery” (Stich and Nichols [p. 92]{Davies95}). They then ask what this means, bringing the charge that they find it incomprehensible.

“If I Were You” In Simulation Theory

Familiar questions arise immediately here. We might ask what people mean when they employ the popular locution `If I were you…’ when giving advice. The conundrum is that the person giving advice presumably means “if I were in your position with my outlook and abilities, I would do X”. However, those abilities and that outlook might preclude being in the situation being advised upon.

It does not seem plausible that the locution means “If I were you, in your position with your abilities and outlook, I would do X”. That is because a) presumably the person receiving the advice already has access to that type of suggestion and b) the advisor will not, necessarily. Daniel phrases this objection neatly when he asks “how much of myself am I to project into the other person’s shoes” (Daniel [p. 39]{Daniel93}). The answer, of course, is `the right amount’.

What Makes Simulation Hard?

I will use the term S to refer to the subject doing the simulating. O is the target of simulation. S wishes to understand or predict the behaviour of O. In Simulation Theory, S does this by simulating O.

S will not be successful in simulating O if S ascribes to O abilities and experiences that are remote from those of O. That is true irrespective of whether that profile of abilities and experiences matches those of S more closely. Naturally, this presents some difficulties for simulation. S’s will find it difficult to simulate O’s who are dramatically more or less intelligent than themselves.

Simulating People Much Smarter Or Dumber Than Ourselves Is Hard

Stich and Nichols may legitimately ask which line Simulation Theory takes on the conundrum. Re-examining the argument above produces the opposite conclusion. S does not want to use S’s own abilities and outlook to predict what O will do: to the extent that O’s abilities and outlook differ from S’s, S’s prediction will be wrong.

A chess grandmaster does not expect a novice player to use the same defence that he saw used against a particular attack in his last world championship appearance. The grandmaster may indeed struggle to reduce his abilities to the correct level. As a practical matter, this will not be a problem. The grandmaster will simply use his vastly superior playing skills to compensate for his lack of ability to predict what strange tactics the novice will employ. He will still exploit weaknesses easily.

In the other direction, the novice player would do well to predict a grandmaster-level defence against his attack. However, this information will not be available. So it seems as though there are difficulties in becoming the O when the O has significantly different levels of relevant ability.

These difficulties seem less marked when considering information asymmetry. This is because information asymmetries are ubiquitous in everyday life. They occur both between S and O and between the same S at different times. Step changes in ability in a single S are much less frequent, or indeed never seen, outside of perhaps some unusual pathologies.

Only Grandmasters Can Simulate Grandmasters

This challenge seems equally strong on both the replication and the transformation views. If S lacks the ability to become a chess grandmaster, then S also lacks the ability to become like one, in terms of ability at least. S has, however, no difficulty simulating information asymmetries between S and anyone else, since these are generally not related to ability differences.


However, we need to remember what the challenge is, exactly. It demands to know what is meant by becoming someone else. I have sketched out above what this might mean. Then Stich and Nichols can say that, on the above outline, it looks as though Simulation Theory provides a picture on which ToM will fail to produce accurate predictions. That will happen when S lacks some of the relevant abilities or disabilities of O. S will perhaps be more successful when the differences between S and O are those of information asymmetry. Fine: there are systematic errors in ToM. These will need explaining; I will do this in later work.

Simulation Theory (Replication) Involves Impossible Ascriptions

One logical objection brought against Goldman by Olson and Astington is fairly easy for Goldman to deal with. The objection is to charge that Goldman

“argues that the ascription of beliefs to others is done by simulating the other’s state on the basis of one’s own. But […] the only definitive evidence for ascribing belief occurs in the case of ascribing false belief. Yet one’s own beliefs are never introspectively available as false beliefs, so how could false beliefs ever be ascribed to others? That is, how could one see in another what was never experienced in one’s self?”

(Olson and Astington [p. 65]{Olson93}).

No One Experiences False Belief

What do Olson and Astington mean by the surprising claim that no one ever experiences their own false belief? They mean that, very quickly on discovering conclusive evidence for the falsity of a belief, we will change that belief such that it is no longer false. More precisely, we will eliminate the previous belief, since it has been falsified, and replace it with its negation, which is a new true belief. So it is true that we never have current experience of a belief that is false now. That, of course, is not what Simulation Theory needs.

The above line does not address some situations of cognitive bias. For example, some people continue to believe that Brexit is a good idea, despite overwhelming evidence to the contrary. That reflects a lack of adequate processing of evidence. The Brexit voters continue to have the false belief that Brexit is a good idea.

It is only true that we have no experience of our own false beliefs if it is also true that we have no experience of our beliefs changing. Experiencing such a change requires only that we have an ability to use memory, with some non-zero accuracy, to compare our current belief states with our previous ones. We can see, then, that Introspectionist ST(Replication) needs such a memory capacity. It is not, though, committed to the claim that this capacity must always function correctly.

Simulation Theory Cannot Account for Some Developmental Data

Stich and Nichols claim that some developmental data can be explained by TT but not by Simulation Theory. In developing a response to this objection, we may also learn more about the differences between TT and ST. The data in question derive from a variant of the false belief tests. The experimenters ask children what another child, sitting in front of them, believes about the contents of a closed box. The other child may have either looked in the box or been told what is in it.

The first child will be good at answering correctly that the other child knows what is in the box when the other child has looked in the box. But younger children are bad at answering correctly when the other child has merely been told what is in the box. Older children, five and up, are good at both tasks. They know that if you see what is in the box, you know what is in the box, but they also know that you know what is in the box if you are told.

Folk Psychology And Simulation Theory

Stich and Nichols claim that these data are consistent with TT but not with ST. They write that “as children get older, they master more and more of the principles of folk psychology” (Stich and Nichols [p. 262]{Stich93}). However, they say, while it is clear that even the younger children “form beliefs as the result of perception, verbally provided information, and inference” (Stich and Nichols [p. 262]{Stich93}) they do not have the latter two routes to assessing the beliefs of others.

Thus they are not using their own minds to simulate others, and thus ST is false, according to Stich and Nichols. Of course, Stich and Nichols cannot have this conclusion. They can claim that these data show that younger children are unable to use all of the capacities available to them to form their own beliefs when simulating others. Their ToM is to that extent immature. Since Stich and Nichols allow that three-year-olds have immature ToM, these data do not weigh one way or the other in the TT vs ST debate.

Maturation And Simulation Theory

We might, on this picture, suppose that ST abilities develop as the child matures in that more of the routes to knowledge that the child uses become available for the simulation as maturation proceeds. Perhaps that just is the development in question.

There is a particular time course of development of these capabilities in the case of the child’s own beliefs. There is no reason to presume that the abilities to form knowledge from perception, testimony, and inference all arrive simultaneously. So one would expect the same as the child’s abilities to simulate develop. This is exactly what we find: empirical studies confirm that different ToM component abilities develop at different times.

As Farrant et al confirm, “[c]hildren typically pass the diverse desires task first, followed by the diverse beliefs, knowledge access, contents false belief, and real–apparent emotion tasks in that order” (Farrant et al [p. 1845]{Farrant06}). ST isn’t committed to anything by these data. But if it assumes that maturation means the child can bring more of its own abilities to bear when simulating others, ST will to that extent find empirical support.

See Also:

#Proust: An Argument For #SimulationTheory

What Is “Theory Of Mind?”

By Tim Short

I am a former investment banking and securitisation specialist, having spent nearly a decade on the trading floor of several international investment banks. Throughout my career, I worked closely with syndicate/traders in order to establish the types of paper which would trade well and gained significant and broad experience in financial markets.
Many people have trading experience similar to the above. What marks me out is what I did next. I decided to pursue my interest in philosophy at Doctoral level, specialising in the psychology of how we predict and explain the behaviour of others, and in particular, the errors or biases we are prone to in that process. I have used my experience to write The Psychology of Successful Trading. In this book, I combine the above experience and knowledge to show how biases can lead to inaccurate predictions of the behaviour of other market participants, and how remedying those biases can lead to better predictions and major profits. Learn more on the About Me page.

11 replies on “Simulation Theory”

I’m a natural supporter of ST, inasmuch as that is how I view the workings of the human brain. I search for material like this which is accessible, but I understand that the pedantic work of provisioning any discussion or paper with definitions is going to make the paper or article difficult to access for many.

I’m interested specifically in the definition of ‘belief’ as it directly relates to the theories discussed in this post. A link to such information would suffice. I would also appreciate further discussion on what you understand as the meaning of ‘experiencing false belief’ in regard to what you have written here.

I am of the opinion that capabilities of the brain are based on functionality and this develops over time as there is stored information on which to build that functionality. As a learning machine cannot define what a chair is without many examples, I understand that information is required to build some functions within the brain and that they are not innate. This makes the human brain very adaptable as a machine. Further, without information of given types during formation (training) the brain will not form some functional capabilities. I find it difficult to understand functional capability as existent without defined ‘hardware’ to support it.

Words like belief, experience, and other amorphous words that get used without definition lead the thinking and conversation down well trodden but wrong garden paths.

Thanks for a good post.

I guess the assumption here is that a belief is a proposition held to be true, but since we don’t really know what a propositional content is, and we have various truth theories (correspondence, coherence, pragmatic) that leaves open a number of large questions. I would probably suggest you look at the Stanford Entry to begin with.

It’s true that the capabilities of the brain may be described in functional terms but that doesn’t show that the brain is so based. I don’t think we can define what a chair is either. We know them when we see them!

I also do not see how you could have software running without hardware to support it, but that hardware could take many forms.

The problem of course is that we can’t do everything at once, so if a discussion is not about belief, it has to assume there is some answer. In this case, I don’t think I need anything beyond what `common sense psychology’ says about belief viz. it’s what you think is true which combines with your desires to inform your behaviour.

Oh, and `experiencing false belief’ is just the time-slice point i.e. you never actually become aware of having a false belief or experience it because it is always changed to a different belief that you believe to be true as soon as you become aware of the falsity of the previous belief.

Thanks for the link. I’m always ready to read new materials.
I understand that you don’t see the mind as a computing machine; seemingly the primary reason for this is that you cannot conceive of the machinations being in the brain.

Grant for the sake of discussion that it is true. A belief, to be consistent with the Stanford definition, would then be described as a set of rules in the ‘simulation in test’ that is not in contention with any other rules or sets of rules. Of the common beliefs we have and do not ponder much, such as that we all have heads, we can see that these are the simulations which are easily informed by our senses and so require little computation time or thought.

We have a large number of previous simulations stored away which we use to match new data against to see if the new data set conflicts with previous rules and data in what is not much different than an efficient matching algorithm. This is how we recognize a chair as a chair – it fulfills the rules of ‘chair’ that we have stored and allows for situations where one designer might call something a chair but your grandma would curse and say that it’s no chair. The two are using different rule sets and criteria to match against. Both are valid rule sets but different.

On a technicality of function, it might actually be a chair but does not meet the simulation criteria of a chair for all persons. In this, it appears that there are two truths yet there is only one ‘truth’ with different interpretations due to rule set differences in the two compute machines.

“The problem of course is that we can’t do everything at once, so if a discussion is not about belief, it has to assume there is some answer. In this case, I don’t think I need anything beyond what `common sense psychology’ says about belief viz. it’s what you think is true which combines with your desires to inform your behaviour.”

What you have said here matches exactly, in my opinion, to the situation I’ve described with the chair. The chair is an easy simulation and if you think about it you will be able to identify all the rules in your simulator that you associate to the idea of ‘chair’. Further, you’ll be able to identify the properties of chair that must always be true for an object to be a chair.

The wonderful part is how fast and effortlessly our brains do this, no?

A false belief is a set of rules for your simulator that turn out to be wrong. When you change the rules to correct to a valid set, you are no longer using the incorrect ones and will find it difficult to do so. So it is that when we hold false beliefs (invalid rule sets for the simulator) we do not see them as invalid, though after changing the rule sets we can reflect to see how they were wrong in view of the current rule sets’ functionality.

Did that make sense?

Happy Holidays

I think the mind *could be described as* a computing machine, but that the truth of that doctrine does not entail the falsity of ST. On that point, I will differ from TT supporters who will say that things like flowcharts describing steps in simulation – for example, S sees O lift a cup leads to S ascribes to O the mental state of desiring a drink – count as a rule and therefore as a piece of theory.

I think I can agree with your second paragraph with a bit of reconstruction. It is important that the simulation involved in ST is *of other minds* – including our own minds at different times and under counterfactual circumstances.

I definitely agree with your third paragraph; this is a really important point for my other related work. I call it the ‘setting the bar too low’ error when TT proponents make it too easy for TT to succeed. I claim something like what you do: common simulations could produce ‘rules’ on the fly, as it were. That, I also claim, is not theory.

I am happy – apart from one caveat – with the rest of what you say but again we need to focus on simulation in ST being of minds. So you wouldn’t simulate a chair but you might simulate what someone might do with a chair. Of course, you can reply here that that still needs data about chairs! The caveat is that I am not sure we can practically produce a set of rules to tell us what a chair is. For example, most people would start with ‘you can sit in a chair’. However, I think there is a statue of Lincoln in the US where he is seated. We would agree that the sculpture includes a chair, but you couldn’t sit in it because it is occupied or too big.

Simulating the chair is an act of ToM in that your mind does what another mind will do or what you reasonably understand that another mind would do. The statement ‘please bring me a chair’ includes the expectation that both minds will simulate a chair in similar ways. This expectation is based on a ready ‘on hand’ simulation of the basic functioning of another’s mind… A prediction of how that mind will simulate ‘chair’ from their own perspective. The prediction or ToM of the other is predicated on self experience and a modeled prediction based on external evidence available to both self and other. In this ‘bring me a chair’ does not mean bring me the front driver’s seat of a car or a cement park bench even though both would conform to the actual request. The description of ‘chair’ that is left out of the request is omitted due to ToM modelling of what the other mind will understand.

ToM is integral to communication between minds. When the simulation on either end of the communication is largely different (say between a small child and an adult) it can become comical because the expected simulation on one end does not match the real simulation on the other. That would be when the request for a chair results in an adult being brought a chair suitable only for a small child.

I’m rambling a bit. The comical situation results when the simulation of the other mind is reduced to merely an imitation of the other’s mind, that they are equal. To understand another mind and make predictions the other mind must be simulated and in this way the difference in capability/knowledge is not a barrier to the explanation.

Communication is largely about explaining our model/simulation of the world/situation to another mind and understanding another mind’s model/simulation. This is in fact what we are doing with this conversation so that each of us can better simulate the world on our own and so that we can both understand how the other mind simulates the world. We are, for lack of a better definition, exchanging rule sets. I can know (or estimate) what you know by simulating your mind as an object in my simulator where the rules that I believe you use are aspects of the object (mind) I am simulating in my mind.

This is complicated beyond simple explanation by the fact that we emulate the language (body and implied understanding) of the other mind as we model/simulate the rules they are explaining. As we simulate it we assimilate those rules that match with our own or which we accept as more explanatory than our current rules are.

In this way, even for simple objects such as ‘chair’ we are simulating a model of our mind or another mind. This is consciousness. Without simulation there is no predictive capability. Emulation alone does not provide predictive capability, nor does simply understanding the rules that another mind uses. The two must work together to create a simulation of the other mind in order to predict what action that mind will next take. In the game of chess, while some moves telegraph possible next moves, it is the lack of action that telegraphs others to the mind that is more skilled at the game. In knowing what moves are made or not made, the more advanced player can simulate the capabilities of their opponent to predict probable next moves. This, in fact, is the original purpose of the game. Such games teach children to use the tools available in their mind/brain.

Sorry for the ramble.

Simulation is the only method that I can see which provides the necessary abilities to function as we see minds function.

Excellent. Plenty in there I can agree with.

I think most of that is helpful for my position, though I am concerned about the risk that looking at it the way you do might pose collapse risk – viz. ST might not be distinct from TT. So I think I would need to replace ‘rules’ throughout by ‘quasi-rules’ to emphasise that these rules do not constitute a body of theory but instead are generated on the fly by simulation.

I think I also still want to exclude ‘chair simulations’ from within the scope of ToM simulations, for two reasons. Firstly, it again adds to the collapse risk because there might be a ‘theory of chairs’ that people use to identify chairs. It would be very hard luck for me if I successfully excluded theory from ToM in relation to the mind only to have it sneak back in in relation to chairs! Secondly, I think it is a separable question as to whether we simulate chairs. It looks plausible enough to me that we do, as you say, but I do not want to be committed to it because it would impose an additional burden of justification on my position.

What you say about communication is very interesting. We do see communication fail quite often and we do see a lot of errors in ToM where someone else thinks so differently to the way we do that we simply cannot simulate them. A major task for me is to explain why some situations produce systematic ToM errors. This is how TT proponents would respond to your claim that I agree with that simulation seems the best way to explain how we see minds work when they do ToM. The TT people would say ‘hey simulationists, you can’t explain systematic error. You should predict random error. We can postulate an incorrect item of theory which goes wrong in the same way every time’.

I’m not sure I agree on the rules vs. quasi-rules. Perhaps I should discuss a rule or three to see if we agree on what I am calling a rule or rule set.

The object box is a set of rules which roughly go thus:
6 sides, one or more sides might be open/missing, can be square but it’s not needed, often has a way to put stuff inside it so is generally hollow, can be any color, can be lockable, can be any size but generally are small enough to be carried by an adult human, can require tape to close but might have latch mechanism, might be the combination of two open boxes where one fits snugly inside the other to complete the 6 sides, and so on

Anything with these properties is a ‘box’
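To put the same point in code, here is a toy sketch of the rule set as a checklist of predicates. The property names and thresholds are entirely my own illustrative choices, and the ‘and so on’ above is exactly what the short `rules` list leaves out:

```python
def is_box(obj):
    """Rough rendering of the 'box' rule set as predicates.
    Property names here are illustrative, not canonical."""
    rules = [
        obj.get("sides", 0) in (5, 6),   # six sides, one may be open/missing
        obj.get("hollow", False),        # generally has a way to put stuff inside
        obj.get("carryable", True),      # generally small enough for an adult to carry
    ]
    return all(rules)

# A hypothetical object that meets the criteria
crate = {"sides": 6, "hollow": True, "carryable": True}
```

Anything passing every predicate counts as a ‘box’ on this rule set; anything failing one does not.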

In communications your mind will add a ‘box’ to the simulation you create on the fly regarding what I am communicating to you and it will generally meet these criteria. The context of my use of ‘box’ will help you determine what the actual properties are such that if I say ‘moving box’ you will apply the properties you associate with boxes used for moving rather than those used for ‘music box’ etc.

We also have context rules: I used A box and I used TO box are very different simulation rule sets.

The rules as such are not created on the fly, but they are used to ad hoc assemble the properties of a simulation object. Through communication you are able to simulate what is in my mind if we share common properties rules for ‘boxes’. If we do not, I can describe the box by its properties until you have an aha moment and we both are simulating the same object in our respective minds. Now I am certain-ish that you have a good simulation of my mind’s simulation and communication can continue.

If one becomes good at helping others simulate objects they are likely to take up the profession of ‘Author’…

I can predict systematic rules errors and systematic simulation errors by having one person not share the same conceptual rules sets. We build many rules which are language or culturally specific. Take for instance the word ‘fag’ or the word ‘suspenders’ – there are going to be errors between a Brit and a USAian. We, in fact, count on these errors. It is most commonly known as humor.

There are predictable errors in conditions where your sensory data is going to be fuzzy, as in trying to identify an object in your yard on a moonless night, or an odd sound in the middle of the night. At the edges of sensory capability, and outside the confines of well rehearsed simulations, we WILL have systematic errors because we lack data for the simulation properties rules. In one case we have a well described yard but low light, so very few physical aspect rules are applied to the object. It is without properties, and as we begin assigning properties to it, to factually simulate it or add it to the simulation of our yard, we will make mistakes. So we keep looking, swapping property rules in and out of the simulation, until our simulation appears to match the sensory data.

What in one moment appears to be a strange animal in the end turns out to be a piece of plastic that the wind has carried into the yard – simulation complete, simulation sharpened up, and the process of simulating further may be abandoned.

We have properties rules for physical aspects, context, function and so on, and can apply them in a simulation of the world around us or of an object etc. Our rules might be wrong, or someone else might have different rules, and this would cause errors that are predictable. We test children because we know that their rules will be wrong until a certain stage of development of their brains. That is to say that we know their ToM will be wrong until they are able to develop certain rule sets that they can use to build the simulation of someone else’s thinking/mind. Until a certain stage they appear to be unable to simulate what another is thinking and simply believe that everyone thinks the same thing as themselves. It is when communicating with others, and finding that they do not have the same simulation of the world that we do, that we begin to understand that they have different information and rules than we do and must work to minimize the differences.

The idea that we minimize the differences also drives social interactions. We tend to like to be around those whose simulations are most similar to our own. The reason, IMO, is that changing our rules for the simulator is ‘painful’ to the brain. We seek to find a place in the world that matches our simulation of it. This is the least ‘painful’ place for us to be. This is why knowledge changes us so dramatically.

TT as I understand it is not nearly flexible enough to accomplish these processes of mixing and matching rules, swapping them out, changing them on the fly, adding new ones. Without the ability to do the simulation you will lose at poker and probably die on the battlefield as these are more complex scenarios where you must apply rules to your simulation of the situation but be able to change them on demand so that the simulation matches sensory/other data. That is to say that the rules are never fixed but it tends to seem this way and that is why stage magicians can make a living.

Is that enough examples of predictable systematic errors?

Aha. Well, the rules about boxes are rules about object identification which I want to exclude from the domain of ToM. Then I can remain neutral as to whether we recognise boxes by simulation of boxes or theory of boxes. Let me give you one example of the sort of rule that the TT proponents say ground our ToM abilities. (I have chosen it because it should also bear on your initial question about what concept of belief is in play here.)

‘If a person desires X, and believes that not X unless Y, and believes that it is within their power to bring it about that Y, then all other things being equal, the person will attempt to bring it about that Y’.
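To make the shape of such a rule vivid, here is a toy rendering in Python. Everything here (the dictionary representation of an agent, the predicate names) is my own illustrative scaffolding, not part of any TT formulation:

```python
def will_attempt(agent, x, y):
    """Toy version of the TT practical-reasoning rule: if the agent
    desires x, believes that x requires y, and believes that bringing
    about y is within their power, then, other things being equal,
    predict an attempt to bring about y."""
    return (x in agent["desires"]
            and ("requires", x, y) in agent["beliefs"]
            and ("can_bring_about", y) in agent["beliefs"])

# A hypothetical agent who desires tea and holds the relevant beliefs
tea_drinker = {
    "desires": {"tea"},
    "beliefs": {("requires", "tea", "boil_water"),
                ("can_bring_about", "boil_water")},
}
```

Note that the ceteris paribus clause does no work at all in the sketch; capturing it is precisely where rule sets of this kind start to balloon.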

I can nevertheless make a couple of comments on your box example. Firstly, this looks like a theory not a simulation. It contains a list of criteria for what something has to be like to be a box. It’s a theoretical specification of boxes. Secondly, there is a ton of work being done by your ‘and so on’. Because of the unlimited ability of opponents to your position to come up with unhelpful counterexamples, your actual list of rules would have to be infinite. Returning to ToM, one of my motivations for preferring simulation to theory of others is exactly this: I think the rule set would need to be infinite, and since children of five or less have a pretty good ToM, that looks implausible to me.

I think the TT guys won’t let you talk about ‘simulation rule sets’. They will declare victory on the grounds that you have employed theoretical elements with your simulation.

I fully agree with your penultimate paragraph. This is the position that simulationists have adopted: it is called the Wrong Inputs Defence. However, TT proponents have come up with areas where they say there are no wrong inputs but nevertheless systematic ToM errors. For example, children under five seem to follow the rule ‘ignorance means you get it wrong’ even if they apparently have the right inputs. So in an experiment, they see a green sweet drawn from a bowl containing only red and green sweets. A doll sees that a sweet is taken but does not see which colour it is. When asked what colour the doll thinks the sweet is, the children do not say ‘the doll does not know’ or ‘red or green’ – they say ‘red’. This means they assimilate ignorance to error, despite the fact that they appear to have all the right inputs.

I agree that there is a problem here and a new defence is needed, which I provide.

Okay, I’ll ramble a bit here.

The idea that we are using either simulations or rules theory is incorrect. Rules are the basic bits of the simulation.

See: toward the bottom

In this post the ‘child’ asks about toilet paper in the pool. In the simulation of the situation, the small round tubes were mistaken for the wrong object through lack of information. The rest of the simulation was correct, but the property rules led to a mistaken object and an incorrect summary of the simulation.

I do not see how rules create predictive capabilities.

‘If a person desires X, and believes that not X unless Y, and believes that it is within their power to bring it about that Y, then all other things being equal, the person will attempt to bring it about that Y’.

How does the person know ‘not X unless Y’ is true?
How does the person know that Y can be made true?
Why are ‘all other things equal’ in this scenario?

Through simulation the person might find that if S and T are both true and F is false, then X will also come out true. I don’t see how to build that into rules.
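One way to picture the contrast: rather than listing ‘if S and T and not F then X’ as an explicit rule, you run a small model of the situation and read the result off. The `world` function below is an invented stand-in for such a model; in a real case the dynamics would come from the simulated situation, not from a hand-written line:

```python
# Hypothetical contrast between listing rules and running a simulation:
# 'world' stands in for a model of the situation; we discover when X
# holds by simulating every configuration rather than consulting a rule.
from itertools import product

def world(s, t, f):
    # invented stand-in dynamics: X comes out true exactly when
    # S and T hold and F does not
    return s and t and not f

# run the simulation over all configurations and read off when X holds
for s, t, f in product([True, False], repeat=3):
    if world(s, t, f):
        print(f"S={s}, T={t}, F={f} -> X")
```

The design point is that the conclusion ‘S and T and not F gives X’ is an output of running the model, not an item on a pre-stored list — though, as the reply below notes, a TT proponent will say the model itself is theory.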

“Packed like lemmings into shiny metal boxes” is the information. How do you use rules to conclude correctly that this means rush-hour traffic? I opine that only by simulation can this be done.

A child figures out a puzzle not entirely by brute force trial and error but by simulation of the brute force trial and error. For each piece placement new rules are drawn up ad hoc and a simulation of the brute force matching is done mentally so that the child can pick out the right piece with the first motion of their arm.
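That idea of simulating the brute force rather than performing it can be sketched as follows. The piece encoding and the `fits` rule are invented for illustration; the point is only that the trial-and-error loop runs in imagination, so the arm moves once:

```python
# Hypothetical sketch of 'simulating brute force': mentally test each
# candidate piece against an ad hoc matching rule drawn up for this gap,
# so the first physical arm motion picks the right piece.
def fits(piece, gap):
    # an ad hoc rule for this placement, drawn up on the spot
    return piece["edges"] == gap["edges"] and piece["colour"] == gap["colour"]

def simulate_choice(pieces, gap):
    # run the trial-and-error in imagination, not with the hand
    for piece in pieces:
        if fits(piece, gap):
            return piece  # this is the piece the arm reaches for
    return None

pieces = [{"edges": "flat", "colour": "sky"},
          {"edges": "knob", "colour": "grass"}]
gap = {"edges": "knob", "colour": "grass"}
print(simulate_choice(pieces, gap))
```

As the next paragraph says, rules are used here — but a small, disposable set applied inside the simulation, not an exhaustive stored list.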

I think that rules have to be used, but not an infinite set of rules. It is a small set of functional rules that are used willy-nilly in the simulation to produce the summary of the problem presented.

We’ve seen rules for boxes. Is a Rubik’s cube a box? One rule not described earlier is that boxes obey the rules of gravity. Unless it’s a box kite: then it’s different, and looking at a box kite on the kitchen table requires simulation to understand the purpose of the box, even though rules are used.

Without simulation, rules are very restrictive and would not answer the question of ToM.

“Rules are the basic bits of the simulation.” This the TT side will definitely not let you have. Anything rules-based is a theory. If you use a rule like the example I give, you are using a theory. I agree with them on that.

On the swimming pool, that’s a point about perception, which is a separate topic.

“How does the person know ‘not X unless Y’ is true? How does the person know that Y can be made true?”

They don’t, but that is a strength of the rule. It allows for the prediction of behaviour based on ascribed false belief, not knowledge.

“Why are ‘all other things equal’ in this scenario?”

Often they aren’t, and when they aren’t, the rule must be flexible enough to accommodate the differences.

The lemmings point is about language comprehension, which is a separate topic.

“Without simulation, rules are very restrictive and would not answer the question of ToM.” Correct, I think.
