Should Nozick Call Darwin As A Witness?

Introduction

Nozick claims on several occasions that his picture of knowledge as truth tracking would be favored by natural selection. The two following citations are indicative of this element of Nozick’s position; they are taken from widely separated sections of his major work: `evolution, which doth make trackers of us all, would select for global tracking rather than especially favoring the local version’ (R Nozick, Philosophical Explanations, Belknap Press of Harvard University Press, 1981, p. 194) and `there would be evolutionary selection for better capabilities to detect facts and to have true beliefs’ (ibid., p. 284). More rests on this claim for Nozick than these remarks, however. On any view incorporating the claim that possession of knowledge enhances evolutionary fitness, it would be a defect in Nozick’s account if that account entailed that we, as evolved creatures, could not reach or even approach knowledge.

At some level, this claim must be true. Any creature in possession of knowledge about benefits available in the environment, such as food sources, will outperform one without such knowledge. And any creature making dramatically inaccurate over-estimations of its own capacities will suffer the consequences. However, this essay will argue, drawing primarily on McKay and Dennett, that the picture of complete knowledge as indisputably good for fitness is too simplistic.


This leads to the potential challenge to Nozick’s position. Evolution will only make us trackers if tracking provides knowledge and knowledge is optimally fit. If the second claim fails, then even if tracking provides knowledge, we may not track. And if we do not track, then either a different account of knowledge is needed, or we must accept that Nozick has described something we will struggle to reach and may not need.

Nozick does have a defense, however. It will be shown that avoiding false positives and avoiding false negatives differ in practical importance from case to case. Yet Nozick can concede this and ask what links practical importance to relevance for epistemic assessment. Since no such link can be provided, Nozick’s account is in fact safe from attack from evolutionary perspectives. A further variant challenge, based on the interaction between evolution and distances in possible-world space, is considered, and again it seems that Nozick has an adequate response.

One final charge to press against Nozick might be to observe that if in some cases false belief is better for us, then the best way to obtain it might be to avoid tracking: tracking itself could be unfit.

Nozick’s Account Of Knowledge

Nozick has a four-factor account of knowledge that can be seen as an extension of the traditional tripartite model of justified true belief. Nozick’s account relies heavily on the subjunctive conditional. This refers to the connective in sentences like `if A were to be the case, then B would be the case’.

The following standard terminology may be used.

 

  • p: the belief or proposition in question
  • s: the subject
  • B: two-place belief operator, written with the subject before it and the proposition after it, so that sBp reads `s believes p’
  • ☐→: subjunctive conditional connective
  • ¬: negation

 

 

On this basis, Nozick’s four conditions for knowledge are as follows.

1. p
2. sBp
3. ¬ (1) ☐→ ¬ (2)
4. (1) ☐→ (2)

Condition (3) is known as Sensitivity because if s satisfies it, his beliefs are sensitive to p becoming false: he will no longer believe p in those possible worlds where p is false. Condition (4) is known as Adherence because if s satisfies it, his beliefs adhere to p remaining true: he will continue to believe p in possible worlds where p is true.


The possible worlds analysis is primarily due to Lewis (D Lewis, Counterfactuals, Blackwell Publishing, 2001) and may be summarized as follows. Possible worlds are sets of propositions; the actual world is given by the set of propositions that are true in it. Other possible worlds differ in which propositions are true in them, and the closeness of two possible worlds may be considered as related both to how many propositions share the same truth values in the two worlds and to how dramatic the effects of the differences are.
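The first half of this idea, counting propositional differences, can be made concrete in a toy model. The following Python sketch is purely illustrative and forms no part of Lewis’s or Nozick’s apparatus: worlds are modeled as truth-value assignments over a small fixed stock of propositions, and closeness as a weighted count of disagreements, with the invented weights standing in crudely for how dramatic each change would be.

```python
# Illustrative toy only: worlds as truth-value assignments over a fixed
# stock of propositions; closeness as a weighted count of disagreements.
# The weights are a crude stand-in for how dramatic each change would be.
from typing import Dict

World = Dict[str, bool]

def distance(w1: World, w2: World, weights: Dict[str, float]) -> float:
    """Sum the weights of all propositions whose truth values differ."""
    return sum(weights[p] for p in weights if w1[p] != w2[p])

actual = {"it_is_raining": True, "bears_nearby": False, "sun_has_risen": True}
nearby = {"it_is_raining": False, "bears_nearby": False, "sun_has_risen": True}
remote = {"it_is_raining": True, "bears_nearby": False, "sun_has_risen": False}

# A change in the weather is far less dramatic than the sun failing to rise.
weights = {"it_is_raining": 1.0, "bears_nearby": 2.0, "sun_has_risen": 50.0}

print(distance(actual, nearby, weights))  # 1.0  -- a close world
print(distance(actual, remote, weights))  # 50.0 -- a distant world
```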

Nozick believes that Sensitivity and Adherence are of equal importance. Zalabardo (J Zalabardo, UCL Dept. of Philosophy, forthcoming) disagrees, claiming that Sensitivity is more important than Adherence: the former does more work and the latter introduces more problems than it solves. For Zalabardo, Nozick’s account would be improved by dropping the Adherence condition altogether. If an evolutionary account can be produced to show that Sensitivity is more likely to be selected for than Adherence, that would bear on this argument.

Nozick introduces Adherence to allow for a type of counterexample which might be termed `fortunate avoidance of misinformation’. The first example relies on the familiar skeptical problem of the brain in a vat being fed data indistinguishable from sensory input. If the experimenters doing this fed the brain the information that it was in fact a brain in a vat, we would not be tempted, according to Nozick, to allow it knowledge on the point. This is because the brain would not have formed such a belief in the close possible world where the experimenters do not feed in such knowledge.

A further counterexample due to Harman is handled similarly. A man reads a newspaper report that the dictator is dead. This is true, but the regime suppresses the information, issuing denials in subsequent editions; the man fails to see these denials and so continues to believe the truth. Intuitively, though, we are more inclined to allow that this is a case of knowledge: after all, the man has formed a true belief by a method which was in fact reliable, whether he knew it to be so or not.

It should be noted that Nozick’s reference to `local […] global’ in the quotation given in the Introduction is irrelevant to our purposes here. He is using the terms temporally, to restrict (via the term `local’) the range of times relevant to our assessment of whether s knows p. If p includes `now’, then p could change its truth value later, but we might still be prepared to accept that s knows p = `it is raining now’ even if s ceases to know that later on because the rain stops. Nozick acknowledges that further conditionality may be required to account for this but believes it can sit atop his system rather than replace it.

So in the quotation, Nozick is in fact claiming that natural selection would prefer durable or robust knowledge rather than some evanescent version; this seems clear and does not change the position in relation to whether evolution will require tracking. Importantly, he believes that evolution would favor tracking of both types even though it would preferentially select the global version.

The Evolution Of False Belief

While prima facie it would seem evolutionarily beneficial always to have true beliefs (i.e. knowledge), there are in fact many situations wherein false belief provides a higher level of fitness. Several examples are discussed by McKay and Dennett (R McKay and D Dennett, The Evolution of Misbelief, Behavioral and Brain Sciences (2009) 32, 493-561), including item recognition in an outdoor setting, which will be the major example considered in the next subsection.

Other examples discussed include the expectations of AIDS patients, false beliefs about the self, and the placebo effect; these will be outlined briefly. The citation given in the case of AIDS patients relates to the initial period of prevalence of the disease in the US, prior to the development of treatments, when life expectancy in such patients was measured in months. Contrary to received wisdom emphasizing acceptance, patients who ignored their HIV status and remained in denial experienced slower onset of symptoms and significantly lengthened survival.

Remarkably, it appears that optimal mental health may be associated with delusionally false positive beliefs about the self, particularly in relation to one’s own capacities. It is well known that such beliefs are frequent, and so, on the type of argument made throughout this essay, they should carry some benefit. Everyone thinks they are a better driver than average, and people make strong claims about the prowess of their children. All of these delusions may have elements of self-fulfilling prophecy: students who believe they will perform well in exams are less likely to succumb to performance anxiety and more likely to remain focused on work, which will of course improve performance. Most ironically, people generally claim that they are less susceptible to self-delusion than others.

The placebo effect is well-known and need not be considered in detail here. The belief that crystals will heal one can make that partially true, and the mere presence of doctors and a medical setting can begin healing before any treatment is commenced. Explanations of this have included the idea that the immune system of the body may be compared to an army with antibody cells being the soldiers. The body may throw the reserves into action when in a medical setting because this is less risky when reinforcements in the form of external treatments are expected to become available. If this account is true, it would be one example of a situation in which a mechanism could evolve to translate false belief into beneficial action, though naturally in an evolutionary setting we would need to replace `hospital’ by `safe environment’.

Some explanations for the prevalence of religious beliefs focus on the psychological benefits.

This type of idea is also present in other philosophers. Nietzsche writes: `throughout immense stretches of time the intellect produced nothing but errors; some of them proved to be useful and preservative of the species: he who fell in with them, or inherited them, waged the battle for himself and his offspring with better success’. (F Nietzsche, The Gay Science, Cambridge University Press, 2001, III S.10) By this, Nietzsche means, for example, that there would have been no astronomy without astrology and no chemistry without alchemy. Ryle (G Ryle, The Concept of Mind, University of Chicago Press, 1949, p. 13) also writes of new, more useful myths replacing older myths in physical science, giving the example of the concept of force replacing that of Final Causes, the improvement not deriving from veracity. Unger (P Unger, A Defense of Skepticism, The Philosophical Review, Vol. 80, No. 2 (Apr., 1971), pp. 198-219) considers a circumstance that `allows a false belief to be helpful’, viz. when the alternative is to have no belief at all and some action is called for.

Error Asymmetry In Object Recognition

Absolute knowledge under all circumstances would indeed be optimally evolutionarily fit, but it is not obtainable. Given an irreducible amount of error, it becomes clear that in some situations which would have applied evolutionary pressure to the perception systems of early humans, this error is best accepted in one direction rather than the other. Nozick acknowledges this: `some sacrifice in the total ratio of true beliefs would occur to achieve a higher ratio of important true beliefs’. (Nozick, op. cit., p. 284)

Imagine a situation in which an individual glimpses an item through the trees. It might be a bear or a rock. The terms `bear’ and `rock’ are placeholders for two categories of items: dangerous and harmless. The following table illustrates the possible outcomes in a matrix across what the item actually is and what the observer believes it to be.

Case | True situation | Belief | T/F | Value
A)   | rock           | rock   | T   | good
B)   | rock           | bear   | F   | bad
C)   | bear           | rock   | F   | very bad
D)   | bear           | bear   | T   | very good

The column headed `Value’ indicates the quality of the outcome for the observer. Note at once the strong asymmetry in the cost of error, where it exists. In A), the observer sees a rock and believes it to be a rock: this is a good outcome, since no action need be taken. In B), the observer sees a rock but falsely believes it to be a bear: this is a bad outcome but not a disastrous one; perhaps the observer will needlessly take avoiding action and dissipate some energy.

The worst possible outcome is C), where the observer sees a bear but falsely believes it to be a rock. This is dangerous and is the situation one would expect to be minimized in successfully reproducing individuals. The best outcome is D), where a bear is seen and correctly identified as such: avoiding action can be taken. Note that we are merely classifying the perceptual outcomes: D) is a good perceptual scenario despite the fact that the individual might in fact be better off overall if there were no bears around at all.

Case A) is fine for the subject but relatively uninteresting. Note that natural selection will act only weakly against belief-formation mechanisms whose effects are only somewhat positive or negative, and will not act at all on those that are neither harmful nor beneficial. Thus a subject persistently misidentifying items within a category will not be selected out, to a first approximation, though this may still occur if it produces a second-order effect raising the error rate in the dangerous category: an example would be a subject performing sub-optimally on bear recognition by misidentifying trees as rocks in an environment where bears frequently live in trees.
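This asymmetry can be made vivid with a small illustrative calculation in the spirit of signal detection theory. Every number below (the prevalence of bears, the payoffs, the error rates) is invented for the example; the point is only that, given a rare but potentially fatal category, a detector biased towards crying `bear’ can outperform an unbiased one. The same arithmetic applies to the combat table below.

```python
# Illustrative expected-payoff comparison between an unbiased detector and
# one biased towards "bear". All numbers are invented; only the shape of
# the asymmetry matters.
P_BEAR = 0.01  # bears are rare relative to rocks

# Payoffs matching the table: A) true rock, B) false bear alarm,
# C) missed bear (potentially fatal), D) detected bear.
PAYOFF = {"A": 0.0, "B": -1.0, "C": -1000.0, "D": 10.0}

def expected_payoff(p_detect_bear: float, p_false_alarm: float) -> float:
    """Expected payoff per glimpse, given hit and false-alarm rates."""
    bear = P_BEAR * (p_detect_bear * PAYOFF["D"]
                     + (1 - p_detect_bear) * PAYOFF["C"])
    rock = (1 - P_BEAR) * (p_false_alarm * PAYOFF["B"]
                           + (1 - p_false_alarm) * PAYOFF["A"])
    return bear + rock

print(expected_payoff(p_detect_bear=0.90, p_false_alarm=0.10))  # ~ -1.01
print(expected_payoff(p_detect_bear=0.99, p_false_alarm=0.30))  # ~ -0.30
```

The biased detector trades many cheap false alarms for a few avoided catastrophes and comes out ahead, which is just the asymmetry the table expresses.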

A similar table is shown below to explain the emergence of dominance hierarchies in animals (S H Vessey, Dominance among Rhesus Monkeys, Political Psychology, Vol. 5, No. 4 (Dec., 1984), pp. 623-628). These do not develop solely through combat; it is better for a group to avoid some combat than to lose members through constant deadly competition. This indicates that the bear/rock example is not the only exemplar of this type of argument. The test proposition is p = `I will win this combat’; C is true if the animal opts for combat.

Case | True situation | Belief | T/F | Option | Value
A)   | p              | p      | T   | C      | good — won
B)   | p              | ¬p     | F   | ¬C     | bad — missed win
C)   | ¬p             | p      | F   | C      | very bad — lost
D)   | ¬p             | ¬p     | T   | ¬C     | good — avoided loss

It is known that animals will avoid combats they believe they would lose, and will also err on the side of caution: they will accept some B) cases in order to avoid any C) cases. C) is the worst and potentially fatal outcome, wherein the animal falsely believes it will win: i.e. a Sensitivity failure. The argument is unaffected if animals do not in fact have beliefs but merely act as if they do, presumably via heuristics which will also track the truth if they are optimal. The asymmetry results from the fact that the animal needs to win some combats, though not all, but must avoid all serious defeats. Animals relying more on Sensitivity are therefore more likely to survive.

It may be noted that there is a minor asymmetry between the two tables: the two success cases are described as `good’ and `very good’ in the first table, while both are simply `good’ in the second. This disparity hides nothing major about the account, which is committed to the existence of the asymmetry rather than to its strength. It simply reflects the fact that avoidance of combat itself carries significance for the establishment of dominance hierarchies: winning is certainly the best outcome, but avoiding a loss may not be too far distant a second best. In the first table, by contrast, it seems very clearly less valuable to identify a rock correctly than to identify a bear correctly, because there is little purposeful interaction with rocks.

Error Asymmetry In Self Beliefs

Delusionally positive beliefs about the self can be beneficial if they relate to mental capacities; false beliefs about some physical capacities would be harmful. The table below shows the payoffs. The test proposition is p = `I am smarter than the rest’. In p, the term `smart’ may be replaced by any beneficial mental quality.

Case | True situation | Belief | T/F | Value
A)   | p              | p      | T   | good — accurate view of smartness
B)   | p              | ¬p     | F   | bad — underestimation of smartness
C)   | ¬p             | p      | F   | very good — `bluffer’s bonus’
D)   | ¬p             | ¬p     | T   | bad — missed bluffer’s bonus

The term `bluffer’s bonus’ is used for situations of the type where poor students have an unrealistically high belief in their own prowess and thus in fact do better in an exam, because an accurate perception of their likelihood of failure does not discourage them from working. Since exams did not take place in evolutionarily relevant times, we would need to consider positive mental qualities such as problem-solving ability: persons with an unrealistically high assessment of their own skills might obtain the bluffer’s bonus by persisting longer with a difficult task.

Alternatively, some types of physical capacity could be used, such as `I can cross this difficult terrain without food or significant discomfort’, while avoiding any risks in the area of unrealistic views of combat prowess. The bluffer’s bonus may additionally be adaptive because it aids deception of others in relation to one’s own capacities.
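A toy calculation can illustrate how persistence might generate the bonus. Everything here is an invented assumption (the success curve, the link from believed ability to persistence, the reward and effort costs); the sketch only shows that if the chance of success grows with persistence, an agent who somewhat overestimates their ability can enjoy a higher expected payoff than an accurate one.

```python
# Toy model of the bluffer's bonus; every number here is invented.
def p_success(persistence: float) -> float:
    """Diminishing-returns chance of solving a hard task."""
    return 1 - 0.5 ** persistence

def expected_gain(believed_ability: float, reward: float = 10.0,
                  effort_cost: float = 0.5) -> float:
    persistence = 4 * believed_ability  # confident agents try for longer
    return p_success(persistence) * reward - effort_cost * persistence

print(expected_gain(believed_ability=0.5))  # ~ 6.5  (accurate self-assessment)
print(expected_gain(believed_ability=0.9))  # ~ 7.38 (overconfident: the bonus)
```

As in the essay’s examples, the bonus holds only within bounds: with a high enough effort cost, or extreme enough overconfidence, the same model makes the delusion a net loss.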

Challenges To Nozick

Bear/rock

The ideas of Sensitivity and Adherence may be paraphrased as below.

 

  • Sensitivity: avoid false positives, i.e. do not believe p in situations where p is false
  • Adherence: obtain all the true positives, i.e. believe p in situations where p is true

 

If we take the scenarios in which a bear is present, then Adherence is more important than Sensitivity: we want to detect all the bears and can handle some false alarms. The situation is reversed for the scenarios in which a rock is present, however: there Sensitivity is more important than Adherence. A failure of Sensitivity means that were a rock not to be present (and so a bear present), the observer would fail to recognize the danger presented by the bear. A failure of Adherence means that the observer is needlessly scared by a rock. The latter may be slightly deleterious, in that it could distract from more useful pursuits or breed a propensity to under-react when faced with an actual bear, but it is certainly much less dangerous than failing to recognize the actual bear.

So far the bear/rock scenarios are neutral for Nozick: Sensitivity and Adherence both play important roles across the two possibilities. However, the question then becomes one of the prevalence of bears and rocks in the evolutionary environment, or more generally the relative proportion of dangerous to harmless items. Since we survived to be here, and since it is plainly the case now, there is a strong presumption that there were in fact far fewer bears than rocks. Therefore, in the situation as it obtains, and presumably has obtained throughout the evolutionary period, Sensitivity is more important than Adherence.

Nozick’s response here will be the one outlined in the Introduction. Sensitivity may indeed be more important than Adherence as a practical matter, but this is not sufficient to license the transition from practical importance to relevance for epistemic assessment. We might have an argument that shows that one or other of Sensitivity or Adherence is more beneficial evolutionarily; this could perhaps show that we, as evolved creatures, are more likely to rely on Sensitivity or on Adherence.

But it would still be possible that there is a different sense of `importance’ to be considered in deciding whether s knows p. Even if, in the actual world, one of the conditions were operative far more frequently, this could only show that knowledge was more frequently approached in the actual world via that condition. What matters for Nozick, however, is the shape of knowledge as it extends out into near and far possible worlds, or more precisely the shape our counterfactual beliefs would have to take in reasonably close possible worlds in order to qualify as knowledge in the actual world.

Omniscient beings would have beliefs that tracked the truth in all possible worlds. Fortunate mortals who happened merely to have true belief do not qualify as having knowledge because they would cease to hold the belief in very close possible worlds. We have not evolved omniscience, but on Nozick’s picture we have evolved to have beliefs that are somewhat robust out to some distance in the space of possible worlds.

Another possibility for Nozick would be to observe that positive items have not been considered, and to claim that their inclusion would alter the picture. We may term these `apples’: the category can stand for any useful or nourishing object. It seems here that neither Sensitivity nor Adherence will play a major role. Failing to spot an actual apple will not be greatly harmful provided plenty of apples are still detected; misclassifying something else as an apple will similarly be unlikely to be grossly harmful, as the error will be detected fairly soon. In any case, the same argument can be made that positive and negative items alike will be much less prevalent than neutral ones, so the inclusion of positive items likely does not alter Nozick’s position in either direction.

Bluffer’s Bonus

The most significant case is C), in which the bluffer’s bonus is obtained; this can result in substantial out-performance. In this case, Sensitivity has not been operative. The other case of false belief, B), involves a failure of Adherence. That outcome is unhelpful to participants who are unrealistically pessimistic about their performance prospects, but since they are in fact highly capable, they will do well provided the pessimism remains within bounds.


Nozick can admit that in this set of cases, Adherence has played the more important role. He can then note, by analogy with the previous set of bear/rock cases, that inverting p to `I am dumber than the rest’ will invert all of the value entries, and Sensitivity will become more important. We would then need a way of assessing whether euphoria or depression has been more common in order to decide the relative practical importance, and this seems a difficult task.

Ethics

In addition, Nozick may be able to rely on potential ethical implications of his account. This becomes relevant from an evolutionary perspective because there is some evidence that altruistic behavior is adaptive. This appears counterintuitive because altruism means giving up resources for no apparent gain. And yet altruism could evolve in groups — even non-interrelated ones — because of the benefits to be gained from mutual assistance.

After having developed his tracking account of knowledge, Nozick extends the tracking analysis into ethics (op. cit., p. 291) by noting that ethical behavior should track rightness. In this new context, Sensitivity may be approximated by `we should not do those things which are not right’ and Adherence by `we should do those things that are right’. No clear answer can be given as to the correct picture of ethics, but it is clear that any version using only one of these principles would be very different from the duplex account: Sensitive but non-Adherent ethics would prohibit murder but permit negligent slaughter, while Adherent but non-Sensitive ethics would mandate saving drowning babies but remain silent on murder. To the extent that these characterizations are correct, and to the further extent that we have any evolved ethical behavior, Nozick has a further line tending to show that evolution favors retention of both conditions.

Robustness

A different line of argument can also appear to produce differential levels of importance for Sensitivity and Adherence, though this time leading in the opposite direction, towards giving priority to Adherence. It relies on Nozick’s observation that close possible non-actual worlds can influence the evolution of our belief-formation mechanisms. Without moving too far in the direction of Lewis’s modal realism, this can be understood as allowing for some flexibility to adapt to changing environments. Clearly, creatures with excellent belief-formation mechanisms in the actual world will not be as fit as those which would, in addition, retain such mechanisms in close possible worlds. Such creatures would be robust in changing environments.

If Nozick’s account of knowledge is correct, this should mean that we have beliefs which are Sensitive and Adherent over some range of close possible worlds. There is some expenditure of evolutionary capital, as it were, in providing these additional capacities, and they do not produce compensatory payoffs in the actual world until it changes. This means that there will be a limit to the remoteness of possible worlds which can influence the development of belief-forming mechanisms. Intuitively, this just means that creatures with abilities to form correct beliefs about extremely implausible developments from their current environment will not be favored for those abilities.

The asymmetry between Sensitivity and Adherence can now be seen. Adherence is tested in closer worlds than Sensitivity, because p is true in the actual world. Intuitively, the two conditions combined require that belief in p switch off (Sensitivity) in worlds where ¬p holds, but (Adherence) not before then. Nozick’s response here will be to rely on the flexibility inherent in the possible-worlds approach: he can insist that the objector provide an account of why the range of evolutionarily relevant worlds is smaller than the distance to the nearest ¬p worlds.
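The objection and Nozick’s reply can be put schematically; the distance function d and radius r below are my gloss, not Nozick’s notation.

```latex
% Schematic gloss; d and r are not Nozick's notation. Let d(w) be the
% distance of world w from actuality and let r bound the worlds remote
% enough to still influence selection on belief-forming mechanisms.
\begin{align*}
&\text{Adherence is tested at worlds where } p \text{ is true, including actuality } (d = 0);\\
&\text{Sensitivity is tested at worlds where } p \text{ is false, all with } d(w) \ge d_{\neg p} > 0;\\
&\text{the objection bites only if } r < d_{\neg p}, \text{ which is what Nozick asks the objector to show.}
\end{align*}
```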

Conclusion

It seems that Nozick is able to head off any evolutionarily-based challenges to his account of knowledge: we could have evolved to be Sensitive and Adherent.


Nozick’s Claim That Knowledge Is Truth-tracking: A Critical Evaluation

1. Nozick’s Analysis of Knowledge

1.1. Introduction

Nozick is responding to Gettier’s claim that the traditional tripartite definition of knowledge as justified true belief is inadequate. Nozick’s analysis is specified by the following four conditions, which together are necessary and sufficient for knowledge:

1. p
2. Bap
3. ¬ (1) → ¬ (2)
4. (1) → (2).

The symbol → is non-standard: Nozick uses it for his relation of subjunctive conditionality. A → B means that if A were the case, then B would also be the case. This differs from logical implication ⊃, understood here in the strong sense of entailment: if A ⊃ B holds in that sense, then in all possible worlds in which A is true, so is B. Nozick uses A → B for the much more restricted claim that in the closest group of possible worlds in which A is the case, so is B. We are also using the following symbols: p is a proposition, a is a subject, and B is the relation of belief, so that Bap means that a believes p; ¬ is negation and → is Nozick’s subjunctive conditional.
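The contrast can be stated compactly; this is a standard possible-worlds gloss rather than a quotation from Nozick, and it reads ⊃ in the strong, all-worlds sense just given.

```latex
% As the symbols are used here: ⊃ quantifies over all possible worlds,
% while Nozick's → looks only at the closest antecedent-worlds.
\begin{align*}
A \supset B &: \ B \text{ is true at every possible world at which } A \text{ is true;}\\
A \to B &: \ B \text{ is true at the closest possible worlds at which } A \text{ is true.}
\end{align*}
```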


Dancy gives an illustration of this crucial distinction, which he credits to Lewis. Lewis considers the conditional ‘if kangaroos had no tails, they would topple over’. This is a good illustration of A → B but not of A ⊃ B. In some possible worlds, the Australian tourist board gives the kangaroos crutches; this group of possible worlds is certainly more remote than the nearest possible worlds in which kangaroos have no tails and are unstable. So A → B is true but A ⊃ B is false, because A does not entail B.

One of the terms for this analysis is ‘truth-tracking’, because the subject’s belief co-varies with the truth of p: if it were the case that p then a would believe it, and if it were not the case that p then a would not believe it. It has also been called a counter-factual analysis because of the way in which it considers near possible worlds distinct from the actual world in order to assess knowledge claims in the actual world.

1.2. Motivation

Nozick introduces his two new conditions (3) and (4) in order to handle cases which had not been soluble on the previous bases. Gettier cases involve erroneously scoped reference, in which the subject appears to have a justification for believing p, and p is true, and yet the situation appears to fall short of knowledge. For example: “Two other people are in my office and I am justified on the basis of much evidence in believing the first owns a Ford car; though he (now) does not, the second person (a stranger to me) owns one. I believe truly and justifiably that someone (or other) in my office owns a Ford car, but I do not know someone does.” Condition (3) eliminates this type of case as an instance of knowledge, which is a point in favor of Nozick’s analysis.

Nozick’s introduction of condition (4) occurs in the context of a skeptical scenario termed ‘Brain in a Vat’ (“BIV”) by Putnam. The subject is in fact a disembodied brain being stimulated by electrochemical means to have experiences; in the base-case scenario these are all the same experiences as in the actual world. This skeptical hypothesis will produce important implications for Nozick’s analysis, to be discussed in the next section. The power of the hypothesis lies in the fact that while the BIV world is doxastically identical to the actual world, almost everything believed in it is false.

As Nozick points out, his subjunctive analysis is related to but more restricted than the prior causal analysis. Under BIV, the subject can be brought to believe that BIV is true by being given appropriate stimulation. There is a good causal link between the event and the belief formation, but this cannot count as knowledge because it is fortuitous. Nozick can exclude this type of counterexample because it fails condition (4): in close worlds where the BIV subject is not given the relevant stimulation, he no longer believes BIV although it is still true.

1.3. Skeptical Implications and Non-Closure

There is a major ‘heavyweight implication’ of Nozick’s analysis that is highly counter-intuitive. It will be instructive to see how he resolves it. The following principle, termed the closure principle, seems valid:

(CP): Kap & Ka(p ⊨ q) → Kaq.

The symbol ⊨ is used to signify entailment, and so CP may be expressed as ‘if it were the case both that a knows that p and that a knows that p entails q, then it would be the case that a knows q’; K is the two-place relation of knowledge, used similarly to the relation B for belief previously.

This seems entirely plausible, but Nozick uses BIV to argue that it is false. Let p be any everyday proposition such as ‘a is in London’. Let q be the negation of BIV. It is clear that p entails q, that a knows this entailment, and that a knows p; so under CP, a knows that BIV is false. Yet this is exactly the skeptical scenario that appears difficult to defeat.

Nozick’s dramatic response to this is to deny CP: “Knowledge is not closed under known logical implication”. He explains this by deriving it from the non-closure of (3): “That you were born in a certain city entails that you were born on earth. Yet contemplating what (actually) would be the situation if you were not born in that city is very different from contemplating what situation would hold if you weren’t born on earth.”
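It may help to spell out why, on the analysis, a knows p yet fails to know q = ¬BIV; the gloss below is mine, but it follows Nozick’s own diagnosis via condition (3).

```latex
% Sensitivity for q requires: were q false, a would not believe q.
% But the closest ¬q-worlds are BIV-worlds, doxastically identical to
% actuality, so a still believes q there. Condition (3) fails for q,
% hence a does not know q, even though Kap and Ka(p ⊨ q) both hold.
\neg q \to \neg B_a q \ \text{(required by (3))}, \qquad
\text{but at the nearest } \neg q\text{-worlds: } B_a q.
```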

1.4. Methods

Nozick introduces a further refinement to handle what he terms the grandmother case. A grandmother comes to know that her grandson is healthy by seeing him. Were he not healthy, however, she would nevertheless be told that he was, in order to spare her distress. Nozick wishes to preserve this as a case of knowledge even though it fails condition (3).

This he does by adding the requirement ‘via method M’ to (2), (3) and (4). For a case to represent knowledge, M must not change in the relevant possible counter-factual situations. This means that the grandmother has knowledge in all the possible worlds in which she learns her grandson is healthy by seeing him, and does not in the possible worlds in which she relies on inaccurate testimony. This appears to be the correct result.

2. Objections to Nozick

2.1. Forbes

Forbes defends CP by putting pressure on Nozick’s line that the same method M must be used in (2), (3) and (4) in order to avoid incorrect knowledge ascriptions in the grandmother case. Forbes points out that M being reliable in the actual world where p is true does not entail that M is reliable in even the closest possible worlds where p is false.

The example given is of a reliable computer that can also check its own status. The proposition p is that the computer is functioning normally. The question is whether a subject can acquire knowledge that p by asking the computer to report its own status. If p, then this method M is reliable. However, if not p, then method M is by hypothesis no longer reliable. Thus there is no way to hold M constant while varying the truth value of p in order to assess whether the belief of the subject is co-variant with p.

Forbes allows that Nozick may have a response along lines similar to those Nozick himself uses in another example: a vase in a box is pressing a switch, and the switch activates a holographic projector set to show a vase in the box. An observer passes all of (1) – (4) in respect of p, ‘there is a vase in the box’, and yet this is not a case of knowledge. But Forbes holds that Nozick would then need to concede that the counter-factual analysis was inappropriate for all inferences, and this would be arbitrary and severe for Nozick’s analysis. Perhaps Nozick can instead adopt, in some form, Harman’s suggestion that all the lemmas be true.

2.2. Wright

Wright also attacks Nozick’s claim to have defeated the skeptical argument by introducing non-closure. He notes that using Nozick’s standard p and q (p = ‘I have a hand’; q = ¬BIV) produces a problem for the ¬q scenario. Here, BIV is true and so p is false. We can assume that BIV is one of the relevant ¬p scenarios to be considered in assessing whether Kap. But if so, then subject a fails condition (3) because, even though p is false, Bap.

So Wright argues that Nozick must assume that BIV is not one of the relevant ¬p scenarios. And he further uses Nozick’s own argumentation against him with the following line, in which (I) represents ‘had it not been the case that I have a hand, then it would still not have been the case that BIV’.

(I): ¬p → q
(II): ¬q → ¬p
(III): ¬q → q

(II) is simply the statement that in BIV I do not have a hand, and the reductio (III) then follows from (II) and (I) by the transitivity of →. As Wright points out, this could be seen as a refutation of the skeptic, but that line is not open to Nozick, who wishes to agree that BIV is logically possible.

Wright allows Nozick the response of denying that transitivity holds for counter-factual conditionals. This would break the step to (III).
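In closeness terms (again my gloss, not Wright’s text), the denial is principled rather than ad hoc, because the two premises are evaluated at quite different sets of worlds.

```latex
% (II) ¬q → ¬p is evaluated at the closest ¬q-worlds: remote BIV-worlds.
% (I)  ¬p → q is evaluated at the closest ¬p-worlds: nearby ordinary
% worlds in which I merely lack a hand. Since
\begin{equation*}
d(\text{closest } \neg p\text{-worlds}) \ll d(\text{closest } \neg q\text{-worlds}),
\end{equation*}
% the worlds that verify (I) are never the worlds at which (II) directs
% us to look, and chaining to (III) ¬q → q is blocked.
```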

2.3. Garrett

Garrett defends Nozick against a purported counterexample given by Martin. Martin’s example considers a subject a placing a bet that pays if either of two horses wins its race. To find out whether his horse won the first race, a uses the unreliable method of simply cashing in his slip after the second race while avoiding any information about the first race. If his slip pays, he assumes that the first horse won, whereas in fact the payout could have come from the horse in the second race.


Assume that the first horse did in fact win; this is proposition p. The horse in the second race did not win. Condition (3) seems satisfied, because ¬p → ¬Bap, and (4) seems satisfied as well. And yet this surely cannot be a case of knowledge, because of a’s good fortune that the second horse did not win: a has failed to consider a relevant alternative.

Nozick’s response will be that the possible worlds in which the first horse loses and the second wins are in fact close enough that they must be included in the assessment of whether Kap, whereas other possible worlds, such as one in which the betting machine has malfunctioned and is paying out on all slips, need not be. It is then precisely the failure of Bap to track p in those close worlds that means (3) is not satisfied and this is not a knowledge case.

But Garrett has a refined version of this counterexample that he thinks is more dangerous to Nozick. Proposition q is that the father of person A is a philosopher; q is true. Proposition p is that the father of person B is a philosopher. Subject a uses the unreliable method M of forming Bap upon coming to believe q. It transpires, unbeknownst to a, that A and B are brothers, so in fact p. This fulfills conditions (1) – (4) but cannot be knowledge, because it relies on the random unknown fact of A and B being brothers.

2.4. Gordon’s Response to Garrett

Gordon replies to Garrett’s objection by narrowing the scope of method M. Gordon notes that Nozick can appeal to his insistence that method M be held constant across the counter-factual scenarios. If method M means that a can legitimately infer facts about the father of B from knowing facts about the father of A and knowing that A and B are brothers, then M is reliable; it only becomes unreliable if extended to the general, unrelated population. So can Nozick argue that the extended method is in fact no longer M? For Garrett, the question becomes “why is it a requirement of knowledge that one have good grounds for thinking one’s method reliable?”

Gordon holds that Nozick has in fact replaced the tripartite analysis of knowledge as justified true belief (“JTB”) with his four conditions. Nozick is not therefore committed to JTB, and “even if Garrett can show a case in which one can meet Nozick’s conditions while using an unreliable method, he won’t have arrived at a clear counterexample to Nozick”.

2.5. Garrett’s Rejoinder to Gordon

Garrett responds by insisting “it is no presupposition of my counterexamples that it is necessary for knowledge that one have good grounds for thinking one’s method reliable if it is reliable”. Garrett agrees that if his counterexample shows unjustified true beliefs that meet all of Nozick’s conditions, and if JTB is required for knowledge, then he has found cases where Nozick ascribes knowledge incorrectly. However, Garrett further claims that his counterexamples are valid against Nozick whether or not JTB is required. This seems strange, however.

Garrett seeks to draw an analogy with the standard Gettier cases, saying that it is possible to explain why his father-of-A case is a counterexample to Nozick by showing the presence of unjustified true belief, without insisting that justification is essential to knowledge. The idea seems to be that there is no entailment here. This seems true, but Garrett does not specify what alternative method he has for showing that Nozick has falsely ascribed knowledge. Alternatively, Garrett may be thinking of a negative condition: lack of justification is sufficient to disprove a knowledge claim, while the presence of justification is insufficient to prove one. This separation seems somewhat arbitrary, though. In summary, Gordon’s defense of Nozick appears successful.


References

  • R Nozick, Philosophical Explanations, Harvard University Press, 1981, (“PE”), p. 167 et seq.
  • J Dancy, Introduction to Contemporary Epistemology, Blackwell Publishing, 1985, p. 42
  • G Forbes, Nozick On Skepticism, The Philosophical Quarterly, Vol. 34, No. 134 (Jan., 1984), pp. 43-52, Blackwell Publishing
  • C Wright, Keeping Track of Nozick, Analysis, Vol. 43, No. 3 (Jun., 1983), pp. 134-140
  • B Garrett, Nozick on Knowledge, Analysis, Vol. 43, No. 4 (Oct., 1983), pp. 181-184
  • R Martin, Tracking Nozick’s Skeptic: A Better Method, Analysis, Vol. 43, No. 1 (Jan., 1983), pp. 28-33
  • D Gordon, Knowledge, Reliable Methods, and Nozick, Analysis, Vol. 44, No. 1 (Jan., 1984), pp. 30-33
  • B Garrett, Nozick and Knowledge: A Rejoinder, Analysis, Vol. 44, No. 4 (Oct., 1984), pp. 194-196