Value and Belief (thesis)

This is an updated version of my 2003 PhD thesis, as a series of PDFs. As the title suggests, it examines ways in which an individual’s values can affect the beliefs that they sign up to.

It looks at bias in terms of probability and decision theory rather than empirical psychology. I’m a huge fan of the experimental approach, but my own strengths lie more on the philosophical side. It also takes a broad rather than deep approach: as well as philosophy of science, I touch on philosophy of mind, (micro)economics, psychology and information theory. However, I don’t go very far into any of them: if I’d read more of the psychology literature, I would have seen that psychologists had already addressed some of these topics in a much more sophisticated way.

Scroll down to the one-page summary below, or if you’re really interested in this, download a chapter in PDF form.

Introduction

Part 1: The Bayesian Intentional Stance

This is the weakest section of the lot, and I no longer really believe it: move on to Part 2 unless you’re specifically interested in the logical foundations of probability and rational choice theory.
The Intentional Stance / Problems with a Deductive System of Intentional Explanation / Probability and Utility / Objections to Real-valued Measures / The Mortonian Agent / Some Empirical Research / The Dual Role of Consistency Requirements / Beliefs and Opinions / Interaction between Belief and Acceptance / Varieties of Inconsistency / The Normative and Descriptive Cox Proofs

Part 2: The Utility of Truth: Defining Epistemic Values

Informal Uses of the Distinction / Sociological Approaches / Analytical Approaches / Applying Decision Theory / The Utility Analysis / Hesse’s Rival Analysis / What does the distinction apply to? / Connection to Other Folk Psychology Concepts

Part 3: Extending the Model

Is Desire for H to be True a Value Bias? / The Value of Being Informative / Values and Interpretation / Absolute and Relative Rationality / The Delayed Value of Truth principle

Part 4: Rationality and Science

Concepts of Rationality / Making the Distinction in Practice / Constrained Assertion and the Character of Science / Other Demarcation Criteria / How Many Epistemic Utilities Are There? / Is Bias Essential to Science? The Value Neutrality Thesis

Part 5: Dynamics of Opinion

This bit’s rather weak because it’s so purely theoretical.
Persuasion / The Question of Rationality / Motivated Inference / Cognitive Dissonance

Part 6: Information-Gathering Behaviour

Value of Information as a Scientific and Philosophical Problem / Two Attempts / Measuring Information / The Standard Account of the Value of Evidence / Information Aversion

Conclusion and References

Summary

Part 2: The Utility of Truth

  • Epistemic values and value biases can be defined structurally, in terms of the values attaching to the (epistemically possible) consequences of actions.
  • In the very simplest case, the epistemic motivation is simply the value of true assertion. “Simplest case” here means a choice among a finite number of hypotheses whose truth is act-independent and where there are no considerations of verisimilitude (a toy rendering of this case follows the list).
  • Not every value has to be either an epistemic value or a bias. Specifically, desire for a proposition to be true (or to be false) counts as neither since, assuming act-independence, it has no direct effect on the choice of opinion.
  • In the determination of opinion, strong value bias overrides strong belief rather than vice versa.
  • Opinions can be epistemic at the level of a single reasoning individual, but value-biased when we consider the social context. Alternatively, an unbiased individual might accept the testimony of people whose opinions are value-biased.
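
As a minimal rendering of the simplest case above (my own notation, not lifted from the thesis): with mutually exclusive hypotheses H_1, …, H_n and cognitive acts A_i = “assert H_i”, a purely epistemic utility depends only on whether the asserted hypothesis is true, and a value bias enters as a content-dependent term:

```latex
% Purely epistemic utility: the payoff depends only on the truth of
% the assertion, not on which hypothesis is asserted.
u(A_i, H_j) =
  \begin{cases}
    t & \text{if } i = j \\
    f & \text{if } i \neq j
  \end{cases}
\qquad (t > f)

% Expected utility once a content-dependent bias term b_i is added:
EU(A_i) = t\,P(H_i) + f\,\bigl(1 - P(H_i)\bigr) + b_i
```

With every b_i = 0, the agent simply asserts the most probable hypothesis; a sufficiently large b_i makes A_i optimal whatever the probabilities, which is the sense in which strong value bias overrides strong belief.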

Part 3: Extending the Model

  • Another kind of non-epistemic motivation occurs when the probabilities of states are act-dependent, and significant utilities attach to the states themselves.
  • Apart from the act dependence just mentioned, there is no necessary connection between a state H being aversive and the assertion (or contemplation) of H being aversive; value bias is therefore not the same thing as wanting something to be true.
  • The simple decision-theoretic model introduced in Part 2 can be extended by adding a series of additional cognitive acts, involving increasingly specific levels of commitment or non-commitment. It is relatively straightforward to identify ideally epistemically motivated attitudes for the resulting decision tables, but to give a pattern of preference that is recognisably scientific, it is necessary to introduce a new cognitive utility: the utility of informative assertion (a sketch of this extension follows the list).
  • Someone who is motivated to match their opinions to a locally available indicator of truth may or may not be motivated to accept the truth, depending on whether that indicator is reliable. To form a judgement of whether someone is rational in this absolute sense, we have to supply our own estimation of that reliability.
  • Since epistemic rewards can be expected to be less immediate than biases, cases of insufficient epistemic motivation can be thought of as analogous to other cases where people go for immediate rather than deferred gratification. A consequence of this is that we should expect the balance between an individual’s epistemic motivations and biases to be affected by uncertainty about the future.
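
A hypothetical sketch of the extension described above; the act names, payoffs and bonus value are my own illustrative assumptions, not the thesis’s. It shows why a pure truth payoff is not enough: the non-committal act (asserting the tautology “H or not-H”) is guaranteed true, so without a utility of informative assertion it dominates, and committing to H can never be optimal.

```python
# Toy version of the Part 3 model: three cognitive acts of differing
# commitment. "suspend" asserts the tautology (H or not-H), which is
# always true but uninformative; the committal acts risk falsity but
# earn an informativeness bonus. All numbers are illustrative.

def expected_utilities(p_h, informativeness_bonus):
    """Expected utility of each cognitive act given P(H) = p_h."""
    u_true, u_false = 1.0, 0.0
    return {
        "assert H":     p_h * u_true + (1 - p_h) * u_false + informativeness_bonus,
        "assert not-H": (1 - p_h) * u_true + p_h * u_false + informativeness_bonus,
        "suspend":      u_true,  # the tautology is true in every state
    }

for bonus in (0.0, 0.3):
    print(f"informativeness bonus = {bonus}")
    for p in (0.5, 0.8, 0.95):
        eu = expected_utilities(p, bonus)
        best = max(eu, key=eu.get)
        print(f"  P(H) = {p}: best act = {best} (EU = {eu[best]:.2f})")
```

With the bonus at zero, “suspend” wins at every probability; with a positive bonus, the agent commits once P(H) is high enough. This matches the summary point above: a recognisably scientific pattern of preference needs the extra cognitive utility.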

Part 4: Rationality and Science

  • Scientific rationality can be defined in terms of the distinction between epistemic and non-epistemic motivation. The scientific rationality invoked here is not the same as the basic rationality assumed in intentional explanation. It is also a separate concept from that of instrumental rationality, although as defined here scientific rationality is just instrumental rationality with respect to epistemic goals.
  • As an argument that science should be so defined, we can consider the distinguishing properties that epistemically motivated agents would have, as opposed to the properties of non-epistemically motivated agents. We see that the resulting distinction is a good match for existing informal and philosophical distinctions between science and pseudo-science.
  • From a premise of implicit assertion (that the assertion of a proposition can involve the assertion of what is collectively seen to be a consequence) and a premise that assertion is constrained (i.e. that some combinations of assertions are not possible) it follows that a purely epistemically motivated agent can prefer to accept meaningless or even falsified statements. Therefore the fact that scientists accept general laws, purely theoretical statements and theories with known falsifying instances is not an argument against, and could even be an argument for, the claim that scientists are interested purely in informative truth.
  • Some suggested epistemic values can be reduced to the value of informative truth using arguments from inductive logic. However, these arguments depend on analysing the property of theories in question in terms of probability. This is a non-trivial task, and one may well disagree with a particular probabilistic analysis.
  • Some philosophers argue that science is not necessarily improved by being freed of value bias. However, the features that these arguments present as necessary for science (diversity of opinion and diversity of actively pursued inquiries) could still exist even in the absence of value bias.

Part 5: Dynamics of Opinion

  • In the context of the decision-theoretic model, there are many different kinds of change in attitude that could cause a change of opinion. Each of these can be thought of as a process by which someone might be persuaded of a particular opinion, although there is no reason to think that real-world processes divide up in exactly the way suggested by the model.
  • Among these processes, there is one that is clearly rational in the scientific sense and there are others which are clearly scientifically irrational. The clearest example of the latter is that of persuading someone that H by increasing the value bias that they attach to the opinion that H until it overrides whatever other values are operative in that choice of opinion.
  • For agents cooperating with others in a social environment, there can be expected to be a pressure to have opinions which are coherent over time and which are coherent with one’s actions. This pressure counts as a value bias because once a commitment has been made to an opinion, the motivation attaches to subsequent opinions purely in virtue of their content’s coherence or incoherence with that original opinion, not because of their epistemic merits.
  • These considerations point to two processes of inference acting in parallel. One of these works at the level of belief. This would include, for example, belief change that is directly due to perception. The other works at the level of value and action, and works by motivating opinions or actions as a defensive strategy, to make other opinions and actions seem reasonable.
  • Cognitive dissonance, a much-studied process in which opinions and behaviours change to maintain coherence with each other and with the subject’s self-image, shows that motivated inference is a real phenomenon.

Part 6: Information-Gathering Behaviour

  • Shannon’s proof uses consistency requirements to identify a mathematical measure of the information content of a proposition, just as the Cox proof mentioned in Part 1 uses consistency requirements to derive probability.
  • Bernardo showed that the ideally epistemically motivated Bernardo agent values tests of a proposition H exactly to the extent that they provide Shannonian information about H. Our ideally epistemically motivated agent with a finite number of cognitive acts provides a rough approximation to this utility function. One distinction between scientifically rational and scientifically irrational attitudes (about whether or not H) is that information about H has value for the former but not for the latter.
  • According to a well-established theorem of decision theory, information cannot have negative utility when the probabilities are act-independent (a numerical illustration follows the list).
  • In light of this theorem, the question arises of whether the human propensity to be averse to some information is explainable within descriptive Bayesianism (or indeed any intentional framework).
  • One possible way to include information aversion within descriptive Bayesianism is to invoke motivated inference; in particular, the motivated inference from receiving disconfirmatory information about H to the giving up of one’s opinion that H. If the opinion that H is motivated by a strong value bias, then the disconfirmatory information is aversive, in that the subject would greatly prefer not to receive it.
  • This provides a further distinction between scientifically rational and scientifically irrational attitudes: scientific irrationality is a necessary condition for information aversion, so a scientifically rational agent cannot be averse to information.
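
A numerical check of the theorem mentioned above, with toy numbers of my own (the test, its likelihoods and the payoffs are all assumptions). Because an expected-utility maximizer can always ignore the test result and act as she would have done anyway, the with-evidence value can never fall below the without-evidence value when probabilities are act-independent.

```python
# Value-of-information check: deciding after a free test is never worse
# than deciding now, for act-independent state probabilities.

prior = {"H": 0.3, "not-H": 0.7}
utility = {  # utility[act][state]
    "accept H": {"H": 1.0, "not-H": 0.0},
    "reject H": {"H": 0.0, "not-H": 1.0},
}
likelihood = {  # P(test result | state) for an assumed imperfect test
    "positive": {"H": 0.9, "not-H": 0.2},
    "negative": {"H": 0.1, "not-H": 0.8},
}

def best_eu(p):
    """Expected utility of the best act under distribution p over states."""
    return max(sum(p[s] * u[s] for s in p) for u in utility.values())

eu_without = best_eu(prior)  # decide now, without the test

eu_with = 0.0  # observe the test result first, then decide
for result, lik in likelihood.items():
    p_result = sum(lik[s] * prior[s] for s in prior)
    posterior = {s: lik[s] * prior[s] / p_result for s in prior}
    eu_with += p_result * best_eu(posterior)

print(f"EU without evidence: {eu_without:.3f}")  # 0.700
print(f"EU with evidence:    {eu_with:.3f}")     # 0.830
assert eu_with >= eu_without  # the theorem holds
```

Bernardo’s result, as summarised above, corresponds to the special case in which the utility function is the logarithmic score: there, the expected value of a test equals its expected Shannon information about H.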

Conclusion

  • “Truth-seeking” is not a vacuous label. Anyone can claim to be seeking the truth, even the most closed-minded defender of ancient dogma. Anyone could defend their epistemic practices as truth-seeking using the argument, “I defend whatever is written in [whatever venerated book] and I know the content of that book to be true [for whatever reason].” The structural definition of epistemic value undermines these arguments. It shows an essential difference between wanting true opinion (whatever the truth turns out to be) and wanting some other particular kind of opinion. It is not enough to be promoting or defending something which, you claim, happens to be true. The structural definition also shows us that even though we have a lot of leeway in what we count as a scientific attitude, we are not at liberty to call anything we like an epistemic value.
  • The criteria for theory choice do not have to be arbitrary or ad hoc. The more values, such as fecundity, explanatory power or inter-theoretic unification, that we add to a philosophical understanding of science, the more realistic our picture of scientific practice. On the other hand, the longer our list of characteristically scientific values, the more arbitrary and culture-bound it looks. In Part 4 we saw that epistemic values could be explained in terms of other epistemic values, perhaps even reducing them to the goal of informative truth. In this way we can explore the hidden order behind diverse criteria of theory choice. The chain of justification does not have to end with the blunt acceptance that a particular criterion of theory choice is scientific, because the normative significance of a supposed epistemic value can itself be investigated.
  • Some scientific disagreements are not epistemically significant. It was already shown in Maher’s and Good’s analyses (considered under “Analytical Approaches” in Part 2) that there is a spectrum of ideal scientific attitudes to an inquiry. Consider two scientists, one who accepts H as true and another who says there is not enough evidence to decide between H and its alternatives. It might be that they disagree on the fair odds for H given the evidence, but another possibility is that they give different weightings to informative versus cautious assertion. The exact nature of the disagreement would be revealed if the scientists spelled out their attitudes more precisely. This thesis has given further examples of how differences in value can lead to different patterns of theory acceptance. Two scientists may rationally accept incompatible theories even when they are motivated purely towards truth and evaluate the probabilities of propositions in the same way, provided they are interested in the truth on different matters. The fact that scientists working on the same problem in the same culture still disagree after a long period of collecting and sharing evidence might be used as an argument against the objective nature of science. Similarly, the fact that scientists in a particular discipline all agree might be used as an argument that science leads eventually to an objective consensus. Both these arguments are undermined by considerations of value, because disagreement might not reflect different evaluations of evidential strength, and agreement might reflect non-epistemic attachment to a “party line”.
  • Bias at the individual level need not translate into bias at the group level. In deciding how much trust we should give to a piece of testimony, we need to consider (amongst other things) the incentives operating on individual opinions. Somebody might be (relatively) epistemically motivated about H, but might form opinions about H in a totally misguided way. Someone else might be knowledgeable and epistemically motivated about H, but their opinions might be reported to us by someone with a non-epistemic bias. A consequence of this is that if we arrange it so that people’s opinions come to us via the right kind of filters and channels, then their individual value biases will not matter so much. Academic peer review is one such filter. A court of law is another. A society in which advocates of one opinion do not have the power to silence contrary opinion is another.