4.3.2  TRUTH AND NEUTRALITY IN EXPECTATIONS
People's behavior does not have to be goal-directed according to
the
norm of neutrality, but if it is
goal-directed the goal aimed at should ultimately be a neutral one.
It has quite commonly been pointed out that people's (or
'human') behavior is in actual fact 'mostly goal-directed'. This
kind of behavior is then described as "rational", because it
involves the choice of the best means available for attaining
the goal in question.
Correct
tho this
description may be, it does not follow that a person who does not choose
the best means available, for example, because this violates someone
else's
right to personhood, would
behave 'irrationally'.
Or, if one wants to call such behavior "irrational", it does not follow
that a person would always have to behave 'rationally'.
Yet, since we are in the context of this division primarily interested in
doctrinal considerations, we
should not only choose neutral-inclusive objectives, but also behave
rationally with respect to these objectives.
On this
teleological scheme the
means-end concept of rational behavior is, indeed, a useful one.
When a decision maker can predict the outcome of an action,
the situation is not problematic.
A person who thus acts under certainty should choose the right goal
(a neutral or
nanapolar one) and make sure that
'er prediction is a true one,
that is, corresponds with reality. But only under ideal circumstances can
a person be entirely sure about the state of affairs 'er action
will bring about. In actual fact a person (more) often (than
not) acts under risk and under uncertainty.
In the case of risk
'e knows at least the objective
probabilities of the possible outcomes; in the case of uncertainty
even these objective probabilities will not all be known to
'im.
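The difference may be made concrete with a small sketch in Python; the action, its outcomes and the figures are invented purely for the purpose of illustration. Under risk the expectation attached to an action can be calculated from the given probabilities; under uncertainty such a calculation is not available so long as no probabilities at all are supplied.

    # Invented example: one action with three possible monetary outcomes.
    outcomes = [50, 0, -20]

    def expected_value(values, probabilities):
        # The calculation presupposes that probabilities are actually given.
        if probabilities is None:
            raise ValueError("under uncertainty no objective probabilities are known")
        return sum(p * v for p, v in zip(probabilities, values))

    # Under risk the objective probabilities of the outcomes are known:
    print(expected_value(outcomes, [0.3, 0.5, 0.2]))   # 0.3*50 + 0.5*0 + 0.2*-20 = 11.0

    # Under uncertainty they are not, and the calculation breaks down:
    # expected_value(outcomes, None) would raise the error above.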
When objective probabilities are not known and not given, mere adherence
to the principle of
truth will not help.
The decision maker is then forced to work with 'subjective'
probabilities or expectations.
(Subjective is used here in the nonepistemic, doxastic sense, and
expectation in the standard sense,
not in the mathematical sense of the product of the probability
that an event will occur and the amount to be received if it
does occur.) If a decision maker has a goal in mind, and if not
all outcomes are known to 'im, 'e will have to make comparisons,
also intra- and inter-personal ones. Even abstaining from every
action presupposes that this would serve the goal in question at
least as well as any positive action. There is nothing dramatic
about this situation: everyone performs such mental operations
all the time, altho definitely not always to serve a neutral or
nanapolar end.
The function of the theory dealing with the problems and
principles of decision-making, decision theory, is not to tell
us what end we ought to choose. Its function is merely to
formulate decision rules telling us what to do given a certain
end. Hence, so far as these ends are concerned, decision theory
is neither neutralistic nor antineutralistic. It is therefore
all the more remarkable that one of the classical principles of
decision theory is a neutralistic one, namely the principle of
indifference, also labeled "the rule for choice under uncertainty"
or "the principle of insufficient reason". On this
principle one should assign equal probabilities to all
possibilities in a situation of complete ignorance, if one has to
employ probabilities at all. Altho the principle of indifference
does not prescribe that a person must employ 'subjective'
probabilities, the rational decision maker can usually not help
acting as if 'e does use them. This, at least, is what one
school of decision theory teaches. The theorists of this school
propose expected-utility maximization as decision rule under
uncertainty.
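A minimal sketch in Python, with assumed utilities, of how such a decision maker would proceed: knowing none of the probabilities, 'e assigns equal ones to all possible outcomes, in accordance with the principle of indifference, and then chooses the action with the highest expected utility.

    # Assumed utilities of the possible outcomes of two actions; the
    # objective probabilities of these outcomes are completely unknown.
    utilities = {
        "action A": [10, 0, -5],
        "action B": [4, 3, 2],
    }

    def expected_utility(outcome_utilities):
        # Principle of indifference: with no reason to favor any outcome,
        # each of the n possibilities receives the probability 1/n.
        p = 1.0 / len(outcome_utilities)
        return sum(p * u for u in outcome_utilities)

    best_action = max(utilities, key=lambda a: expected_utility(utilities[a]))
    print(best_action)
    # Expected utilities: action A about 1.67, action B 3.0, so B is chosen.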
Taken literally, the formulation of the end to be pursued in terms of
'maximum utility' is
extremist and inconsistent
with the neutralistic character of the principle of indifference.
On our terms, the decision rule concerned should be a rule of
expected neutralization, but this reformulation has little or
no impact on the mathematical enterprise itself.
It only underscores that those who accept the principle of
indifference in decision theory should also accept neutralization
as an ultimate
corrective value instead of
maximization or, for that matter, minimization.
Even when restricting themselves to means-end rationality and
even when accepting the same goal or goals, decision theorists
may still disagree about what a rational decision maker is
actually supposed to do. For the principle of indifference has
its competitors too.
And if this principle were wrong, it would in the end not benefit
neutrality to try
to achieve a neutral or nanapolar goal by assigning equal probabilities to
the possibilities in question (assuming that they are completely unknown
to the decision maker).
One alternative principle proposed is the maximin principle.
According to this principle every action or policy must be evaluated in
terms of the worst possibility which can occur by choosing this action or
policy, and the action or policy to be chosen is the one for which this
worst possibility is best: the minimum is maximized.
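A minimal sketch, again in Python and with assumed utilities, of how the maximin rule operates: every action is reduced to its worst possible outcome, and the action whose worst outcome is the best one is chosen, whatever the probabilities of the outcomes may be.

    # Assumed utilities; the probabilities of the outcomes play no role in
    # the maximin rule itself, which looks only at each action's worst case.
    utilities = {
        "pleasant option": [100, 90, -1000],   # with one very unlikely catastrophe
        "unpleasant option": [-10, -10, -10],  # safely mediocre throughout
    }

    def maximin_choice(options):
        # Pick the action for which the worst possible outcome is highest.
        return max(options, key=lambda action: min(options[action]))

    print(maximin_choice(utilities))
    # -> 'unpleasant option': its worst case (-10) beats the pleasant option's
    #    worst case (-1000), however improbable that catastrophe may be.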
It has been argued, however, that the maximin and similar principles often
suggest entirely unacceptable decisions in
practise and
'lead to highly irrational decisions in important cases'.
On the maximin principle a person would always have to choose
something unpleasant if choosing something pleasant could possibly
lead to the worst outcome, however unlikely it might be that
this ever happened. On a related principle in political philosophy
(the so-called 'difference principle') society has to give
absolute priority to the interests of the one worst-off individual,
even tho an alternative policy would be beneficial to no
matter how many people.
(It should be noted that the system to which the principle is made to
apply does not distinguish
extrinsic and
intrinsic rights and that
therefore the worst-off individual in this system may not even have 'er
extrinsic property at 'er disposal.) Where this principle does lead to
reasonable decisions, it is, as has been said, 'essentially equivalent
to the expected-utility maximization principle'.
When we now substitute neutralization for utility
(maximization), we have come full circle; that is, we are back
in the original position in which it is ultimately only neutrality and
neutralization that count.
Throughout nature and culture neutrality appears as symmetry,
and so it does in the probabilities of decision theory, where it is
called "symmetry in probability". In effect, symmetry
considerations require here that one attach exactly equal mathematical
probabilities to each of the possible outcomes (assuming that
one does not know that the probabilities are unequal).
of course, nothing else than the principle of indifference which
establishes this equality of probabilities. But while it is
admitted that this principle 'will continue to be a most fertile
idea in the theory of probability', it has also been criticized
for reasons other than those of the maximin type.
Some theorists do not accept the indifference principle as a
formal postulate, but believe that there is 'an element of
truth' in it — a rather odd and ambiguous position indeed.
One objection is not very serious. It is that the principle would
not be strictly applicable to a person who has had the relevant
experience. As the argument runs, one cannot expect a person to
maintain a symmetrical attitude toward a kind of situation (such
as when confronted with a piece of apparatus) with which 'e has
had long experience. Such a person would have to continue
believing against all odds that the possible outcomes of such a
situation were equally probable and independent from case to
case. Since the principle of indifference applies to situations
of 'complete uncertainty' and the principle of truth to situations
of 'complete certainty', there is a wide range of
situations between these two epistemological extremes. Situations
in which a person has had some relevant experience are
typically situations in which 'e is not completely uncertain
anymore (hopefully for the right reasons).
A relevantistic interpretation of the principle of indifference will
therefore bypass the objection altogether: probabilities should be taken
equal, unless the assumption that they are unequal can be justified.
Hence, the 'element of truth' in the principle of indifference
is that a symmetrical initial attitude towards probabilities
needs no justification.
(Note that the traditional belief that all religions would be equally
valid, which is called "indifferentism", can only be held by
those who confuse
ideologies or systems of
thought in general with religions, and who have never seriously reflected
on the attitude assumed in different systems of thought with respect to
truth and its interpretation, and with respect to neutrality and its
interpretation.
To be an indifferentist a person must be totally ignorant of the
completely anti-indifferentist content of the religions claimed to be
'equally valid'.)
It has been argued, too, that people do not in practise act
on the indifference principle, nor on the maximin principle or
some other general decision-theoretical rule. But objections of
this sort are rather weak. Firstly, even where only consequentialist
or teleological considerations are concerned, there is
no reason to suppose that people always act rationally — on the
contrary. Secondly, even if we assume that they always do act
rationally, it may not be clear what end or ends they have in
mind. Thus, according to the indifference principle, taking part
in a lottery is irrational if the participator's sole aim is to
win a prize. If there are many lots and few prizes, it is
unneutral to expect that one will win such a prize. Yet, if the
lottery is held in aid of a good cause, and if it is known that
the money one will probably lose goes to a cause one supports,
then one does behave rationally nonetheless. In such a case one
spends money on something that will, presumably, always serve a
good end, either a personal or a nonpersonal one.
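The point can be illustrated with a small calculation (the figures are invented): monetarily the expectation attached to buying a lot is negative, and expecting to win would therefore be an unneutral expectation; but when the amount one expects to lose is counted as a contribution to a cause one supports, buying the lot remains rational with respect to that end.

    # Invented figures for a charity lottery: 1000 lots, one prize of 100,
    # each lot costing 1 (monetary units are arbitrary).
    lots, prize, price = 1000, 100.0, 1.0

    p_win = 1.0 / lots                        # equal probability for every lot
    expected_return = p_win * prize           # 0.1
    expected_loss = price - expected_return   # 0.9, the amount one will probably lose

    print(expected_loss)
    # If this 0.9 goes to a cause one supports, the money still serves one's
    # end, and the purchase is rational with respect to that end.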
A more serious objection against the principle of
indifference is that it is 'not always obvious what the symmetry
of the information is'. There may be partitions of the domain in
question which many different people all consider uniform
partitions, but the partitioning may in other instances be
controversial. Nonetheless, where there is, perhaps, no agreement
on a single, 'correct' way of partitioning, people will
probably agree that a great number of partitions are not correct.
In all those cases the indifference principle is still operative
in that it makes it impossible to justify many unneutral
expectations. In the next section we shall take a look at an
example of the indifference principle's marked effectiveness
even when it is not immediately obvious what the equal
probabilities must be assigned to.