Two features of aesthetic judgement are insisted on by Hume (1965) and Kant (1987).
The first is that aesthetic judgement rests, fundamentally, on a felt response to an
object (Kant 1987: §1). Aesthetic judgement on an object cannot be passed second-
hand: it requires personal acquaintance. The testimony of others may make me
confident that I will find a newly published novel rewarding; but I cannot declare it
to be so until I have read it myself. The condition of felt response is reflected in the
fact that an aesthetic judgement, like an avowal of emotion, can be made sincerely or
insincerely.
The second feature, closely connected to the first, is that rules or principles play no
role, or only a highly diminished role, in aesthetic judgement (ibid.: §8). It cannot be
inferred that a piece of music is rapturous simply because it is in a particular key and
orchestrated for certain instruments; or that a painting is dynamic because its compo-
sition has a certain geometric form. No laws connect the aesthetic qualities of an object
with its sensory, non-aesthetic properties.
The point is not just that we are ignorant of rules for aesthetic judgement, or unable
to agree on them, but that there would be little point in trying to formulate a set of
rules: we could not use rules to produce in ourselves felt responses to objects; we
could not, in cases of disagreement, reasonably ask another person to relinquish his or
her judgement on the grounds that it conflicted with the rules; and whenever our
own responses departed from the rules, we would quite rightly repudiate the rules
rather than our responses. The fact that rules are otiose in determining the aesthetic
qualities and value of objects reflects the fact that aesthetic interest is directed essen-
tially towards particular objects, appreciated for their own sake, and not towards the
formulation of general truths. To judge according to rules is to miss the uniqueness of
objects, and so to fail to judge aesthetically. The place of rules, in guiding aesthetic
judgement, is taken by particular, exemplary aesthetic objects; ‘established models’, as
Hume calls them.
Hume and Kant agree, further, in rejecting the view (of earlier, classical and ratio-
nalist aesthetics) that aesthetic qualities are objective. Aesthetic objectivism maintains
that aesthetic qualities are properties inhering in objects, and that aesthetic experience
gives us knowledge of these properties. These properties may be identified with the
object’s formal properties, such as ‘ideal proportion’; or they may be regarded as sui
generis and irreducible to formal properties (Moore 1984: §§112–21; McDowell, in
Schaper 1983). (These varieties of aesthetic objectivism bear comparison with natu-
ralism and intuitionism in ethics, respectively.)
Aesthetic subjectivism denies that aesthetic qualities inhere in objects, and maintains
that what it is for an object to be beautiful is for it to yield a certain response in the
subject. In aesthetic experience I am affected by the object, and my response does not
consist in knowledge of its properties. The object’s formal and other non-aesthetic prop-
erties are merely what occasions the response. Aesthetic subjectivism, we will see,
comes in various degrees of sophistication (which again may be compared with differ-
ent forms of subjectivism in ethics).
Several very powerful considerations militate in favour of aesthetic subjectivism.
Firstly, as Hume observes, and as we all recognize, the tastes and verdicts of different
individuals and cultures vary enormously. (Lest this be doubted: Johnson said of Shake-
speare that he has ‘faults sufficient to obscure and overwhelm any other merit’; Voltaire
compared Shakespeare’s works to a ‘dunghill’.) If aesthetic qualities are really out
there, inhering in the object, why do we not all pick up on them? The point is not just
that divergence is a fact of aesthetic life, but that we lack agreed methods for bringing
divergent judgements into line. As was said earlier, rules cannot be appealed to. And
aesthetic disagreement is unlike disagreement about colour, where conditions of phys-
iological normality and standard lighting can be checked independently. Aesthetic
qualities are deeply elusive: the ‘positioning’ of the observer needed to discern aesthetic
qualities involves a plethora of temperamental and culturally parochial factors, all
strongly indicative of subjectivity.
The second point was stressed by Kant (1987: §1). Pleasure is essential to the
experience which grounds an aesthetic judgement. Pleasure, like pain, is, however, a
mental state that does not represent a property of its object; as Kant says, pleasure
‘designates nothing whatsoever in the object’. The necessary connection of aesthetic
judgement with pleasure is readily explained by the subjectivist, but it presents aesthetic
objectivism with an enormous difficulty: how can there be properties inherent in
objects which necessarily generate pleasure simply through being apprehended – if
not on the theological supposition that the world has been metaphysically tailored to
delight us? Such properties would be metaphysically queer in the extreme. (This
objection to aesthetic objectivism may be compared with Hume’s objection to ethical
objectivism, that it fails to capture the necessary connection of moral judgement with
the will.)
This objection can be amplified. Objectivism, because it holds that aesthetic qualities
inhere in objects, necessarily introduces a distinction between reality and appearance,
between how things are and how things seem. Objectivism therefore implies that, even
when an aesthetic judgement has withstood all of the toughest tests, we can still be
mistaken about it: it is still possible that the object lacks the aesthetic quality attributed
to it and is really aesthetically worthless. But this seems mad. It is simply not intelligi-
ble that our greatest art could ‘really’, appearances to the contrary, be worthless. Suc-
cessful ‘appearances’ of aesthetic quality are as good as – indeed they are the same as!
– the real thing: for aesthetic reality precisely consists in appearances.
Put another way, the usual motivation for objectivism in philosophy – namely,
showing that our experiences and theories put us in touch with the world as it really
is, independently of our means of representing it – has no relevance to the context of
aesthetic experience, which has instead a strong similarity to the context of personal
relationships, in which rewards other than the gaining of knowledge are sought
primarily. The cognitive demand that mind should ‘fit the world’ does not apply to
aesthetic interest, which is in this respect closer to desire than belief – what we want is
for the world to fit us.
If, however, aesthetic subjectivism is common-sensical and philosophically plausible,
it appears to carry a highly uncommon-sensical and unwelcome implication. If aes-
thetic qualities do not inhere in objects, what is to prevent the same object being judged
beautiful by one person and ugly by another – both being equally justified in their pro-
nouncements? If, as Hume puts it, ‘All sentiment is right’, there is no such thing as right-
ness or wrongness in aesthetic matters. In that case, an aesthetic judgement merely
reports the speaker’s mental state, and aesthetic preferences are on a par with gusta-
tory preferences. It follows that aesthetic judgements cannot be contested or supported
with reasons. This position, like emotivism in ethics, to which it corresponds, is incon-
sistent with common sense (Kant 1987: §56).
Thus the problem of taste: of how to maintain the subjective felt basis of aesthetic
judgement, without collapsing into unrestricted RELATIVISM (pp. 395–7). Hume and
Kant respond to the problem of taste in different ways.
Hume contends that a ‘standard of taste’ – a basis for accepting some judgements
of taste as correct and rejecting others as incorrect – is both philosophically defensible
and visibly at work in the way that we actually conduct our aesthetic affairs. The stan-
dard of taste lies not in the object but in the sensibility of the subject. Aesthetic sensi-
bility varies in its ‘delicacy’: ‘When the organs are so fine as to allow nothing to escape
them, and at the same time so exact as to perceive every ingredient in the composition,
this we call delicacy of taste’ (Hume 1965: 11). A correct aesthetic judgement is one
that issues from a delicate sensibility operating under ideal conditions, which include
the cultivation of taste through practice, the ability to make relevant comparisons, and
freedom from prejudice. We defer to the judgement of
those whose sensibilities
we acknowledge to be superior, that of the ‘critics’. (The concept of an IDEAL OBSERVER
(p. 470) plays an analogous role in Hume’s moral theory.)
It follows that for Hume, aesthetic judgements do not identify aesthetic qualities
inhering in objects, but nor do they simply report the subject’s experiences. An object’s
possession of an aesthetic quality consists in its being ‘fitted’ to generate a certain
response in us. This distances Hume from the kind of unsophisticated subjectivism
which ends up in relativism.
Hume’s solution to the problem of taste depends on contingent facts of nature.
Firstly, the fact that some ‘particular forms or qualities [in objects] are calculated to
please, and others to displease’. Secondly, the contingent uniformity of human sensi-
bility, the sameness in ‘the original structure of the internal fabric’ of our minds – by
which is meant, not that we are all equally competent aesthetic judges, but that our
sensibilities are all of a kind. Where sensibilities differ, the differences are not arbitrary:
aesthetic judgements diverge because of differences in delicacy, and other determinable
deficiencies of taste; if the sensibilities of two subjects are equally delicate and their
conditions of judgement otherwise the same, their judgements will coincide.
Kant and the Normative Aspect of Aesthetics
Kant rejects the kind of account proposed by Hume. Recall that the problem of taste is
one of justification. The question is, on what grounds can we regard certain aesthetic
judgements as being correct? For Kant, there is nothing in Hume’s position which
accounts for this, the normative aspect of aesthetic judgement. Hume does not validate
the claim, implicit in all aesthetic judgement, that one’s response is appropriate to the
object, and that others ought to concur. Kant interprets this demand for agreement
strongly, as applying to all other people without exception. On Kant’s view, Hume offers
only a causal, psychological explanation of why we do, as a matter of fact, defer to the
judgements of ‘critics’ – he does not say why we ought to do so.
Kant’s solution to the normative problem is, in essence, very simple, although his pre-
sentation of it is extremely intricate. Suppose that, in making an aesthetic judgement,
we abstract from everything that might pertain to our contingent, natural, individually
variable constitutions, and base our judgements solely on conditions that are strictly uni-
versal, in the sense of being available and common to all human beings. Suppose, that
is, we base our aesthetic judgement on the bare form of the object and its interaction
with our basic, universal mental powers of perception and understanding. (Kant’s rea-
soning here recapitulates his analysis of moral judgement in terms of the categorical
imperative.) Kant argues that this ‘universal’ standpoint can be achieved through freeing
our awareness of the object from desire and practical concern (what he calls ‘disinter-
estedness’), and from our conceptual understanding of it. When these stringent condi-
tions are met, the judgement we make is valid for everyone. We then have what Kant
calls a ‘pure judgement of taste’, which has ‘universal validity’. Kant argues that it is
indeed possible for the mere form of an object to delight us: certain perceptual forms
stimulate our mental faculties optimally, by engendering in us a ‘harmonious free play
of imagination and understanding’, awareness of which is pleasurable. Kant also gives
a metaphysical interpretation of aesthetic experience: it makes us conscious of a
connection that we have with the world, and with one another, which lies beyond the
empirical world. Beauty therefore has a semi-religious and moral significance for Kant
(he calls beauty the ‘symbol of the good’).
Even if we grant that Kant’s theory of mental harmony accounts for the experience of
beauty, a cost attaches to Kant’s solution to the problem of taste. Purifying aesthetic
judgement in the way that Kant requires leaves us with an austere, if not impoverished
view of aesthetic experience – as consisting of nothing but an estimation of form. This
limitation shows up in Kant’s inability to do justice to the psychologically rich diversity
of experience which art offers.
We are left, then, with a choice: between accepting Hume’s account, without (Kant
argues) any way of securing the normative aspect of aesthetic judgement; and accept-
ing Kant’s account, at the price of excluding from aesthetic experience our cultural
identities, and everything in our psychological constitutions that is not strictly univer-
sal (which, the Humean will argue, leaves all too little). The task of providing a single
account which incorporates and combines the insights of Hume and Kant remains.
A number of other issues remain unresolved within the framework of aesthetic
subjectivism, including the formulation of the doctrine itself. Granted that aesthetic
qualities are subjective, quite how subjective are they? Are they, for instance, on a par
with colours and other secondary qualities of objects? Some theories maintain that they
are. Others deny that aesthetic qualities are on a par with colours and compare them
with ‘looks’ or ‘aspects’ of objects, or describe them as ‘powers’ to produce experiences.
Another, more radical, possibility is that aesthetic judgements are not descriptive at all.
WITTGENSTEIN (chapter 39) (1978) suggests that aesthetic judgements are more like
gestures and exclamations. On this view, the function of an aesthetic judgement is not
to say anything about an object, but to ‘put across’ an experience. (These options are
explored in Sibley 1959, 1965; Sibley and Tanner 1968; Wollheim 1980: essay 6;
Hungerland, in Osborne 1972; Meager 1970; and Scruton 1974.)
It is also necessary, as both Hume and Kant saw, to clarify the nature and role of
reasons and criticism in aesthetic contexts. That aesthetic judgements can be justified
distinguishes them from mere likings, and makes them, in a broad sense, rational. It is
the job of the critic to identify the sources of a work’s aesthetic qualities and determine
what responses are appropriate (a further role for criticism, to be discussed later, is inter-
pretation). But aesthetic reason-giving differs fundamentally from reason-giving in
other contexts and has a number of peculiarities, which make it hard to understand
(Beardsley 1982: ch. 12; Scruton 1979: ch. 5). It does not consist in inferring one
proposition from another; it makes no use of rules, unlike moral reason-giving; and it
makes no use of induction or generalizations, unlike reason-giving in science. Even the
notion of consistency has little application to aesthetic judgements: liking Wagner does
not mean that one ought to dislike Mozart. The problem, in sum, is that aesthetic
reasons operate independently from all of the conditions that appear integral to the
very concept of a reason.
Two further unresolved issues may be pointed out in conclusion. The first,
bequeathed by Kant’s notion of disinterestedness, concerns the importance of the
concept of an ‘aesthetic attitude’. Some theorists have claimed that aesthetic experi-
ence consists in the adoption of a special attitude in which objects are attended to dis-
passionately and ‘for their own sake’. Indeed, Schopenhauer identifies the point of art
with the metaphysical liberation from life that aesthetic contemplation, which does not
involve the will, brings in its wake (Schopenhauer 1969, 1: third book). The distinctive
phenomenology of aesthetic experience makes this idea alluring, but what can be said
about the aesthetic attitude, other than that it differs from and excludes practical and
cognitive attitudes? Saying that an object is enjoyed ‘for its own sake’ really means that
it is ‘not enjoyed for any ulterior end’. Because descriptions of the aesthetic attitude
tend to remain fundamentally negative, it is unclear what value the concept has,
stripped of the metaphysical interpretation that makes it significant for Schopenhauer
(see, however, Beardsley 1982: ch. 16).
A second issue concerns the concept of beauty, which has undergone a dramatic
reversal of fortunes in the history of aesthetics. Classical aesthetics takes it for granted
that beauty is the only, or at least the fundamental, aesthetic quality. Some modern
writers propose by contrast that ‘beautiful’ is merely a catch-all term, roughly equiva-
lent to ‘aesthetically commendable’, and that there is, as ordinary language implies, a
limitless plurality of aesthetic qualities, encompassing elegance, grace, poignancy and
so on. There is, however, a philosophical issue here which needs to be addressed. The
idea that beauty is not just one aesthetic quality among many, but is somehow pre-
eminent and accounts for the unity of aesthetic experience, is compelling, and anyone
who denies this in favour of the ‘pluralist’ view is challenged to find an alternative
account of the unity of the aesthetic. A recent defence of the concept of beauty is in
Mothersill (1984).
2 The Essence of Art
HEGEL (chapter 33) asserts that art ‘pervades what is sensuous with mind’, and that art
accordingly has ‘a higher rank than anything produced by nature, which has not
sustained this passage through the mind’ (Hegel 1993: 15, 34). Whether or not Hegel’s
view of the superiority of art is justified, the philosophy of art, to which we now turn,
shares Hegel’s conception of art as a synthesis of mind with something other than itself,
in which the mind ‘recognizes itself’; and his rejection of the view that a work of art is
simply an artefact possessing aesthetic qualities of a kind also found in nature. Theo-
ries of art attempt to elucidate the nature of the connection with the mind which makes
an object a work of art. Such attempts assume that the concept of art picks out more
than a merely nominal, or historically accidental phenomenon – that art has, in short,
an essence.
2.1 Definitions of art
Attempts have been made to capture the essence of art in a simple definitional formula.
Two representative examples are the definition of art as the imitation of beautiful
nature, and the definition of art as the communication of feeling.
If we consider such definitions, it is clear that they are wide open to counter-examples.
Not every work of art is an imitation of beautiful nature, or communicates feeling: novels
are rarely if ever beautiful, instrumental music is not imitative and much visual art
does not communicate feeling. And even if these formulae provided necessary conditions
for art, they could hardly provide sufficient conditions: some industrial machinery is
beautiful, waxworks are imitative, racist propaganda communicates feelings.
One response to the failure of definitions of this simple kind is scepticism. It has been
claimed that the difficulty of defining art is a reflection of there not being an essence of
art. Proponents of this view (discussed in Tilghman 1984) have often appealed to
Wittgenstein’s notion that family resemblances between instances of a concept may be
all that there is to be found.
A recent, much-discussed attempt to define art – which is built on the Wittgenstein-
ian denial that art has an essence, but maintains that a definition is nevertheless possible
– is the institutional theory. The institutional theory accepts that works of art cannot
be defined in terms of their intrinsic, perceptible properties, and turns instead to their
extrinsic, social properties. The theory says that something is a work of art if a member
of the artworld has conferred on it the status of being a candidate for appreciation, or if
it has been created in order to be presented to the artworld (Dickie 1984). The artworld
is a nebulous entity that includes artists, critics and some portion of the general public.
‘Work of art’ is, on this account, an ‘honorific’ term: being a work of art is a non-per-
ceptible property which depends upon an object’s place in a cultural context (analogous
to being ‘in public ownership’ or ‘sacred’). This approach receives an additional stimulus
from the way in which some avant-garde art of this century – paradigmatically,
Duchamp’s ready-made Fountain, a mass-produced urinal entered for exhibition unal-
tered by the artist – appears to have repudiated all traditional criteria for art.
There is, however, something missing at the heart of the institutional theory. It tells
us that members of the artworld bestow the status of art on some objects rather than
others; but not why they do so. It thus leaves out of account the reasons for calling
something a work of art – the conditions taken to justify application of the concept.
This is something that ought to figure in a philosophical, as opposed to a sociological,
treatment of the concept of art (Wollheim 1980: essay 1).
Proponents of the institutional theory seek to deflect this criticism by saying that
they are concerned with the ‘classificatory’ sense of art, not the ‘evaluative’ sense in
which to call something a work of art is to recommend it for appreciation. But this
invites the further objection that it is simply a mistake to separate the classificatory
sense of art from the evaluative. Evaluation is just as integral to the concept of art as
it is to moral concepts. We do not first classify objects as art, and then discover that they
happen to be aesthetically rewarding: conceptually, there is only one move here.
What moral is to be drawn from these attempts to define art? It cannot be ruled out
that a complex definition, perhaps combining the concepts of imitation and communi-
cation in a complex way and adding further conditions, could be devised to encompass
all and only those objects that we consider to be works of art, and at the same time
respect the evaluative nature of the concept of art. But what the difficulty of reading
off an adequate definition of art from the surface of our ordinary conception of art may
be taken to show, more importantly, is that no definition of art can be expected, and in
any case cannot hold much interest, in advance of a theory of art. A theory of art does
not require there to be a simple, manifest property shared by each and every work of
art; it allows that what unifies art – its essence – may not be visible at its surface, and
probes beneath the surface to locate it.
A theory of art may take one of two forms. It may aim directly at developing a single
concept capturing the essence of art. Or it may proceed by building up from an exam-
ination of the various specific dimensions of works of art. The first, more traditional
approach is explored below; the second, more modest approach favoured in contempo-
rary aesthetics, is explored in the following section.
2.2 Theories of art
Of the numerous and enormously varied theories of art that have appeared in the
history of aesthetics, three may be singled out as having greatest importance. These
are the mimetic, formalist and expression theories.
Even if the concept of imitation, as ordinarily understood, does not straightforwardly
define art, it may be supposed that the concept can be developed beyond its ordinary
scope. On this claim rests the theory that art consists in mimesis – a Greek term ren-
dered approximately by ‘imitation’, ‘copying’ or ‘representation’ (see Plato 1955;
Aristotle 1987; and chapter 23 of this volume). The object of mimesis is usually iden-
tified with nature, inclusive of human nature. Although, as observed earlier, some art,
such as instrumental music, appears to be non-representational, it is open to the
mimetic theorist to contend otherwise: in antiquity it was thought that music imitates
the order and harmony of the cosmos and the soul. The mimetic theory construes the
connection of art with the mind asserted by Hegel in the following way: works of art
carry over the mind’s fundamental function of representing the world.
Granted that the concept of representation is open to being extended, the mimetic
theory is open to an objection (Hegel 1993: LXI–LXVII). Even if we accept that
it is natural to enjoy imitation and the skill exhibited therein, as Aristotle observes, why
should we find imitation valuable in the special way that art is found valuable? The values
connected conceptually with representation are cognitive, truth-orientated values such
as accuracy and comprehensiveness. These have some role in art – verisimilitude of plot
and character in literature for instance – and certainly truth is something we value. But
truth does not capture the real interest of art. A work of art must come into its own in
the field of our experience, and not disappear from our attention in the manner of a
transparent window on to the world or vehicle for communicating truths about it. The
mimetic theory thus appears to misconstrue the value of art.
The mimeticist’s conception of representation accordingly needs to be tightened up.
Representation which is artistic must be circumscribed in subject and mode: it must be
of particular kinds of things, represented in a particular way. It is therefore no accident
that mimetic theorists have characteristically gone on to claim that art must idealize its
subject matter. But once the mimetic theory has undergone this modification, the
essence of art has been shifted away from bare representation, in the direction of
whatever it is about art that enables it to idealize its subjects.
A strong candidate for this role is artistic form. Where the mimetic theory ties art
down to the real world, formalism allows the work of art to float free, insisting that only
form, the complex arrangement of parts unique to each individual work, has artistic
significance. Only what is internal to the work is relevant to its status as art; any
outward references, to a real or imaginary world, are irrelevant. The concept of form
can be made more or less narrow; at its narrowest, only sensory parts and their
relations are intended. Formalists differ over the continuity of form in art with form in
nature: Kant holds that artistic form must be recognizably of a kind with natural form;
Bell (1914: ch. 1), the boldest recent exponent of formalism, holds that artistic form,
labelled ‘significant form’, is in effect exclusive to art. Appreciation of art consists, for
the formalist, not in merely recognizing form, but in responding to it: Kant’s theory of
mental harmony has already been mentioned; Bell posits a unique kind of ‘artistic
emotion’ accompanying the recognition of significant form. According to formalism,
what the mind recognizes of itself in works of art is its power of locating order in
perception, exercised to a specially heightened degree.
The weakness of formalism most frequently indicated by its critics concerns the
concept of form, which is argued to be too indefinite to play the role that is asked of it.
Asked to specify the kind of form that matters in art, the formalist uses notions such as
‘balance’ or ‘uniformity amidst variety’; but it proves impossible to define these in a way
that prevents them from applying to any object whatsoever. The formalist is conse-
quently pushed to declare that artistic form is indefinable. Bell embraces this claim, but
his theory is then charged with vacuousness: he explains significant form in terms of
the ‘artistic emotion’ that it induces, but then refers us back to ‘significant form’ for our
understanding of artistic emotion.
Whatever the force of this point against Bell, there is another objection to be pressed.
Is our interest in form really as uncontaminated with ulterior, worldly concerns as for-
malists suppose? Formal values may sometimes be of self-sufficient aesthetic interest, but
much more commonly they serve non-formal ends: form is the vehicle through which a
work articulates its non-formal meaning, apart from which form tends to become artis-
tically uninteresting and merely decorative. In other terms, the attempt to disentangle
form from content leaves, on the side of form, an insufficiently significant residue.
This objection to formalism may be pressed on behalf of the mimetic theory; but it
also points in the direction of the expression theory. The connection of art with feeling,
we said earlier, cannot consist in a straightforward equation of art with the communi-
cation of feeling. The expression theory of art seeks to offer a more sophisticated and
persuasive account of the central place of emotion in art.
The rudiments of the expression theory lie in writings of the Romantics, but it was
first formulated philosophically by Croce (1992), who bound it up with the tenets of
German idealism. The more accessible version of the theory presented by Collingwood
(1937) detaches it from METAPHYSICS (chapter 2), and understands artistic expression
as a special form of self-expression. Artistic expression is a process in which the artist
begins with an indefinite and inchoate emotional state, for which he or she labours to
find a uniquely apt concrete articulation, and in so doing transforms his or her mental
state into something definite, tangible and intelligible. What the artist creates does not
describe his or her state of mind, so much as incorporate it; analogously to the way in
which bodily expressions such as smiles and grimaces embody mental life.
Since the product of expression cannot be known before the process is complete,
expression cannot consist in exercising a technique: it must take whatever particular
shape is commanded by the particular emotion which the artist’s mind is impelled to
clarify. Because expression is not undertaken with any further end in view, artistic cre-
ation contrasts with instrumental activities, in which means and ends are distinct;
these Collingwood calls ‘craft’, and opposes to ‘art proper’. Mistaken, ‘technical’ con-
ceptions of art – which include the conception of art as mimesis or communication of
feeling – arise from a failure to grasp this distinction.
On the side of artistic appreciation, the theory claims that the audience retraces in
the course of appreciating a work of art the route pursued by the artist: their apprecia-
tion re-enacts the artist’s creative process and thereby retrieves his or her psychologi-
cal state (Elliott, in Osborne 1972). The relation of the audience to the work of art thus
mirrors the artist’s understanding of his or her own mind. The capacity of a work of
art to transmit the artist’s psychological state is a necessary consequence of successful
expression. The expression theory, by giving primacy to the perspective of the artist
rather than (as on the mimetic and formalist theories) that of the audience, offers a
very strong interpretation of Hegel’s claim that the mind ‘recognizes itself’ in works of
art: works of art do not merely exhibit mental features, they, as it were, contain mind.
The expression theory’s emphasis on the psychology of the artist exposes it to criti-
cism. The mimeticist will object that not only emotions, but also ideas, which refer to
the world, are expressed by art, whose legitimate subject is not restricted to the artist’s
own mind. At the other extreme, the formalist will challenge the expression theory to
say how a supremely self-contained and self-sustaining work, such as a Ming vase, can
be construed as a product of personal expression.
The first objection can be met by explaining that what the theory means by emotion
is any mental state, however permeated with concepts and thought, that has some emo-
tional charge. The condition that emotion be present is plausible, for most people would
agree that mere beliefs without any psychological resonance do not provide sufficient
material for art. The scope of the thoughts embedded in the artist’s emotion is fur-
thermore left open, which means that the artist’s moral and spiritual view of the world
and conception of life are proper material for expression, and that what a work of art
expresses need not include a personal, biographical reference to the artist.
The formalist’s objection cannot be met by expanding the terms of the theory, and
obliges the expression theorist to adopt a semi-stipulative measure. The theory must
declare that objects possessing only formal virtues do not qualify as full-blooded
instances of works of art. This move is obviously acceptable only on the assumption
that art’s centre of gravity lies in the psychology of the artist; it will be rejected by
the formalist as begging the question, and as betraying the theory’s fundamentally
parochial, pro-Romantic bias.
Of the three theories considered, it is fair to say that – if a monopoly is to be granted
– then the expression theory has strengths that give it the edge over the mimetic theory
and formalism. These strengths also account for the theory’s continued hold on our
thinking about art. It may also be pointed out in its favour, that the expression theory
allows the understanding of art to draw on the resources of psychological theory
(Wollheim 1987); an ultimate verdict on the theory will depend in part on what value
is found in such developments.
The Semiotic Theory of Art
A fourth theory, which is not traditional but dominates much contemporary discourse
about art, should be mentioned in conclusion. This is the semiotic theory of art, which
proposes that works and forms of art be analysed in terms of logico-linguistic categories
such as signification, reference, denotation, and syntactic and semantic rules. Goodman
(1976), the principal exponent of the semiotic theory, analyses representation and
expression in such terms. The semiotic theory of art is also assumed by structuralist and
poststructuralist literary theory, which grafts linguistic science and philosophical
theories of language onto the study of literature. On the semiotic theory, art is a symbol
system much like language, and not just in the metaphorical sense in which an expres-
sion theorist might grant that art is a ‘language of emotion’.
What the worth of the semiotic theory turns on can be indicated with reasonable
assurance. The theory derives its motivation from, firstly, a rejection of the primacy of
psychological and experiential concepts (these, it claims, are not autonomous and need
to be understood in terms of language and signification); and, secondly, a belief in the
radically conventional nature of art (which it grounds on a blanket rejection of philo-
sophical realism). Both strands are explicit in the work of Goodman, and in structural-
ism and poststructuralism. It follows that the semiotic theory’s austere perspective on art | Blackwell |
must be rejected by anyone who wishes to maintain that art is essentially connected with
certain forms of experience; as it may also be, on the supposition that human psychol-
ogy sets universal, non-conventional parameters to art. This last idea provides the basis
for a fifth theory of art, the naturalistic, which will be described in section 4 below.
3 The Dimensions of Art
Recent work in analytical philosophy of art has adopted the second, more piecemeal
and empirical, approach described earlier to the construction of a theory of art. Rather
than aim directly at a general, overarching characterization of art, it has concentrated
on the specific dimensions which constitute works of art, in the belief that an accurate
picture of art as a whole will emerge out of a proper understanding of its parts.
Representation, expression and meaning have been closely analysed, in terms of the
specific forms that they take in each of the arts. This section traces the debates sur-
rounding these concepts in terms of the form of art most strongly associated with each:
representation in painting, expression in music, and meaning in literature.
(Issues concerning the ontology of works of art, the nature of fiction, emotional
response to fiction, and metaphor, also belong in this context, and would be included
in a fuller account of analytic philosophy of art.)
3.1 Representation
When we look at Titian’s The Rape of Europa we do not see, or do not just see, pigment
distributed across the two-dimensional surface of a canvas: we see a woman borne
across the sea aloft a white bull. Paintings represent or depict things, both real and
imaginary.
We are concerned here with what paintings represent in a visual, as opposed
to interpretative, sense. A painting of a man clutching a stone may represent St
Jerome, and a lute with a broken string in a still-life may represent Discord. In order to
make such iconographic identifications, however, a human figure and a musical
instrument must first be seen in the picture, and this is a perceptual matter; it is
something that art historical scholarship could not do for us. Visual representation is
something we take for granted. When we cease to do so, a philosophical question arises:
what makes pictorial representation possible? The mimetic theory of art, it should be
noted, does not answer this question: it merely assumes that pictorial representation is
possible.
The philosophical puzzle surrounding pictorial representation comes into focus
when one reflects on the difficulties associated with the common-sensical view of how
pictures work. Common sense says that pictorial representation consists in resemblance:
a painting represents X by looking like X. This may at first seem to be not just
self-evident, but the only possible answer. This common-sense view of pictorial repre-
sentation is reflected in the idea that deception provides the test for successful repre-
sentation – as illustrated by the story of Zeuxis’s painting of vines, which, according to
Pliny, deceived the birds into flying down to consume the grapes he had depicted.
It is, however, simply and plainly false that canvases marked with pigments resem-
ble the things that they represent. The Rape of Europa does not even begin to share
the physical dimensions of the scene that it represents! In fact, each canvas resembles
nothing so much as other canvases – but it does not, of course, represent them.
This objection may seem too blunt, for it may seem to have missed the obvious point
that for a canvas to represent something, it must be seen, not as a mere physical object,
but as a picture. But this just restates the problem, which now becomes: what is it for a
canvas to be seen as a picture?
It may then be ventured that what the resemblance theory really means, is that a
painting of X represents X when it affords us an experience which is like the experience | Blackwell |
of really perceiving X. Sometimes this is put by saying that the canvas gives us an
‘image’ which is the same as that which we would receive from the real thing.
Relocating the resemblance at the level of an experience or image, rather than that
of the canvas, does not, however, advance our understanding for, once again, what is
wanted is an account of how the two-dimensional, differentiated surface of Titian’s
painting can be made to ‘have the look’ – afford the visual experience or create the
image – of a woman carried on the back of a bull.
Very quickly it comes to seem that resemblance does not provide the explanation of
pictorial representation. In its place, theorists turn to other, less obvious notions.
(However, for a recent defence of pictorial representation in terms of experienced
resemblance, see Peacocke 1987.)
The case against resemblance was first set out, at length and with sophistication, by
Gombrich (1960). Gombrich attacked in particular the assumption, which accompa-
nies the resemblance view, that there is such a thing as an ‘innocent eye’ – that an artist
can simply ‘copy what he sees’ by attending to his visual experience, in advance of any
interpretation of the world.
Goodman (1976: ch. 1) carries this line further by rejecting altogether the idea that
pictorial representation is a matter of perception. On Goodman’s account, depiction is
a species of denotation: knowing what a picture represents is purely a matter of inter-
pretation, which requires a grasp of a ‘symbol system’. Realism in painting is just a
function of the familiarity of a symbol system, the ease with which it imparts
information.
As was said earlier, the motivation for Goodman’s semiotic view is very general.
There is, however, much to query in his claim that pictures are interpreted. It seems to
us that pictures give us visual experiences, and that this is how they differ from prose
descriptions, maps, company logos, road signs and other visual symbols that need to be
read. Goodman’s theory implies that this impression of difference is an illusion. But if
so, what does the illusion of having a visual experience consist in – if not in having a
visual experience?
Gombrich himself does not assimilate pictorial representation to interpretation com-
pletely. On Gombrich’s account, a painting represents X by giving us an illusion of X.
How illusions are created is explained in terms of the representational practices evolved
in the history of art. Art develops through a process of ‘making and matching’. Visual
‘schema’ are originally fabricated semi-arbitrarily in order to represent objects, without
resemblance playing any kind of role. They are then modified by artists in the light of
their perceptual experience, a historical process which results in increasingly life-like
representations (a process which Gombrich models on POPPER’S (pp. 287–90) concep-
tion of the development of scientific theories). Resemblance lies, if anywhere, at the
end of the history of pictorial art, not at its beginning.
It is true that some visual representations – such as Escher’s trompe l’oeil drawings
– create illusions, but it is not true that most paintings have this effect: in looking at a
painting we do not find ourselves having to correct a tendency to mistake its content for
reality, as the concept of illusion would imply. Furthermore, the illusion theory entails
that in seeing what a painting represents, we stop being aware of the material canvas,
as the illusion takes hold. This, however, contradicts the fact that appreciating a paint-
ing involves simultaneous, complex awareness of its brushwork and what it
represents (see Wollheim 1974: ch. 13).
Representation and Seeing In
The weaknesses of the views considered so far suggest that we need to posit a form of
visual perception which is, as it were, inherently imaginative. Wollheim’s theory does
this (Wollheim 1980: essay 5; 1987: 46–77). It proposes that pictorial representation
engages a species of perception which Wollheim calls ‘seeing in’. Wollheim argues for
this claim as follows. An important clue as to the underlying nature of the experience of
painting lies in the fact that paintings give us experiences of something absent or non-
existent. Now this is something that occurs also in dreams, daydreams and hallucina-
tion. It may be ventured that pictorial representation exploits and cultivates a power that
the mind possesses innately: the power to generate visual experiences out of itself. When
this capacity is exercised in the course of perceiving the external world and fused with
perception of external objects – as it is in numerous contexts outside painting: things are
seen in Rorschach ink-blots, clouds and damp-stained walls – we then have seeing in.
Seeing in is not involved in reading maps or interpreting visual signs; here there is no
experience of an absent object. On this theory, what it is for Vermeer’s View of Delft to
represent a townscape is for the canvas to be marked in a way that allows us to see a
townscape in it. Seeing in does not presuppose a resemblance between the canvas and
what the painting represents; according to Wollheim, no systematic account can be given
of how the marking of a canvas determines what is seen in it. Wollheim (1987) proceeds
to develop a general account of painting which shows how its value as an art presup-
poses its rootedness in seeing in and other psychological powers.
3.2 Expression
It is not long before the fugue recedes into its initial calm. Soon, a final crescendo leads us
to what appears to be a triumphant conclusion. But no; at the very moment when victory
seems to be in our grasp, the music loses all confidence, comes to a halt on a chord that is
far from final, and then sinks despondently back into the minor key. Could disillusionment
be more graphically expressed? It is the cry of despair; the hoped-for consolation has failed
to materialize. (Hopkins 1982: 383)
This passage describes emotions expressed in the final movement of Beethoven’s
Piano Sonata Op. 110. The expressive qualities of works of art are a sub-class of their
aesthetic qualities, distinguished by the employment of terms whose primary use lies
in describing people’s emotions. Expressive qualities are integral to the beauty and
meaning of music. Since the nineteenth century, many have believed that music’s
incomparable power of expression secures for it a position of supremacy among the
arts (Walter Pater claimed that ‘all art constantly aspires towards the condition of
music’).
Describing music in expressive terms exemplifies a broad tendency to talk about
works of art as if they were sentient entities: they are described as having ‘organic
unity’, ‘vitality’ and so on. And yet we do not, of course, really believe or mean to imply
that works of art have feelings. So, given that a musical work is at one level just a
sequence of sounds, just as a painting is nothing but a pigmented canvas, the funda-
mental problem of expression in art, brought out most sharply by music, is the follow-
ing: how is it possible for a work of art to express emotion?
The obvious answer would seem to be that musical works have expressive qualities
because they serve, as the expression theory of art says, as vehicles for expressing the
emotions of the composer. This is, however, no answer. Even if the expression theory of
art is accepted, nothing in it explains how it is possible for a composer to use patterns
of sound to express emotion. It merely takes that fact for granted.
If it does not help to think of the emotion expressed by music as located in the
mind of the composer, it is equally mistaken to attempt to locate it in the mind of the
listener: that is, to analyse a musical work’s expression of an emotion as its power to
arouse that emotion in the listener. Bluntly: we are not ourselves made melancholic, or
frozen with grief, by music that we would describe in those terms – fortunately. The lis-
tener does not have the emotion which the music expresses, in the same sense as she
has emotions in life. In listening to music, the emotion which is expressed seems to be
located ‘out there’ in the music, not in oneself. The heart of the problem lies in under-
standing how emotion can be ‘objectified’ in this fashion – how it can be ascribed
neither to the composer nor to the listener, but rather somehow hover, suspended,
between them.
Further reflection shows that being moved by music consists, typically, not in simply
recognizing or sharing the emotion expressed by music, but in reacting to it – in the way
that a person may respond to the emotional condition of another, for example, by
responding to their distress with pity. The relation is one of sympathy rather than
empathy. We are familiar with this structure in the context of representational art,
where, in drama, literature and painting, we feel for the protagonist. Its appearance in
music reinforces the difficulty of musical expression, since instrumental music precisely
lacks the representational content that would make emotional response to music
intelligible as a case of participating in a fiction (it has no characters or plot).
At this point, one option is to reconceive the problem of musical expression as merely
one of language: of finding an adequate logical characterization of how terms are used
in the ascription of expressive qualities. An account of this sort – which appears to cut
rather than untie the Gordian knot – is again advanced by Goodman (1976: 85–95).
An alternative way out of the problem is musical formalism, a position defended by
Hanslick (1986). Hanslick grants that music has emotional effects on listeners; but he
denies that there is any genuine sense in which music expresses emotion. Hanslick has
both an argument for denying that music can express emotion, and an account of why
we should come to suppose falsely that it can. For music to express emotion, there would
have to be an intrinsic relation between them. Hanslick argues that this is impossible,
on the grounds that emotions are intentional: they have objects and involve thoughts
and concepts; whereas instrumental music lacks objects and does not involve thoughts
or concepts. Since music is incapable of presenting any object – what it presents is only
itself – it cannot have an intrinsic relation to emotion.
What music can do, Hanslick argues, is express what he calls ‘musical ideas’, which
are its dynamic properties: ‘The content of music is tonally moving forms’ (ibid.: 29).
Tonality provides a sort of auditory space within which the melody, harmony and
rhythm of music move (see Scruton 1983: ch. 8; 1997: chs 2–3). Hanslick draws an
analogy with the visual movement found in arabesque ornamentation.
Like music, episodes of emotion have dynamic properties, in addition to and inde-
pendently from their conceptual content, since they involve changes in intensity and
bodily feeling. Because music and episodes of emotion share dynamic properties, it is
possible for us to fill in, arbitrarily, the musical idea expressed by a piece of music with
some particular emotion supplied by our own emotional repertoire: giving us the illu-
sion that the music expresses that particular emotion. For Hanslick, musical apprecia-
tion stops properly at the beauty of musical form. Anything else is merely personal
emotional association.
Despite its subtlety, Hanslick’s repudiation of emotion is not easy to accept: describ-
ing musical works in terms of their expressive qualities seems to be, not just natural,
but indispensable for justifying our estimates of their artistic value. The thought that
music is intrinsically expressive should not be given up unless absolutely necessary. Can
anything be salvaged?
Although music does not speak or behave, an analogy between the recognition of
emotion in music and in people can be suggested. Perhaps music derives its expressive
power from its resemblance to emotionally expressive features of human bodily behav-
iour, physiognomy and speech (Kivy 1989). That would explain the phenomenological
similarity between hearing emotion in music, and perceiving emotion in the body or
voice of another person: in both cases, mental life seems to have been made sensuously
palpable.
This theory is in a position to deflect Hanslick’s objection that music lacks the
conceptual components necessary for an intrinsic connection with emotion. Bodily
and vocal features can express emotional states without communicating their objects:
we can recognize that a person is distressed from the way they look or sound, without
knowing what they are distressed about.
The human analogy theory soon encounters difficulties, however. It shares the
weakness of the resemblance theory of pictorial representation – namely, the difficulty
of locating relevant resemblances. What exactly are the features shared by music and
human bodies? Just as canvases are unlike the things represented in pictures, a plain
description of music in terms of key, rhythm and so on reveals little that is common to
music and human bodies. It is true that both tempo and bodily movement can be slow,
but such properties are no more emotionally definite than Hanslick’s ‘musical ideas’: if
a sad piece of music is slow, what makes its slowness that of sadness rather than seren-
ity? The objection then arises that significant likenesses between music and human
bodies can only be identified after the music has been redescribed in expressive terms,
such as ‘ponderously slow’. Only once music has been experienced as expressive does it
become possible to model it imaginatively on the human figure. Musical expression
remains unexplained.
A second theory that appeals to a resemblance between music and emotion is that
of Langer (1942: ch. 8). On Langer’s theory, musical works are symbols of a special
kind. Unlike the discursive symbolism of language, music shares the ‘logical form’ of
emotion and, by virtue of doing so, articulates and presents emotion, as language
cannot. Here it is the inner aspect of emotion, rather than its outer bodily expression,
which music is said to mirror (‘Music is our myth of the inner life’: ibid.: 245).
A similar problem, however, confronts Langer’s theory. In what non-metaphorical
sense does emotion have ‘logical form’? And in any case, the notion of formal similar-
ity is so plastic and open-ended that it can surely be found between any emotion and
any piece of music. Again, the alleged isomorphism of music with emotion seems to be
either non-existent, or insufficiently definite to explain musical expression.
If neither the outer nor the inner aspect of emotion has a connection with music
which accounts for its expressiveness, it may seem that enquiry grinds to a halt. Budd
(1985) concludes, after intensive examination of the above theories, and others, that
none succeeds. One remaining avenue, however, is to suppose that the connection of
music with emotion is more direct than we have hitherto assumed it must be: that it
does not take a detour via resemblance.
A clue to the explanation of expressive qualities may lie in the fact that the mind
has, as Hume put it, a natural propensity to spread itself on to the world, and experi-
ence objects as emotionally coloured. Projection, as this may be called, is a well-attested
psychological phenomenon. If this is joined with a second, equally Humean assump-
tion, to the effect that certain objects in the world are fitted by nature to serve as
recipients for the projection of specific emotional states, then we are on the way to a
conception of artistic expression as reposing on, and cultivating, a natural tendency
of ours to dye objects in the world with our emotions (Wollheim 1980: sections 15–19;
1987: 80–7). If Hanslick’s repudiation of emotion is to be avoided, an assumption
along these lines seems to be at least necessary.
3.3 Meaning
That works of art have meaning – over and above their representational content, and
aesthetic and expressive qualities – is implied by the fact that works of art allow for, and
call for, understanding. Questions of meaning in art are best approached through the
concept of INTERPRETATION (pp. 384–90). Interpretation aims to identify the meanings
of works of art, and is one of the functions of criticism. Questions of interpretation
arise of course in all the arts – is Beethoven’s Missa Solemnis a Christian or a Roman-
tic work? Does Titian’s The Flaying of Marsyas celebrate the triumph of spirit over body,
tic work? Does Titian’s The Flaying of Marsyus celebrate the triumph of spirit over body,
or that of body over spirit? – but it is literature, and literary criticism, that put the issues
surrounding interpretation in sharpest focus.
Of all the arts, literature allows least scope for aesthetic response to operate inde-
pendently of interpretation: a liking for a literary work not accompanied by so much
as a rudimentary grasp of its meaning fails altogether to qualify as an appreciation of
it. This is due to the fact that the medium of literature, natural language, is, unlike
pigment, sound or stone, inherently meaningful; which, plausibly, allows literary works
to bear more conceptually complex meanings than works in other media; for which
reason literary works generate greater interpretative controversy.
It is hard to find a great literary work for which different interpretations have
not been proposed. Is Hamlet correctly understood in terms of Freud’s theory of the
Oedipus complex? Does Paradise Lost represent Satan as a moral being ‘far superior to
his God’, as Shelley claims? Do Shakespeare’s sonnets comprise a unified narrative
charting the course of Shakespeare’s own experience? Are Kafka’s writings religious
or political parables, or articulations of the personal angst evident in his diaries
and letters? Does a proper appreciation of Yeats’s poetry presuppose knowledge of his
mythological system? To each of these questions critics have returned opposite answers.
Two questions are forced on us by differences in interpretation. Firstly, can bio-
graphical information concerning the author serve as legitimate evidence for or against
a certain interpretation? Secondly, can we talk of one interpretation as being the correct
interpretation of a work?
The starting-point for the first question is the attack made by Wimsatt and Beards-
ley on what they call the ‘intentional fallacy’ (in Newton-de Molina 1984). On their
view, critics err in supposing that information about the author, or any kind of evidence
‘external’ to a poetic text (for instance, about its original cultural circumstances), can
elucidate its meaning as a literary object; all that is pertinent to the meaning of a poem
can be gleaned by careful attention to the words on the page. To suppose otherwise is
to commit the fallacy of confusing a psychological fact about the author with a fact
about the poem, of failing to distinguish a causal explanation of a work from a literary
interpretation of it.
Two opposing views of the relation between the meaning of a literary work (textual
meaning) and the intentions of the author (authorial meaning) can be staked out.
Intentionalism identifies textual meaning with authorial meaning: what the text means
is what the author meant (Wollheim 1980: essay 4). Anti-intentionalism denies that
textual meaning is authorial meaning, and asserts its autonomy: textual meaning
resides objectively in the work, ‘embodied in the language’, and has nothing to do, con-
ceptually, with what the author may have meant (if the two happen to be the same,
that is a coincidence without significance, for the anti-intentionalist). These positions
have connections with traditional theories of art: the expression theory is straightfor-
wardly committed to intentionalism, formalism to anti-intentionalism (the mimetic
theory is uncommitted).
What, then, are the main considerations in the argument over intentionalism? There
is much to be said, but some of the most important points are the following.
On the one hand, it seems right that a literary work, considered as a work of art,
should be judged in terms of the experience of reading it; if meanings do not show up
in an optimally sensitive reading of the text, then, although the author may have
wished to convey them, they simply do not form part of what the text means. A poem
stands or falls on its own merits. This is Wimsatt and Beardsley’s strongest argument
for anti-intentionalism.
It can, however, be countered by the intentionalist. While it is true that the author’s
intentions must be realized in the text and concretely apprehended there in order for
the work to be artistically successful, when a work is successful, intentionalism is true:
what we grasp is the author’s meaning. The intentionalist may add that, although
‘external’ evidence cannot transform an artistic failure into a success, attention to such
evidence may make it easier to see what meaning is contained in the words on the page.
The intentionalist can also point out that, since we rarely approach literary works
without some awareness of the author’s historical circumstances, other writings and
so on, there is in fact no firm or deep distinction between internal and external evi-
dence, contrary to what the anti-intentionalist supposes.
The intentionalist’s case can be strengthened further by drawing attention to
the fact that estimates of sincerity, maturity and perceptiveness are important for
our responses to literary works. Literature is flawed if it exhibits mawkishness,
self-indulgence or crudity of moral vision; the fine consciousness of life and ethical
complexity of George Eliot and Henry James is valued. Literary vices and virtues are
qualities of moral personality apparently ascribed in full-blooded intentionalist spirit
to the author, in so far as he or she is manifest in his or her work. (Amplifying this
approach, see Leavis 1986: part 4.)
The anti-intentionalist will complain that here the intentionalist has once again con-
verted an aesthetic into a non-aesthetic matter, and is treating the text as material for
literary biography, as if it were the critic’s job to pass moral judgement on the author.
We do value qualities such as sincerity in literature, but these are properties of texts:
they indicate strengths in the work itself.
We have, then, intuitions favouring each of intentionalism and anti-intentionalism.
Neither set of arguments is obviously decisive. Wimsatt and Beardsley did not, there-
fore, establish that intentionalism involves a fallacy. They did, however, define anti-
intentionalism as a theoretical option, and indicate the methodological repercussions
of the issue. If intentionalism is true, then the study of literature may cast its net wide
and draw, in addition to literary biography, on other disciplines – history, psychology
or anthropology – concerned with factors contributing, consciously or unconsciously,
to the author’s meaning. If anti-intentionalism is true, then external information ought
to be bracketed out.
The second question raised by differences of interpretation, which concerns the
notion of correctness, can again be formulated in terms of a sharp opposition. On the
monistic view, there is for each work a uniquely correct interpretation. Whether it can
be identified conclusively, and gain universal acceptance, is another matter. Monists say
that where interpretations differ, and yet seem equally well-supported, their conflict
can be overcome by a ‘super-interpretation’, which incorporates what is true in each
of the interpretations which it supersedes.
Pluralism denies that there is, even in principle, a uniquely correct interpretation
for each work; it maintains that a number of different interpretations can meet the con-
dition of, not correctness, if correctness is taken to imply uniqueness, but ‘legitimacy’.
Pluralists differ over the criteria for legitimacy of interpretation and how restrictive
these should be. The arguments between monism and pluralism engage closely with
the issue of intentionalism versus anti-intentionalism.
The pluralist focuses in the first instance on the notion of textual meaning. Cor-
rectness of interpretation presupposes that textual meaning is determinate. Wimsatt
and Beardsley help themselves to the assumption that literary works have objective,
determinate meanings, but the pluralist challenges the monist to say where this deter-
minacy comes from: for even if the meaning of each individual word composing a text
is determinate – a claim which is itself vulnerable in view of the ubiquity of metaphor
in literature – the meaning of the work as a whole is far from being a straightforward
function of the dictionary meanings of the words composing it. As we would ordinar-
ily put it, there is ‘room for interpretation’, and this, according to the pluralist, means
that there is room for a number of interpretations. Easiest, then, is for the monist to
embrace intentionalism: since, if intentionalism is true, a source of determinacy is
readily supplied by the author’s intention.
The pluralist’s second argument draws on a tenet of aesthetic subjectivism: namely,
that felt response is at the core of the aesthetic. Literary interpretation should serve the
kind of interest appropriate to literary works, and literary interest, as a case of aesthetic
interest, centres on response. The proper goal of interpretation is therefore to enhance
our experience of the work, to offer a reading which makes it mean as much as pos-
sible to us. Since the perspectives of readers will always differ, the interpretation that is
optimal for one reader will not be optimal for another. Multiple interpretations of a work
are therefore to be expected, and should be welcomed. To meet this argument on its
own terms, the monist would have to defend the – evidently controversial – claim that
there is for each text one interpretation which is universally optimal.
Arguments between monism and pluralism tend to converge on the following sce-
nario. The monist says that if pluralism is true, then ‘anything goes’ in interpretation, an
implication which the monist claims reduces pluralism to absurdity – The Ancient Mariner
cannot be interpreted as a poem about issues of power and gender in late twentieth-
century culture. The pluralist objects that the notion of a uniquely correct interpretation
fosters dogmatism, and disregards both the richness of texts and the diversity of human
interests which may legitimately be expressed in literary interpretation.
These (unresolved) philosophical issues gain a special urgency from their relevance to
the present, turbulent and logomachic climate of literary studies, where theoretical and
ideological commitments are explicitly adopted and appealed to in interpreting texts. The
dominant deconstructionist conception of literary meaning at work here is fundamen-
tally a form of pluralism, adopted on wholly general philosophical grounds: the very
concept of determinacy of meaning is rejected, in all, not just literary contexts. To this is
added a radically idealist view of literary meaning which reverses the ordinary concep-
tion of the relation of meaning and interpretation: the meaning of a literary work is
conceived as created, rather than grasped by interpretation. The point of literature, on
this picture, is not aesthetic: literature exists for the sake of interpretation.
4 The Point of Art
We come finally to the question of the point of art. What value does art have? What is
art for? This is not a demand for a ‘justification’ of art in the same sense as a justifica-
tion of morality is thought to be required – that would be an inappropriate demand,
since the creation and appreciation of art do not share the necessity of moral obliga-
tion. The question is rather whether we can articulate and vindicate our sense of the
importance of art. The fact, observed earlier, that art is an evaluative concept, means
that an account of the point of art is not a mere coda to aesthetic enquiry: it should
fall out conceptually from a theory of art.
That art does have some point is not beyond all doubt (Passmore 1991). It has
already been seen how the mimetic theory encounters difficulties on this score. Posi-
tive reasons for scepticism about the value of art are not hard to unearth: they derive
from art’s essential connections with pleasure, play and imagination, its freedom from
reason and practical purposes. The sceptic will insist that art has the same sort of value
as any other form of entertainment. Plato’s notorious critique of art, in book ten of the
Republic, goes further, by suggesting that art’s preoccupation with appearance
entrenches our ignorance of reality, and that its effects may be psychologically and
morally harmful (see Janaway 1995).
The distinction of ends and means figures prominently in many discussions of the
point of art, and is usually employed critically: some accounts are said to reduce art to
a ‘mere’ means, others to recognize correctly that art is an end.
This contrast is, however, not altogether felicitous, for two reasons. Firstly, art, so
long as it has some value, can always be redescribed as a means to whatever value it
realizes. Secondly, it is not clear what can be meant by describing art, or its apprecia-
tion, as an end. Certainly individual works of art have to be approached in their own
terms and contemplated ‘for their own sake’; but it would be a mistake to move from
this fact about aesthetic attention, that it terminates in its object and does not ‘think
ahead’, to the claim that art is its own point and cannot have extrinsic value. Formal-
ists such as Bell and Hanslick, who describe works of art as ends in themselves, seem
to make this mistake.
What the denial that art is ‘merely a means’ – and the aestheticist slogan ‘art for
art’s sake’ – may be trying to say, is that the value of art realized by art cannot be real-
ized by anything else, or cannot be realized in the same way. Thus interpreted, the
doctrine meets with our agreement: we recoil from hedonistic, moral, didactic or other
instrumental attitudes to art, to the extent that these suggest that other things could
be substituted for art without loss, or that the complexity of art is redundant. Tolstoy’s
(1930) theory of art as the transmission of moral feeling, for example, makes little effort
to distinguish art from other, potentially more efficient, ways of achieving that end. It
follows that so long as art is viewed as having a necessary role in relation to the kind of
value in question, and the complexity of art is taken account of, there is nothing nec-
essarily wrong with assigning a hedonistic, moral, didactic or other goal to art. Schiller
(1989), for instance, argues that the goal of art, as a component of ‘aesthetic educa-
tion’, is to overcome the metaphysical contradictions in human nature in order that we
may attain full ‘humanity’; this goal is ‘extra-aesthetic’, but Schiller regards aesthetic
education as the only way in which it can be achieved.
What forms may accounts of the value of art then take?
They may, first, be divided according to how closely they relate the value of art to
the values of life. In a perspective such as that of Leavis, art is properly and inextri-
cably bound up with the values of life: the ‘raison d’être of the work’ is to ‘have its due
effect and play its part in life’; the ‘essential business of criticism’ is to locate ‘the
creative centre where we have the growth towards the future of the finest life’ (Leavis
1986: 283). At the other extreme, Bell holds that art is autonomous and has value only
to the extent that it distances us from life: ‘to appreciate a work of art we need bring
with us nothing from life, no knowledge of its ideas and affairs, no familiarity with its
emotions . . . In this world the emotions of life find no place. It is a world with emotions
of its own’ (Bell 1914: 25–7).
The deeper distinction to be made among accounts of the value of art, however, is
between those that are naturalistic, and those that are metaphysical; a distinction that
was foreshadowed by the contrast of Hume and Kant.
Naturalistic accounts ground art in human nature. The contingent fittedness by
nature of certain objects to our minds proposed by Hume provides the starting point. To
exhibit the point of art, it needs to be shown how works of art engage fundamental and
important mental activities (Dewey 1934). Psychoanalysis provides one example of this
approach (Wollheim 1987). The question of the point of art is therefore answered by
saying that art is natural: art exists because it is natural for us to create and appreciate it.
As such, art is a ‘form of life’ (Wollheim 1980), as necessary a component of human exis-
tence as language, culture and politics. The naturalistic view affirms that art responds to
the psychological needs of human beings, but it denies that the functions which art per-
forms could be specified, or fulfilled, through other means. When art is pictured as a nec-
essary part of the human order, and the appreciation of art as a natural component of
human well-being, the point of art merges with that of human life itself.
The naturalistic view of art therefore qualifies as a fifth theory of art, in addition to
those considered earlier (it may, but need not, be formulated so as to incorporate the
expression theory). By showing how the various dimensions of art map on to psycho-
logical processes, it accounts for the unity of art in terms of the unity of the mind, and
locates the essence of art with reference to human psychology.
Metaphysical accounts of the point of art are, of necessity, more speculative and less
pinned down to the empirical features of art than naturalistic accounts. They will also,
of course, depend explicitly on a general philosophical outlook, which naturalistic
accounts do not need to do. This does not, or should not, mean that they simply squeeze
art into a preformed metaphysical system: they may, on the contrary, allow art a role
in forming the system itself. This is what we find, in different ways, in Kant, Hegel,
Schopenhauer, Schiller and Nietzsche (German idealism has been notably more
accommodating to art than any other philosophical tradition).
The basic demand to which metaphysical accounts of art answer – and which nat-
uralistic accounts cannot properly satisfy – derives from what might be called the ‘trans-
figurative’ aspect of our experience of art: the sense that the transformation of reality
which art effects, and which locates in it a kind of value that we find consoling, is more
than a fanciful embellishment. Tragedy exhibits the transfigurative power of art most
clearly. The aspiration to achieve, through art, a justification of the world ‘as an aes-
thetic phenomenon’, as Nietzsche (1993: 32) put it, is a further part of the legacy of
Romanticism, and has not disappeared from the demands that we put on art.
Further Reading
As an introduction to aesthetics, Richard Wollheim’s short Art and its Objects (1980), which offers
both an overview and a distinctive approach to the subject, and Malcolm Budd’s Values of Art
(1995) are both excellent. Other, plainer introductions are Charlton (1970) and Sheppard
(1987). Hanfling (1992) contains a series of essays specifically written as a unified introduction
to aesthetics, but of sufficient length to present many arguments in detail.
It is important to have a grasp at first-hand of the classic writings in aesthetics, few of which
make difficult reading. Hofstadter and Kuhns (1964) is a first-rate, currently available anthology.
Carritt (1931) is more comprehensive but the extracts are shorter. Two other excellent antholo-
gies that also serve this end, and intersperse selections from the classics with modern writings,
are Dickie and Sclafani (1977) and Rader (1979). Both are organized thematically and contain
helpful bibliographies (as does Wollheim 1980). The historical development of aesthetics is traced
by Beardsley (1966), which is also useful as a reference book. Cooper (1992) contains helpful
entries on most figures and topics.
A number of books may be singled out as offering more complex discussions with the stress
on a particular topic or perspective. Danto (1981) is stimulating and addresses many problems,
but with the accent firmly on contemporary art. Budd (1985) gives detailed and lucid analytical
treatment of some central aesthetic theories. Wollheim (1974) brings issues in aesthetics in rela-
tion to philosophy of mind and psychoanalysis. Scruton (1974) also emphasizes the contribu-
tion of philosophy of mind, together with that of philosophy of language; Scruton (1983) relates
aesthetics to cultural issues. Goodman (1976) has been highly influential. Savile (1982) focuses
on questions of value, in the idealist tradition. Walton (1990) attempts to provide a general
theory of artistic representation.
Important articles are collected in Dickie and Sclafani (1977), Barrett (1965), Margolis (1978)
and Osborne (1972).
With respect to particular topics in aesthetics, as broken down in this chapter: Hume needs no
commentary, but Kant’s Critique of Judgement is extremely difficult, and it is best to read either
Kemal (1992) or McCloskey (1987) alongside. Three comprehensive treatments of aesthetic
experience and judgement are Beardsley (1982), Scruton (1974), and Mothersill (1984).
Definitions of art are explored in Davies (1991). Theories of art, and accounts of its point, are
best approached through the classic writings. For detailed treatment of the debates surrounding
each of the dimensions of art, Schier (1986) deals with pictorial representation, Budd (1985)
with expression, and Newton-de Molina (1984) with literary meaning and interpretation.
References
Aristotle 1987: Poetics (translated by S. Halliwell). London: Duckworth.
Barrett, C. (ed.) 1965: Collected Papers on Aesthetics. Oxford: Blackwell.
Beardsley, M. C. 1966: Aesthetics From Classical Greece to the Present. Tuscaloosa: University of
Alabama Press.
—— 1982: The Aesthetic Point of View. Ithaca, NY: Cornell University Press.
Bell, C. 1914: Art. London: Chatto and Windus.
Budd, M. 1985: Music and the Emotions. London: Routledge and Kegan Paul.
—— 1995: Values of Art: Pictures, Poetry and Music. Harmondsworth: Penguin Books.
Carritt, E. F. 1931: Philosophies of Beauty. Oxford: Clarendon Press.
Charlton, W. 1970: Aesthetics. London: Hutchinson.
Collingwood, R. G. 1937: The Principles of Art. Oxford: Oxford University Press.
Cooper, D. (ed.) 1992: The Blackwell Companion to Aesthetics. Oxford: Blackwell.
Croce, B. 1992 [1902]: The Aesthetic as the Science of Expression and of the Linguistic in General
(translated by C. Lyas). Cambridge: Cambridge University Press.
Danto, A. 1981: The Transfiguration of the Commonplace. Cambridge, MA: Harvard University
Press.
Davies, S. 1991: Definitions of Art. Ithaca, NY: Cornell University Press.
Dewey, J. 1934: Art as Experience. New York: Putnam.
Dickie, G. 1984: The Art Circle. New York: Haven.
Dickie, G. and Sclafani, R. J. (eds) 1977: Aesthetics: A Critical Anthology. New York: St Martin’s
Press.
Gombrich, E. H. 1960: Art and Illusion, 2nd edn. New York: Pantheon.
Goodman, N. 1976: Languages of Art. Indianapolis: Hackett.
Hanfling, O. (ed.) 1992: Philosophical Aesthetics. Oxford: Blackwell in association with the Open
University.
Hanslick, E. 1986 [1854]: On the Musically Beautiful (translated by G. Payzant). Indianapolis, IN:
Hackett.
Hegel, G. W. F. 1993 [1820–9]: Introductory Lectures on Aesthetics (translated by B. Bosanquet).
Harmondsworth: Penguin Books.
Hofstadter, A. and Kuhns, R. (eds) 1964: Philosophies of Art and Beauty. New York: Harper and
Row.
Hopkins, A. 1982: Talking About Music. London: Pan.
Hume, D. 1965 [1757]: Of the Standard of Taste. In ‘Of the Standard of Taste’ and Other Essays.
New York: Bobbs-Merrill.
Janaway, C. 1995: Images of Excellence: Plato’s Critique of the Arts. Oxford: Oxford University Press.
Kant, I. 1987 [1790]: ‘Critique of aesthetic judgement’, Part I of Critique of Judgement (translated
by W. S. Pluhar). Indianapolis: Hackett.
Kemal, S. 1992: Kant’s Aesthetic Theory: An Introduction. London: Macmillan.
Kivy, P. 1989: Sound Sentiment. Philadelphia, PA: Temple University Press.
Langer, S. 1942: Philosophy in a New Key. Cambridge, MA: Harvard University Press.
Leavis, F. R. 1986: Valuation in Criticism and Other Essays. Cambridge: Cambridge University Press.
McCloskey, M. 1987: Kant’s Aesthetics. London: Macmillan.
Margolis, J. (ed.) 1978: Philosophy Looks At the Arts. Philadelphia, PA: Temple University Press.
Meager, R. 1970: Aesthetic Concepts. British Journal of Aesthetics, 10, 303–22.
Moore, G. E. 1984 [1903]: Principia Ethica. Cambridge: Cambridge University Press.
Mothersill, M. 1984: Beauty Restored. Oxford: Clarendon Press.
Newton-de Molina, D. (ed.) 1984: On Literary Intention. Edinburgh: Edinburgh University Press.
Nietzsche, F. 1993 [1871]: The Birth of Tragedy (translated by S. Whiteside). Harmondsworth:
Penguin Books.
Osborne, H. (ed.) 1972: Aesthetics. Oxford: Oxford University Press.
Passmore, J. 1991: Serious Art. London: Duckworth.
Peacocke, C. 1987: Depiction. Philosophical Review, 96, 383–409.
Plato 1955: Republic (translated by H. D. P. Lee). Harmondsworth: Penguin Books.
Rader, M. (ed.) 1979: A Modern Book of Esthetics, 5th edn. New York: Holt, Rinehart and Winston.
Savile, A. 1982: The Test of Time. Oxford: Clarendon Press.
Schaper, E. (ed.) 1983: Pleasure, Preference and Value. Cambridge: Cambridge University Press.
Schier, F. 1986: Deeper Into Pictures. Cambridge: Cambridge University Press.
Schiller, F. 1989 [1793–5]: On the Aesthetic Education of Man (translated by E. Wilkinson and L.
A. Willoughby). Oxford: Clarendon Press.
Schopenhauer, A. 1969 [1819]: The World as Will and Representation, 2 vols (translated by E. F. J.
Payne). New York: Dover.
Scruton, R. 1974: Art and Imagination. London: Methuen.
—— 1979: The Aesthetics of Architecture. London: Methuen.
—— 1983: The Aesthetic Understanding. London: Methuen.
—— 1997: The Aesthetics of Music. Oxford: Clarendon Press.
Sheppard, A. 1987: Aesthetics. Oxford: Oxford University Press.
Sibley, F. 1959: Aesthetic Concepts. Philosophical Review, 68, 421–50.
—— 1965: Aesthetic and Non-aesthetic. Philosophical Review, 74, 135–59.
Sibley, F. and Tanner, M. 1968: Symposium on ‘Aesthetics and Objectivity’. Proceedings of the
Aristotelian Society, supplementary volume 42, 31–72.
Tilghman, B. R. 1984: But is it Art? Oxford: Blackwell.
Tolstoy, L. 1930 [1898]: What is Art? (translated by A. Maude). Oxford: Oxford University Press.
Walton, K. 1990: Mimesis as Make-Believe. Cambridge, MA: Harvard University Press.
Wittgenstein, L. 1978: Lectures and Conversations on Aesthetics, Psychology and Religious Belief.
Oxford: Blackwell.
Wollheim, R. 1974: On Art and the Mind. Cambridge, MA: Harvard University Press.
—— 1980: Art and its Objects, 2nd edn. Cambridge: Cambridge University Press.
—— 1987: Painting as an Art. London: Thames and Hudson.
Discussion Questions
1 What makes a judgement aesthetic?
2 Are the aesthetic qualities of an object any less objective than its colour?
3 Can arguments about aesthetic value be resolved, and what does it signify if they
cannot?
4 Can it be shown, or must it merely be assumed, that aesthetic judgements differ
from gustatory preferences?
5 In what ways, if any, can reasons be given for aesthetic judgements?
6 Are there any limits on what can be an object of aesthetic attention?
7 In what respects are aesthetic judgements like moral judgements, and in what
respects are they unlike?
8 What can criticism of art hope to contribute to our appreciation of art?
9 In what respects can one’s relation to works of art be compared with one’s
relations to other people?
10 Does aesthetic experience presuppose an aesthetic attitude?
11 Is it plausible to claim that beauty is the fundamental concept in aesthetics?
12 Does it matter if art cannot be defined?
13 What is the relation between form and content in art, and does either have greater
importance than the other?
14 To what extent is it appropriate and profitable to compare art with language?
15 Does the concept of resemblance have any role to play in elucidating pictorial
representation?
16 Can it be explained how music expresses emotion?
17 What assumptions are needed to uphold the claim that literary works have
objective meanings, and are those assumptions acceptable?
18 Must each literary work have one correct interpretation?
19 Does the meaning of a literary work reside in the mind of its author, in the text
itself, or in neither of these?
20 Is the correct interpretation of a literary work the one that makes it mean most
to us?
21 Does the study of literature need to be guided by literary theory?
22 Does psychology have importance for aesthetics?
23 Do we need art?
24 Is it ever legitimate to view art as serving moral or political ends?
25 Is a paradox involved in the enjoyment of tragedy and, if so, how should it be
resolved?
26 In what sense might it be true that art is a social rather than individual
phenomenon?
27 Does emotional response to fiction require what Coleridge calls a ‘willing suspen-
sion of disbelief’?
28 Is it more than just a contingent truth that art has a history?
29 How can one be moved by the plight of Anna Karenina, when one knows her to
be merely fictional?
30 What distinguishes metaphor from nonsense?
31 Does survival of the test of time provide a criterion for artistic achievement?
32 Is photography a form of art?
33 Can it be maintained that any one form of art is superior to the others?
34 Is architecture ‘frozen music’?
35 Can anything be made of the thought that music has ‘metaphysical meaning’?
36 What makes a painting artistically significant?
8
Political and Social Philosophy
DAVID ARCHARD
Social and political philosophy has experienced a dramatic revival in recent decades. This
chapter considers the background of work in the years after 1945, before examining the
writings on justice of the main architects of this revival, John Rawls and Robert Nozick.
Subsequently discussed are the ideal of equality, the problem of pluralism and the prin-
ciple of neutrality, the communitarian, feminist and Marxist critiques of liberalism, and
the significance of community. The final section considers the nature of political
philosophy and its relation to politics.
1 Introduction
In a now celebrated phrase, Peter Laslett announced that ‘for the moment, anyway,
political philosophy is dead’. He did so in his 1950 introduction to the first collection of
essays entitled Philosophy, Politics and Society. The sixth series was published in 1992,
and the intervening years show clearly enough that the reports of death were greatly
exaggerated, or at least that the moment was merely a passing one. The last fifty years,
and the last thirty in particular, have witnessed a quite astonishing regeneration
of social and political philosophy, understood in Laslett’s own terms as the concern of
philosophers with ‘political and social relationships at the widest possible level of
generality’ (Laslett 1950: vii). The work has displayed a range of views, fertility
of imagination, rigour of argumentation, care in critical analysis, and concern to
address contemporary problems which are ample testimony to the considerable
strengths of philosophy in the English-speaking world.
In the 1950s these strengths were far more selectively deployed. The study of
politics was, to simplify greatly, divided between the empirical social sciences and
philosophy. Exponents of the former were optimistic that political science could offer
value-free but well-founded explanatory theories of its object. For its part philosophy
disdained substantive political evaluation and restricted itself to the logical and
linguistic analysis of political discourse.
The title of Daniel Bell’s 1960 book, The End of Ideology, concisely defined an his-
torical moment, and its subtitle, ‘On the Exhaustion of Political Ideas in the Fifties’,
spelled out what was intended. Bell’s text was a series of sociological essays about
America, but its grander claims carried further afield. The 1930s and 1940s testified
to the sheer awfulness of the practical realization of
ideologies like fascism and
Stalinism which made simplifying and absolute claims to truth about humanity, history
and reason. These years also reconciled intellectuals to the virtues of a more moderate
and modest political programme. Capitalism could be tamed by a welfarist state, the
economy mixed, and fundamental freedoms adequately protected by democratic
constitutionalism. There was no further need for ideologies in the sense of millenarian
visions of utopia yet to be realized. The exhaustion of ideologies in this narrow sense
seemed also to betoken a disenchantment with politics in general, a retreat from the
traditional concern of political philosophy to prescribe the good society.
This was mirrored in the work of philosophers. The 1950s was the high tide of
CONCEPTUAL ANALYSIS (pp. 2–3), and in an emblematic text, The Vocabulary of Politics,
T. D. Weldon (1953) characterized the enduring questions of political philosophy as mis-
placed, resting on false assumptions and the misuse of unanalysed fundamental terms.
Correctly understood, these questions are merely ‘confused formulations of purely
empirical difficulties’ (ibid.: 192). It is proper for philosophers to analyse the language
and foundational claims of ideologies, but the evaluation of political states of affairs and
institutions is more appropriately reserved for politicians and political scientists.
However, it would be a mistake to over-simplify. From 1945 into the 1960s there was
serious and important work in political philosophy. Karl Popper (1945) sought to rebut
totalizing political theories such as those of PLATO (chapter 23) and MARX (chapter 34).
But he did so by defending an ideal of the ‘open society’, and the politics of piecemeal
reform rather than wholesale social engineering. F. A. von Hayek was entering upon his
lifelong defence of a position which is echoed in important current views (Gray 1984).
Hayek affirmed the ideal of justice as individual freedom under the LAW (chapter 13),
and defended the idea that the economic and social world displays a spontaneous, imper-
sonal pattern. It is simply inappropriate, and ultimately destructive of freedom, for
governments to impose upon this world according to a redundant ideal of ‘social’ justice.
The work of Isaiah Berlin is especially noteworthy, for he anticipates some major
themes in the philosophical liberalism that is now dominant. For Berlin, a plausible
politics must start from a recognition of a plurality in valid yet conflicting ends of life.
There is no single ideal of the good life, and the proper goal of politics is to make
possible the pursuit by individuals of their several ends (Kocis 1980). Berlin (1958)
famously distinguished two senses of
liberty, one characterized as the absence of
obstructions, and the other characterized in terms of self-mastery. He argued for
the former, ‘negative’ liberty, as the ‘truer and more humane ideal’ than the latter,
‘positive’ liberty. At the same time Berlin urged a clear separation of the notions of
liberty and equality. He conceded that the degree of social inequality may determine | Blackwell |
the conditions that make liberty worth having, but insisted that these conditions do
not enter into the definition of liberty.
2 John Rawls and Robert Nozick on Justice
These various writings are important, but it is those of John Rawls that are now taken
to signify the beginning of a distinctly new era. The publication of Rawls’s A Theory of
Justice (1972) has proved a landmark, radically changing the character of English-
speaking political philosophy and supplying the ineliminable background against
which all subsequent discussion has been conducted. ‘Political philosophers now must
either work within Rawls’s theory or explain why not’ is fitting tribute from one whose
own views are at a great distance from Rawls’s (Nozick 1974: 183).
A Theory of Justice is much lauded but it is also now an extensively criticized
text. Rawls’s presuppositions, methodology and conclusions have all been subjected
to such extensive criticism that it is important to be clear how and why his text set
the scene. First, A Theory of Justice is a work of substantive political philosophy.
Although Rawls engages in the analysis of concepts, this serves rather than substitutes
for the evaluation of political institutions. A Theory of Justice is more. It defends a
single, unified vision of the good society – one that is recognizably liberal. Rawls
gives philosophical voice to a doctrine of democratic constitutional liberalism
which can with merit be urged as the only feasible theory for contemporary Western
society.
Second, Rawls seeks to combine a recommendation of the ideal polity with a
plausible account of HUMAN NATURE (pp. 672–3). This involves not just a theory
of human MOTIVATION (p. 392) taken from the SOCIAL SCIENCES (chapter 12). It
also – and less noticed – comprises a view of the manner in which citizens can
be brought to recognize and accept as both fair and realistic the formal terms of their
coexistence.
Third, Rawls has provided political philosophy with an agenda. He does not just offer
a theory of justice. He supplies an account of the priority of individual liberty, the
defensible limits of egalitarianism, and the rule of law.
Fourth, Rawls’s work puts contemporary political philosophy back into contact with
its own history. The work and concerns of past writers, such as HOBBES (chapter 28),
LOCKE (chapter 29), Rousseau, HUME (chapter 31), MILL (chapter 35) and KANT (chapter
32), are given renewed pertinence.
Fifth, A Theory of Justice is nevertheless a response to the particular circumstances
of modernity. It is not just a principled affirmation of the virtues of liberalism against
the postwar background of disillusionment with more extreme ideologies. It can be seen
as an attempt to affirm these virtues in the context of a more local crisis, that is the
challenge posed to the ideal of American democratic constitutionalism by the civil
rights movement and the war in Vietnam. More generally again, Rawls, as did Berlin,
insists that value pluralism is an unavoidable feature of modern existence to which
politics must adequately respond.
The ideas of A Theory of Justice can best be laid out by answering a number of broad
questions. First, why is a theory of justice needed? Because publicly agreed terms of
social cooperation are both necessary and possible. Individuals benefit from living and
working together so long as they can be assured that social existence is well-ordered
and stable. Yet there is a predisposition to conflict inasmuch as individuals want differ-
ent and not necessarily compatible things out of their life together. Agreed PRINCIPLES
(pp. 733–6) are needed to regulate interaction, and determine the proper division of
benefits and costs among the members of society. Such principles are possible to the
extent that individuals can see them to be necessary, and their particular terms can be
agreed under appropriate conditions.
The Primacy of Justice
Justice has primacy in three senses. It is the first virtue of social institutions. This does
not mean that society cannot display other virtues, only that without at least the assur-
ance of justice these would have little or no value. Second, truth is a demand made of a
system of thought in so far as it relates to reality. Justice is demanded of a polity in so far
as it must relate to the real world. The circumstances of justice are those first noted by
David Hume, namely limited benevolence and moderate scarcity. Humans are not well-
enough disposed to make sacrifices for others, and material resources are not available
in sufficient quantity to make formal agreement on terms of cooperation unnecessary.
Rawls adds to this understanding of circumstances the facts of modern value
pluralism. Individuals have different but not necessarily compatible aims. My pursuit of
my life’s ends will conflict with your pursuit of yours. Agreement on how to reconcile
such a conflict is necessary but possible only if no one set of individual values is affirmed
over the others.
Third, any theory of justice specifies and guarantees an equality of citizenship which
supplies an assurance that no individual may be sacrificed for society’s greater good.
Justice has primacy in that this assurance is not to be compromised.
What is a theory of justice? It is a theory of the publicly agreed, final principles which
define the fundamental terms of social co-operation. It is a theory of those principles
which regulate the institutions, the ‘basic structure’ of society. It does not prescribe par-
ticular outcomes; nor does it provide a criterion for evaluating actions or the charac-
ter and dispositions of individuals. It is a theory of social justice, which must also
coincide with our considered and reflective MORAL JUDGEMENTS (pp. 225–7). Rawls’s
notion of ‘reflective equilibrium’ allows for, indeed encourages the possibility of, mutual
adjustment in both theory and judgements. Yet any theory of justice must capture what
is essential to and shared by different accounts of justice, namely that ‘institutions are
just when no arbitrary distinctions are made between persons in the assigning of basic
rights and duties and when the rules determine a proper balance between competing
claims to the advantages of social life’ (Rawls 1972: 5).
What then is A Theory of Justice? Rawls’s theory assures citizens equal LIBERTY (pp.
762–3), and a distribution of all other goods that maximizes the expectations of the
least well off. More formally and completely:
First Principle
Each person is to have an equal right to the most extensive total system of equal basic
liberties compatible with a similar system of liberty for all.
Second Principle
Social and economic inequalities are to be arranged so that they are both:
(a) to the greatest benefit of the least advantaged, consistent with the just savings
principle, and
(b) attached to offices and positions open to all under conditions of fair equality of
opportunity. (Rawls 1972: 302)
The first part of the second principle is familiarly known as the ‘difference principle’,
and the two principles are in lexical order. That is to say that inequalities regulated
by the second principle are only permitted when equality of liberty under the first
principle has been guaranteed.
Provided that these principles apply or very nearly apply to the basic institutions of
society then that society is just. And individuals have a duty to abide by the rules of a
just society. Although Rawls offers a CONTRACTUALIST (pp. 672–7) argument for the
rules themselves, the duty to abide by them is not itself contractually based. In part
three of A Theory of Justice Rawls offers an account of the acquisition of a sense of
justice. This grows out of the basic attachments and relationships which constitute any
society: family, friendship and broader associations. There is in the well-ordered society
a concordance of fair rules and a sense of justice. Citizens can recognize that the rules
are just and acknowledge their duty to abide by them. Fairness and feasibility coincide.
Lastly, A Theory of Justice is a theory of THE RIGHT AND THE GOOD (pp. 598–601). The
just society is not one which realizes the good life of its community. It is one which
permits its members to pursue their own conception of the good under certain condi-
tions. First, the pursuit of the good is constrained by the right, that is the publicly agreed
principles of justice. Second, these principles determine the distribution of primary
social goods – rights, liberties, powers and opportunities, income and wealth – which
are the desired prerequisites of any particular pursuit of the good. Third, people pursue
different ideals of the good life. ‘Human good is heterogeneous because the aims of the
self are heterogeneous’ (Rawls 1972: 554). It would be a disastrous mistake for a polity
to try to impose upon its members any one particular ideal of life. Rawls has continued
to believe that, in any modern society, there will be a plurality of conscientiously sought
ends, and that any state, presuming or prescribing to the contrary, would have to
employ extensive coercion to secure its single vision.
The final question to be answered is, Why A Theory of Justice? This asks what justifi-
cation Rawls offers for his own principles. Rawls famously employs a contractualist
methodology: the principles of justice are those that rational, self-interested, but mutu-
ally disinterested individuals would choose in an original situation specified chiefly in
terms of the parties’ selective knowledge. The contractualist method has been the
subject of a great deal of criticism: why should a hypothetical contract bind? Why
would individuals choose as Rawls claimed they would? Would not individuals choose
differently in a different original position? Moreover Rawls himself has subsequently
denied that the contractarian argument has independent justificatory force and views
it only as a heuristic device which serves to illustrate the force of a claim whose real
justification lies elsewhere.
However, the question then presses of what does justify the principles of justice, and
it presses the more since Rawls has become concerned with the issue of how a just
society can be well-ordered, that is be viewed as legitimate by its citizens. Before
considering this question it is necessary to examine a fundamental, and influential,
critique of Rawls’s understanding of justice.
This is to be found in Robert Nozick’s Anarchy, State and Utopia (1974), which is
rightly paired with Rawls’s A Theory of Justice when the contemporary resurgence of
political philosophy is discussed. The foundations of Nozick’s arguments are RIGHTS (pp.
690–1) and OWNERSHIP (pp. 690–1). The rights possessed by individuals are funda-
mental, and define a moral space surrounding each person whose invasion constitutes
a wrong. Following Locke, Nozick holds the basic rights to be those to life, liberty, and
‘estate’ or property. For Nozick, these rights are ‘side-constraints’ upon action, and have
infinite moral weight. That is, no amount of good consequences, including even an
overall diminution of rights violations, can justify a single violation of one of these
rights. Nozick does not offer a systematic defence of his understanding of rights. At
most he claims the underlying justification to be the Kantian view of INDIVIDUALS AS
ENDS (p. 736), leading separate, different, consciously and deliberatively shaped lives,
which should not be sacrificed or used as means to others’ ends.
Nozick is egalitarian to the extent of holding that all humans equally possess these
fundamental rights. He also endorses an initial equal distribution of ownership in
respect of our selves. Each owns his or her own body, and its powers, capacities and
abilities. Nozick believes that through an exercise of this initial self-ownership further
entitlements to bits of the world are generated. He considers only one form of argu-
ment to show how this generation of entitlements might occur. This is John Locke’s
famous claim that one acquires rights to that with which one has mixed one’s labour.
Unfortunately Nozick devotes four incisive pages to expose seemingly unanswerable
difficulties in the Lockean account (Nozick 1974: 174–8). This seems to leave Nozick
with his own problem of unfounded entitlements, and so some critics have charged.
But Nozick is appealing to the plausible background assumption that unowned objects
are there to be owned, and that some sort of fruitful exercise of one’s own powers
grounds an entitlement to ownership of these objects. That is, provided certain
conditions upon legitimate acquisition are met.
Entitlements
What occupies Nozick is less the process of acquisition than the limits which may be set
upon its scope. Nozick borrows again from Locke and urges the acceptance of one fun-
damental proviso: ‘A process normally giving rise to a permanent bequeathable property
right in a previously unowned thing will not do so if the position of others no longer at
liberty to use the thing is thereby worsened’ (ibid.: 178). Nozick’s concern is to show that
this proviso can be met by an established distribution of property entitlements, when one
considers that there may no longer be any unowned objects for individuals to acquire.
His answer is that individuals unable to appropriate are nevertheless better off living
under a system permitting private property than they would be if no original appropria-
tions had been permitted.
The account of entitlements is completed by acknowledging that those who hold bits
of the world are entitled freely to transfer them if and to whomsoever they choose.
It follows that those who receive such freely transferred holdings are themselves now
entitled to hold them. Thus everything which is not unowned is legitimately owned
if acquired justly (by some process which does not violate the Lockean proviso) or
freely transferred from one who justly acquired it. A just set of holdings is just that set of
holdings which came about in the right way.
In part one of Anarchy, State and Utopia Nozick justifies a minimal state. He does so by
constructing a plausible tale of how a state of nature might have given way to a state,
that is an organization legitimately claiming a monopoly on the use of force within a
given territory, without at any stage violating rights. The tale turns on the evolving
competition between private agencies formed to protect their clients, with one eventu-
ally assuming a monopolistic role and adequately compensating the others for its
usurpation of their functions.
Part two of Anarchy, State and Utopia is devoted to a defence of the view that such a
state, restricted to preventing force, theft and fraud, and enforcing contracts, is morally
sufficient. Any greater role for the state is illegitimate. Of course an obvious reason for
a greater role would be that people’s holdings need to be redistributed in accord with a
theory of justice. Nozick’s own theory of justice does not require any such action by
government. His task is then to show that redistributive theories of justice are mistaken,
and what they require of the state illicit.
The distinction Nozick draws is between his own entitlement theory of justice, and
patterned or end-state theories. His is a historical theory since the justice of a set of
holdings is given by the propriety of the history which led up to it. End-state theories
are unhistorical, specifying that a distribution must conform to a prescribed structure.
More particularly, patterned theories prescribe a distribution of holdings according to
some natural attribute or ordered set of such attributes, such as intelligence or effort.
Nozick’s critique of non-historical theories is both general and specific. At a general
level he insists that no patterned distribution can be maintained without persistent and
serious violations of the right to liberty. His reasoning is simple. Voluntary transfers of
those holdings initially assured under any patterned distribution will subvert the
pattern and transgress the principle underlying the pattern. Such transfers are easy to
imagine, and appear eminently unobjectionable inasmuch as they can be simple,
consensual, bilateral exchanges. Nozick’s own famous example is of fans willingly
paying extra to see an especially talented basketball player, Wilt Chamberlain. Such
transfers can only be prevented by denying individuals the right to do with their
holdings as they choose. Surely no consistent theory of justice can both distribute
holdings and deny the individuals to whom they come any effective control over them?
Nozick’s specific criticisms are of Rawlsian and egalitarian theories of justice. The
fundamental weakness of Rawls’s theory is that it proceeds as if individuals did not
already have claims to ownership of themselves and bits of the world. A cake sponta-
neously given to a group might be divided among its members in a way close to Rawls’s
principles; a cake which I have baked or whose ingredients you have provided will be
divided in quite another way. Rawls is simply wrong to discount the various entitle-
ments individuals bring to any determination of who gets what. People, not least, are
entitled to their natural assets, and, consequently, to any rewards that may flow from
these. Rawls is mistaken to think that these assets are a collective resource, or that the
difference principle which reflects this assumption would be acceptable to the well-off
who are required to make a comparatively greater sacrifice so that the worst-off benefit.
Nozick finds no justification either for any egalitarian principle. A demand for
equality of material condition reduces to an unsubstantiated claim that the function
of society is to meet the needs of its members, and simply discounts the fact that things
are already attached to particular individuals. Equality is not required for self-
esteem, which in fact thrives on comparable differences. At base the demand for
equality is driven by envy.
Nozick’s theory of justice permits the government no interventionist role beyond the
rectification of injustices by the terms of the entitlement principle. It is ironic then that,
given the logistic difficulties in tracking back and forward between past wrongs and
present holdings, Nozick should suggest the difference principle as a rough rule of
thumb for rectifying injustices (ibid.: 231). Otherwise holdings must be left as they have
come legitimately to stand. That this may result in great disparities of wealth and life
prospects is regrettable but no injustice. The rich may choose to be philanthropists but
they do no wrong in not being. And it would violate their rights to require that they
assist the less fortunate. This seems harsh but it is not obviously wrong-headed. Liberal
critics chide Nozick for offering a ‘libertarianism without foundations’, but this may not
be quite true.
Nozick’s foundations are attractive ones. Individuals do appear to have rights in the
sense that it would be fundamentally wrong to do certain things to them in the name
of a greater social good. Individuals do also have a claim to ownership of their selves.
That this is so can be shown by considering one’s reaction to the idea that bodily parts
should be redistributed equitably or to benefit the least fortunate in this respect. Do the
blind have a claim upon at least one of the eyes of the sighted?
Criticism of Nozick is more appropriately directed at his rendering of these founda-
tions and how he builds upon them. There is little warrant for construing rights in an
absolute and exclusively negative manner, not least when appeal is made to Kantian
underpinnings. Within a moral community, individuals in need may, on Kant’s own
principles, have the right to contributions from those who can help them. It is an impos-
sibly stringent requirement of rights merely that they rule out invasions of our moral
space and do nothing to sustain us in our satisfaction of basic needs.
Nozick may be right to think that we, severally, own ourselves and may, in conse-
quence and by some form of activity, come to make legitimate claims upon what is
unowned. But, arguably, the conditions he sets for legitimate acquisition are too lax and
too easily met in ways that favour free-market capitalism. It may very well be possible
to concede an initial equality in self-ownership, and the legitimacy of some process of
acquisition of unowned objects, but then to specify sufficiently tough conditions of
legitimacy to ensure that a final equality of holdings is ensured. Again, the ‘Lockean
proviso’ need not be as easily satisfied by his favoured set of economic arrangements as
Nozick assumes, especially once ambiguities in both the sense of ‘better off ’ and the
baseline against which comparisons are made are clarified.
Nozick seems to presume throughout that ownership is full and exclusive ownership
by individuals, and that anything less amounts to a violation of
liberty rights. Yet
Nozick forgets that, for Locke, individuals made claim through their labouring not upon
what was unowned, but what was communally owned. In his Wilt Chamberlain
example Nozick assumes that individuals within the patterned distribution have hold-
ings with which they are free to do as they choose. All forms of redistribution, includ-
ing taxation, are implausibly stigmatized as coercive interferences with freedom. Yet it
is possible that individuals might choose to limit in advance departures from a favoured
pattern of distribution, and do so in the name of a liberty-preserving equality of
condition.
In sum, Nozick’s theory is not so much libertarianism without foundations, as
libertarianism with unwarranted conclusions. His case against anarchy may be
accepted. His claim that state minimalism is utopian in a positive sense remains
unproven.
3 Equality
Both Rawls and Nozick are egalitarians to the extent that they are, in their different
ways, committed to the view that human beings are entitled to an equality of regard
or treatment in some fundamental respect. Rawls’s theory assures citizens equal liberty
and Nozick holds that all humans equally possess fundamental rights. All contem-
porary social and political philosophy affirms that humans are entitled to equality
of something, which suggests that the crucial question is not ‘Why equality?’ but
‘Equality of what?’ (Sen 1992). Of course a demand for equality in respect of some good
is consistent with, indeed may demand, inequality in respect of some other good. This
is true of Nozick’s libertarianism which claims that an equal distribution of individual
liberty must lead to an unequal distribution of income and property. A demand for
equality in some good may take priority over other values, as is the case with Rawls’s
insistence upon the lexical priority of equal liberties.
But although Rawls’s first principle of justice formally guarantees equal citizenship,
the difference principle tolerates social and economic inequality. Rawls sees no incon-
sistency here, though he concedes – as did Berlin – that differential access to resources
will qualify the worth of equal liberty. The more you have the more you can make of a
freedom shared equally with others in society. Socialist critics of liberalism have always
insisted that this is unsatisfactory, arguing that real equality of freedom demands an
equalization of resources. Some feminists have also argued that the public or legal
equality of the genders is undermined by, and yet serves to disguise, a fundamental
inequality of social power. This characterizes all male–female relationships within
patriarchal society, and its elimination will require more far-reaching reform of society
than simply instituting a principle of equal citizenship.
Walzer’s (1983) argument is important in this context. He suggests that there are
different spheres of justice, each being specified by a set of goods to be distributed and
a consequent principle of their fair distribution. The unequal distribution of some good
need not in itself be unjust, but if this distribution determines the distribution of some
other good then that is unfair. It is not necessarily wrong that some people have more
money than others; it is if their greater wealth buys them political power, office or
greater personal health.
Walzer’s approach highlights the fact that our favoured account of equality depends
on what it is that we wish to see equally distributed. Contemporary egalitarianism
defends three broad fields of application of equality. For a welfare egalitarian the ideal is
a ‘condition of equal well-being for all persons at the highest possible level of well-being’
(Landesman 1983). The central problem for welfare egalitarianism is that it seems com-
mitted to taking account of those pleasures some humans may take from seeing others
do less well, and is also committed to the satisfaction of some acquired expensive tastes.
Resource egalitarianism assigns to each individual a bundle of goods which is envied
by no other individual. The central problem for this version of egalitarianism is that
natural assets (skills, talents and abilities) are unequally distributed and in consequence
people benefit differentially from their use of equally distributed resources. Yet to
include personal talents among resources which are equalized means that the talented
are, in effect, the slaves of the untalented.
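The envy test invoked here can be made concrete with a small worked illustration. The following Python sketch is purely illustrative and not drawn from any of the authors cited: the persons, goods and valuations are invented, and each bundle is assessed by the envier’s own valuations.

```python
# A schematic illustration of the 'envy test'; the persons, goods and
# valuations are hypothetical. An allocation passes if nobody values
# another person's bundle above her own, by her own lights.

valuations = {
    "Ann": {"land": 5, "tools": 2, "books": 1},
    "Bob": {"land": 1, "tools": 4, "books": 3},
}

bundles = {
    "Ann": ["land", "books"],
    "Bob": ["tools"],
}

def value(person, bundle):
    """The worth of a bundle from one person's own point of view."""
    return sum(valuations[person][good] for good in bundle)

def envy_free(bundles):
    """True if no person values another's bundle above her own."""
    return all(
        value(p, bundles[p]) >= value(p, bundles[q])
        for p in bundles for q in bundles if p != q
    )

print(envy_free(bundles))
# True: Ann values her own bundle at 6 and Bob's at 2; Bob values his
# own at 4 and Ann's at 4. Neither strictly prefers the other's bundle.
```

It is when unequally distributed talents are added to the bundles to be equalized that the difficulty just noted arises.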
Contemporary egalitarians are now inclined to agree that individuals should not be
compensated for the effects of free choices (such as, obviously, choosing to develop an
expensive taste), but should be compensated for those factors affecting them which are
due to ‘brute luck’ (such as, obviously, a handicap). On Dworkin’s (1981) influential
argument, a principle of
liberal equality should be ‘endowment-insensitive’ but
‘ambition-sensitive’. That is, it should permit individuals’ lives to flourish or founder as
a result of the choices they make, but not in consequence of their natural or social
endowments. Such a principle may not be easy to render determinate and relies upon
a specification of the scope of free choice. Moreover it should be noted that individuals
may not choose the circumstances which cause freely chosen preferences to be more
or less expensive.
Capability or opportunity egalitarians demand the equalization of the capacity to
lead the life that the individual values or chooses to live. They thereby avoid the problem
faced by resource egalitarians of how the same set of resources may, according to the
circumstances, be differentially convertible into achievable standards of living. They
also avoid the problems of welfare egalitarianism, which neglects those aspects of the good
life which do not reduce to well-being and which cannot acknowledge the fact that
some people, subject to enduring conditions of significant inequality, may adapt their
preferences (and states of well-being) to these conditions and not, in consequence, expe-
rience a significantly lesser degree of welfare. Capability egalitarians, however, must
provide a ranking of capacities which cannot be purely quantitative – a life does not
necessarily go better the more things one can do. But if such a ranking does not reduce
to one of opportunities for greater or lesser welfare, it is in danger of being an objective
list resting upon a contentious account of the human good.
Finally, any egalitarianism must answer criticisms of both practice and principle.
Egalitarianism may simply be inconsistent with certain immutable features of human
motivation. It may not be possible to combine, within the individual, the attitude of
universal impartiality and personal partiality (Nagel 1991). Again, a society in which
equality is guaranteed may lack the incentives necessary for maximizing the total social
product and thereby improving the well-being of all. Egalitarianism must also meet
Nozick’s challenge that the case for equality is unproven, and anyway trumped by more
fundamental ideals, such as freedom or self-ownership: a prior commitment to liberty
undermines any guarantee of equality, and, correlatively, equality can only be
maintained by the denial of individual freedom. The disputed relationship between
equality and liberty may prove to be the most enduring and pressing issue of political
philosophy. The fact that we have thereby come full circle to the work of Berlin and
Hayek certainly suggests an underlying continuity of concern in the subject.
4 Pluralism and Neutrality
Rawls responded to criticisms of A Theory of Justice in a series of articles which culmi-
nated in his second major book, Political Liberalism (Rawls 1993). Its concern is with
the good order of a modern liberal democratic society. It does not defend a particular
theory of distributive justice, although Rawls continues to think that his own two prin-
ciples are suitable to regulate the basic structure of a well-ordered society. Whereas A
Theory of Justice started from Humean circumstances of justice, Political Liberalism starts
from the modern condition of value pluralism, namely that people entertain, for good
reasons, different and probably incompatible comprehensive philosophical doctrines.
Political Liberalism also repudiates the main arguments for the two principles of A Theory
of Justice. The contractarian argument was only a ‘device of representation’; no meta-
physical understanding of the self was or needs to be presupposed; and it is a mistake
to think agreement to the principles of justice could rely on deep-lying acceptance of a
broader moral doctrine of fairness.
Nevertheless, Political Liberalism argues that a political conception of justice could
command the support of an ‘overlapping consensus’ of various comprehensive doc-
trines in the society whose basic structure it regulates. A just society of free and equal
citizens could thus endure over time even though deeply divided in its basic religious,
moral and philosophical outlooks.
Some critics fear that Rawls’s theory only represents a pragmatic accommodation to
the possibilities of consensus within particular societies. Yet Political Liberalism does not
simply defend the politics of compromise and modus vivendi. It is an attempt to specify
the terms of co-operation that can withstand the test of public and rational negotia-
tion, commanding the freely given assent of equals. The worry should rather be that
Rawls can no longer show why the well-ordered society must be one regulated by
certain principles of justice, and vice versa. Rawls’s idea of an ‘overlapping consensus’
may be so indeterminate as to yield very many different outcomes. Or it may deliver
a specific outcome by prescriptively stipulating what shall and shall not count as
reasonable doctrines. Moreover, it may miss the real nature of negotiating political
difference in its representation of views and disagreement as ‘reasonable’.
The last part of A Theory of Justice sketched an account of the development in the
citizens of the just society of the requisite sense of justice. It offered a theory of moral
education through the relationships of association within that society. In Political
Liberalism Rawls forswears this kind of account. But to that extent his theory of political justice is ‘thinner’ and less plausible. Indeed there is a general problem with the
basic liberal approach. Rawls’s conviction remains that, short of an unacceptable
coerced unanimity, a domain of public agreement can be secured and clearly separated
from the sphere of the heterogeneous private good. In this he remains faithful to the
liberal vision of a society in which all equally and freely pursue their different lives
constrained by public terms of fair coexistence. Yet such a vision may be impaled on
the horns of a dilemma. Either the terms of public agreement are so insubstantial as
to yield only an empty form of political community. Or they are made substantial only
by violating the requirement that disputed understandings of the good be restricted to
the private sphere. A liberal political order may demand a culture of and education in
liberal values. The good of society may not be so readily separable from the good of its
citizens’ private lives.
Finally it may be held unreasonable to ask people to exclude their deeply held com-
prehensive views from the terms of their political activity. Citizens are required to frame
and articulate their demands in the language of a public reason which is political and
independent of any particular comprehensive view. In a frequently invoked metaphor
individuals must come ‘naked to the public square’ by setting aside views that are deeply
important to and perhaps even constitutive of their selves. Moreover a liberal political
culture is cut off from the substantive comprehensive views that may historically have
nourished it, leaving it deracinated and its citizens alienated.
Rawls’s starting point in Political Liberalism, as it is now for many other political
philosophers, is value pluralism. This should not be confused with relativism or scepti-
cism about the good. Such pluralism is rather taken to be an inevitable result of the
conscientious exercise by distinct individuals of a similar capacity for reasoning. An
expectation of reasonable disagreement is an acknowledgement of an evident fact
about modern life and a rejoinder to a long Enlightenment tradition which holds reason
to generate convergence on the truth.
Pluralism must not merely be expected but also tolerated because value monism can
only be secured through coercive state interference with individual liberty of con-
science, and because pluralism need not spell disaster if it can still provide the basis of
harmonious social and political co-operation. The problem may be that the toleration
of value pluralism presupposes some foundational value – such as the autonomy whose
exercise by different individuals generates it. Yet if pluralism goes, as it were, all the way
down there is no reason to think that any value can be privileged and not be the subject
of reasonable disagreement between individuals.
Value pluralism is honoured, arguably, if and only if the state, in its laws and
policies, remains neutral between its citizens’ different conceptions of the good. This
doctrine of official neutrality on the question of the good is viewed by many as definitive
or constitutive of contemporary philosophical liberalism (Dworkin 1978; Ackerman
1980: 10–12). A distinction is normally drawn between neutrality with respect to the
justification or aim offered by government for its activity and with respect to its conse-
quences or outcome, most preferring to understand neutrality in the first sense. Thus,
going back to a much earlier liberal thinker, John Locke defended political intolerance
of Catholics (non-neutral in its consequences) not because Catholics were doctrinally
in error (non-neutral in its justification) but because Catholics, in owing allegiance to
the foreign jurisdiction of Rome, represented a danger to the good order of the state
(neutral in its justification) (Locke 1689).
The manner in which political neutrality can be achieved is represented either
negatively, as the bracketing out of whatever is the subject of disagreement, or positively,
as the operation of a shared public political reason. Neutrality will be defended by
appeal to the value of equality or that of individual autonomy. On the first a state does
not treat its citizens as equals if it favours one citizen’s views of the good over others.
On the second it is more important that individuals lead the lives they see as good rather
than be led to live the life that the state thinks of as good. In addition a doctrine of
neutrality will be seen as a necessary protection against the excessive, and dangerous,
exercise of state power over individuals.
Value pluralism, and the associated principle of neutrality, stand opposed to perfec-
tionism, the doctrine that there are specifiable human excellences such that some forms
of human life are superior in themselves to others. Rawls rejects perfectionism, but
some liberals have insisted that, nevertheless, his own account of human good is too
thin, and that we can make judgements about the relative worth of different forms of
existence. This need not be inconsistent with a commitment to egalitarianism and
tolerance (Haksar 1979; Galston 1980). Moreover, there is a suspicion that Rawlsian
liberalism is, at base, inconsistent. If it is strictly neutral then it cannot subscribe to
any normative understanding of individuals, that they should, for instance, strive to be
purposive, autonomous creatures. But such a view seems presupposed by Rawls. If,
on the other hand, liberalism does include some foundational moral judgements about
human beings, then it cannot reasonably claim objectivity for these judgements and
refuse it for judgements about the good life.
It also seems clear that a liberal democratic culture will flourish to the extent that
its citizens acquire and practise certain virtues, those for instance of tolerance, civility,
respect for others and a willingness to make sacrifices for the common good. Now
whether such virtues are learnt in childhood or acquired through habituation, a liberal society must surely decide to encourage those social forms which facilitate their acquisition.
Autonomy and Liberalism
In this context Joseph Raz (1986) is a revisionist liberal. He doubts whether liberty should
be the central value, and defends instead the primacy of autonomy. His understanding
of autonomy is that only a life chosen from among several moral options is autonomous.
Further, Raz believes that, while not all forms of
life are valuable, there may be
several incompatible ones that are. Autonomy requires a plurality of moral life choices,
and this in turn requires the creation and maintenance of social forms conducive
to autonomy. The government has a duty to preserve these forms but, crucially, may
do so in non-coercive ways. Subsidies and taxation can effectively render certain choices
attractive or unattractive relative to others. Thus, Raz bites the bullet, holding that a
liberal political culture should sustain its own core values, and not aspire to an
implausible ideal of neutrality. Liberalism can be tolerant of diversity while being interventionist and perfectionist.
In similar terms William Galston’s liberalism embraces and supports a set of distinctive
liberal purposes that guide liberal public policy and shape liberal justice. These require
the practice of, and a civic education in, liberal virtues, and the maintenance of a liberal
public culture. Liberalism has a thick enough theory of the good life to be able to rule
out certain practices and encourage others: ‘it is not the absence of an account of the
good that distinguishes liberalism from other forms of political theory and practice. It
is rather a special set of reasons for restricting the movement from the good to public
coercion’ (Galston 1980: 180). Liberalism is not neutral; rather its governance is not
morally costly in its use of coercive state power.
5 Critics of Liberalism: Communitarianism, Feminism,
and Analytical Marxism
There are three broad movements deserving consideration for their critiques of
liberalism: communitarianism, feminism and Marxism.
5.1 Communitarianism
The writers gathered under this title – Roberto Unger (1975), Michael Sandel (1982),
Michael Walzer (1983), Charles Taylor (1985), Alasdair MacIntyre (1981, 1988) and
Richard Rorty (1989, 1991) – are disparate and do not consciously subscribe to a
common manifesto. It is more accurate to speak of family resemblances than a single,
shared programme. In the respects in which they are all communitarian they offer not
so much an alternative political view to liberalism as a criticism of its presuppositions.
Communitarians invoke community in criticisms which are both normative and
descriptive, although the distinct kinds of criticism are not always carefully distin-
guished (Caney 1992; Taylor 1989).
The central descriptive criticism concerns the nature of the self or individual, and
charges liberalism with subscribing to an inadequately ‘thin’ understanding of the
‘self ’. The facts specifying the social and historical situation of each person constrain
the kinds of self-understanding she can reach, and the choices she will make. The
Rawlsian individual is not ‘embedded’ in any place or time; she is so emptied of substan-
tial, individuating features as to make it difficult to describe her as choosing a life. How
can a self-less person be said to have any conception of the good or make choices of ends?
This criticism may rest on a misunderstanding. Rawls’s concern is not to define ide-
alized choosers of the good so much as to specify the relevant considerations entering
into a jointly agreed determination of the public rules of fair co-operation. As he insists
in his later work, his theory of justice is not metaphysical but political and assumes no
particular understanding of the self. At the same time the criticism may be overstated.
To claim that individuals are wholly defined, and their identity completely constituted
by their membership of some community at some historical moment, is effectively to
deny them any kind of meaningful choice over their lives. It also seriously undermines
any claim they might make to be MORAL AGENTS (chapter 6). And to the extent that the
communitarian claim is qualified by phrases such as ‘to a large extent’ it is hard to see
what distinguishes the communitarian from the liberal.
The normative criticisms of communitarianism are threefold. First, the priority
which Rawls accords the virtue of justice is alleged to derive from an impoverished
understanding of political association. Rawls claims that justice is the first virtue of
social institutions, that it is required to deal with limited benevolence, moderate scarcity
and modern value pluralism, and that it is needed to protect individuals from being
sacrificed for the society’s greater good. According to communitarian critics, however,
justice is only the ideal of societies which do not display community. Communities
proper do not need to be just, and would not be communities if they felt such a need.
Michael Sandel (1982) characterizes justice as a remedial virtue, binding up in the best
way possible a second-best form of social co-operation.
The sense in which justice has primacy for Rawls has already been indicated, and
Sandel’s criticism appears misplaced. Resolution in the face of death is not less of a
human virtue for being redundant in immortal creatures. Sandel does not claim, as
have some Marxists for instance, that the circumstances of justice will disappear in the
future. He says only that some communities do not display them. But these – most
notably the family – cannot be the models for society as a whole. The family has those
characteristics which make its members benevolently disposed to one another –
closeness, and affectivity rooted in natural relationships – because it is not simply a
smaller kind of society but something quite different altogether. Societies cannot be
familial communities.
The second normative claim of communitarianism is that membership of a political
community is a good which liberalism neglects, ignores, or whose sense it cannot
successfully capture by its own terms. Political association is viewed by liberals and
libertarians as an INSTRUMENTAL GOOD (pp. 216–18). It realizes the compromises
necessary for individuals to derive mutual advantage from co-operating. There is no
other sense of being together as citizens than is required to bring about this end. Rawls
and Nozick both talk of community, or even communities, growing up within this
framework. But these seem inessential and somehow added onto the basic terms of
political association.
This would not be so serious a criticism were it not for the further claim that a liberal
theory of justice needs more. Sandel charges that Rawls defends a difference principle
without foundations. Acceptance of this principle requires a willingness to see one’s
natural assets as communally owned, and yet Rawls’s theory allows for no community
which could lay such a claim to ownership. On Rawls’s account I see my talents as assets from which others may derive benefits, yet there is no reason why I should see myself as joined
to these others. Here we meet again the problem for liberalism of showing how a just
society can also be well-ordered and whether political legitimacy must rest upon a sense
of community which liberalism is incapable of supplying.
The third normative communitarian claim is that what is good and just for individ-
uals is defined by the community to which they belong. Alasdair MacIntyre (1981,
1988) is associated with the view that an individual’s good is inseparable from the
ROLE (p. 388), office or social position he fills. Michael Walzer (1983) makes the further
claim that, since the goods to be distributed have particular SOCIAL MEANINGS
(pp. 384–90), the justice of their distribution is relative to these meanings. As these
meanings have particular social and historical location, so do the associated principles
of justice.
However, the besetting danger of any appeal to the existence of distinct under-
standings of the good and the just is RELATIVISM (pp. 395–7). This society is just by its
own lights; that society is just by its own lights. And never shall the two be compared.
Further, shared understandings determine not only what is just but what is unjust.
Yet these particular judgements may violate what, plausibly, ought to be UNIVERSAL
MORAL STANDARDS (pp. 733–6). Imagine a society by whose understanding of the
master–slave relationship 20 lashes a day are fair and sufficient. If a master adminis-
ters 25 he is unjust and if only 15 he behaves generously. But surely any beating is
wrong, and any ‘rules’ of slavery are unjust, whatever the members of the society
believe.
With regard to the communitarians’ normative claims the liberal will respond that
liberalism does not dispute the value of community nor need it neglect its significance.
The liberal will press the communitarian to clarify precisely what sort of political pro-
posals are distinctive of communitarians, worrying that, where they are spelled out,
such proposals suggest an illiberal concession to whatever happens to be the shared
understandings and practices of a particular time and place (Gutmann 1985). At best
communitarianism prompts liberalism to demonstrate how its constitutive values are
consistent with the maintenance and reproduction of a good order which, arguably,
needs a shared sense of community.
5.2 Feminism
FEMINISM (chapter 20) offers two quite distinct kinds of criticism of philosophical
liberalism. The first concentrates on the silence of liberal political theory and, being
consciously ad hominem, of its male theorists about women’s place in a just society.
More particularly this silence is compounded by assumptions about what this place
actually is and should remain. A formal commitment to the equality of all is gainsaid
by an endorsement of patriarchalism, which is sometimes quite explicit. Patriarchalism
here means the doctrine of the subordination of women to male power.
The Public and the Private
The crucial assumption underpinning liberal patriarchalism is that to the distinction
between public and private spheres of activity corresponds a difference of nature
between men and women, and the roles for which these natures best suit them. The
woman is confined to the family where her biology equips her to bear, rear and care.
The woman is thus doubly oppressed: excluded from the public, political sphere where
the liberal principle of equality operates, and the inferior of her male provider within the
private household.
In terms of oppression related to the distinction between public and private spheres,
Susan Moller Okin (1989) accuses Rawls of perpetuating liberal patriarchalism. He
never broaches the issue of sexual justice, yet assumes both the continued existence of
the institution of the family, and, more pertinently, the traditional sexual division of
labour within this institution. Okin can reasonably claim that a family whose structure
is deeply unjust cannot, as Rawls expects, be the appropriate institution for acquiring
a sense of justice. Okin’s critique of the family is nevertheless not a radical one. She
merely insists that Rawlsian principles of justice be universal in scope and extend to all
institutions, including the family.
The second line of feminist criticism concerns the alleged maleness of a preoccupa-
tion with justice and rights. Appeal is made to a distinctive female ethics which empha-
sizes attachment, responsibility, context and particularity as opposed to independence,
rights, abstraction and universality. The idea that women speak in another moral ‘voice’
to that of men is due principally to the work of Carol Gilligan (1982) in the psychol-
ogy of moral development. But the contrast between an ethics of care and one of justice
now enjoys wide currency in moral and political philosophy. Gilligan herself does not
see these moral outlooks as mutually exclusive; nor does she see each of them as nec-
essarily associated with one particular gender. Indeed she seems to favour an account
of moral development which emphasizes the socializing role of parents, whatever their
gender.
From the standpoint of social and political philosophy this particular line of criti-
cism bites only if the prevailing liberal accounts of the good society may be judged
defective for omitting mention of care. It begs too many questions to suggest that justice
applies in the public sphere, and care in the private. It would also grossly over-simplify
to suggest that social co-operation could be governed either by rules of justice or by an
ethos of care. Indeed there are reasons to agree with the Rawlsian view that justice has
primacy. Care alone cannot determine who should be in receipt of what goods. And
where care fails or falls short of what is desired, justice specifies what can be legitimately
expected of others. Nevertheless, the opposition between care and justice highlights –
as does the communitarian critique – the extent to which a society governed solely by
respect for rights and a ‘sense of justice’ may not generate any real sense of relatedness
and interdependence among its members.
5.3 Analytical Marxism
As the title implies, analytical Marxists have displayed the virtues of anglophone philosophy in general: argumentative rigour, scrupulous attention to the text,
and careful conceptual analysis. Whether such work is Marxist is debatable. It
certainly eschews wholesale subscription to every aspect of Marx’s work, and prefers
instead separately to appraise the distinct claims that may be said to constitute this
theory. It has rejected a certain understanding of history, TELEOLOGICAL (pp. 319–20)
and HEGELIAN (chapter 33) in origin, and, in the case of Jon Elster (1985) at least,
embraced a robust METHODOLOGICAL INDIVIDUALISM (pp. 397–9) that is most uncommon
among Marxists. It has been willing to jettison what many would see as central Marxist
claims, for instance the LABOUR THEORY OF VALUE (chapter 34), and has increasingly
conceded the attractiveness of non-Marxian theories in political philosophy.
However, analytical Marxism has insisted upon the specific character of capitalism,
and sought to explicate the manner both in which such a society is fundamentally
flawed, and in which socialism represents a feasible and desirable alternative. Yet there
are deep problems with such an approach. In the first instance it needs to be shown
that Marx had a theory of justice by which CAPITALISM (pp. 755–6) can be judged
unjust. This task confronts a familiar paradox. Marx employs a language with appar-
ent moral import, yet explicitly disdains moral criticism and theory (Lukes 1985). The
paradox is not easy to resolve, and any resolution may simply have to accommodate
the fact that Marx was not consistent in his outlook (Geras 1985).
Second, if capitalism is unjust then for Marxists that injustice must inhere in some
significant feature of the relationship between capitalist and proletarian. Two candi-
dates suggest themselves. The first is that the capitalist exploits the proletarian. Yet on
any plausible account of exploitation that can be offered it is arguable that some form
of exploitation may characterize every society, including socialism. Moreover, the
difference in ownership of resources which defines the capitalist relation need not be
unfair. It may have arisen in a perfectly legitimate fashion, for instance through the
superior efforts, talents, assiduity and frugality of the capitalist. The idea of a ‘cleanly
generated capitalism’ conforms to Robert Nozick’s prescription of a just distribution
that has arisen by just steps from an originally just situation.
The second candidate for explaining the injustice of
the capitalist relation is
coercion. Workers may be unfairly forced to work for their employers. The difficulty is
that for any one worker the alternatives to employment are not so stark and restrictive
as would be required to establish coercion. This is especially true in a modern
welfare society where it must also be acknowledged that avenues of escape from one’s
class do exist for the talented and hard-working.
Difficulties in appreciating the specific inequity of capitalism are compounded by
imprecision in the recommendations of socialist justice. Notoriously Marx said little
about the lineaments of the future society, and what he did say suggests utopianism in
the pejorative sense of this word. Talk of a society ‘beyond justice’ might imply the view
that the circumstances of justice will be transcended. This is unrealistic. Or it might
imply the irrelevance of any evaluative criteria to such a society. This is dangerously
naive. Marxists can engage with other contemporary political philosophers on the
grounds of equality, emphasizing the deep, structured inequalities of present capitalist
society and the manner in which these inequalities corrupt any formal equality of
civic rights. But if they are to be radical egalitarians they may differ little from some
liberals, or be beset by difficulties which derive from their commitment to principles,
such as that of self-ownership, which liberals do not share (Cohen 1990).
There is a final important point. Any theory which indicts the present society of
fundamental injustice and recommends a future perfect society needs an account of
the means of transforming the first into the second. Traditionally Marxism relied on
some combination of
the historical guarantee of revolutionary change and the
uniquely important role of the proletariat. Analytical Marxism is not teleologically
optimistic and has tended to reticence about proletarian activism. This is partly due to
its individualist presuppositions, which make it hard to appreciate the factors disposing
to and inhibiting COLLECTIVE AGENCY (pp. 397–9). It is also due to the need for an
account of why what is in the interests of the majority of society’s members coincides
with the morally desired emancipation of all. This is increasingly difficult, for there is no longer a group of individuals who satisfy the various requirements of that orthodoxy – being the exploited bulk of capitalist society with nothing to lose and everything to gain by overthrowing that society and its injustices.
Analytical Marxism has performed a signal service by bringing Marxism within the
fold of contemporary political philosophy. But it may have done so at the price of expos-
ing serious if not fatal shortcomings in Marxism. And to the extent that analytical
Marxists have become political philosophers they have arguably ceased to be Marxists.
6 Individuals and Communities
A standard charge against liberalism is that it is individualistic. Methodological indi-
vidualism is discussed elsewhere, and the various senses in which liberalism might be
said to neglect community were considered in the section on communitarianism. What
needs to be examined here is the significance political philosophy should accord
to groups or communities, and the relationship of
individuals to these collective
entities. For while each of us is an individual we are also social creatures, belonging to
particular tribes, cultures, religions, races and nations. To deny these facts would be
productive of an implausible theory and an impoverished political practice.
There are at least two ways in which group identity is significant. In the first instance
there is the question of how the state should treat the existence within its jurisdiction
of stable, enduring, well-defined groups with their own history, culture and way of life.
This is the problem of cultural pluralism. In the second instance there is the question
of where the boundaries of any state or jurisdiction should be drawn and what role
should be played in this context by nationhood. This is the problem of nationality and
nationalism.
6.1 Cultural pluralism
Although political philosophy has, from its inception with the Greeks, tended to assume
the cultural or ethnic homogeneity of the ‘people’ whose obligation a legitimate state
commands and to whom a set of principles of justice may apply, the fact is that all
modern societies comprise distinct stable groups whose members identify themselves –
and are identified by others – by reference to some combination of shared race, religion,
nationality, language, culture or history. How should the state respond?
It could insist on denying the fact of difference either by enforcing a ‘republican’
ideal wherein citizens have no allegiances or identities other than those which consti-
tute them as members of the polity, or by supporting assimilationist practices whereby
members of cultural minorities must acquire the identity of the dominant community.
Such measures of compulsory homogenization are widely perceived as unfair in
denying to individuals something of great value, namely the expression of their own
particular communal identity.
A culturally pluralist state, by contrast, honours the existence of plural identities by
measures which may range from underwriting the right of persons belonging to
minorities to enjoy their own culture, to ‘communalist’ measures which positively
protect and preserve the distinct groups recognized within society.
Liberals are unwilling to see the warrant for such measures in the existence of group
rights which do not straightforwardly reduce to the rights of the individual members
of the group in question. Rather they have argued that the protection of groups can be
defended in so far as doing so protects and advances the interests of individuals as
members of groups (Raz 1986: 207–9). The value of a culture is said to be that of cul-
tural membership, its value to the individuals who are members (Buchanan 1991). In
an original and arresting argument, Kymlicka (1989) rejects the idea of group rights
but commends a policy of actively seeking to preserve cultures. He does so by arguing
that our cultural membership is a good in so far as it provides the necessary context
from within which we are able to make and evaluate meaningful, rational, autonomous
choices of life. Charles Taylor (1992) has tried to defend a liberalism which might
permit the state to nourish and protect a particular culture, so long as it was also able
to secure the rights of those who do not share the dominant culture.
The major problem is that liberalism may be left with no other criterion for apprais-
ing different cultures than the extent to which they nurture the kind of individuals
favoured by liberals. Should a liberal society tolerate a minority culture which does not
respect autonomy, even if it supplies its members with an otherwise worthwhile life? Is
cultural pluralism a good thing only if the various cultures are all consistent with liberal
ideals of individuality? Moreover any account of cultural pluralism must acknowledge
the possible disadvantages of diversity. These may include the increased possibility of
social conflict, and the undermining of the shared sense of community which is
necessary for good political order.
6.2 Nationalism and nationality
Political philosophy has, until recently, remained largely silent on the questions of
nationalism or nationality; or it has dismissed such questions as somehow unworthy
of philosophical consideration (Pettit and Goodin 1993: 7). Yet there is no more salient
fact about the contemporary world, nor any more potent source of conflict and
violence, than the existence, actual and disputed, of nations. It is also noteworthy that
political philosophy has assumed the existence of distinct nation-states and concerned
itself with the application to such entities of principles of justice, equality and rights
(Canovan 1996).
Although they are closely related terms, often used as synonyms, and frequently
hyphenated, ‘state’ and ‘nation’ have distinct meanings. A nation is a community of
people bound together over time by some significant, shared characteristic such as
language, race or culture. A state is an independent, sovereign political association of
people inhabiting a bounded territory. Nationalism, as a doctrine, makes the factual
claim that humanity is and always has been naturally divided into nations. The
normative claims of nationalism are that nations should be states and that states
should be nations.
Contemporary philosophical defenders of nationalism (Miller 1995; Tamir 1993)
have sought to answer the standard criticisms of nationalism’s claims, and to show that a
defence of nationalism is consistent with, indeed demanded by, a proper defence of
liberalism. To the charge that nations are fictive products of modernity, defenders
of nationalism will insist that modern nations do have premodern ethnic origins, and
that even false beliefs can have instrumental value if they sustain a valuable sense of
community.
States should be nations because a principle of nationality may supply the ‘fellow
feeling’ J. S. Mill thought necessary for good government (Mill 1975: ch. 16) or for the
acceptance of redistributive principles of justice (Miller 1995). Nations should be states
because democratic self-government is coextensive with national self-determination,
and because, membership of a nation being a constitutive element of individual identity and well-being, statehood is an essential means of protecting nationality.
To those who insist that the defence of nationality represents an unwarranted
partiality inconsistent with the cosmopolitanism which properly realizes the global
scope of any acceptable political philosophy, the friends of nationalism will insist that it
is necessary to make an accommodation with the realities of the modern world. Further, partiality is not morally unjustified so long as its expression remains constrained
by a recognition of the minimum owed to those who are not one’s own co-nationals.
However, critics of nationalism will reply that since there is no well-bounded
territory containing an ethnically homogeneous population the demands of nationalism
are productive of grossly unacceptable political results: at best discrimination against
national minorities within the state, at worst the forcible expulsion and slaughter of
these minorities. Moreover the number of potential, aspirant nation-states greatly
exceeds the number of possible viable states, and internal secession would continue to
the point of non-viability.
Sympathetic political philosophical treatments of nationality may thus stop short of
conceding the full demands of nationalism, arguing, for instance, that those ends
thought to require the coincidence of nation and state may be served by something less,
such as federalism (Buchanan 1991). Unsympathetic treatments of nationality may
insist that it is possible to construct a non-national principle of political community,
around for instance loyalty to the constitutive principles of a particular polity
(Habermas 1992). Both camps may seek to defuse the sting of nationalism by
encouraging a political disassociation of ‘nation’ and ‘state’. This may be managed
through the development of trans- and international institutions, the protection of
subnational ethnic plurality, and the provision for decentralized political representation.
Political philosophy cannot now ignore the brute facts of nationality. A proper
acknowledgement of these facts should consist in a recognition of what can be changed
and what cannot, and, in consequence, of the fact that whatever value national
identity has may be secured only at the expense of inseparable disadvantages.
7 Political Philosophy and Politics
This last section considers what political philosophy is and what politics is. First, then,
what is it thought that a political theory should do? I do not mean by this the question
of whether a theory should be about justice, the state, the class struggle, or whatever.
I mean what is a theory taken to be doing when it answers a self-chosen question such
as ‘What is justice?’ There are four broad sorts of answer.
The first is that a theory should be built upon FOUNDATIONS (pp. 40–1) that are universally recognized and evident truths; its procedure of argumentation and reasoning should similarly be unexceptionable and acceptable to any rational person.
It is likely that these foundations will comprise some understanding of the individual
human being.
Both Rawls’s A Theory of Justice and Nozick’s Anarchy, State, and Utopia may be rep-
resented as political theory in this foundational sense. The problem with this approach
is that it risks being empty or contentious. Either the foundational understanding of | Blackwell |
the human being commands universal agreement in virtue of being emptied of real
substantive content, or is a recognizably interesting view but thereby partial and
controversial. A familiar criticism of Rawls is that his contracting individuals are not
those of any time and place, but historically and culturally located agents – Western,
liberal men.
A response to these problems is self-consciously to adopt a stance that is rooted in
one’s own culture and history. Borrowing Plato’s famous metaphor, Michael Walzer
contrasts a philosophical attitude that walks out of the cave and seeks the high ground
of universal objectivity with one that takes its stand in the cave and with its inhabi-
tants. This ‘way of doing philosophy is to interpret to one’s fellow citizens the world of
meanings that we share’ (Walzer 1983: xiv). The problems with this approach are
threefold. First, as has already been noted, it courts relativism: the good society is each
and every society that judges itself to be good by its own lights. Second, a society’s
practices, however unjust they may seem to the reasonable outsider, are not open to
criticism if, on the inside, that society lacks the shared meanings which could inform
such a critique. Third, the approach tends to conservatism. It implies that a society’s
meanings are defined by its existing practices. But there is no space from which to judge
these practices. Or, put another way, the meanings by which the practices could be
judged appear to float outside the society (Cohen 1986).
Rawls appears to have moved from his original foundationalism towards this second
understanding of political theory. In Political Liberalism (Rawls 1993) he views the
liberal conception of justice as expressing ideas implicit within the institutions and
public culture of constitutional democratic regimes. Yet this is not a simple shift from
universal to particular, from transcendental rules of justice to the particular virtues of
American constitutionalism. ‘Kantianism in one country’ is a fine and witty definition
of Rawls’s political philosophy. It is possible, and plausible, to believe that modern
democracy is an historical achievement which institutes general moral principles –
equality of respect for individual choices of life, and the public justifiability of agreed
rules of social co-operation. Actually existent institutions may do no more than
approximate to these ideals, but they nevertheless aspire to them.
At the same time Rawls would now insist that political philosophy is turned to when
a society’s shared understandings break down and come into conflict. Philosophy
resolves these conflicts by ascending to abstractions which are nevertheless also
somehow uncovered through fundamental ideas deep and implicit within the society’s
culture (ibid.: 44–6). This is somewhat mysterious. It may also betray a false optimism
that the plurality of fundamental beliefs which characterizes modern society is, at some
level, eliminable. On the other hand, if the implication is that we can reach outside the
terms of our present, public culture for values which validate it, then too many crucial
questions have been begged. This at least is the charge of Richard RORTY (pp. 783–4)
(1989, 1991). Whether or not democratic liberalism stands at the end of ideological
history is a question which anglophone philosophy is on the whole ill-equipped to
acknowledge or answer, not least because it lacks the means to theorize ideas of
modernity and historical progress. Philosophy in the English-speaking world is
notoriously incapable of ‘reflexive social understanding’; that is, being in a position to
interrogate itself about its relation to the society and history in which it is situated
(Williams 1980).
One way to open a space between presently shared meanings and preferred alterna-
tives is given by the third approach to political philosophy which may be termed CRITIQUE
(chapter 32). Marx famously remarked that science would be unnecessary if reality and
appearance coincided. His scientific concern was to disclose the real workings of
capitalism which could not be apparent to those who lived and worked under it.
Analogously the task of political philosophy may be to reveal what our shared mean-
ings do not and cannot say about our political world. The function of critique is to
display these gaps and thus the distance between actuality and the moral pretensions
generated by that actuality. To the extent that analytical Marxism shares many of the
premises and concepts of contemporary philosophical liberalism, it may be said to have
eschewed critique in favour of criticism. However, feminist political theory does aspire
to critique when it seeks to show how unspoken assumptions – about women’s nature,
the public–private divide, the family – explain why liberalism is both silent about, and
yet eloquent in its implicit endorsement of, women’s continued subordination.
The fourth and final approach to political philosophy is hostile to any misjudged
rationalist ambitions theory might have. It repudiates the idea that political association
must self-consciously realize any desired end or purposes. Instead it sees the tasks of
political theory as more modest and restricted. It is the articulation of the commonly
acknowledged rules of conduct which inform our actual practices. It salutes the his-
torical achievements of modern liberalism, not least the emergence of ‘individuality’
and ‘civility’; that is, law and civic order. Yet it refuses to see these eminently practical
achievements as conforming to universal, objective principles that may be laid bare.
Political theorizing should be faithful to the knowledge already presupposed in our
practice; it should ‘pursue the intimations’ of established traditions.
The author of this approach is Michael Oakeshott (1962, 1975). His sceptical and
nuanced conservatism does not figure large in most surveys of contemporary political
philosophy, perhaps because he speaks at a tangent to its main concerns. Yet his work
has been influential in a British conservatism, ably represented by Roger Scruton
(1984), which keeps its distance from both American philosophical liberalism and the
free-market anti-statism that often passes for contemporary right-wing thinking.
At the same time there is in such work a Hegelian sensitivity to history and practice
which importantly distinguishes it from the simple appeal to shared meanings which
characterizes the second approach.
Our second important question is, ‘What is politics?’ That is, what is the scope of
political activity, what distinguishes it from other forms of human activity, and what is
its importance? Here it is best to consider a number of oppositions. First, there is that
between what we could call instrumental and expressivist understandings of politics.
According to the instrumental understanding, politics is a means used by essentially
independent individuals to secure the agreements which are necessary if they are to
obtain the benefits of co-operation and avoid the costs of non-agreement. It is the
politics of bargaining, compromise, accommodation, and achieving a modus vivendi.
The political sphere is like a market-place in which individuals come to make their
separate deals and then return to live their lives. Politics has no further function than
is necessary to facilitate and protect the agreements made.
On the EXPRESSIVIST (pp. 384–90) understanding, political activity is itself valuable.
It is an important way in which human beings express themselves as social, co-
operative creatures enjoying an interdependent existence. This is the politics of
participation, community and republican citizenship. The political sphere is a forum in
which people come together as citizens, and a vital constituent of the full life led by all.
Politics is impoverished to the extent that it falls short of exhibiting this character.
On the whole liberals, and certainly libertarians, have favoured the former under-
standing. Their preference derives from according a primary importance to liberty and
believing that individual purposes will be various. Political activism, of the sort envis-
aged by expressivists, is unlikely to be spontaneously universal, and will be unaccept-
able if coerced. Perhaps in part for this reason liberals, despite a clear preference for democracy as a regime, have had little to say about how it should actually work and what
particular form it should take. Democracy tends to be seen as merely the formalized
process by which SOCIAL CHOICES (pp. 391–5) are generated from the rational prefer-
ences of individuals. Famous paradoxes and difficulties characterize the translation of
many individual choices into a single public choice.
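The most famous of these difficulties can be given a concrete, if schematic, form. The following Python sketch is purely illustrative (the voters and their rankings are invented): three individually rational preference orderings generate a cyclical majority preference, the Condorcet paradox, which Arrow’s impossibility theorem generalizes.

```python
# A schematic illustration of the Condorcet paradox; the voters and
# rankings are hypothetical. Each voter's own preference ordering is
# rational (transitive), yet the majority preference cycles.

from itertools import combinations

# Each tuple ranks options from most to least preferred.
voters = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, C over A, B over C. The pairwise verdicts form a
# cycle, so majority voting yields no consistent collective ordering.
```

Since the majority prefers A to B and B to C, yet also C to A, no single ‘public choice’ can consistently be read off from the individual choices.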
By contrast there is an important and influential deliberative model of democratic
politics. A key figure is Jürgen Habermas (1992), who has sought to demonstrate the
principled presuppositions of everyday communication, such as the speaker’s implicit
claims to be truthful, right and sincere. From these foundations he defends a view of
moral norms as those which would, and could, be agreed upon by the members of a
communicative community who recognize only the force of the better argument. The
relevance of such ideas to an ideal of deliberative or discourse politics whereby
individuals reach reasoned agreement on the collective good is evident. The problems
are also clear. If the model is intended to be a description of the actual political world
it is at best unrealistic, and arguably undervalues disagreement and contestation as
constitutive features of political activity. Politics may be ineliminably agonistic. If the
model is an ideal then it needs to be shown that it does not import and employ values,
such as equality, which cannot be found solely in the nature and presuppositions of
conversational speech.
The concern of socialists and feminists with democratic theory and practice is attrib-
utable in part to the expressivist ideal of political participation as a good (Pateman
1970). It is also due to a worry about reconciling the formal equality of political citi-
zenship with the various inequalities, of race, class, and gender, which characterize
civil society.
Related to but distinct from the instrumental–expressivist distinction is that between
public and private, already introduced in section 5. As this distinction is now under-
stood, the public comprises the political, legal and economic, the world in which indi-
viduals work, vote and are accountable to the rest of society for their actions. The
private is the personal and familial, the sphere of the household in which individuals
love, play and generally retreat from the public world.
To this distinction correspond differences of motivation, relatedness and loyalty.
The public world is cold, impersonal, governed by abstract rules, in which independent
and mutually disinterested individuals meet. The private is the emotionally warm haven
in which individuals are bound by particular relationships of affection, loyalty and
mutual interdependence.
Liberals have, on the whole, refused to see the private as political. Indeed the private
sphere is that with which the state has no business and into which it should not intrude.
This is plausible in so far as liberals have characterized private activity as self-
regarding in J. S. Mill’s sense; that is, whose harmful effects, if any, are confined to the
agent. Consensual sexual behaviour and procreative decisions, according to liberals, are
paradigmatically private in this regard.
It is to feminism that we owe a critique of the public–private divide. This insists first
that the scope and character of the private is determined publicly by law and policy. It is
not that the private domain pre-exists and limits that of the public. Rather the terms of
privacy are set within the public sphere. Indeed the very inauguration of the private, and
civil society, may be due to an unnoticed but deeply illegitimate ‘sexual contract’ in
which the male first subordinated the female to serve his sexual purposes (Pateman
1988).
Second, the private is political if
‘political’ extends to describe any structure or
relationship in which some individuals exercise significant power over others. The
family is the object of interest here. The traditional institution is marked by a familiar
sexual division of labour with the husband dominant over the wife. Any characteriza-
tion of familial relations as private amounts to a public endorsement of their continu-
ing inequity.
Even on their own principles liberals should be interested in the family. For
it can support or undermine broader social justice. This is not only in so far as the
family is, as noted earlier, a site of moral education, a place where a sense of justice is
learned. It is also that the family is the major means by which assets – property as well
as natural endowments – are transferred across generations. Differences between
familial situations remain a crucial obstacle to securing equality of opportunity
(Fishkin 1983).
The final opposition sets the political against the economic. Although the economic
is included within the public sphere, there is a tendency among liberals to be relatively
indifferent towards various economic arrangements. These, to simplify their view, are
to be preferred on grounds of efficiency not morality. In A Theory of Justice Rawls leaves
open the choice between a private property economy and socialism (Rawls 1972: 258).
In Political Liberalism he maintains that the question is ‘not settled at the level of
first principles of justice’, but depends on the contingencies of a country’s particular
institutions and historical circumstances (Rawls 1993: 338). But the economic is
political in three important regards.
First, liberal principles of justice apply to the distributive sphere, that is they deter-
mine who gets what goods once they are produced. But principles restricted to distrib-
ution leave out of account the productive sphere, concerning what is produced by
whom. Decisions may be taken as to what goods are produced and in what quantities.
It can sometimes seem as if the goods to be distributed in liberal theories of justice have
just dropped from heaven, quite explicitly in Bruce Ackerman’s (1980) account, where
he speaks of manna.
Second, there is a stronger claim associated with Marx that production determines
distribution; that is, that a distribution of goods to be consumed is a consequence of
how production itself is organized. If this determinist thesis is true then principles of
distributive justice are in an important sense beside the point.
Third, the economic sphere is one to which political considerations directly apply.
Work constitutes a significant part of an individual’s life. In itself it can be rewarding
or frustrating, self-realizing or alienating. It can be self-directed or performed under
conditions of subordination to another. Work is also instrumentally valuable in so far
as payment for employment determines one’s level of subsistence and the character of
one’s work affects one’s social status and self-esteem.
It is important then to recognize that employment – an individual’s right to it, and
the determination of its conditions – is a proper subject for political inquiry. Socialists
have attended to these issues. They have defended an extension of democracy to the
work place, and have been more generally concerned about the proper balance between
market and state control, private and public ownership.
8 Conclusion
Political philosophy is alive and well. It was not quite moribund in 1950, but the
post-1970 revival makes that earlier period appear so by comparison. The current dominance of philosophical
liberalism may irk those who feel that its approach pre-emptively biases consideration of
important matters and ignores crucial facts. But those who feel that way must
recognize that it is to liberals that we owe the revival of political philosophy and our
present ability to speak to the problems of our social and political existence. We may
not wish to speak in the language of liberalism. But then philosophers can no longer,
as Weldon once argued, be silent, and the onus is on those who dissent to perfect and
practise the alternative languages of political criticism.
Further Reading
Of the general surveys of social and political philosophical work, Kymlicka (1990) is the best.
It is comprehensive, incisive and judicious. Plant (1991) is well-informed and successfully
brings out many of the main concerns of contemporary political theory (though he
unaccountably ignores feminism). Pettit (1980) and Brown (1986) are also useful if more
narrowly concerned with the main theories of distributive justice. Philosophy and Public Affairs
remains the flagship journal of political philosophy in the present era. Goodin and Pettit (1993)
is a volume in the present series. It contains extended essays on various disciplinary
contributions to the subject, shorter entries on major ideologies, and short notes on special topics.
The contributors are distinguished practitioners of the subject, the writing is almost uniformly
excellent, and the whole text supplies an illuminating picture of post-Rawlsian political
philosophy.
The Oxford Political Theory series, edited by David Miller and Alan Ryan, contains excellent
books by distinguished and particularly well qualified contributors on particular topics within
contemporary political philosophy.
On Rawls the literature is voluminous. Daniels (1975) collects the best of the earlier critical
essays, and other standard commentaries are Barry (1973) and Wolff (1977). Kukathas and
Pettit (1990) compares ‘early’ and ‘late’ Rawls, but before publication of Political Liberalism.
Paul (1982) does for Nozick what Daniels does for Rawls, and Wolff (1991) offers an
unprejudiced but critical account of Nozick’s theory.
Mulhall and Swift (1996) provides a fine, clearly written review of the debate between
liberals and their communitarian critics, even if it struggles somewhat to find the common
themes in communitarianism. In addition to Gutman (1985), the following also supply useful
evaluations of communitarianism: Kymlicka (1988) and Buchanan (1989).
G. A. Cohen has worried at and about the legacy of Marx and its relationship to current
political philosophy with more acuity and assiduity than anyone. Cohen (1988) collects together
some of his best pieces.
Kittay and Meyers (1987) gives a good sense of Gilligan’s impact upon moral and political
thinking. Jaggar (1983) is a standard introduction to feminist political theory, while Grimshaw
(1986) is also clearly written and relevant.
An introduction to British conservatism is provided by Covell (1986), and Franco (1990) offers
a comprehensive, if cautious, account of his subject.
Kymlicka (1995) is a comprehensive collection of important articles on cultural and national
pluralism, while Beiner (1999) offers a fine collection of pieces by leading theorists on the topic
of nationalism.
Chambers (1996) is a very good evaluation of the ideal of Habermasian deliberative
democracy within the context of Rawlsian political philosophy.
References
Ackerman, B. 1980: Social Justice in the Liberal State. New Haven, CT: Yale University Press.
Barry, B. 1973: The Liberal Theory of Justice. Oxford: Clarendon Press.
Beiner, R. (ed.) 1999: Theorizing Nationalism. Albany: State University of New York Press.
Berlin, I. 1958: Two Concepts of Liberty. In Four Essays on Liberty. Oxford: Clarendon Press.
Brown, A. 1986: Modern Political Philosophy. Harmondsworth: Penguin Books.
Buchanan, A. 1989: Assessing the Communitarian Critique of Liberalism. Ethics, 99 (July),
852–82.
—— 1991: Secession: The Morality of Political Divorce from Fort Sumter to Lithuania and Quebec.
Boulder, CO: Westview Press.
Caney, S. 1992: Liberalism and Communitarianism: A Misconceived Debate. Political Studies, 40,
273–89.
Canovan, M. 1996: Nationhood and Political Theory. Cheltenham: Edward Elgar.
Chambers, S. 1996: Reasonable Democracy: Jürgen Habermas and the Politics of Democracy. Ithaca,
NY: Cornell University Press.
Cohen, G. A. 1978: Karl Marx’s Theory of History: A Defence. Oxford: Oxford University Press.
—— 1988: History, Labour, and Freedom: Themes from Marx. Oxford: Clarendon Press.
—— 1990: Marxism and Contemporary Political Philosophy, or: Why Nozick Exercises some
Marxists more than he does any Egalitarian Liberals. Canadian Journal of Philosophy, supple-
mentary volume 16, 363–87.
Cohen, J. 1986: Review of Walzer, Spheres of Justice. Journal of Philosophy, 83: 8, 457–68.
Covell, C. 1986: The Redefinition of Conservatism. Basingstoke: Macmillan.
Daniels, N. (ed.) 1975: Reading Rawls. Oxford: Blackwell.
Dworkin, R. 1978: Liberalism. In S. Hampshire (ed.) Public and Private Morality. Cambridge:
Cambridge University Press.
—— 1981: What is Equality? Part 1: Equality of Welfare and Part 2: Equality of Resources.
Philosophy and Public Affairs, 10: 3, 185–246, and 10: 4, 283–345.
Elster, J. 1985: Making Sense of Marx. Cambridge: Cambridge University Press.
Fishkin, J. S. 1983: Justice, Equal Opportunity, and the Family. New Haven, CT: Yale University
Press.
Franco, P. 1990: The Political Philosophy of Michael Oakeshott. New Haven, CT: Yale University
Press.
Galston, W. A. 1980: Justice and the Human Good. Chicago: University of Chicago Press.
Geras, N. 1985: The Controversy about Marx and Justice. New Left Review, 150, 47–85.
Gilligan, C. 1982: In a Different Voice. Cambridge, MA: Harvard University Press.
Goodin, R. E. and Pettit, P. (eds) 1993: A Companion to Contemporary Political Philosophy. Oxford:
Blackwell.
Gray, J. 1984: Hayek on Liberty. Oxford: Blackwell.
Grimshaw, J. 1986: Feminist Philosophers. Brighton: Wheatsheaf.
Gutman, A. 1985: Communitarian Critics of Liberalism. Philosophy and Public Affairs, 14: 3,
308–22.
Habermas, J. 1992: Citizenship and National Identity: Some Reflections on the Future of Europe.
Praxis International, 12, 1–33.
—— 1996: Moral Consciousness and Communicative Action (translated by Christian Lenhardt and
Shierry Weber Nicholsen). Cambridge, MA: MIT Press.
Haksar, V. 1979: Equality, Liberty and Perfectionism. Oxford: Oxford University Press.
Jaggar, A. M. 1983: Feminist Politics and Human Nature. Totowa, NJ: Rowman and Allanheld.
Kittay, E. F. and Meyers, D. T. (eds) 1987: Women and Moral Theory. Savage, MD: Rowman and
Littlefield.
Kocis, R. A. 1980: Reason, Development, and the Conflict of Human Ends: Sir Isaiah Berlin’s
Vision of Politics. American Political Science Review, 74: 1, 38–52.
Kukathas, C. and Pettit, P. 1990: Rawls: A Theory of Justice and its Critics. Cambridge: Polity
Press.
Kymlicka, W. 1988: Liberalism and Communitarianism. Canadian Journal of Philosophy, 18, 2
(June), 181–204.
—— 1989: Liberalism, Community, and Culture. Oxford: Clarendon Press.
—— 1990: Contemporary Political Philosophy. Oxford: Clarendon Press.
—— (ed.) 1995: The Rights of Minority Cultures. Oxford: Oxford University Press.
Landesman, B. 1983: Egalitarianism. Canadian Journal of Philosophy, 13, 27–56.
Laslett, P. 1950: Introduction. In P. Laslett (ed.) Philosophy, Politics and Society. Oxford: Blackwell.
Locke, J. 1689: Epistola de Tolerantia. English translation, A Letter on Toleration (introduction and
notes by J. W. Gough). Oxford: Clarendon Press.
Lukes, S. 1985: Marxism and Morality. Oxford: Clarendon Press.
MacIntyre, A. 1981: After Virtue. London: Duckworth.
—— 1988: Whose Justice? Whose Rationality? London: Duckworth.
Mill, J. S. 1975 [1861]: Considerations on Representative Government. In Three Essays. Oxford:
Oxford University Press.
Miller, D. 1995: On Nationality. Oxford: Clarendon Press.
Mulhall, S. and Swift, A. 1996: Liberals and Communitarians, 2nd edn. Oxford: Blackwell.
Nagel, T. 1991: Equality and Partiality. Oxford: Oxford University Press.
Nozick, R. 1974: Anarchy, State and Utopia. Oxford: Blackwell.
Oakeshott, M. 1962: Rationalism in Politics. London: Methuen.
—— 1975: On Human Conduct. Oxford: Clarendon Press.
Okin, S. M. 1989: Justice, Gender, and the Family. New York: Basic Books.
Pateman, C. 1970: Participation and Democratic Theory. Cambridge: Cambridge University Press.
—— 1988: The Sexual Contract. Cambridge: Polity Press.
Paul, J. (ed.) 1982: Reading Nozick. Oxford: Blackwell.
Pettit, P. 1980: Judging Justice. London: Routledge and Kegan Paul.
Pettit, P. and Goodin, R. (eds) 1993: A Companion to Contemporary Political Philosophy. Oxford:
Blackwell.
Plant, R. 1991: Modern Political Thought. Oxford: Blackwell.
Popper, K. 1945: The Open Society and its Enemies, 2 vols. Vol. 1: Plato. Vol. 2: Hegel and Marx.
London: Routledge and Kegan Paul.
Rawls, J. 1972: A Theory of Justice. Oxford: Oxford University Press.
—— 1993: Political Liberalism. New York: Columbia University Press.
Raz, J. 1986: The Morality of Freedom. Oxford: Clarendon Press.
Rorty, R. 1989: Contingency, Irony and Solidarity. Cambridge: Cambridge University Press.
—— 1991: Objectivity, Relativism and Truth. Cambridge: Cambridge University Press.
Sandel, M. 1982: Liberalism and the Limits of Justice. Cambridge: Cambridge University Press.
Scruton, R. 1984: The Meaning of Conservatism, 2nd edn. Basingstoke: Macmillan.
Sen, A. 1992: Inequality Re-examined. Oxford: Clarendon Press.
Tamir, Y. 1993: Liberal Nationalism. Princeton, NJ: Princeton University Press.
Taylor, C. 1985: Philosophy and the Human Sciences: Philosophical Papers, Vol. 2. Cambridge:
Cambridge University Press.
—— 1989: Cross-Purposes: The Liberal Communitarian Debate. In N. Rosenblum (ed.)
Liberalism and the Moral Life. Cambridge, MA: Harvard University Press.
—— 1992: Multiculturalism and The Politics of Recognition (commentary by A. Gutman, edited by
S. C. Rockefeller, M. Walzer and S. Wolf). Princeton, NJ: Princeton University Press.
Unger, R. M. 1975: Knowledge and Politics. New York: Free Press.
Walzer, M. 1983: Spheres of Justice. New York: Basic Books.
Weldon, T. D. 1953: The Vocabulary of Politics. Harmondsworth: Penguin Books.
Williams, B. 1980: Political Philosophy and the Analytical Tradition. In M. Richter (ed.) Political
Theory and Political Education. Princeton, NJ: Princeton University Press.
Wolff, J. 1991: Robert Nozick. Cambridge: Polity Press.
Wolff, R. P. 1977: Understanding Rawls. Princeton, NJ: Princeton University Press.
Discussion Questions
1 Are equality and liberty incompatible ideals?
2 Is value pluralism an ineliminable feature of modern society?
3 Are there any feasible ways in which the circumstances of justice could be transcended?
4 How egalitarian is the ‘difference principle’?
5 What roles does the contractarian argument play in Rawls’s theory of justice?
6 Should a theory of justice be ‘metaphysical’ or ‘political’?
7 Is the distribution of natural assets morally arbitrary?
8 Are rights ‘side constraints’ on actions?
9 Can we be said to own ourselves?
10 Is justice a matter of legitimate acquisition of goods, whatever the pattern of their
distribution?
11 What should there be equality of?
12 Should autonomy rather than liberty be the central value of liberalism?
13 Is the idea of a ‘liberal community’ a contradiction in terms?
14 Does liberalism depend on an unacceptable conception of the individual?
15 Is a caring society morally preferable to a just one? | Blackwell |
16 Can the family ever be a force for justice?
17 What for Marxists is wrong with capitalism?
18 Is ‘analytical Marxism’ Marxism at all?
19 Can political philosophy go beyond what we happen here and now to believe and
value?
20 Is the human being a political animal?
21 Is anything really private in the sense of being beyond the public gaze and public
concern?
22 Is the ideal of official neutrality on the question of the good life either feasible or
desirable?
23 Why should minority cultures be protected or preserved?
24 Does justice lie beyond, between or within nations?
25 What inequalities in our material circumstances are permissible?
26 Is liberal nationalism an oxymoron?
27 Are we all liberals now?
9
Philosophy of Science
DAVID PAPINEAU
The philosophy of science can usefully be divided into two broad areas. On the one hand
is the epistemology of science, which deals with issues relating to the justification of
claims to scientific knowledge. Philosophers working in this area investigate such ques-
tions as whether science ever uncovers permanent truths, whether objective decisions
between competing theories are possible and whether the results of experiment are
clouded by prior theoretical expectations. On the other hand are topics in the meta-
physics of science, topics relating to philosophically puzzling features of the natural
world described by science. Here philosophers ask such questions as whether all events
are determined by prior causes, whether everything can be reduced to physics and
whether there are purposes in nature. You can think of the difference between the epis-
temologists and the metaphysicians of science in this way. The epistemologists wonder
whether we should believe what the scientists tell us. The metaphysicians worry about
what the world is like, if the scientists are right. Readers will wish to consult chapters
on EPISTEMOLOGY (chapter 1), METAPHYSICS (chapter 2), PHILOSOPHY OF MATHEMAT-
ICS (chapter 11), PHILOSOPHY OF SOCIAL SCIENCE (chapter 12) and PRAGMATISM
(chapter 36).
1 The Epistemology of Science
1.1 The problem of induction
Much recent work in the epistemology of science is a response to the problem of
induction. Induction is the process whereby scientists decide, on the basis of various
observations or experiments, that some theory is true. At its simplest, chemists may
note, say, that on a number of occasions samples of sodium heated on a Bunsen
burner have glowed bright orange, and on this basis conclude that in general all heated
sodium will glow bright orange. In more complicated cases, scientists may move
from the results of a series of complex experiments to the conclusion that some funda-
mental physical principle is true. What all such inductive inferences have in common,
however, is that they start with particular premises about a finite number of past obser-
vations, yet end up with a general conclusion about how nature will always behave.
And this is where the problem lies. For it is unclear how any finite amount of infor-
mation about what has happened in the past can guarantee that a natural pattern will
continue for all time.
After all, what rules out the possibility that the course of nature may change, and
that the patterns we have observed so far turn out to be a poor guide to the future? Even
if all heated sodium has glowed orange up till now, who is to say it will not start glowing
blue sometime in the next century?
In this respect induction contrasts with deduction. In deductive inferences the
premises guarantee the conclusion. For example, if you know that Either this
substance is sodium or it is potassium, and then learn further that It is not sodium,
you can conclude with certainty that It is potassium. The truth of the premises
leaves no room for the conclusion to be anything but true. But in an inductive | Blackwell |
inference this does not hold. To take the simplest case, if you are told, for properties A
and B, that Each of the As observed so far has been B, this does not guarantee that All As,
including future ones, are Bs. It is perfectly possible that the former claim may be true,
but the latter false.
The problem of induction seems to pose a threat to all scientific knowledge. All
scientific discoveries worthy of the name take the form of general principles. Galileo’s
law of free fall says that ‘All bodies fall with constant acceleration’; Newton’s law of
gravitation says that ‘All bodies attract each other in proportion to their masses and
in inverse proportion to the square of the distance between them’; Avogadro’s law
says that ‘All gases at the same temperature and pressure contain the same number
of molecules per unit volume’; and so on. The problem of
induction calls the
authority of all these laws into question. For if our evidence is simply that these laws
have worked so far, then how can we be sure that they will not be disproved by future
occurrences?
1.2 Popper’s falsificationism
One influential response to the problem of
induction is due to Sir Karl Popper
(1902–94). In Popper’s (1959a, 1963, 1972) view, science does not rest on induction
in the first place. Popper denies that scientists start with observations, and then infer a
general theory. Rather, they first put forward a theory, as an initially uncorroborated
conjecture, and then compare its predictions with observations to see whether it stands
up to test. If such tests prove negative, then the theory is experimentally falsified, and
the scientists will seek some new alternative. If, on the other hand, the tests fit the
theory, then scientists will continue to uphold it – not as proven truth, admittedly, but
nevertheless as an undefeated conjecture.
If we look at science in this way, argues Popper, then we see that it does not need induc-
tion. According to Popper, the inferences which matter to science are refutations, which
take some failed prediction as the premise, and conclude that the theory behind that pre-
diction is false. These inferences are not inductive, but deductive. We see that some A is
not-B, and conclude that it is not the case that All As are Bs. There is no room here for the
premise to be true and the conclusion false. If we discover that some body falls with
increasing acceleration (say because it falls from a great height, and so is subject to a
greater gravitational force as it nears the earth), then we know for sure that all bodies do
not fall with constant acceleration. The point here is that it is much easier to disprove
theories than to prove them. A single contrary example suffices for a conclusive disproof,
but no number of supporting examples will constitute a conclusive proof.
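The logical asymmetry can be made vivid with a small computational sketch. This is not from the text: the data and the hypothesis are invented for illustration. Any number of confirming instances leaves a universal generalization unrefuted but unproven, while a single counter-example refutes it conclusively.

```python
# A minimal sketch of the asymmetry between proof and disproof.
# The observations and the hypothesis are invented for illustration.

def falsified(hypothesis, observations):
    # A universal claim 'all As are Bs' is refuted by a single counter-example.
    return any(not hypothesis(obs) for obs in observations)

def is_orange(colour):
    return colour == "orange"

observations = ["orange"] * 1000            # 1,000 confirming instances...
print(falsified(is_orange, observations))   # False: unrefuted, yet still unproven

observations.append("blue")                 # ...but one contrary instance suffices
print(falsified(is_orange, observations))   # True: conclusively refuted
```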
So, according to Popper, science is a sequence of conjectures and refutations. Scien-
tific theories are put forward as hypotheses, and they are replaced by new hypotheses
when they are falsified. However, if scientific theories are always conjectural in this way,
then what makes science better than astrology, or spirit worship, or any other form of
unwarranted superstition? A non-Popperian would answer this question by saying that
real science proves its claims on the basis of observational evidence, whereas supersti-
tion is nothing but guesswork. But on Popper’s account, even scientific theories are
guesswork – for they cannot be proved by the observations, but are themselves merely
undefeated conjectures.
Popper calls this the ‘problem of demarcation’: what is the difference between
science and other forms of belief? His answer is that science, unlike superstition, is
at least falsifiable, even if it is not provable (Popper 1959a: ch. 2). Scientific theories
are framed in precise terms, and so issue in definite predictions. For example, Newton’s
laws tell us exactly where certain planets will appear at certain times. And this | Blackwell |
means that if such predictions fail, we can be sure that the theory behind them is
false. By contrast, belief systems like astrology are irredeemably vague, in a way
which prevents their ever being shown definitely wrong. Astrology may predict that
Scorpios will prosper in their personal relationships on Thursdays, but when faced
with a Scorpio whose spouse walks out on a Thursday, defenders of astrology are
likely to respond that the end of the marriage was probably for the best, all things
considered. Because of this, nothing will ever force astrologists to admit their theory is
wrong. The theory is phrased in such imprecise terms that no actual observations can
possibly falsify it.
Popper himself uses the criterion of falsifiability to distinguish genuine science, not
just from traditional belief systems like astrology and spirit worship, but also from
Marxism, psychoanalysis and various other modern disciplines that he denigrates as
‘pseudo-sciences’. According to Popper, the central claims of these theories are as unfal-
sifiable as those of astrology. Marxists predict that proletarian revolutions will be suc-
cessful whenever capitalist regimes have been sufficiently weakened by their internal
contradictions. But when faced with unsuccessful proletarian revolutions, they simply
respond that the contradictions in those particular capitalist regimes have not yet weak-
ened them sufficiently. Similarly, psychoanalytic theorists will claim that all adult neu-
roses are due to childhood traumas, but when faced by troubled adults with apparently
undisturbed childhoods, they will say that those adults must nevertheless have under-
gone private psychological traumas when young. For Popper, such ploys are the
antithesis of scientific seriousness. Genuine scientists will say beforehand what obser-
vational discoveries would make them change their minds, and will abandon their
theories if these discoveries are made. But Marxists and psychoanalytic theorists frame
their theories in such a way, argues Popper, that no possible observations need ever
make them adjust their thinking.
1.3 The failings of falsificationism
At first sight Popper seems to offer an extremely attractive account of science. He
explains its superiority over other forms of belief, while at the same time apparently
freeing it from any problematic dependence on induction. Certainly his writings have
struck a chord within the scientific community. Popper is one of the few philosophers
ever to have become a Fellow of the Royal Society, an honour usually reserved for
eminent scientists.
In the philosophical world, however, Popper’s views are more controversial. This is
because many philosophers feel that his account of science signally fails to solve the
problem with which he begins, namely, the problem of induction (for example, see Ayer
1956: 71–5; Worrall 1989). The central objection to his position is that it only accounts
for negative scientific knowledge, as opposed to positive knowledge. Popper points out
that a single counter-example can show us that a scientific theory is wrong. But he says
nothing about what can show us that a scientific theory is right. Yet it is positive knowl-
edge of this latter kind that makes science important. We can cure diseases and send
people to the moon because we know that certain causes do always have certain results,
not because we know that they do not. Useful scientific knowledge comes in the form ‘All
As are Bs’, not ‘It’s false that all As are Bs’. Since Popper only accounts for the latter kind
of knowledge, he seems to leave out what is most interesting and important about science.
Popper’s usual answer to this objection is that he is concerned with the logic of pure
scientific research, not with practical questions about technological applications. Sci-
entific research requires only that we formulate falsifiable conjectures, and reject them | Blackwell |
if we discover counter-examples. The further question of whether technologists should
believe those conjectures, and rely on their predictions when, say, they administer some
drug or build a dam, Popper regards as an essentially practical issue, and as such not
part of the analysis of rational scientific practice.
But this will not do. After all, Popper claims to have solved the problem of induction.
But the problem of induction is essentially the problem of how we can base judgements
about the future on evidence about the past. In insisting that scientific theories are just
conjectures, and that therefore we have no rational basis for believing their predictions,
Popper is simply denying that we can make rational judgements about the future.
Consider these two predictions: (1) when I jump from this tenth-floor window I shall
crash painfully into the ground; (2) when I jump from the window I will float like a
feather to a gentle landing. Intuitively, it is more rational to believe (1), which assumes
that the future will be like the past, than (2), which does not. But Popper, since he rejects
induction, is committed to the view that past evidence does not make any beliefs about
the future more rational than any others, and therefore that believing (2) is no less
rational than believing (1).
Something has gone wrong. Of course believing (1) is more rational than believing
(2). In saying this, I do not want to deny that there is a problem of induction. Indeed it
is precisely because believing (1) is more rational than believing (2) that induction is
problematic. Everybody, Popper aside, can see that believing (1) is more rational than
believing (2). The problem is then to explain why believing (1) is more rational than
believing (2), in the face of the apparent invalidity of induction. So Popper’s denial of
the rational superiority of (1) over (2) is not so much a solution to the problem of induc-
tion, but simply a refusal to recognize the problem in the first place.
Even if it fails to deal with induction, Popper’s philosophy of science does have some
strengths as a description of pure scientific research. For it is certainly true that many
scientific theories start life as conjectures, in just the way Popper describes. When
Einstein’s general theory of relativity was first proposed, for example, very few scien-
tists actually believed it. Instead they regarded it as an interesting hypothesis, and were
curious to see whether it was true. At this initial stage of a theory’s life, Popper’s
recommendations make eminent sense. Obviously, if you are curious to see whether a
theory is true, the next step is to put it to the observational test. And for this purpose it
is important that the theory is framed in precise enough terms for scientists to work out
what it implies about the observable world – that is, in precise enough terms for it to be
falsifiable. And of course if the new theory does get falsified, then scientists will reject
it and seek some alternative, whereas if its predictions are borne out, then scientists
will continue to investigate it.
Where Popper’s philosophy of science goes wrong, however, is in holding that sci-
entific theories never progress beyond the level of conjecture. As I have just suggested,
theories are often mere conjectures when they are first put forward, and they may
remain conjectures as the initial evidence first comes in. But in many cases the accu-
mulation of evidence in favour of a theory will move it beyond the status of conjecture
to that of established truth. The general theory of relativity started life as a conjecture,
and many scientists still regarded it as hypothetical even after Sir Arthur Eddington’s
famous initial observations in 1919 of light apparently bending near the sun. But by
now this initial evidence has been supplemented with evidence in the form of gravita-
tional red-shifts, time-dilation and black holes, and it would be an eccentric scientist | Blackwell |
who nowadays regarded the general theory as less than firmly established.
Such examples can be multiplied. The heliocentric theory of the solar system, the
theory of evolution by natural selection and the theory of continental drift all started
life as intriguing conjectures, with little evidence to favour them over their competitors.
But in the period since they were first proposed these theories have all accumulated a
great wealth of supporting evidence. It is only those philosophers who have been
bemused by the problem of induction who view these theories as being no better than
initial hypotheses. Everybody else who is acquainted with the evidence has no doubt
that these theories are proven truths.
1.4 Bayesianism
If we insist, against Popper, that we are fully entitled to believe at least some scientific
theories on the basis of past evidence, then we are committed to finding some solution
to the problem of induction. One currently popular account of the legitimacy of induc-
tion is found within Bayesianism, named after Thomas Bayes (c.1701–61) (Horwich
1982; Howson and Urbach 1989).
Bayesians are philosophers who hold that our beliefs, including our beliefs in scien-
tific theories, come in degrees. Thus, for example, I can believe to degree 0.5 that it will
rain today, in the sense that I think there is a 50 per cent likelihood of rain today. Simi-
larly, I might attach a 0.1 degree of belief to the theory that the strong nuclear and
electro-weak forces are the same force – I think it unlikely, but allow that there is a one-
in-ten possibility it may turn out true.
As these examples indicate, Bayesians think of degrees of belief as the extent to
which you subjectively take something to be PROBABLE (pp. 167–8). Accordingly, they
argue that your degrees of belief ought to satisfy the axioms of the probability calcu-
lus. (See the box below for the Dutch Book Argument for this thesis.) It is important to
realize, however, that while Bayesians think of degrees of belief as probabilities in this
mathematical sense, they still think of them as subjective probabilities. In particular,
they allow that it can be perfectly rational for different people to attach different sub-
jective probabilities to the same proposition – you can believe that it will rain today to
degree 0.2, while I believe this to degree 0.5. What rationality does require, according
to the Bayesians, is only that if you have a subjective probability of 0.2 for rain, then
you must have one of 0.8 for its not raining, while if I have 0.5 for rain, then I must
have 0.5 for its not raining. That is, both of us must accord, in our different ways, with
the theorem of the probability calculus that Prob(p) = 1 - Prob(not-p).
At first sight, this element of subjectivity might seem to disqualify Bayesianism as a
possible basis for scientific rationality. If we are all free to attach whatever degrees of
belief we like to scientific theories, provided only that we are faithful to the structure of
the probability calculus, then what is to stop each of us from supporting different theo-
ries, depending only on individual fads or prejudices? But Bayesians have an answer;
namely, that it does not matter what prejudices you start with, as long as you revise
your degrees of belief in a rational way.
Bayesians derive their account of how to revise degrees of belief, as well as their
name, from Bayes’s theorem, originally proved by Thomas Bayes in a paper published
in 1763. Bayes’s theorem states:
Prob(H/E) = Prob(H) × Prob(E/H)/Prob(E).
The simple proof of this theorem is given in the box. But the philosophical significance
of the theorem is that it suggests a certain procedure for revising your degrees of belief
in response to new evidence. Suppose that H is some hypothesis, and E is some newly
discovered evidence. Then Bayesians argue that, when you discover E, you should adjust
your degree of belief in H in line with the right-hand side of the above equation: that | Blackwell |
is, you should increase it to the extent that you think E is likely given H, but unlikely
otherwise. In other words, if E is in itself very surprising (like light bending in the vicin-
ity of the sun) but at the same time just what you would expect given your theory H
(the general theory of relativity), then E should make you increase your degree of belief
in H a great deal. On the other hand, if E is no more likely given H than it would be on
any other theory, then observing E provides no extra support for H. The movement
of the tides, for example, is no great argument for general relativity, even though it
is predicted by it, since it is also predicted by the alternative Newtonian theory of
gravitation.
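A toy calculation may help to fix ideas. The figures below are invented rather than taken from the text, and Prob(E) is computed by the theorem of total probability so that the assignments remain coherent.

```python
# Bayes's theorem with invented figures. H: the general theory of relativity;
# E: light observed bending near the sun.
prob_H = 0.10             # prior degree of belief in H
prob_E_given_H = 0.95     # E is just what H predicts
prob_E_given_notH = 0.05  # E would be very surprising on any rival view

# Prob(E) via total probability, so the numbers stay coherent.
prob_E = prob_E_given_H * prob_H + prob_E_given_notH * (1 - prob_H)
print(round(prob_H * prob_E_given_H / prob_E, 3))   # ~0.679: a big boost for H

# By contrast, evidence no likelier on H than on its rivals (like the tides)
# leaves the posterior equal to the prior:
prob_E_given_notH = prob_E_given_H                  # 0.95 either way
prob_E = prob_E_given_H * prob_H + prob_E_given_notH * (1 - prob_H)
print(prob_H * prob_E_given_H / prob_E)             # 0.1: no extra support
```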
Note in particular that this strategy for updating degrees of belief in response to evi-
dence can be applied to inductive inferences. Consider the special case where H is some
universal generalization – all bodies fall with constant acceleration, say – and the evi-
dence E is that some particular falling body has been observed to accelerate constantly.
If this observation was something you did not expect at all, then Bayesianism tells you
that you should increase your degree of belief in Galileo’s law significantly, for it is just
what Galileo’s law predicts. Of course, once you have seen a number of such observa-
tions, and become reasonably convinced of Galileo’s law, then you will cease to find
new instances surprising, and to that extent will cease to increase your degree of belief
in the law. But that is as it should be. Once you are reasonably convinced of a law, then
there is indeed little point in gathering further supporting instances, and so it is to the
credit of Bayesianism that it explains this.
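This diminishing effect of further instances can be illustrated with a hedged numerical sketch. The likelihoods are pure stipulations: each confirming instance is assumed certain given the law, and assumed to have probability 0.5 otherwise.

```python
# Repeated updating on confirming instances of Galileo's law; all the
# likelihoods are stipulations for illustration only.
prob_H = 0.01   # a sceptical prior in 'all bodies fall with constant acceleration'
q = 0.5         # assumed chance of a confirming instance if the law is false

for n in range(1, 11):
    prob_E = 1.0 * prob_H + q * (1 - prob_H)   # the instance is certain given H
    prob_H = 1.0 * prob_H / prob_E             # Bayes's theorem
    print(n, round(prob_H, 4))
# The output climbs towards 1; once prob_H is high, prob_E is close to 1 as
# well, so further instances are unsurprising and yield ever smaller increases.
```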
The Bayesian account of how to revise degrees of belief seems to make good sense.
In addition, it promises a solution to the problem of induction, since it implies that
positive instances give us reason to believe scientific generalizations.
There are, however, problems facing this account. For a start, a number of philoso-
phers have queried whether Bayes’s theorem, which after all is little more than an arith-
metical truth, can constrain what degrees of belief we adopt in the future (see the box
below). And even if we put this relatively technical issue to one side, it is unclear how
far the Bayesian account really answers the worry raised above, that the subjectivity of
degrees of belief will allow different scientists to commit themselves arbitrarily to dif-
ferent theories. The Bayesian answer to this worry was that Bayes’s theorem will at least
constrain these different scientists to revise their degrees of belief in response to the evi-
dence in similar ways. But, even so, it still seems possible that the scientists will remain
on different tracks, if they start at different places. If two scientists are free to attach
different prior degrees of belief to Galileo’s law, and both update those degrees of belief
according to Bayes’s theorem when they learn the evidence, will they not still end up
with different posterior degrees of belief?
The standard answer to the objection is to appeal to convergence of opinion. The idea
is that, given enough evidence, everybody will eventually end up in the same place, even
if they have different starting-points. There are a number of theorems of probability
theory showing that, within limits, differences in initial probabilities will be ‘washed
out’, in the sense that sufficient evidence and Bayesian updating will lead to effectively
identical final degrees of belief. So in the end, argue Bayesians, it does not matter if you
start with a high or low degree of belief in Galileo’s law – for after 1,000 observations of
constantly falling bodies you will end up believing it to a degree close to 1 anyway.
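A simple simulation, again with invented priors and the stipulated likelihoods used above, illustrates this ‘washing out’ of initial differences.

```python
# 'Washing out' of priors: a sceptic and an enthusiast update on the same
# stream of confirming instances (likelihoods stipulated as before).
q = 0.5
for label, p in (("sceptic", 0.001), ("enthusiast", 0.9)):
    for _ in range(20):
        p = p / (p + (1 - p) * q)   # one Bayesian update on a confirming instance
    print(label, round(p, 6))
# Both finish with degrees of belief close to 1, despite very different starts.
```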
However, interesting as these results are, they do not satisfactorily answer the fun-
damental philosophical questions about inductive reasoning. For they do not work for | Blackwell |
all possible initial degrees of belief. Rather, they assume that the scientists at issue, while
differing among themselves, all draw their initial degrees of belief from a certain range.
While this range includes all the initial degrees of belief that seem at all intuitively plau-
sible, there are nevertheless other possible initial degrees of belief that are consistent
with the axioms of probability, but which will not lead to eventual convergence. So, for
example, the Bayesians do not in fact explain what is wrong with people who never end
up believing Galileo’s law because they are always convinced that the course of nature
is going to change tomorrow. Of course, Bayesians are right to regard such people as
irrational. But they do not explain why they are irrational. So they fail to show why all
thinkers must end up with the same attitude to scientific theories. And in particular
they fail to solve the problem of induction, since they do not show why all rational
thinkers must expect the future to be like the past.
Bayesianism
The Dutch Book Argument
The axioms of probability require that
(1) 0 ≤ Prob(P) ≤ 1, for any proposition P
(2) Prob(P) = 1, if P is a necessary truth
(3) Prob(P) = 0, if P is impossible
(4) Prob(P or Q) = Prob(P) + Prob(Q), if P and Q are mutually exclusive.
Bayesians appeal to the Dutch Book Argument to show why subjective degrees of belief
should conform to these axioms. Imagine that your degrees of belief did not so conform.
You believe proposition P to degree y, say, and yet do not believe not-P to degree 1 - y.
(You thus violate the conjunction of axioms (2) and (4), because P or not-P is a neces-
sary truth.) Then it will be possible for somebody to induce you to make bets on P and
not-P in such a way that you will lose whatever happens. A set of bets that guarantee
that you will lose whatever happens is called a ‘Dutch book’. The undesirability of such
a set of bets thus provides an argument that any rational person’s subjective degrees of
belief should satisfy the axioms of the probability calculus.
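As a hedged illustration of the argument, consider an agent whose degrees of belief in P and not-P sum to more than 1. The stakes below are invented, and the bets are the standard unit-payoff wagers of the textbook presentation, priced at the agent’s degrees of belief.

```python
# A toy Dutch book with invented stakes. You believe P to degree 0.6 and
# not-P to degree 0.6, violating Prob(P) + Prob(not-P) = 1, and you regard a
# bet paying 1 unit as fairly priced at your degree of belief in its winning.
belief_P, belief_notP = 0.6, 0.6
cost = belief_P + belief_notP        # you pay 1.2 for the pair of bets

for outcome in ("P true", "P false"):
    winnings = 1                     # exactly one of the two bets pays off
    print(outcome, round(winnings - cost, 2))   # -0.2 either way: a sure loss
# With coherent degrees of belief (summing to 1) no such sure-loss book exists.
```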
Bayes’s Theorem
The conditional probability of P given Q – Prob(P/Q) – is defined as Prob(P and
Q)/Prob(Q). Intuitively, Prob(P/Q) signifies the probability of P on the assumption that Q
is true. It immediately follows from this definition that
Prob(H/E) = Prob(H) × Prob(E/H)/Prob(E)
This is Bayes’s theorem. As you can see, it says that the conditional probability Prob(H/E)
of some hypothesis H given evidence E is greater than Prob(H) to the extent that E is
improbable in itself, but probable given H.
Bayesian Updating
Bayesians recommend that if you observe some evidence E, then you should revise your
degree of belief in H, and set your new Prob_t′(H) equal to your previous conditional degree
of belief in H given E, Prob_t(H/E), where t is the time before you learn E and t′ the time after.
Bayes’s theorem, applied to your subjective probabilities at t, then indicates that this
will increase your degree of belief in H to the extent that you previously thought E to be
subjectively improbable in itself, but subjectively probable given H.
This Bayesian recommendation, that you revise your degree of belief in H by setting it
equal to your old conditional degree of belief in H given E, should be distinguished from
Bayes’s theorem. Bayes’s theorem is a trivial consequence of the definition of conditional
probability, and constrains your degrees of belief at a given time. The Bayesian recom-
mendation, by contrast, specifies how your degrees of belief should change over time.
Bayes’s theorem is uncontentious, but it is a matter of active controversy whether there
is any satisfactory way of defending the Bayesian recommendation (Hacking 1967; Teller
1973).
1.5 Instrumentalism versus realism
At this stage let us leave the problem of induction for a while and turn to a different
difficulty facing scientific knowledge. Much of science consists of claims about unob- | Blackwell |
servable entities like viruses, radio waves, electrons and quarks. But if these entities are
unobservable, how are scientists supposed to have found out about them? If they
cannot see or touch them, does it not follow that their claims about them are at best
speculative guesses, rather than firm knowledge?
It is worth distinguishing this problem of unobservability from the problem of induc-
tion. Both problems can be viewed as difficulties facing theoretical knowledge in science.
But whereas the problem of induction arises because scientific theories make general
claims, the problem of unobservability is due to our lack of sensory access to the subject
matter of many scientific theories. (So the problem of
induction arises for general
claims even if they are not about unobservables, such as ‘All sodium burns bright
orange’. Conversely, the problem of unobservability arises for claims about unobserv-
ables even if they are not general, such as ‘One free electron is attached to this oil drop’.
In this section and the next, however, it will be convenient to use the term ‘theory’
specifically for claims about unobservables, rather than for general claims of any kind.)
There are two general lines of response to the problem of unobservability. On the
one hand are realists, who think that the problem can be solved. Realists argue that the
observable facts provide good indirect evidence for the existence of unobservable en-
tities, and so conclude that scientific theories can be regarded as accurate descriptions
of the unobservable world. On the other hand are instrumentalists, who hold that we
are in no position to make firm judgements about imperceptible mechanisms. Instru-
mentalists allow that theories about such mechanisms may be useful ‘instruments’ for
simplifying our calculations and generating predictions. But they argue that these the-
ories are no more true descriptions of the world than the ‘theory’ that all the matter in
a stone is concentrated at its centre of mass (which is also an extremely useful assump-
tion for doing certain calculations, but clearly false).
Earlier this century instrumentalists used to argue that we should not even interpret
theoretical claims literally, on the grounds that we cannot so much as meaningfully
talk about entities we have never directly experienced. But nowadays this kind of
semantic instrumentalism is out of favour. Contemporary instrumentalists allow that
scientists can meaningfully postulate, say, that matter is made of tiny atoms containing
nuclei orbited by electrons. But they then take a SCEPTICAL (pp. 45–56) attitude to such
postulates, saying that we have no entitlement to believe them (as opposed to using them
as an instrument for calculations).
An initial line of argument open to realism is to identify some feature of scientific
practice and then argue that instrumentalism is unable to account for it. One aspect of
scientific practice invoked in this connection has been the unification of different kinds
of theories in pursuit of a single ‘theory of everything’ (Friedman 1984); other features
of science appealed to by realists have included the use of theories to explain observable
phenomena (Boyd 1980), and the reliance on theories to make novel predictions
(Smart 1963). For, so the realist argues, these aspects of scientific practice only make
sense on the assumption that scientific theories are true descriptions of reality. After
all, says the realist, if theories are simply convenient calculating devices, then why
expect different theories to be unifiable into one consistent story? Unification is clearly
desirable if our theories all aim to contribute to the overall truth, but there seems no
parallel reason why a bunch of instruments should be unifiable into one big ‘instru-
ment of everything’. And similarly, the realist will argue, there seems no reason to
expect a mere calculating instrument, as opposed to a true description of an underly- | Blackwell |
ing reality, to yield a genuine explanation of some past occurrence, or a reliable pre-
diction of a future one.
However, this form of argument tends to be inconclusive. There are two possible lines
of response open to instrumentalists. They can offer an instrumentalist account of the
relevant feature of scientific practice. Alternatively, they can deny that this feature
really is part of scientific practice in the first place. As an example of the first response,
they could argue that the unification of science is motivated, not by the pursuit of one
underlying truth, but simply by the desirability of having a single all-purpose calculat-
ing instrument rather than a rag-bag of different instruments for different problems.
The second kind of response would be to deny that unification is essential to science to
start with. Thus Nancy Cartwright argues that science really is a rag-bag of different
instruments. She maintains that scientists faced with a given kind of problem will stan-
dardly deploy simplifying techniques and rules of thumb which owe nothing to general
theory, but which have shown themselves to deliver the right answer to the kind of
problem at hand (Cartwright 1983).
Similar responses can be made by instrumentalists to the arguments from explana-
tion and prediction. Instrumentalists can either retort that there is no reason why the
status of theories as calculating instruments should preclude them from giving rise to
predictions and explanations; or they can query whether scientific theories really do
add to our ability to predict and explain to start with. Not all these lines of response are
equally convincing. But between them they give instrumentalism plenty of room to
counter the initial realist challenge.
1.6 Theory, observation and incommensurability
A different line of argument against instrumentalism focuses on the distinction
between what is observable and what is not. This distinction is crucial to instrumen-
talism, in that instrumentalists argue that claims about observable phenomena are
unproblematic, but claims about unobservables are not. However, a number of writers
have queried this distinction, arguing that observation reports are not essentially dif-
ferent from claims about unobservables, since they too depend on theoretical assump-
tions about the underlying structure of reality. Norwood Hanson (1958) has argued,
for example, that scientists before and after Copernicus saw different things when they
looked at the Sun: whereas pre-Copernicans regarded the Earth as stationary and so
saw the Sun revolving round it, post-Copernican scientists saw the Sun as stationary
and the Earth as rotating. Similarly, Hanson (1963) argues that the photographic plate
which looks like a squiggly mess to a lay observer is seen as displaying a well-defined
electron–positron pair by an experienced particle physicist. Examples like these under-
mine the distinction between what is observable and what is not, since they show that
even judgements made in immediate response to sensory stimulation are influenced by
fallible theories about reality.
Figure 9.1 The Müller–Lyer illusion. Although all three lines are the same length, the top line,
with inward-pointing arrowheads, appears to be shorter, and the bottom line, with outward-
pointing arrowheads, appears to be longer, than the ‘neutral’ middle line.
Nor is the point restricted to recherché observations of astronomical bodies or sub-
atomic particles. Even immediate perceptual judgements about the colour, shape and
size of medium-sized physical objects can be shown to depend on theoretical assump-
tions implicit in our visual systems. Perhaps the best-known illustration is the
Müller–Lyer illusion (see figure 9.1), which shows how our visual system uses complex
assumptions about the normal causes of certain kinds of retinal patterns to draw con-
clusions about the geometry of physical figures. And analogous illusions can be used
to demonstrate the presence of other theoretical presuppositions in our visual and other | Blackwell |
sensory systems.
As I said, in the first instance the unclarity of the observable–unobservable distinc-
tion counts against instrumentalism rather than realism. After all, it is instrumental-
ism, not realism, which needs the distinction, since instrumentalism says that we should
be sceptical about unobservable claims, but not observable ones, whereas realism is
happy to regard both kinds of claims as belief-worthy, so does not mind if they cannot
be sharply distinguished.
However, there is another way of responding to doubts about the theory–observa-
tion distinction. For note that the arguments against the observable–unobservable
distinction do not in fact vindicate the realist belief-worthiness of claims about unob-
servables; rather, they attack realism from the bottom up, and undermine the belief-
worthiness of claims about observables, by showing that even observational claims
depend on fallible theoretical assumptions. Obviously, if there is no observable–unob-
servable distinction, then all scientific claims are in the same boat. But on reflection it
seems that the boat they all end up in is the instrumentalist boat of sceptical disbelief,
not the realist one of general faith in science.
A number of influential recent philosophers of science, most prominently T. S. Kuhn
(1962) and Paul Feyerabend (1976), have embraced this conclusion wholeheartedly,
and maintained that no judgements made within science, not even observational judge-
ments, can claim the authority of established truth. Rather, they argue, once scientists
have embraced a theory about the essential nature of their subject matter, such as geo-
centrism, or Newtonian dynamics, or the wave theory of light, they will interpret all
observational judgements in the light of that theory, and so will never be forced to
recognize the kind of negative observational evidence that might show them that their
theory is mistaken. Kuhn and Feyerabend independently lit on the term incommensu-
rable to express the view that there is no common yardstick, in the form of theory-
independent observation judgements, which can be used to decide objectively on the
worth of scientific theories. Instead, they argue, decisions on scientific theories are
never due to objective observational evidence, but are always relative to the presuppo-
sitions, interests and social milieux of the scientists involved.
Kuhn’s and Feyerabend’s blanket relativism has provoked much discussion among
philosophers of science, but won few whole-hearted converts. Much of the discussion
has focused on the status of observations. Most philosophers of science are prepared to
accept that all observational judgements in some sense presuppose some element of
theory. But many balk at the conclusion that observations therefore never have any
independent authority to decide scientific questions. After all, they point out, most
simple observations, such as that a pointer is adjacent to a mark on a dial, presuppose
at most a minimal amount of theory, about rigid bodies, say, and about basic local
geometry. Since such minimal theories are themselves rarely at issue in serious scien-
tific debates, this minimal amount of theory-dependence provides no reason why obser-
vations of pointer readings should not be used to settle scientific disputes. If a scientific
theory about the behaviour of gases, say, predicts that a pointer will be at a certain place
on a dial, and it is observed not to be, then this decides against the theory about gases.
It is not to the point to respond that, in taking the pointer reading at face value, we are
making assumptions about rigid bodies and local geometry. For nothing in the debate
about gases provides any reason to doubt these assumptions. And this of course is why
scientists take such pains to work out what their theories imply about things like pointer
readings – since observations of pointer readings do not depend on anything con- | Blackwell |
tentious, they will weigh with all sides in the scientific debate.
So, despite the arguments of Kuhn and Feyerabend, nearly all philosophers are real-
ists about pointer readings and similar observable phenomena. But this still leaves us
with the original disagreement between realism and instrumentalism about less directly
observable entities. For, even if claims about pointer readings are uncontroversially
belief-worthy, instrumentalists can still argue that theories about viruses, atoms and
gravitational waves are nothing more than useful fictions for making calculations.
The realist response, as I said, is that the observable facts provide good indirect evi-
dence for these theoretical entities, even if we cannot observe them directly. However,
there are two strong lines of argument that instrumentalists can use to cast doubt on
this suggestion. In the next two sections I shall discuss ‘the underdetermination of
theory by evidence’ and ‘the pessimistic meta-induction from past falsity’.
1.7 The underdetermination of theory by evidence
The argument from underdetermination asserts that, given any theory about unob-
servables that fits the observable facts, there will be other incompatible theories that fit
the same facts. And so, the argument concludes, we are never in a position to know
that any one of these theories is the truth.
Why should we accept that there is always more than one theory that fits any set
of observable facts? One popular argument for this conclusion stems from the
‘Duhem–Quine thesis’. According to this thesis, any particular scientific theory can
always be defended in the face of contrary observations by adjusting auxiliary hypoth-
eses. For example, when the Newtonian theory of gravitation was threatened by obser-
vations of anomalous movements by the planet Mercury, it could always be defended
by postulating a hitherto unobserved planet, say, or an inhomogeneous mass distribu-
tion in the Sun. This general strategy for defending theories against contrary evidence
seems to imply that the adherents of competing theories will always be able to main-
tain their respective positions in the face of any actual observational data.
Another argument for underdetermination starts, not with competing theories, but
with some given theory. Suppose that all the predictions of some particular theory are
accurate. We can construct a ‘de-Ockhamized’ version of this theory (reversing William
of Ockham’s ‘razor’ which prescribes that ‘entities are not to be multiplied beyond
necessity’), by postulating some unnecessarily complicated unobservable mechanism
which nevertheless yields a new theory with precisely the same observational conse-
quences as the original one.
Both of these lines of reasoning can be used to argue that more than one theory
about unobservables will always fit any given set of observational data. Does this make
realism about unobservables untenable? Many philosophers conclude that it does. But
this is too quick. For we should recognize that there is nothing in the arguments for
alternative underdetermined theories to show that these alternative theories will
always be equally well-supported by the data. What the arguments show is that different
theories will always be consistent with the data. But they do not rule out the possibility
that, among these alternative theories, one is vastly more plausible than the others, and
for that reason should be believed to be true. After all, ‘flat earthers’ can make their
view consistent with the evidence from geography, astronomy and satellite pho-
tographs, by constructing far-fetched stories about conspiracies to hide the truth, the
effects of empty space on cameras, and so on. But this does not show that we need take
their flat-earthism seriously. Similarly, even though Newtonian gravitational theory
can in principle be made consistent with all the contrary evidence, this is no reason not
to believe general relativity. Nor is our ability to ‘cook up’ a de-Ockhamized version of
general relativity a reason to stop believing the standard version unencumbered with
unnecessary entities.
Nevertheless, as I said, many contemporary philosophers of science do move directly
from the premise that different theories are consistent with the observational evidence
to the conclusion that none of them can be regarded as the truth. This is because many
of them address this issue from an essentially Popperian perspective. For if you follow
Popper in rejecting induction, then you will not believe that evidence ever provides posi-
tive support for any theory, except in the back-handed sense that the evidence can fail
to falsify it. Accordingly you will think that all theories that have not been falsified are
on a par, and in particular that any two theories that are both consistent with the
evidence are equally well-supported by it.
So the arguments for underdetermination do present a problem to Popperians, since
Popperians have no obvious basis for discriminating among different theories consis-
tent with the data. But, as I pointed out above, these need not worry those of us who
diverge from Popper in thinking unfalsified theories can be better or worse supported
by evidence, for we can simply respond to the underdetermination arguments by
observing that some underdetermined theories are better supported by the evidence
than others.
Now that we have returned to Popper, it is worth noting that the Duhem–Quine
argument also raises a more specific problem for Popperians. Recall that Popper’s
overall philosophy raised the ‘problem of demarcation’, the problem of how to distin-
guish science from other kinds of conjecture. Popper’s answer was that science, unlike
astrology, or Marxism and psychoanalytic theory, is falsifiable. But the Duhem–Quine
argument shows that even such eminently scientific theories as Newtonian physics are
not falsifiable in any straightforward sense, since they can always save themselves in
the face of failed predictions by adjusting auxiliary hypotheses.
Not only does this cast doubt on Popper’s dismissal of Marxism and psychoanalysis
as unscientific, but it seems to undermine his whole solution to the demarcation
problem. If such paradigmatic scientific theories as Newtonian physics are not falsifi-
able, then it can scarcely be falsifiability that distinguishes science from non-science.
Still, this is Popper’s problem, not ours (see Harding 1975). If we do not reject induc-
tion, then we do not have a problem of demarcation. For we can simply say that what
distinguishes successful scientific theories from non-science is that the observational
evidence gives us inductive reason to regard scientific theories as true.
The arguments in the latter part of this section have presupposed that a certain kind
of inductive argument is legitimate. The kind of inductive argument relevant to under-
determination is not simple ‘enumerative’ induction, from observed As being Bs to ‘All
As are Bs’, but rather inferences from any collection of observational data to the most
plausible theory about unobservables that is consistent with that data. But these are
species of the same genus; indeed, enumerative inductions can themselves be inter-
preted as treating ‘All As are Bs’ as the most plausible extrapolation consistent with the
observed As being Bs. My attitude to this more general category of inductive inferences
remains the same as my attitude to enumerative induction, which I outlined earlier. We
do not yet have an explanation of why inductive inferences are legitimate, and to that
extent we still face a problem of induction. But it is silly to try to solve that problem by
denying that inductive inferences are ever legitimate. And that is why the underdeter-
mination of theory by data does not constitute a good argument for instrumentalism.
For to assume that we are never entitled to believe a theory, if there are others
consistent with the same data, is simply to assume the illegitimacy of induction.
The Underdetermination of Theory by Observational Data (UTD)
There are two arguments for the UTD. The first is based on the Duhem–Quine thesis, orig-
inally formulated by the French philosopher and historian Pierre Duhem (1861–1916)
and later revived by the American logician W. V. O. Quine (b. 1908). Duhem (1951) and
Quine (1951) point out that a scientific theory T does not normally imply predictions P
on its own, but only in conjunction with auxiliary hypotheses H.
T & H → P
So when P is falsified by observation, this does not refute T, but only the conjunction of
T & H.
not-P → not-(T & H)
So T can be retained, and indeed still explain P, provided we replace H by some alterna-
tive, H’, such that
T & H’ → not-P.
This yields the Duhem–Quine thesis: any theoretical claim T can consistently be retained
in the face of contrary evidence, by making adjustments elsewhere in our system of
beliefs. The UTD follows quickly. Imagine two competing theories T1 and T2. Whatever
evidence accumulates, versions of T1 and T2, conjoined with greatly revised auxiliary
hypotheses if necessary, will both survive, consistent with that evidence, but incompat-
ible with each other.
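To see how the schema plays out in a concrete case, here is a sketch using the Mercury example from the main text (the particular hypotheses are simplified for illustration):

```latex
% Illustrative instantiation of the Duhem-Quine schema (simplified):
%   T  = Newtonian gravitational theory
%   H  = there are no unobserved planets perturbing Mercury
%   P  = Mercury's perihelion precesses at the rate T & H predict
\begin{align*}
T \land H &\rightarrow P \\
\neg P &\rightarrow \neg(T \land H)
  && \text{observation refutes only the conjunction} \\
T \land H' &\rightarrow \neg P
  && \text{where } H' \text{ posits a hitherto unobserved planet}
\end{align*}
```

Newton’s theory T survives; only the auxiliary hypothesis has been revised.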
The other argument, first put forward by physicists like Henri Poincaré (1854–1912)
and Ernst Mach (1838–1916) at the turn of the twentieth century, has a different
starting-point. Imagine that T1 is the complete truth about physical reality, and that it
implies observational facts O. Then we can always construct some ‘de-Ockhamized’ T2
which postulates more complicated unobservable mechanisms but makes just the same
observational predictions O. (Glymour 1980: ch. 5.)
For example, suppose we start with standard assumptions about the location of bodies
in space-time and about the forces acting on them. A de-Ockhamized theory might then
postulate that all bodies, including all measuring instruments, are accelerating by
1 ft/sec² in a given direction, and then add just the extra forces required to explain this.
This theory would clearly have exactly the same observational consequences as the
original one, even though it contradicted it at the unobservable level.
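The observational equivalence here can be made explicit with a little mechanics (a minimal reconstruction of the example just given; the particular compensating force is my illustration, not a formulation due to Poincaré or Mach):

```latex
% If the original theory T1 assigns each body of mass m a trajectory x(t)
% governed by F = m\ddot{x}, the de-Ockhamized T2 shifts every body (and
% every measuring instrument) by a common uniform acceleration a and adds
% a universal compensating force:
x^{*}(t) = x(t) + \tfrac{1}{2}\,a\,t^{2}, \qquad F^{*} = F + ma.
% All relative positions, and hence all pointer readings, are exactly as in
% T1, so the two theories agree on every observational consequence while
% disagreeing about unobservable absolute motion and forces.
```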
To bring out the difference between the two arguments for UTD, note that the
Duhem–Quine argument does not specify exactly which overall theories we will end up
with, since it leaves open how T1’s and T2’s auxiliary hypotheses may need to be revised;
the de-Ockhamization argument, by contrast, actually specifies T1 and T2 in full detail,
including auxiliary hypotheses. In compensation, the Duhem–Quine argument promises
us alternative theories whatever observational evidence may turn up in the future;
whereas the de-Ockhamization argument assumes that all future observations are as T1
predicts.
1.8 The pessimistic meta-induction
I turn now to the other argument against realism. This argument takes as its premise
the fact that past scientific theories have generally turned out to be false, and then
moves inductively to the pessimistic conclusion that our current theories are no doubt
false too. (This is called a ‘meta-induction’ because its subject matter is not the natural
world, but scientific theories about the natural world.)
There are plenty of familiar examples to support this argument. Newton’s theory of
space and time, the phlogiston theory of combustion, and the theory that atoms are
indivisible were all at one time widely accepted scientific theories, but have since been
recognized to be false. So does it not seem likely, the pessimistic induction concludes,
that all our current theories are false, and that we should therefore take an instru-
mentalist rather than a realist attitude to them? (See Laudan 1981.)
This is an important and powerful argument, but it would be too quick to conclude
that it discredits realism completely. It is important that the tendency to falsity is much
more common in some areas of science than others. Thus it is relatively normal for
theories to be overturned in cosmology, say, or fundamental particle physics, or the
study of primate evolution. By contrast, theories of the molecular composition of dif-
ferent chemical compounds (such as that water is made of hydrogen and oxygen), or
the causes of infectious diseases (chickenpox is due to a herpes virus), or the nature of
everyday physical phenomena (heat is molecular motion), are characteristically
retained once they are accepted.
Nor need we regard this differential success rate of different kinds of theories as
some kind of accident. Rather, it is the result of the necessary evidence being more
easily available in some areas of science than others. Paleoanthropologists want to
know how many hominid species were present on earth 3 million years ago. But
their evidence consists of a few pieces of teeth and bone. So it is scarcely surprising
that discoveries of new fossil sites will often lead them to change their views. The
same point applies on a larger scale in cosmology and particle physics. Scientists in
these areas want to answer very general questions about the very small and the very
distant. But their evidence derives from the limited range of technological instruments
they have devised to probe these realms. So, once more, it is scarcely surprising
that their theories should remain at the level of tentative hypotheses. By contrast, in
those areas where adequate evidence is available, such as chemistry and medicine,
there is no corresponding barrier to science moving beyond tentative hypotheses to
firm conclusions.
The moral is that realism is more defensible for some areas of science than others.
In some scientific subjects firm evidence is available, and entitles us to view certain
theories, like the theory that water is composed of H2O molecules, as the literal truth
about reality. In other areas the evidence is fragmentary and inconclusive, and then we
do better to regard the best-supported theories, such as the theory that quarks and
leptons are the ultimate building blocks of matter, as useful instruments which accom-
modate the existing data, make interesting predictions, and suggest further lines for
research.
At first sight this might look like a victory for instrumentalism over realism. For
did not instrumentalists always accept that we should be realists about observable
things, and only urge instrumentalism for uncertain theories about unobservable
phenomena? But our current position draws the line in a different place. Instrumen-
talism, as originally defined, takes it for granted that everything unobservable is
inaccessible, and that all theories about unobservables are therefore uncertain. By
contrast, the position we have arrived at places no special weight on the distinction
between what is observable and what is not. In particular, it argues that the pessimistic
meta-induction fails to show that falsity is the natural fate of all theories about
unobservables, but only that there is a line within the category of theories about
unobservables, between those theories that can be expected to turn out false and those
whose claims to truth are secure. So our current position is not a dogmatic instru-
mentalism about all unobservables, but merely the uncontentious view that we should
be instrumentalists about that sub-class of theories which are not supported by
adequate evidence.
1.9 Naturalized epistemology of science
In the last decade or so a number of philosophers of science have turned to a natural-
ized approach to scientific knowledge (Kitcher 1992). In place of traditional attempts
to establish criteria for scientific theory-choice by a priori philosophical investigation,
the naturalized approach regards science itself as a subject for a posteriori empirical
investigation. Accordingly, naturalized epistemologists look to the history, sociology
and psychology of science, rather than to first principles, to identify criteria for the
acceptability of scientific theories.
One apparent difficulty facing this kind of naturalized epistemology of science is that
it is unclear how empirical investigation can ever yield anything more than descriptive
information about how scientists actually operate. Yet any epistemology of science
worth its name ought also to have a normative content – it ought to prescribe how
scientists should reason, as well as describe how they do reason. David HUME (chapter
31) first pointed out that there is a logical gap between ‘is’ and ‘ought’. A naturalized
epistemology based on the empirical study of science seems fated to remain on the
wrong side of this gap.
However, there is room for naturalized epistemologists to reply to this charge.
They can agree that the empirical study of science cannot by itself yield prescriptions
about how science ought to be done. But empirical study can still be relevant to such
prescriptions. Suppose it is agreed that technological fertility, in the sense of generating
technological advances, is a virtue in a scientific theory. Then the history, sociology
and psychology of science might be able to show us that certain kinds of research
strategies are effective at developing technologically fertile theories. More generally,
given any agreed theoretical end Y, empirical study can show that research strategy
X is an effective means to that end. The empirical study of science can thus yield the
hypothetical prescription that, if you want Y, then you ought to adopt means X.
It is this kind of hypothetical prescription that naturalized philosophers of science seek
to establish: they look to the history, sociology and psychology of science to show us
that scientists who choose theories on grounds X will in general achieve theories with
characteristic Y.
Can the naturalized study of science tell us which research strategies are an effec-
tive means to theoretical truth? Different naturalized philosophers of science give dif-
ferent answers to this question. Many are suspicious of the idea of theoretical truth,
and instead prefer to stick to the study of how to achieve more practical ends like tech-
nological fertility, simplicity and predictive accuracy. However, there seems no good
reason for this restriction. There is nothing obviously incoherent in the idea of look-
ing to the empirical study of science to tell us which research strategies have proved a
good way of developing true theories. Indeed, the discussion of the ‘pessimistic meta-
induction’ in the previous section amounted to the sketch of just such an investigation,
in that it appealed to the history of science to decide whether or not the standard pro-
cedures of scientific theory-choice succeed in identifying true theories. It is not difficult
to imagine more detailed and specific studies of this kind of issue.
Let me now return briefly to the issue with which I began, namely, the problem of
induction. It is possible that the naturalized study of how to get at the scientific truth
will enable us to make headway with this problem. For an empirical investigation into
science might be able to show us that a certain kind of inductive inference is in general
a reliable guide to scientific truth. And this would then provide a kind of vindication of
that inductive method (see Papineau 1993: ch. 5).
It is true that this kind of defence of induction will inevitably involve an element of
circularity. For when we infer that certain kinds of induction are in general a reliable
guide to truth, on the basis of evidence from the history of science, this will itself be an
inductive inference. It is a matter of some delicacy, however, whether this circularity is
vicious.
Defenders of this naturalized defence of induction will point out that, from their
point of view, a legitimate criterion of theory-choice need not be an a priori guide to
truth, but only an empirically certifiable one. Given this, the original argument against
induction, that it is not logically valid, will not worry naturalized philosophers of
science. Induction may not provide any a priori guarantee for its conclusions; but from
the naturalized point of view, this does not show that induction is in any way illegiti-
mate, since it leaves it open that induction may be an empirically reliable guide to the
truth. And if there is nothing to show that induction is illegitimate, naturalized philoso-
phers of science can then argue, why should we not use it to investigate the worth of
inductive inferences? Maybe this is a less satisfying defence of induction than we might
originally have hoped for. But perhaps it is defence enough.
2 The Metaphysics of Science
2.1 Causation
Many issues in the metaphysics of science hinge on the notion of causation. This
notion is as important in science as it is in everyday thinking, and much scientific
theorizing is concerned specifically to identify the causes of various phenomena.
However, there is little philosophical agreement on what it means to say that one event
is the cause of another.
Modern discussion of causation starts with David Hume, who argued that causation
is simply a matter of CONSTANT CONJUNCTION (p. 720). According to Hume (1978), one
event causes another if and only if events of the type to which the first event
belongs regularly occur in conjunction with events of the type to which the second
event belongs. This formulation, however, leaves a number of questions open. Firstly,
there is the problem of distinguishing genuine causal laws from accidental regularities.
Not all regularities are sufficiently lawlike to underpin causal relationships. Being a
screw in my desk could well be constantly conjoined with being made of copper,
without its being true that these screws are made of copper because they are in my
desk. Secondly, the idea of constant conjunction does not give a direction to causation.
Causes need to be distinguished from effects. But knowing that A-type events are con-
stantly conjoined with B-type events does not tell us which of A and B is the cause and
which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there
is a problem about probabilistic causation. When we say that causes and effects are
constantly conjoined, do we mean that the effects are always found with the causes,
or is it enough that the causes make the effects probable?
Many philosophers of science during the past century have preferred to talk about
explanation rather than causation. According to the covering-law model of explanation,
something is explained if it can be deduced from premises which include one or more
laws. As applied to the explanation of particular events, this implies that one particu-
lar event can be explained if it is linked by a law to some other particular event.
However, while they are often treated as separate theories, the covering-law account of
explanation is at bottom little more than a variant of Hume’s constant conjunction
account of causation. This affinity shows up in the fact that the covering-law account
faces essentially the same difficulties as Hume: (1) in appealing to deductions from
‘laws’, it needs to explain the difference between genuine laws and accidentally true
regularities; (2) it omits the requisite directionality, in that it does not tell us why we
should not ‘explain’ causes by effects, as well as effects by causes; after all, it is as easy
to deduce the height of a flagpole from the length of its shadow and the laws of optics,
as to deduce the length of the shadow from the height of the pole and the same laws;
(3) are the laws invoked in explanation required to be exceptionless and deterministic,
or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes
cancer more likely, in explaining why some particular person developed cancer?
In what follows I shall discuss these three problems in order (treating them as prob-
lems that arise equally both for the analysis of causation and the analysis of explana-
tion). After that I shall consider some further issues in the metaphysics of science.
The Covering-Law Model of Explanation
According to this model (originally proposed by Hempel and Oppenheim (1948) and
further elaborated in Hempel (1965)) one statement (the explanandum) is explained by
other statements (the explanans) if and only if the explanans contains one or more laws,
and the explanandum can be deduced from the explanans. In the simplest case, where
the explanandum is some particular statement to the effect that some individual a has
property E, we might therefore have:
a has C
For all x, if x has C, then x has E
a has E
For example, we might deduce that a piece of litmus paper turned red, from the law that
all litmus paper placed in acid turns red, together with the prior condition that this piece
of litmus paper was in fact placed in acid. The model can accommodate more compli-
cated explanations of particular events, and can also allow explanations of laws
themselves, as when we deduce Kepler’s law that all planets move in ellipses, say, from
Newton’s law of universal gravitation and his laws of motion.
As applied to the explanation of particular events, the covering-law model implies a
symmetry between explanation and prediction. For the information that, according to
the model, suffices for the explanation of some known event should also enable us to
predict that event if we did not yet know of it. Many critics have fastened on this impli-
cation of the model, however, and pointed out that we can often predict when we do not
have enough information to explain (as when we predict the height of the flagpole from
its shadow) and can often explain when we could not have predicted (as when we explain
X’s cancer on the basis of X’s smoking).
These examples suggest that genuine explanations of particular events need to cite
genuine causes, and that the reason the covering-law model runs into counter-examples
is that it adds nothing to the inadequate constant conjunction analysis of causation,
except that it substitutes the term ‘law’ for ‘constant conjunction’. To get a satisfactory
account of explanation we need, firstly, to recognize that explanations of particular
events must mention causes, and, secondly, to improve on the constant conjunction
analysis of causation.
There is a variant of the covering-law model which allows non-deterministic expla-
nation as well as deterministic ones. This is termed the ‘inductive–statistical (I–S)’ model,
by contrast with the original ‘deductive–nomological (D–N)’ model. An example would
be:
a drinks 10 units of alcohol per diem
For p per cent of xs, if x drinks 10 units of alcohol per diem, x has a damaged liver
a has a damaged liver
Here the explanandum cannot be deduced from the explanans, but only follows with an
inductive probability of p; and the inference appeals to a statistical regularity, rather than
an exceptionless nomological generalization. In Hempel’s original version of this model,
it was required that the probability of the explanandum be high. A better requirement,
however, as explained in the section on probabilistic causation below, is that the par-
ticular facts in the explanans need only make the probability of the explanandum higher
than it would otherwise have been.
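The two requirements just contrasted can be stated compactly (the notation is mine, following the Prob(·/·) style used later in this chapter):

```latex
% Hempel's original I-S requirement: the explanans must confer a high
% probability on the explanandum E given the cited conditions C:
\mathrm{Prob}(E/C) \;\text{is high (close to 1)}.
% The amended requirement: the explanans need only raise E's probability:
\mathrm{Prob}(E/C) > \mathrm{Prob}(E).
% Smoking can thus explain a particular cancer even though
% Prob(cancer/smoking) stays well below 1.
```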
2.2 Laws and accidents
There are two general strategies for distinguishing laws from accidentally true gener-
alizations. The first stands by Hume’s idea that causal connections are mere constant
conjunctions, and then seeks to explain why some constant conjunctions are better
than others. That is, this first strategy accepts the principle that causation involves
nothing more than certain events always happening together with certain others, and
then seeks to explain why some such patterns – the ‘laws’ – matter more than others
– the ‘accidents’. The second strategy, by contrast, rejects the Humean presupposition
that causation involves nothing more than happenstantial co-occurrence, and instead
postulates a relationship of ‘necessitation’, a kind of ‘cement’, which links events that
are connected by law, but not those events (like being a screw in my desk and being
made of copper) that are only accidentally conjoined.
There are a number of versions of the first Humean strategy. The most successful,
originally proposed by F. R. Ramsey (1903–30), and later revived by David Lewis
(1973), holds that laws are those true generalizations that can be fitted into an ideal
system of knowledge. The thought here is that the laws are those patterns that are
somehow explicable in terms of basic science, either as fundamental principles them-
selves, or as consequences of those principles, while accidents, although true, have no
such explanation. Thus, ‘All water at standard pressure boils at 100°C’ is a consequence
of the laws governing molecular bonding; but the fact that ‘All the screws in my desk
are copper’ is not part of the deductive structure of any satisfactory science. Ramsey
neatly encapsulated this idea by saying that laws are ‘consequences of those proposi-
tions which we should take as axioms if we knew everything and organized it as simply
as possible in a deductive system’ (Ramsey 1978: 130).
Advocates of the alternative non-Humean strategy object that the difference
between laws and accidents is not a linguistic matter of deductive systematization, but
rather a metaphysical contrast between the kind of links they report. They argue that
there is a link in nature between being at 100°C and boiling, but not between being in
my desk and being made of copper, and that this is nothing to do with how the descrip-
tion of this link may fit into theories. According to D. M. Armstrong (1983), the most
prominent defender of this view, the real difference between laws and accidents is
simply that laws report relationships of natural necessitation, while accidents only
report that two types of events happen to occur together.
Armstrong’s view may seem intuitively plausible, but it is arguable that the notion
of necessitation simply restates the problem, rather than solving it. Armstrong says that
necessitation involves something more than constant conjunction: if two events are
related by necessitation, then it follows that they are constantly conjoined; but two
events can be constantly conjoined without being related by necessitation, as when the
constant conjunction is just a matter of accident. So necessitation is a stronger rela-
tionship than constant conjunction. However, Armstrong and other defenders of this
view say very little about what this extra strength amounts to, except that it distin-
guishes laws from accidents. Armstrong’s critics argue that a satisfactory account of
laws ought to cast more light than this on the nature of laws.
2.3 The direction of causation
Hume said that the earlier of two causally related events is always the cause, and the
later the effect. However, there are a number of objections to using the earlier–later
‘arrow of time’ to analyse the directional ‘arrow of causation’. For a start, it seems in
principle possible that some causes and effects could be simultaneous. More seriously,
the idea that time is directed from ‘earlier’ to ‘later’ itself stands in need of philosophi-
cal explanation – and one of the most popular explanations is that the idea of ‘move-
ment’ from earlier to later depends on the fact that cause–effect pairs always have a
given orientation in time. However, if we adopt such a ‘causal theory of the arrow of
time’, and explain ‘earlier’ as the direction in which causes lie, and ‘later’ as the
direction of effects, then we will clearly need to find some account of the direction
of causation which does not itself assume the direction of time.
A number of such accounts have been proposed. David Lewis (1979) has argued
that the asymmetry of causation derives from an ‘asymmetry of overdetermination’.
The overdetermination of present events by past events – consider a person who dies
after simultaneously being shot and struck by lightning – is a very rare occurrence.
By contrast, the multiple ‘overdetermination’ of present events by future events is
absolutely normal. This is because the future, unlike the past, will always contain mul-
tiple traces of any present event. To use Lewis’s example, when the president presses
the red button in the White House, the future effects do not only include the dispatch
of nuclear missiles, but also his fingerprint on the button, his trembling, the further
depletion of his gin bottle, the recording of the button’s click on tape, the emission of
light waves bearing the image of his action through the window, the warming of the
wire from the passage of the signal current, and so on, and on, and on.
Lewis relates this asymmetry of overdetermination to the asymmetry of causation
as follows. If we suppose the cause of a given effect to have been absent, then this
implies the effect would have been absent too, since (apart from freaks like the light-
ning–shooting case) there will not be any other causes left to ‘fix’ the effect. By con-
trast, if we suppose a given effect of some cause to have been absent, this does not imply
the cause would have been absent, for there are still all the other traces left to ‘fix’ the
cause. Lewis argues that these counterfactual considerations suffice to show why
causes are different from effects.
Other philosophers appeal to a probabilistic variant of Lewis’s asymmetry. Follow-
ing Reichenbach (1956), they note that the different causes of any given type of effect
are normally probabilistically independent of each other; by contrast, the different
effects of any given type of cause are normally probabilistically correlated. For example,
both obesity and high excitement can cause heart attacks, but this does not imply
that fat people are more likely to get excited than thin ones; on the other hand, the
fact that both lung cancer and nicotine-stained fingers can result from smoking does
imply that lung cancer is more likely among people with nicotine-stained fingers. So
this account distinguishes effects from causes by the fact that the former, but not the
latter, are probabilistically dependent on each other.
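This probabilistic asymmetry is easy to exhibit in a toy simulation. The sketch below uses the chapter’s own examples (obesity and excitement as independent causes of heart attacks; lung cancer and stained fingers as joint effects of smoking); the specific probability values are arbitrary assumptions, chosen only to make the pattern visible:

```python
import random

random.seed(0)
N = 200_000

# Two independent causes of a common effect (heart attacks).
obese   = [random.random() < 0.3 for _ in range(N)]
excited = [random.random() < 0.4 for _ in range(N)]

# Two effects of a common cause: smoking raises the chance of both
# lung cancer and nicotine-stained fingers.
smoker  = [random.random() < 0.3 for _ in range(N)]
cancer  = [(random.random() < 0.20) if s else (random.random() < 0.01)
           for s in smoker]
stained = [(random.random() < 0.80) if s else (random.random() < 0.05)
           for s in smoker]

def prob(xs):
    return sum(xs) / len(xs)

def cond_prob(xs, given):
    sub = [x for x, g in zip(xs, given) if g]
    return sum(sub) / len(sub)

# Causes of a common effect stay uncorrelated:
print(f"P(excited)          = {prob(excited):.3f}")
print(f"P(excited | obese)  = {cond_prob(excited, obese):.3f}")   # about the same

# Effects of a common cause are correlated:
print(f"P(cancer)           = {prob(cancer):.3f}")
print(f"P(cancer | stained) = {cond_prob(cancer, stained):.3f}")  # clearly higher
```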
2.4 Probabilistic causation
The just-mentioned probabilistic account of the direction of causation is normally for-
mulated as part of a more general theory of probabilistic causation. Until relatively
recently philosophers assumed that the world fundamentally conforms to determinis-
tic laws, and that probabilistic dependencies between types of events, such as that
between smoking and lung cancer, merely reflected our ignorance of the full causes.
The rise of quantum mechanics, however, has persuaded most philosophers that deter-
minism is false, and that some events, like the decay of a radium atom, happen purely
as a matter of chance. A particular radium atom may decay, but on another occasion
an identical atom in identical circumstances might well not decay.
Accordingly, a number of philosophers of science have put forward models of cau-
sation which require only that causes probabilify, rather than determine, their effects.
The earliest such model was the ‘inductive–statistical’ version of the covering-law
model of explanation (Hempel 1965). Unlike deterministic ‘deductive–nomological’
explanations, such inductive–statistical explanations required only that prior condi-
tions and laws imply a high probability for the event to be explained, not that this event
will certainly happen. However, even this seems too strong a requirement for
probabilistic causation. After all, smoking unequivocally causes lung cancer, but even
heavy smokers do not have a high probability of lung cancer, in the sense of a proba-
bility close to 1. Rather, their smoking increases their probability of lung cancer, not to
a high figure, but merely from a low to a less low figure, one still well below 50 per cent.
So more recent models of probabilistic causation simply require that causes
should increase the probability of their effects, not that they should give them a high
probability (Salmon 1971).
This kind of model needs to guard against the possibility that the probabilistic asso-
ciation between putative cause and putative effect may be spurious, like the probabilis-
tic association between barometers falling and subsequent rain. Such associations are
not due to a causal connection between barometer movements and rain, but rather to
both of these being joint effects of a common cause, namely, in our example, falls in
atmospheric pressure. The obvious response to this difficulty is to say that we have a
cause–effect relationship between A and B if and only if A increases the probability of
B, and this association is not due to some common cause C. However, this is obviously
incomplete as an analysis of causation, since it uses the notion of (common) cause in
explaining causation.
It would solve this problem if we could analyse the notion of common cause in prob-
abilistic terms. It seems to be a mark of common causes that they probabilistically
‘screen off’ the associations between their joint effects, in the sense that, if we consider
cases where the common cause is present and where it is absent separately, then the
probabilistic association between the joint effects will disappear. For example, if it is
given that the atmospheric pressure has fallen, then a falling barometer does not make
it any more likely that it will rain; and similarly, if the atmospheric pressure has not
fallen, a faulty falling reading on a barometer is no probable indicator of impending
rain. (Numerically, if C is a common cause, and A and B its joint effects, we will find
that A and B are associated – Prob(B/A) > Prob(B) – but that C and its absence render
A irrelevant to B – Prob(B/A & C) = Prob(B/C) and Prob(B/A & not-C) = Prob(B/not-C).)
probabilistic structure of common causes is enough to allow a complete explanation of
causation in probabilistic terms, or whether further non-probabilistic considerations
need to be introduced.
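The screening-off pattern can be checked numerically. The sketch below builds a toy joint distribution over C (pressure fall), A (barometer falls) and B (rain), in which A and B each depend only on C; all the probability values are invented for illustration:

```python
from itertools import product

p_C = 0.2                          # P(pressure fall)
p_A = {True: 0.95, False: 0.05}    # P(barometer falls | C) and | not-C
p_B = {True: 0.80, False: 0.10}    # P(rain | C) and | not-C

# Joint distribution over (C, A, B), with A and B conditionally
# independent given C -- the common-cause structure.
joint = {
    (c, a, b): (p_C if c else 1 - p_C)
               * (p_A[c] if a else 1 - p_A[c])
               * (p_B[c] if b else 1 - p_B[c])
    for c, a, b in product([True, False], repeat=3)
}

def prob(pred):
    return sum(p for outcome, p in joint.items() if pred(*outcome))

def cond(pred, given):
    return prob(lambda c, a, b: pred(c, a, b) and given(c, a, b)) / prob(given)

rain      = lambda c, a, b: b
barometer = lambda c, a, b: a

# Association between the joint effects: P(B | A) > P(B).
print(cond(rain, barometer), prob(rain))             # approx. 0.678 vs 0.24

# Screening off: conditioning on C (or on not-C) makes A irrelevant to B.
print(cond(rain, lambda c, a, b: a and c),
      cond(rain, lambda c, a, b: c))                 # both 0.80
print(cond(rain, lambda c, a, b: a and not c),
      cond(rain, lambda c, a, b: not c))             # both 0.10
```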
2.5 Probability
Philosophical interest in probabilistic causation has led to a resurgence of interest in
the philosophy of probability itself. Probability raises philosophical puzzles in its own
right, quite apart from its connection with causation. What exactly is the ‘probability’
of a given event? The only part of the answer that is uncontroversial is that probabil-
ities are quantities that satisfy the axioms of the probability calculus I specified earlier
when discussing Bayesianism. But this leaves plenty of room for alternative philoso-
phical views, for there are a number of different ways of interpreting these axioms.
One interpretation is the subjective theory of probability, which equates probabil-
ities with subjective degrees of belief. This is the interpretation assumed by Bayesian
confirmation theory. Most philosophers are happy to agree that subjective degrees of
belief exist, and that the Dutch Book Argument (see the above box on Bayesianism)
shows why they ought to conform to the axioms of probability. But many, if not all,
philosophers argue that we need a theory of objective probability in addition to this | Blackwell |
subjective account.
One possible objective interpretation is the frequency theory, originally put forward
by Richard von Mises (1957). According to this theory, the probability of a given kind
of result is the number of times this result occurs, divided by the total number of occa-
sions on which it might have occurred. So, for example, the probability of heads on a
coin toss is the proportion of heads in some wider class of coin tosses.
This theory, however, faces a number of difficulties. For a start, it has problems
in dealing with ‘single-case probabilities’. Consider a particular coin toss. We can
consider it as a member of the class of all coin tosses, or of all tosses of coins with that
particular shape, or of all tosses made in just that way, or so on. However, these
different ‘reference classes’ may well display different frequencies of heads. Yet
intuitively it seems that there ought to be a unique value for the probability of heads
on a particular toss of a particular coin. Perhaps this difficulty can be dealt with by
specifying that the single-case probability should equal the relative frequency in
the reference class of all tosses that are similar in all relevant respects to the particular
toss in question. But there remain difficulties about which respects should count as
‘relevant’ in this sense.
In addition, there is the problem that many of these more specific reference classes
will only be finite in extent. Coins with a certain distinctive shape may only be tossed
in some given way ten times in the whole history of the universe. Yet the probability of
heads on these tosses is unlikely to be equal to the relative frequency in the ten tosses,
for luck may well yield a disproportionately high, or low, number of heads in ten tosses.
Because of this, frequency theorists standardly appeal, not to actual reference classes,
but to hypothetical infinite sequences, and equate the probability with the limit that the
relative frequency would tend to if the relevant kind of trial were repeated an infinite
number of times. Critics of the frequency theory object that this reliance on hypotheti-
cal infinite reference sequences makes probabilities inadmissibly abstract.
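The hypothetical-limit version just described can be stated as a formula (standard notation, not a quotation from von Mises):

```latex
% If n_B(n) is the number of occurrences of result B in the first n trials
% of the relevant (hypothetical, infinite) sequence of trials, then
\mathrm{Prob}(B) \;=\; \lim_{n \to \infty} \frac{n_B(n)}{n}.
% The critics' complaint is that, for any actual finite run of trials, this
% limit is a fact about an idealized sequence that never exists.
```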
Because of these difficulties, many contemporary philosophers of probability have
adopted the ‘propensity’ theory of probability in place of the frequency theory. The ear-
liest version of this theory, proposed by Popper (1959b), simply modified the frequency
theory by specifying that only those relative frequencies generated by repeated trials on
a given ‘experimental set-up’ should count as genuine probabilities. This arguably deals
with the problem of single-case probabilities, but it still leaves us with hypothetical
reference classes. To avoid this, later versions of the propensity theory do not define
probabilities in terms of frequencies at all, but simply take probabilities to be primitive
propensities of particular situations to produce given results.
This kind of propensity theory does not seek to define objective probabilities in terms
of frequencies, but in effect simply takes single-case probabilities as primitive (see Mellor
1971). But it can still recognize a connection between probabilities and frequencies.
For, as long as propensities are assumed to obey the axioms of the probability calculus
(though this assumption itself merits some debate), it will follow that, in a sufficiently
long sequence of independent trials in each of which the propensity to produce B is p,
the overall propensity for the observed frequency of B to differ by more than a given
amount from p can be made arbitrarily small, in accord with the Law of Large Numbers.
(Note how the italicized second use of ‘propensity’ in this claim prevents it serving as
a definition of propensities in terms of frequencies.)
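A short simulation illustrates the connection just described. Treating each trial as an independent chance p of producing B, the observed relative frequency settles ever closer to p as the run lengthens (a minimal sketch; the propensity value 0.3 is an arbitrary assumption):

```python
import random

random.seed(1)

def relative_frequency(p, n):
    """Frequency of result B in n independent trials, each of which has
    single-case propensity p of producing B."""
    return sum(random.random() < p for _ in range(n)) / n

p = 0.3   # assumed single-case propensity
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} trials: frequency = {relative_frequency(p, n):.4f}")
# In 10 trials luck can easily produce a disproportionate number of Bs;
# by a million trials the frequency is almost certainly very close to 0.3,
# as the Law of Large Numbers leads us to expect.
```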
Both the frequency theory and the propensity theory have their strengths. The fre-
quency theory has the virtue of offering an explicit definition of probability, where the
propensity theory takes probabilities as primitive. On the other hand, the propensity
theory has no need to assume hypothetical reference sequences, whereas these are
essential to the frequency theory.
It may seem that the frequency theory, because it offers an explicit definition, is better
able than the propensity theory to explain how we find out about probabilities. But this
is an illusion. The trouble is that the frequency theory’s explicit definition is in terms of
frequencies in INFINITE SEQUENCES (p. 355). But our evidence is always in the form of fre-
quencies in finite samples. So the problem of explaining how we can move from fre-
quencies in finite samples to knowledge of probabilities is as much a problem for the
frequency theory as for the propensity theory. (There are various suggestions about
how to solve this ‘problem of statistical inference’, none of them universally agreed.
My present point is merely that this problem of statistical inference arises in just the
same way for both frequency and propensity theorists.)
In the face of continued debate about the interpretation of objective probability,
some philosophers have turned to physics, and in particular to the notion of probabil-
ity used in quantum mechanics, to resolve the issue. Unfortunately, quantum mechan-
ics is no less philosophically controversial than the notion of probability. There are
different philosophical interpretations of the formal theory of quantum mechanics,
each of which involves different understandings of probability. Because of this it seems
likely that philosophical disputes about probability will continue until there is an agreed
interpretation of quantum mechanics.
The Interpretation of Quantum Mechanics
Modern quantum mechanics says that the state of any given system of microscopic par-
ticles is fully characterized by its ‘wave function’. However, instead of specifying the exact
positions and velocities of the particles, as is done in classical mechanics, this ‘wave func-
tion’ only specifies probabilities of the particles displaying certain values of position and
velocity, if appropriate measurements are made. Schrödinger’s equation then specifies how
this wave function evolves smoothly and deterministically over time, analogously to the
way that Newton’s laws of motion specify how the positions and velocities of macro-
scopic objects evolve over time – except that Schrödinger’s equation again only describes
changes in probabilities, not exact values.
On the orthodox interpretation of quantum mechanics, quantum probabilities
change into actualities only when ‘measurements’ are made. If you measure the posi-
tion of a particle, say, then its position assumes a definite value, even though nothing
before the measurement determined exactly what this value would be.
There is something puzzling about this, however, since any overall system of mea-
sured particle and measuring instrument is itself just another system of microscopic par-
ticles, which might therefore be expected to evolve smoothly according to Schrödinger’s
equation, rather than to jump suddenly to some definite value for position. To account
for this, the orthodox interpretation says that in addition to the normal Schrödinger evo-
lution, there is a special kind of change which occurs in ‘measurements’, when the wave
function suddenly ‘collapses’ to yield a definite value for the measured quantity.
The ‘measurement problem’ is the problem of explaining exactly when, and why,
these collapses occur. The story of ‘Schrödinger’s cat’ makes the difficulty graphic.
Suppose that some unfortunate cat is sitting next to a poison dispenser which is wired
up to emit cyanide gas if an electron emitted from some source turns up on the right half
of some position-registering plate, but not if the electron turns up on the left half. The
basic quantum mechanical description of this situation says that it is both possible that
the electron will turn up on the right half of the plate and that it will turn up on the left
half, and therefore both possible that the poison is emitted and that it is not, and there-
fore both possible that the cat is alive and that it is dead. One of these possibilities only
becomes actual when the wave function of the whole system collapses. But when does
that happen? When the electron is emitted? When it reaches the plate? When the cat dies
or not? Or only when a human being looks at the cat to see how it is faring?
There seems no principled way to decide between these answers. Because of this,
many philosophers reject the orthodox view that physical systems are completely char-
acterized by their wave functions, and conjecture that, in addition to the variables
quantum mechanics recognizes, there are various ‘hidden variables’ which always
specify exact positions and velocities for all physical particles. It is difficult, however, for
such hidden variable theories to reproduce the surprising phenomena predicted by
quantum mechanics, without postulating mysterious mechanisms that seem inconsis-
tent with other parts of physics.
A more radical response to the measurement problem is to deny that the wave func-
tion ever does collapse, and somehow to make sense of the idea that reality contains both
a live cat and a dead cat. This ‘many-worlds’ interpretation of quantum mechanics flies
in the face of common sense, but its theoretical attractions are leading an increasing
number of philosophers to take it seriously.
2.6 Teleology
We normally explain some particular fact by citing its cause: for example, we explain
why some water freezes by noting that its temperature fell below 0°C. There are cases,
however, where we seem to explain items by citing their effects instead. In particular,
this kind of explanation is common in biology. We often explain some biological trait
by showing how it is useful to the organism in question: for instance, the explanation
of the polar bear’s white fur is that it camouflages it; the explanation of human
sweating is that it lowers body temperature, and so on. Similar explanations are also
sometimes offered in anthropology and sociology.
Until fairly recently most philosophers of science took such functional or teleologi-
cal explanations at face value, as an alternative to causal explanation, in which items
are explained, not by their causal antecedents, but by showing how they contribute to
the well-being of some larger system. Carl Hempel’s covering-law model of explanation
embodied an influential version of this attitude. According to Hempel, causal explana-
tions and functional explanations are simply two different ways of exemplifying the
covering-law model: the only difference is that in causal explanations the explaining fact
(lower temperature) temporally precedes the explained fact (freezing), whereas in func-
tional explanations it is the explained fact (white fur) that comes temporally before the con-
sequence (camouflage) which explains it.
Most contemporary philosophers of science, however, take a different view, and
argue that all explanations of particular facts are really causal, and that functional
explanations, despite appearances, are really a subspecies of causal explanations. On
this view, the reference to future facts in functional explanations is merely ap-
parent, and such explanations really refer to past causes. In the biological case,
these past causes will be the evolutionary histories that led to the natural selection
of the biological trait in question. Thus the functional explanation of the polar bear’s
colour should be understood as referring us to the fact that their past camouflaging
led to the natural selection of their whiteness, and not to the fact that they may
be camouflaged in the future. Similarly, any acceptable functional explanations in
anthropology or sociology should be understood as referring us back in time to the
conscious intentions or unconscious selection processes which caused the facts to be
explained (see Wright 1973; Neander 1991). (There remains the terminological matter
of whether functional explanations understood in this way ought still to be called ‘tele-
ological’. Traditional usage reserves the term ‘teleology’ for distinctively non-causal
explanations in terms of future results. But most contemporary philosophers are happy
to describe disguised causal explanations that make implicit reference to selection
mechanisms as ‘teleological’.)
2.7 Theoretical reduction
Another philosophical question about subject matters like biology is whether they can
be reduced to lower-level (in the sense of ontologically more basic) sciences like chem-
istry and physics. Obviously, this is an issue that arises not just for biology, but also for
such other ‘special’ natural sciences as geology and meteorology, and also for such
human sciences as psychology, sociology and anthropology.
One science is said to ‘reduce’ to another if its categories can be defined in terms of
the categories of the latter, and its laws explained by the laws of the latter. Reduction-
ists argue that all sciences form a hierarchy in which the higher can always be reduced
to the lower. Thus, for example, biology might be reduced to physiology, physiology to
chemistry, and eventually chemistry to physics.
Reductionism can be viewed either historically or metaphysically. The historical
question is whether science characteristically progresses by later theories reducing
earlier ones. The metaphysical question is whether the different areas of science
describe different realities, or just the one physical reality described at different levels of
detail. Though often run together, these are different questions.
Taken as a general thesis, historical reductionism is false. Recall the earlier discus-
sion of the ‘pessimistic meta-induction from past falsity’. This involved the claim that
new theories characteristically show their predecessors to be false. To the extent that
this claim is true, historical reductionism is false: for a new theory can scarcely explain
why an earlier theory was true, if it shows it is false.
In the earlier discussion I argued that there are some areas of science, like
molecular biology and medical science, to which the pessimistic meta-induction
does not apply. If this is right, then we can expect that in these areas new theories
will indeed normally reduce old ones. But I did not dispute that there are other areas
of science, like cosmology and fundamental particle physics, in which the normal
fate of old theories is to be thrown out. It follows that we must reject historical
reductionism, understood as the thesis that all science proceeds by new theories reduc-
ing old ones.
This does not mean, however, that metaphysical reductionism is false. Even if science
proceeds towards the overall truth by fits and starts, there may be general reasons for
expecting that this overall truth, when eventually reached, will reduce to physical truth.
One possible such argument stems from the causal interaction between the phenom-
ena discussed in the special sciences and physical phenomena. Biological, geological
and meteorological events all unquestionably have physical effects. It is difficult to see
how they could do this unless they are made of physical components.
It is doubtful, however, whether this suffices to establish full-scale reductionism, as
opposed to the weaker thesis (sometimes called ‘token-identity’) according to which
each particular higher-level event is identical with some particular physical event. Thus,
for example, it might be true that one animal’s aggressive behaviour can be equated
with a given sequence of physical movements, and another animal’s aggressive behav-
iour can be equated with another sequence of physical movements, without there being
any uniform way of defining ‘aggressive behaviour’, for all animals, in terms of physi-
cal movements. The case-by-case token-identity will explain how each instance of
aggressive behaviour can have physical effects, like causing intruding animals to move
away. But without any uniform definition of ‘aggressive behaviour’ in terms of physi-
cal movements there is no question of reducing ethology (the science of behaviour) to
physics, and so no question of explaining ethological laws by physical laws. Instead, the
laws of ethology and other special sciences will be sui generis, identifying patterns
whose instances vary in their physical make-up, and which therefore cannot possibly
be explained in terms of physical laws alone (see Fodor 1974).
Acknowledgements
I would like to thank Stathis Psillos for helping me with this chapter.
Further Reading
For an introduction to the problem of induction, and Popper’s solution, see Popper (1959a), espe-
cially chapter 1. The problem and Popper’s solution are further discussed in O’Hear (1989). There
are two excellent introductions to Bayesian philosophy of science: Horwich (1982) and Howson
and Urbach (1989).
The best modern defence of instrumentalism is Van Fraassen (1980). Churchland and Hooker
(1985) offers a good collection of essays on the realism–instrumentalism debate. Kitcher (1993)
contains a strong defence of realism.
The classic works on the theory-dependence of observation and the incommensurability of
theories are Hanson (1958), Kuhn (1962) and Feyerabend (1976). A good collection of essays
on these issues is Hacking (1981). The best sources for the underdetermination of theories by
evidence and the pessimistic meta-induction are respectively Quine (1951) and Laudan (1981).
For a survey of recent work on naturalized epistemology, see Kitcher (1992).
Most modern discussions of explanation begin with the title essay in Hempel (1965). Expla-
nation and its relation to causation are further explored by the essays in Ruben (1993). Arm-
strong (1983) provides an excellent account of the general problem of distinguishing laws from
accidents, as well as his own solution. Chapter 7 of O’Hear (1989) contains a good introduction
to both probability and probabilistic causation. The best contemporary non-specialist discussion
of the problems of quantum mechanics is to be found in chapters 11–13 of Lockwood (1989).
The view that events discussed in the special sciences are token-identical but not reducible to
physical events is defended in Fodor (1974).
References
Armstrong, D. 1983: What is a Law of Nature? Cambridge: Cambridge University Press.
Ayer, A. J. 1956: The Problem of Knowledge. London: Macmillan.
Boyd, R. 1980: Scientific Realism and Naturalistic Epistemology. In P. Asquith and R. Giere (eds)
PSA 1980 vol. 2, 613–62. East Lansing, MI: Philosophy of Science Association.
Cartwright, N. 1983: How the Laws of Physics Lie. Oxford: Oxford University Press.
Churchland, P. and Hooker, C. (eds) 1985: Images of Science. Chicago: University of Chicago Press.
Duhem, P. 1951 [1906]: The Aim and Structure of Physical Theory (translated by P. Wiener).
Princeton, NJ: Princeton University Press.
Feyerabend, P. 1976: Against Method. London: New Left Books.
Fodor, J. 1974: Special Sciences. Synthèse, 28, 97–115.
Friedman, M. 1984: Foundations of Spacetime Theories. Princeton, NJ: Princeton University Press.
Glymour, C. 1980: Theory and Evidence. Princeton, NJ: Princeton University Press.
Hacking, I. 1967: Slightly More Realistic Personal Probability. Philosophy of Science, 34, 311–25.
—— (ed.) 1981: Scientific Revolutions. Oxford: Oxford University Press.
Hanson, N. R. 1958: Patterns of Discovery. Cambridge: Cambridge University Press.
—— 1963: The Concept of the Positron. Cambridge: Cambridge University Press.
Harding, S. (ed.) 1975: Can Theories be Refuted? Dordrecht: Reidel.
Hempel, C. 1965: Aspects of Scientific Explanation. New York: Free Press.
Hempel, C. and Oppenheim, P. 1948: Studies in the Logic of Explanation. Philosophy of Science,
15, 135–75.
Horwich, P. 1982: Probability and Evidence. Cambridge: Cambridge University Press.
Howson, C. and Urbach, P. 1989: Scientific Reasoning. La Salle, IL: Open Court.
Hume, D. 1978 [1739]: A Treatise of Human Nature (edited by P. H. Nidditch). Oxford: Clarendon
Press.
Kitcher, P. 1992: The Naturalists Return. Philosophical Review, 101, 53–114.
—— 1993: The Advancement of Science. New York: Oxford University Press.
Kuhn, T. S. 1962: The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Laudan, L. 1981: A Confutation of Convergent Realism. Philosophy of Science, 48, 19–49.
Lewis, D. 1973: Counterfactuals. Oxford: Blackwell.
—— 1979: Counterfactual Dependence and Time’s Arrow. Noûs, 13, 455–76.
Lockwood, M. 1989: Mind, Brain and the Quantum. Oxford: Blackwell.
Mellor, D. 1971: The Matter of Chance. Cambridge: Cambridge University Press.
Mises, R. von 1957: Probability, Statistics and Truth. London: Allen and Unwin.
Neander, K. 1991: The Teleological Notion of Function. Australasian Journal of Philosophy, 69,
454–68.
O’Hear, A. 1989: An Introduction to the Philosophy of Science. Oxford: Clarendon Press.
Papineau, D. 1993: Philosophical Naturalism. Oxford: Blackwell.
Popper, K. 1959a: The Logic of Scientific Discovery. London: Hutchinson.
—— 1959b: The Propensity Interpretation of Probability. British Journal for the Philosophy of
Science, 10, 25–42.
—— 1963: Conjectures and Refutations. London: Routledge and Kegan Paul.
—— 1972: Objective Knowledge. Oxford: Clarendon Press.
Quine, W. V. O. 1951: Two Dogmas of Empiricism. In From a Logical Point of View. New York:
Harper.
Ramsey, F. 1978 [1929]: General Propositions and Causality. In D. H. Mellor (ed.) Foundations.
London: Routledge and Kegan Paul.
Reichenbach, H. 1956: The Direction of Time. Berkeley: University of California Press.
Ruben, D.-H. (ed.) 1993: Explanation. Oxford: Oxford University Press.
Salmon, W. 1971: Statistical Explanation and Statistical Relevance. Pittsburgh, PA: University of
Pittsburgh Press.
Smart, J. 1963: Philosophy and Scientific Realism. London: Routledge and Kegan Paul.
Teller, P. 1973: Conditionalization and Observation. Synthèse, 28, 218–58.
Van Fraassen, B. 1980: The Scientific Image. Oxford: Clarendon Press.
Worrall, J. 1989: Why Both Popper and Watkins Fail to Solve the Problem of Induction. In F.
D’Agostino and I. Jarvie (eds) Freedom and Rationality. Dordrecht: Kluwer.
Wright, L. 1973: Functions. Philosophical Review, 82, 139–68.
Discussion Questions
1 Is it any less rational to accept induction than it is to accept deduction?
2 Is it always a mistake to save a theory when it has been falsified?
3 How can we show that it is more rational to believe some hypotheses than to
believe others?
4 How do scientific theories move beyond the stage of conjecture?
5 Is belief a matter of degree?
6 Does Bayes’s theorem help us to deal rationally with evidence?
7 Do scientific claims about unobservable entities differ in status from scientific
claims about observable entities?
8 Is instrumentalism mistaken if it cannot account for some features of scientific
practice? How can we determine what features are really a part of scientific practice?
9 Are general theories or piecemeal procedures more important to our basic
characterization of science?
10 Does all observation depend on theoretical assumptions? What are the implica-
tions of your answer for an account of unobservables?
11 ‘Given any theory about unobservables which fits the observed facts, there will
always be other incompatible theories which fit the same facts.’ Does this make realism
about unobservables untenable?
12 Can we be realists in some areas of science and instrumentalists in others?
13 Should we look to the history, sociology, and psychology of science, rather than
to first principles, to identify criteria for the acceptability of scientific theories?
14 Can a naturalized study of science tell us which research strategies are an
effective means to theoretical truth?
15 Is there any good reason for a philosopher to prefer to talk about explanation
rather than causation?
16 How can we distinguish laws from accidentally true generalizations?
17 Must we posit an ideal system of knowledge in order to understand the notion of
‘law’? What if there cannot be such a system?
18 Do scientific laws involve necessity? In what sense?
19 How can we explain the direction of causation?
20 Can we accept a model of causation according to which causes probabilify, rather
than determine, their effects?
21 Which poses greater difficulties for frequency theories of probability:
‘single-case probabilities’ or a reliance on hypothetical infinite reference sequences?
22 If ‘propensities’ are primitive, can a propensity theory give us any insight into the
nature of probability?
23 How are interpretations of quantum mechanics relevant to philosophical disputes
about probability?
24 Can we give up common sense in favour of a ‘many-worlds’ reality containing
both Schrödinger’s cat alive and Schrödinger’s cat dead?
25 Are teleological explanations an alternative to causal explanations or a kind of
causal explanation?
26 How can we determine whether all sciences form a hierarchy in which the higher
can always be reduced to the lower?
27 Do different areas of science describe different realities, or just one physical reality
at different levels of detail?
10
Philosophy of Biology
ELLIOTT SOBER
Philosophy of biology is a branch of the PHILOSOPHY OF SCIENCE (chapter 9). Some of
its characteristic questions concern the relationship of biology to the rest of science;
others are internal to the methods and results of biology itself. Philosophy of biology
also bears a diverse set of relationships to other areas of philosophy. Questions about
vitalism and reductionism closely parallel discussions of the mind–body problem in
PHILOSOPHY OF MIND (chapter 5). Evolutionary theory’s relationship to the argument
from design brings it into contact with the PHILOSOPHY OF RELIGION (chapter 15). And
the idea that human evolution has implications about morality and human nature
connects it with ETHICS (chapter 6).
1 The Subject Matter of Biology
1.1 The definition of life
Biology is conventionally defined as the science of life. Philosophers, and scientists when
they wax philosophical, wonder what being alive amounts to. Should life be defined in
terms of the molecule DNA? Well, we know that organisms pass traits to their offspring
by transmitting genes to them, and genes are made of DNA. These offspring begin life as
one-celled organisms; the genes they contain and the environments they occupy then
lead them to grow into multi-cellular organisms containing many different types of cell.
Thus, DNA is fundamental to the processes that comprise heredity and development.
DNA and RNA are central to organic processes as they occur on earth. But must life
be based on DNA and RNA? Could other molecules play the same role in heredity and
development? What we earthlings call ‘organic chemistry’ is the chemistry of carbon
compounds, but could life forms, if they exist in other galaxies, be built out of silicon?
Perhaps it is parochial to think of life in terms of the physical structures that happen
to mediate life processes on earth. Is it possible to provide more general and more
abstract criteria for what it takes to be alive?
As already mentioned, organisms reproduce, pass traits to their offspring, and
develop in the course of their lifetimes. In addition, they extract energy from their envi-
ronment and use this energy to repair tissue damage and to engage in homoeostatic
processes that keep some of their characteristics constant despite the flux that occurs
in their environments. Not all living things do all of these, but most do most. Perhaps
‘alive’ is a cluster concept of the sort described by WITTGENSTEIN (chapter 39); being
alive means that the system engages in one or more of these characteristic life
processes. We best conceive of these life processes in a way that leaves open how many
types of physical structure can instantiate them.
Although biology is the science of life, being alive is not a theoretical concept in
biology. Biology has developed as a discipline without having anything terribly precise
to say about exactly what its domain of inquiry is. ARISTOTLE (chapter 23) once
observed that it is a mistake to demand more precision of a concept than is needed.
Perhaps ‘alive’ is not a crisply delimited category in nature (for parallel remarks about
the notion of ‘mind’, see Sober 1990).
1.2 Vitalism, supervenience and reductionism
How are the biological properties and processes just listed related to physical properties
and processes? This question precisely parallels the mind–body problem in the philoso-
phy of mind. Just as Cartesian dualism maintains that individuals have mental proper-
ties in virtue of possessing minds that are made of a non-physical substance, so there is
a position in philosophy of biology that holds that living things have biological pro-
perties in virtue of containing a non-physical substance (a Bergsonian élan vital) that
animates them with life. This is vitalism, in one of the many senses that that term
has acquired.
If vitalism were true, the SUPERVENIENCE THESIS (pp. 80–1) much discussed in the
philosophy of mind would be false. This thesis maintains that if two individuals were
physically identical and lived in physically identical environments, then they would be
identical in all respects. They would have the same psychological properties, and they
would have the same biological characteristics as well. The slogan that sums up super-
venience is ‘no difference without a physical difference’. Supervenience is a synchronic
claim, different from the diachronic claim that constitutes the thesis of causal
determinism.
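The slogan admits of a compact formal statement. In the rendering below (the symbolization is mine, not a fixture of the literature), P is the set of physical properties of an individual together with its environment, B is the set of its biological properties, and both sides are evaluated at the same time:

    \forall x\,\forall y\,\Big[\,\forall F \in \mathcal{P}\,\big(F(x) \leftrightarrow F(y)\big)
        \;\rightarrow\; \forall G \in \mathcal{B}\,\big(G(x) \leftrightarrow G(y)\big)\Big]

The synchronic reading matters: antecedent and consequent concern one and the same time, which is what distinguishes the thesis from causal determinism.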
Most biologists and philosophers of biology believe that the supervenience thesis is
correct. For example, if two organisms are physically identical and live in physically
identical environments, then they must have the same fitness value (Rosenberg 1978,
1985; Mills and Beatty 1979; Brandon 1978; Sober 1984). An organism’s fitness is its
ability to survive and reproduce. In part, the supervenience thesis has been accepted
because the alternative seems never to have led to anything substantial in science.
Science has made great strides in understanding the physical bases of respiration,
digestion, reproduction and other biological processes. But the hypothesis that living
things possess an immaterial something has led to nothing. Still, the question may be
asked of how strongly the track records of physicalism on the one hand and vitalism
on the other establish that the supervenience thesis is true (Sober 1999a).
Although physicalism in the form of the supervenience thesis is widely accepted, most
philosophers of biology reject the claim that biology reduces to physics. Their
reasons recapitulate the discussion of FUNCTIONALISM (pp. 178–9) and multiple
realizability in philosophy of mind. Although an organism’s fitness supervenes on its
physical properties and the physical properties of its environment, fitness is not itself a
physical property. What do a fit orchid and a fit otter have in common? There is no
physical property, or set of physical properties, that defines what fitness is. Fitness is
multiply realizable. The theory of natural selection states generalizations in terms of the
concept of fitness; since fitness is not a physical property, the theory of natural selection
cannot be reduced to physics. This is anti-reductionism without vitalism. There are no
immaterial substances in the subject matter of biology, but biological properties describe
similarities that exist in spite of physical differences. Every fit organism is a physical
thing, but fitness is not a physical property. Similar reasoning has led some philosophers
to argue that Mendelian genetics cannot be reduced to theories in molecular biology
(Kitcher 1984); however, there is room to explore what these theories amount to, and
also to question how reductionism should be understood (Waters 1990; Sober 1999b).
The multiple realizability argument against reductionism presupposes a very specific
conception of what reductionism requires. However, there are a number of contend-
ing philosophical analyses of reduction that need to be evaluated to decide how that
concept should be understood. Nagel (1961) set the agenda for subsequent discussion
by arguing that one theory reduces to another if the first can be deduced from the
second, once appropriate ‘bridge laws’ are provided that connect the vocabularies of
the different theories to each other. Schaffner (1976) observed that false theories are
sometimes reduced to true ones by showing that the reduced theory is a good approxi-
mation of the reducing theory (for example, Newtonian mechanics is sometimes said
to reduce to the theory of special relativity on the grounds that relativistic particles
approach Newtonian trajectories as they slow down); however, this poses a problem for
Nagel’s account, since a false theory cannot be deduced from a true one if the bridge
laws are true. Schaffner suggested that the reducing theory must correct the reduced
theory, and that the two theories must exhibit an appropriate analogy. Other proposals
have been made and it has been recognized that ‘reduction’ and ‘reductionism’ each
have multiple meanings (Wimsatt 1979; Dupré 1993; Rosenberg 1994).
1.3 Teleology
Philosophers of biology have for a long time debated how the concept of teleology (goal-
directedness) is treated in modern biology. If it forms a part of that science, does this
count against reductionism? After all, it seems clear that teleology has had no place in
physics since the seventeenth century. The answer is that philosophers have endorsed
various reductionist accounts of what function claims mean, but that the reduction
involved is not to physics, but to the general concept of causality. Since causal concepts
are widely used in the rest of science, we may draw the conclusion that there is nothing
irreducible about teleological discourse in biology.
Most philosophical discussion of this issue derives from the work of Wright (1973,
1976) and Cummins (1975). Wright defended an etiological account of function. To
describe the function of the heart is to say why organisms have hearts. Hearts evolved
in the lineages leading to modern organisms because hearts helped organisms to
circulate their blood. The organ was not retained because it made noise. The function
of the heart, therefore, is to pump blood, not to make noise. This is not to deny that
organisms sometimes benefit from the fact that hearts make noise; for example, babies
are comforted by hearing their mothers’ heartbeat, and patients are diagnosed when
physicians listen to their heartbeats. But, according to Wright, these benefits are not
part of the heart’s function if they did not help cause the heart to evolve. Wright’s
account crisply distinguishes function from fortuitous benefit.
The etiological account faces some interesting objections. Scientists were making
function claims long before evolutionary theory was developed. What could Harvey
have meant by his claim that the function of the heart is to pump blood? In addition,
the Wrightian pattern is sometimes present in traits that do not have a function. If a
man fails to exercise because he is obese, and he remains obese because he fails to exer-
cise, are we prepared to say that the function of obesity is to prevent exercise? For these
and other reasons, non-etiological accounts of function are of interest (Boorse 1976).
Cummins’s proposal was to understand function claims as summarizing a kind of
analytic decomposition. If the organism has certain capacities, then the function of its
organs may be understood in terms of how they contribute to those capacities. To say
that the function of the heart is to pump blood is to describe what hearts now do, not
why they evolved. Function claims are like descriptions of the workings of an assem-
bly line; the whole factory is able to produce a product because the various stations
along the assembly line make their separate contributions to that end result. Cummins’s
proposal has been criticized for its liberality. According to Cummins, the heart has as
many functions as it has effects on the containing system. The heart pumps blood, but
it also makes noise, and it weighs a couple of pounds. Each of these constitutes a func-
tion that the heart has, because each contributes to some property of the organism as
a whole. Cummins pointed out that we may not be equally interested in all of these
effects, which is why we may be disinclined to talk about the function of the heart’s
noisiness or weight. However, for Cummins, there is no objective, interest-independent
distinction between function and fortuitous benefit.
Wright’s etiological account and Cummins’s alternative have been refined, and the
debate continues (see Allen, Bekoff and Lauder 1998; Buller 1999). However, it is worth
considering the possibility that both concepts are needed in biology (Sober 1993;
Godfrey-Smith 1993, 1994). It is important to distinguish the reason a trait evolved from
the beneficial effects the trait has once it is present. But it also is important to analyse
the workings of the current organism. Which account captures the real meaning of the
word ‘function’ may be less important than the fact that both are, broadly speaking,
causal accounts. Wright focuses on phylogeny, whereas Cummins focuses on ontogeny.
Again, we must realize that ‘function’ is not a theoretical term used in biology, but is an
informal concept that is used to talk about biological issues. Clarity is important if we are
to avoid miscommunication, but clarity does not always require univocity.
2 The Structure of Evolutionary Theory
Although philosophers generally agree with Dobzhansky (1973) and other biologists
that evolution is the central unifying idea in biology, a good deal of work goes on in
biology that is not evolutionary in its content. Evolution is standardly defined as
changes in the genetic composition of populations. But ecologists, physiologists,
anatomists and molecular biologists often look at properties and processes found in
populations, organisms and organic molecules that do not involve genetic change. Some
philosophical work has been done on these non-evolutionary subjects, but the fact
remains that evolutionary theory has been the central focus of philosophy of biology.
2.1 Darwin’s two-part hypothesis
For a long time, philosophy of biology languished in the shadow of philosophy of
physics. In part this was because philosophy of science for many years was developed by
philosophers who knew about physics, but often knew little about other sciences; in
addition, these philosophers often viewed physical theories like Newtonian mechanics,
relativity theory and quantum mechanics as paradigms. Questions about the philoso-
phy of science were posed and answered with these physical theories in mind; the net
result was that other sciences had to measure up to these physical standards, or be
viewed as second rate. This ‘physics worship’ was not just a consequence of the personal
inclinations of the philosophers involved; it also was grounded in the philosophical
theses that these philosophers often defended. Reductionism and claims about the unity
of science (Oppenheim and Putnam 1958) conferred upon physics a privileged status.
If relativity theory and quantum mechanics are one’s paradigms, then it makes
sense to define science as the search for laws of nature. Laws are usually taken to be
universal generalizations that do not mention any place, time or individual, and which
are empirical and nomologically necessary. Newton’s universal law of gravitation, for
example, does not mention any individual place, time or thing; it describes the
gravitational attraction that must exist between any two objects that have mass. If
science is the search for laws, the question immediately arises of how Darwin’s theory
of evolution can count as scientific. It is difficult to find a law of evolution in Darwin’s
writings. His major insight was to formulate, test and apply the following two
hypotheses: (1) all the organisms now on earth share common ancestors – there is a
single tree of life; (2) natural selection was an important cause of the characteristics
that living things on earth are observed to have. Neither of these claims is a law; both
are historical hypotheses about the earth’s past.
Nomothetic Sciences and Historical Sciences
Rather than consign the claims for the single tree of life and natural selection to the back-
ground of Darwin’s achievement and promote some universal claim to the centre of our
attention, it is better to recognize that not all sciences have the search for laws as their
primary goal. Nomothetic sciences are like this, but historical sciences are not. The con-
trast in physics between mechanics and astronomy illustrates this difference. In mechan-
ics the main goal is to discover the laws of motion. One may want to discover the truth
values of singular statements about particular objects in order to test proposed laws, but
this information about particulars is merely a means to an end. In astronomy the main
goal is to discover facts about the composition and history of objects in the cosmos. One
may use laws of nature to test singular historical hypotheses, but this information about
laws is merely a means to an end. Darwin’s theory of evolution is first and foremost a
historical hypothesis; it is an attempt to reconstruct the history of the living things we
find around us. Darwin’s theory of evolution and Newton’s theory of gravitation are
different, not just in their subject matters, but in their logical structure.
Figure 10.1 The two candidate trees for sparrows, robins and crocodiles: (SR)C, which groups sparrows with robins, and S(RC), which groups robins with crocodiles
2.2 Phylogenetic inference
Darwin argued that the best explanation of many of the similarities we observe among
different species is the hypothesis that they share common ancestors. However, not all
similarities have this evidential significance; for example, the fact that sharks and
dolphins both are shaped like torpedoes can easily be explained by the hypothesis that
this shape was useful for each group of organisms to move efficiently through water. We
would expect to find the torpedo shape even if the two groups had arisen independently.
In contrast, those similarities that cannot be explained by the functional requirements
of similar environments provide better evidence of common descent. This is why
vestigial organs are such telling indications of shared ancestry. Why do human foetuses
have gill slits, if they are not related to organisms in which gill slits are useful? The
logic of these inferences conforms to the Likelihood Principle: a set of observations is
said to favour one hypothesis over another precisely when the first hypothesis confers
on the observations a higher probability than the second one does (Sober 1988, 1993,
1999c; Royall 1997). A similarity S exhibited by two species is evidence that they
have a common ancestor precisely when Pr(S | common ancestry) > Pr(S | separate
ancestry).
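A toy computation makes the comparison vivid. In the sketch below every number is invented, and the simple independence and retention assumptions are mine, introduced only for illustration:

    # Toy illustration of the Likelihood Principle for a shared similarity S.
    # All probabilities are invented for the example.

    # Suppose the trait arises independently in any single lineage with
    # probability 0.1. Under separate ancestry the two species acquire S
    # independently:
    p_separate = 0.1 * 0.1          # Pr(S | separate ancestry) = 0.01

    # Under common ancestry, suppose the ancestor had the trait with
    # probability 0.1 and each descendant retained it with probability 0.9:
    p_common = 0.1 * 0.9 * 0.9      # Pr(S | common ancestry) = 0.081

    # S favours common ancestry precisely when the first likelihood is higher:
    print(p_common > p_separate)    # True

On these made-up numbers the shared similarity is roughly eight times more probable under common ancestry, so the observation favours that hypothesis.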
If all species on earth share common ancestors, the question remains as to which
species are closely related and which are related only more distantly. The methodology
of this problem has been intensively studied in evolutionary biology. One principle that
is often used is termed phylogenetic parsimony. It was independently proposed by Hennig
(1966) and by Edwards and Cavalli-Sforza (1964). Consider the two phylogenetic trees
depicted in figure 10.1. The tips of the trees represent present-day species or superspe-
cific taxa; interior nodes represent ancestors. The (SR)C tree represents the hypothesis
that sparrows and robins have a common ancestor that is not an ancestor of croco-
diles; the S(RC) tree represents the claim that robins and crocs are more closely related
to each other than either is to sparrows.
We know by observation that sparrows and robins have wings, but crocodiles do not.
Can this observation be used to discriminate between the two genealogical hypotheses?
If we assume that the species at the root of the tree lacked wings, then we are saying
that sparrows and robins share a derived character (an apomorphy) that crocodiles
do not possess. The (SR)C tree is more parsimonious than the S(RC) tree in this
case, because (SR)C is able to explain the observations at the tips of the tree by
Figure 10.2 Outgroup comparison: the outgroups are wingless, so winglessness (the ingroup's ancestral state, marked '?') is inferred to be ancestral for the ingroup as well
postulating a single change in character state (from wingless to winged) in the tree’s
interior, whereas S(RC) requires two such changes to explain the observations. On
the other hand, if the species at the root of the trees had wings, then sparrows and
robins would share an ancestral character (a plesiomorphy), and the two hypotheses
would be equally parsimonious, since each could explain the observations by postulat-
ing a single change in character state. Thus, the principle of phylogenetic parsimony
asserts that derived similarities are evidence of relatedness, while ancestral similarities
are not.
How is one to determine whether wings or lack of wings is the ancestral condition
in the problem just described? The principle of parsimony addresses this question as
well. Even if we don’t know how robins, sparrows and crocodiles are related, we still
may know that these three groups are more closely related to each other than any of
them is to various other groups (for example, daffodils). If the outgroup species depicted
in figure 10.2 all lack wings, then phylogenetic parsimony dictates that we should
conclude that winglessness is the ancestral condition for the ingroup as well.
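The counting behind these comparisons can be made explicit. The sketch below uses Fitch's small-parsimony algorithm, one standard way of computing the minimum number of changes a rooted tree requires; the nested-tuple tree encoding, and the device of attaching the wingless outgroup as an extra leaf to fix the ancestral state, are conveniences of mine rather than anything in the text:

    # Minimum-change (parsimony) counting on the sparrow/robin/crocodile
    # example, via Fitch's small-parsimony algorithm.

    def fitch(tree, states):
        """Return the minimum number of character-state changes on a rooted
        binary tree given as nested tuples of leaf names."""
        changes = 0

        def state_set(node):
            nonlocal changes
            if isinstance(node, str):             # leaf: its observed state
                return {states[node]}
            left, right = [state_set(child) for child in node]
            if left & right:                      # intersection: no new change
                return left & right
            changes += 1                          # disjoint sets: one change
            return left | right

        state_set(tree)
        return changes

    states = {'Sparrow': 'winged', 'Robin': 'winged',
              'Crocodile': 'wingless', 'Outgroup': 'wingless'}

    # Attaching a wingless outgroup at the root fixes the ancestral state,
    # as in the outgroup comparison of figure 10.2.
    print(fitch(((('Sparrow', 'Robin'), 'Crocodile'), 'Outgroup'), states))  # 1
    print(fitch((('Sparrow', ('Robin', 'Crocodile')), 'Outgroup'), states))  # 2

Run this way, (SR)C requires one change while S(RC) requires two, reproducing the comparison in the text.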
To infer phylogenetic relationships, why should parsimony be used, rather than a
criterion of overall similarity, which accords evidential significance to derived and
ancestral similarities alike? Some have argued that Popperian considerations con-
cerning FALSIFIABILITY (pp. 287–90) suffice to justify phylogenetic parsimony (see, for
example, Eldredge and Cracraft 1980; Wiley 1981); others have suggested a likelihood
analysis instead (Sober 1988).
Phylogenetic inference provides an interesting context for considering philosophical
questions about the role of a parsimony principle in scientific reasoning. Although
philosophers often complain that it is unclear what it means for one theory to be simpler
or more parsimonious than another, it is clear enough what it means for one geneal-
ogical hypothesis to provide a more parsimonious explanation of the observations than
another. And in answer to the question of why we should use parsimony to infer
genealogical relatedness, it seems unsatisfactory to reply that a preference for parsi-
monious hypotheses is part of what it means to think scientifically. Why is it ‘unscien-
tific’ to use overall similarity as one’s guide? Parsimony is not an end in itself; if a
preference for parsimonious explanations is justified, it is justified because it provides a
means of achieving some more ultimate epistemic end.
2.3 Adaptationism
Natural selection is one among several processes that can influence the evolution of a
trait. Since a lineage starts evolving with some pre-existing set of ancestral traits, the
characteristics exhibited by descendants may show the influence of that ancestral
condition. For example, suppose an ancestral bear species has a fur thickness of 5 cm.
The climate then gets colder, so that the optimal fur thickness for one of its descendants
would be 12 cm. If the descendant achieves a fur thickness of, say, 10 cm, one may
want to attribute this outcome both to natural selection and to phylogenetic inertia,
sometimes called ancestral influence (Orzack and Sober 2001). Another factor that
can prevent natural selection from moving species to optimal trait values is the
underlying genetics. For example, if the fittest phenotype is coded by a heterozygote,
it will be impossible for all individuals to exhibit that optimal phenotype. In
similar fashion, random genetic drift, correlation of characters, and other factors can
prevent natural selection from causing the fittest available phenotype to evolve (Sober
1993).
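The genetic constraint is easy to exhibit with the standard one-locus selection recursion. In this sketch the fitness values are invented and random mating is assumed:

    # Heterozygote superiority: why selection cannot make every individual
    # display the fittest phenotype. Fitness values are invented.

    def next_allele_freq(p, w_AA, w_Aa, w_aa):
        """One generation of viability selection at a diploid locus,
        assuming random mating (standard textbook recursion)."""
        q = 1 - p
        w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa      # mean fitness
        return p * (p*w_AA + q*w_Aa) / w_bar

    p = 0.05                                          # allele A starts rare
    for _ in range(200):
        p = next_allele_freq(p, w_AA=0.8, w_Aa=1.0, w_aa=0.8)

    q = 1 - p
    print(round(p, 3))        # -> 0.5: both alleles persist at equilibrium
    print(round(2*p*q, 3))    # -> 0.5: only half the population is heterozygous

Both alleles are retained at equilibrium, so at most half of each generation enjoys the optimal heterozygous phenotype; this is the sense in which the underlying genetics blocks the optimum.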
The debate about ‘adaptationism’ that has been going on for the last twenty-some
years in evolutionary biology does not concern Darwin’s tree of life hypothesis; nor does
it concern the modest claim that natural selection has influenced most of the pheno-
typic traits that species exhibit. Rather, the question concerns the power of natural
selection. Is selection merely one among several important influences on trait evolution,
or is it the only important influence (Orzack and Sober 1994; Sober 1993)? Can
phylogenetic inertia and genetic constraints be ignored in predicting the descendant
bear’s fur thickness? Adaptationism embodies a relatively monistic conception of trait
evolution; its alternative is evolutionary pluralism (Gould and Lewontin 1979).
Adaptationism, Pluralism and Methodology
Separate from the substantive debate about the processes that have governed evolution,
the debate about adaptationism has had an important methodological dimension. Gould
and Lewontin (1979) accused adaptationists of endorsing ‘just-so stories’ – of accepting
explanations uncritically. Adaptive hypotheses need to be tested rigorously, which means
that they should be tested against non-adaptive alternative explanations. Gould and
Lewontin also suggested that adaptationism violates Popper’s injunction that scientific
hypotheses should be falsifiable. Notice that neither of these criticisms, even if they were
correct, would show that pluralism is true and monism is false as claims about nature.
The fact that a proposition has been accepted uncritically does not entail that the
proposition is false.
The charge that adaptationism is untestable should not be taken at face value. It is
true that if one adaptationist hypothesis is rejected, then another can be invented. This
is because adaptationism is an ism – it specifies the kind of explanation that all or most
traits have without saying anything in detail about what specific adaptationist hypothe-
ses are true. Consider, for example, the fact that some species reproduce sexually while others
reproduce asexually. Adaptationists will insist that this variation exists because sexual
reproduction is better in some circumstances and asexual reproduction is better in
others. However, this general claim does not tell us why selection favours sexual repro-
duction in some cases and asexual reproduction in others. Adaptationism is a research
programme in something like the sense that Lakatos (1978) described (Mitchell and
Valone 1990); the failure of one or several adaptationist models does not mean that the
research programme is bankrupt. The same point applies to evolutionary pluralism: if
one pluralistic model fails, another can be invented, but this does not show that plural-
ism should be rejected. It is important that scientists rigorously test specific hypotheses,
whether they are adaptationist or pluralistic in form; however, the isms that govern
research programmes are not so easily tested. Only the accumulation of successes and
failures over the long run will tell us whether adaptationism is true as a claim about
nature (Orzack and Sober 1994).
2.4 The units of selection controversy
Whether or not the concept of function is understood historically (section 1.3), it is
pretty clear that evolutionists use the concept of adaptation in this way. To say that
wings are an adaptation for flying does not mean that wings now are useful because
they help organisms to fly; it means that wings evolved because they helped organisms
to fly. The relevant consideration is in the past tense, not the present. Wings could be
an adaptation for flying even if it now is disadvantageous for the organisms in question
to leave the ground. Historical origin and present advantage are not automatically
linked. Notice also that wings can be adaptations for flying even if adaptationism is
wrong in what it says about wings; to say that natural selection helped cause the trait
to evolve does not mean that selection was the only important influence on the trait’s
evolution.
When Darwin argued that this or that trait is an adaptation, he usually had in mind
that the trait evolved because it helped individual organisms to survive and reproduce.
For example, predators have sharp teeth because individuals with sharp teeth do better
in the struggle for existence than individuals that have dull teeth. The explanation is
not that the trait evolved because it helped the whole species, or the whole ecosystem.
The adaptations that Darwin discussed are mainly individual adaptations. These indi-
vidual adaptations evolved by the process of individual selection, wherein individuals
compete with other individuals in the same species.
There were a few occasions, however, in which Darwin saw things differently. Why
are there sterile workers in some species of social insect? This is not because sterile indi-
viduals do better in surviving and reproducing than fertile individuals. Rather, Darwin
argued that nests that contain sterile workers do better than nests that do not. Darwin
introduced the idea that some traits are group adaptations – they evolved because they
helped groups, not because they helped individual organisms. These adaptations
evolved by the process of group selection, wherein groups compete with other groups in
the same species.
Darwin thought that group selection is required to understand important features
of human morality. Here is how he analysed the problem in The Descent of Man:
It must not be forgotten that although a high standard of morality gives but a slight or no
advantage to each individual man and his children over the other men of the same tribe,
yet that an increase in the number of well-endowed men and advancement in the stan-
dard of morality will certainly give an immense advantage to one tribe over another. There
can be no doubt that a tribe including many members who, from possessing in a high
degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always
ready to aid one another, and to sacrifice themselves for the common good, would be
victorious over most other tribes; and this would be natural selection. At all times
throughout the world tribes have supplanted other tribes; and as morality is one
important element in their success, the standard of morality and the number of
well-endowed men will thus everywhere tend to rise and increase. (Darwin 1981: 166)
The character traits that Darwin describes are disadvantageous to individuals, but
advantageous to groups. A courageous man who risks his life in warfare is less fit than
a coward in the same group who plays it safe; yet groups that include courageous
individuals are fitter than groups that include only cowards.
Biologists came to use the term ‘altruistic’ for traits that are deleterious to indivi-
duals but advantageous to groups. This terminology is potentially confusing, since
altruism, so defined, describes the fitness costs and benefits of the trait, not how or even
whether the individual thinks and feels. Sterile workers in a nest of social insects and
the warriors whom Darwin described are both altruistic in the evolutionary sense, even
though the warriors have views about their own behaviour while the insects presum-
ably do not. And just as a mindless organism can be an evolutionary altruist, it also is
possible for a behaviour that enhances the actor’s own fitness to be motivated by desires
that are psychologically altruistic (Sober and Wilson 1998).
The idea that evolution involves both individual selection and group selection was
standard fare in the heyday of the Modern Synthesis in evolutionary biology, roughly
from 1930 to 1960. Biologists invoked individual selection to explain some traits and group
selection to explain others. Mathematical population geneticists, such as R. A. Fisher,
J. B. S. Haldane and Sewall Wright, briefly expressed reservations about the ability of
group selection to cause altruistic traits to evolve, but these theoretical points were not
given much credence by biologists in the trenches; these biologists thought they
observed altruism in nature and they sought to explain altruism in the only way they
knew how – by invoking the hypothesis of group selection.
This complacent pluralism about the units of selection was shattered in the 1960s,
when group selection was attacked by a number of biologists. The most thorough and
devastating critique was G. C. Williams’s (1966) book Adaptation and Natural Selection.
Williams touched a nerve and his vigorous rejection of adaptations that exist for the
good of the group spread quickly through the evolution community. The arguments
that Williams advanced were popularized ten years later by Richard Dawkins (1976)
in his widely read book The Selfish Gene. The idea that some traits evolve because they
benefit the group was replaced by genic selectionism – the idea that all traits evolve
because they are good for the gene. However, not all biologists subscribed to this new
viewpoint; they argued that group selection had been rejected prematurely (for
example, Lewontin 1970; Wilson 1980; Wade 1978). Since then, multi-level selection
theory – the view that selection occurs at all levels of the biological hierarchy – has been
making a comeback (Sober and Wilson 1998).
Assessing Group Selection: An Empirical or Conceptual Controversy?
This scientific controversy is philosophically interesting because group selection was
rejected in the 1960s for a number of reasons, and these reasons differ so much in
character that it is unclear what in the controversy is empirical and what is conceptual.
The schizophrenic character of the arguments that biologists produced may be illustrated
by their discussion of the evolution of sex ratio – the mix of males and females that a
population exhibits. Williams (1966) treats this as an empirical test case for group
selection. He notes, following Fisher (1957), that individual selection would favour an
even sex ratio, whereas group selection would sometimes favour a female-biased sex ratio
(the latter because groups maximize their productivity by having the smallest number of
males consistent with all the females being fertilized). Williams claims that the sex ratios
observed in nature are uniformly even, and concludes that the empirical hypothesis of
group selection is mistaken, at least as it pertains to the evolution of sex ratio.
A year later, W. D. Hamilton published a paper called ‘Extraordinary Sex Ratios’.
Hamilton (1967) reported that female-biased sex ratios are very common in the social
insects. One might expect that this empirical finding, interpreted through the lens of
Williams’s argument, would have led biologists to conclude that group selection is impli-
cated in the evolution of sex ratio. However, this is precisely how the model was not
interpreted. The reason is that Hamilton’s paper provides a mathematical theory for
understanding how uneven sex ratios evolve. Solving Hamilton’s equations allows one
to identify the fittest (the ‘unbeatable’) sex ratio strategy that an individual female should
follow in determining her mix of sons and daughters; this formalism was widely inter-
preted as showing how individual selection could promote the evolution of a female-
biased sex ratio. One of the few biologists who did not interpret Hamilton’s model in this
way was Hamilton himself; Hamilton says, though only in a footnote, that his model
represents the action of group selection. Although Hamilton embraced the idea of group
selection in 1967 and later publications, his followers were more Hamiltonian than
Hamilton himself.
Williams (1992) concurs with Hamilton’s interpretation. Yet many biologists con-
tinue to view group selection as beyond the pale, and they cite Hamilton and Williams
as having shown why.
Sterelny and Kitcher (1988) argue in favour of genic selectionism on the grounds that
all adaptations can be interpreted as evolving because they benefit the genes that code
for them. Biased sex ratios, extreme altruism and so on, can all be brought under the
rubric of the selfish gene. The reason group selection is the wrong way to think about
nature is not that it is empirically disconfirmed; rather, the problem is that it is insuffi-
ciently general. Although all adaptations can be treated as genic adaptations, it is false
that all adaptations can be interpreted as group adaptations. According to Sterelny and
Kitcher, the solution to the units of selection problem involves adopting a convention,
not answering a factual question about the history of life. For them, the fact that a trait
can be interpreted as a genic adaptation, no matter what the trait is like, is a point in
favour of the genic convention. However, those who think that the units of selection
problem is an empirical issue view this as a defect, not a strength, of the framework that
Sterelny and Kitcher recommend (Wimsatt 1980; Sober and Lewontin 1982; Lloyd
1988; Sober and Wilson 1994, 1998).
2.5 Are there laws of evolution?
Although Darwin’s fundamental contribution was the formulation, test, and applica-
tion of two historical hypotheses (see also Kitcher 1993: ch. 2), the broad subject of
modern evolutionary theory includes much more than historical hypotheses.
The subject is peppered with what biologists call ‘models’. These are mathematical
statements that describe what the evolutionary consequences would be of specified
initial conditions. For example, Fisher’s analysis of sex ratio is an if/then statement.
Fisher (1957) described a sufficient condition that ensures that a population with an
uneven sex ratio will evolve towards an even sex ratio, and remain there. Fisher’s theory
has all the earmarks of a law of nature, save one. It is general, it does not refer to any
place, time or individual, and it has counterfactual force. However, Fisher’s model is an
a priori mathematical truth. This does not mean that the claim that Fisher’s model
applies to a real world population is a priori; whether the population exhibits random
mating and satisfies the other assumptions of the model is an empirical matter, and
whether the population exhibits an even sex ratio is an empirical matter as well. The
antecedent is empirical and the consequent is too, but the conditional is a priori (Sober
1984, 1993).
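One can see both the content of the model and its a priori character by computing with it. The sketch below uses the Shaw–Mohler expression for a rare mutant mother's expected share of grandchildren, one standard formalization of Fisher's argument; the particular numbers are arbitrary:

    # Fisher's sex-ratio argument in Shaw–Mohler form. grandchild_share(m, r)
    # is proportional to the expected number of grandchildren of a rare mutant
    # mother who makes a fraction m of sons in a population whose resident
    # sex ratio (fraction of males) is r.

    def grandchild_share(m, r):
        return m / r + (1 - m) / (1 - r)

    # In a female-biased population (r = 0.3), over-producing sons pays:
    print(grandchild_share(0.5, 0.3) > grandchild_share(0.3, 0.3))   # True

    # In a male-biased population (r = 0.7), over-producing daughters pays:
    print(grandchild_share(0.5, 0.7) > grandchild_share(0.7, 0.7))   # True

    # At r = 0.5 every strategy does equally well: the even ratio is unbeatable.
    print(abs(grandchild_share(0.3, 0.5) - grandchild_share(0.7, 0.5)) < 1e-12)

Given the model's assumptions, these inequalities hold as a matter of mathematics; whether a real population satisfies the assumptions is the empirical question.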
Empirical generalizations exist in evolutionary biology, but they often fail to have the
modal force that laws are supposed to have. Cope’s Rule, for example, says that species
evolve in the direction of increased size. This is, at best, a rule of thumb that summa-
rizes a pattern found in the fossil record (Hull 1974). The problem is not just that there
are exceptions (a law, after all, can be probabilistic), but that there is nothing in our
understanding of the evolutionary process that tells us that evolution must have this
sort of directionality. A fundamental property of natural selection is that it is ‘oppor-
tunistic’. If growing larger is advantageous, selection will push the population along
that trajectory; but if growing smaller is advantageous, selection will favour that
outcome, instead. It is an accident of circumstance whether the environment favours
one transition rather than the other. If the history of life on earth happens to contain
more environments of the first kind than of the second, Cope’s Rule will be true as a
statistical generalization, but it will not be a law.
The same point holds about increasing complexity. To be sure, complexity has often
increased; however, there also are many cases in which complexity has declined. For
example, the evolution of parasites from their free-living ancestors often involves the
loss of organs; selection often leads organisms to lose features that they no longer need.
Still, if life started simple, an increase in complexity was bound to occur. Biologists gen-
erally regard this as an artefact of the initial conditions that obtained, not as reflecting
some resolute directionality in the evolutionary process itself. As an analogy, consider
a game played on a one-row checker board that has a thousand squares. The game is
played in a sequence of steps; at each step, a coin is tossed to decide how the checker
will be moved. Heads means the checker moves one square to the right and tails means
that it moves one square to the left. Of course, if the coin lands tails when the checker
is on the left-most square, the checker can’t move to the left; in this case, one must toss
the coin again. The same applies if the checker is on the right-most square when the
coin lands heads. Since the coin is fair, there is no directionality in the laws that govern
the checker’s trajectory. The laws entail no net tendency for the checker to move to the
right or to the left. However, an asymmetry can be introduced by the checker’s initial
location. Suppose we begin the game with the checker in the left-most square. After fifty
tosses, the checker will almost certainly occupy a square that is to the right of where
it began (Maynard-Smith 1988; Gould 1989; Sober 1994a).
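The checker game is easy to simulate. In the sketch below the board size, number of tosses, and starting square follow the example; reading the re-toss rule as a forced step away from the wall is my gloss:

    # Simulating the one-row checker-board game described above.
    import random

    def final_square(squares=1000, tosses=50, start=0):
        pos = start
        for _ in range(tosses):
            if pos == 0:                        # re-tossing at the left wall
                pos += 1                        # eventually forces a move right
            elif pos == squares - 1:            # likewise at the right wall
                pos -= 1
            else:
                pos += random.choice((-1, 1))   # fair coin: left or right
        return pos

    results = [final_square() for _ in range(10_000)]
    print(sum(pos > 0 for pos in results) / len(results))  # about 0.89
    print(sum(results) / len(results))                     # mean about 5.6

Although the coin is fair, starting at the left-most square makes a rightward drift overwhelmingly likely; this is the analogy's point about increasing complexity.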
I so far have suggested that there are two types of generalization in evolutionary
biology that fail to be scientific laws in the usual sense. The conditional statements that
comprise mathematical models are a priori, and empirical generalizations about evolu-
tionary trends often are accidentally true, if true at all. This does not show that there
are no laws. Are there any? Beatty (1995) has argued that there are none, not just none
in evolutionary biology, but none in biology as a whole. His reason for this claim is that
every biological generalization is true in virtue of a contingent evolutionary process’s
making it so. For example, if the organisms in a population obey Mendel’s law of inde-
pendent assortment, this is because evolution led this regularity to evolve. If selection
had favoured some other pattern of heredity, that pattern might have evolved instead.
This is Beatty’s evolutionary contingency thesis.
One question that needs to be considered about Beatty’s argument is depicted in the
following diagram:
    E  ------------->  (If P then Q)
    t0                 t1 ......... t2
If the evolutionary regularity ‘If P then Q’ holds true between times t1 and t2 only
because contingent evolutionary events E happened to take place at time t0, then it
makes sense to say that the regularity is contingent. However, this leaves it open that
the more complex conditional ‘if E occurs, then (if P then Q) will be true later’ holds
true non-contingently (Sober 1997a). This point does not establish that biological laws
exist, but it does show that one cannot establish that there are no laws just by pointing
out that regularities depend on earlier contingencies. Furthermore, if causality entails
the existence of laws (a metaphysical claim that should not be accepted uncritically;
Anscombe (1975), for example, denies it), then the causal dependency of ‘If P then Q’
on E entails the existence of a law.
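The point can be put in modal notation (the symbolization is mine). Reading the box as nomic necessity and the subscripts as indicating when a claim holds, the contingency of the regularity is compatible with the necessity of the nested conditional:

    \neg\,\Box\,(P \rightarrow Q)_{[t_1,\,t_2]}
    \qquad\text{and yet}\qquad
    \Box\,\big(E_{t_0} \rightarrow (P \rightarrow Q)_{[t_1,\,t_2]}\big)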
Sensitivity to Initial Conditions
Gould (1989) has emphasized the idea that evolutionary outcomes often exhibit a great
deal of sensitivity to initial conditions. Even a small change in the conditions that obtained
millions of years ago would have made a huge difference in the distribution of life forms
that we find on earth now. If a certain meteor strike had not occurred, the dinosaurs
would not have gone extinct; if the dinosaurs had not gone extinct, mammals would not
have proliferated; and if the mammalian explosion had not occurred, no mammal resem-
bling Homo sapiens would have made its appearance. According to Gould, ‘replaying the
tape’ would reveal that many of the interesting features of the life forms we find around
us are accidents of history.
The claim that there is sensitivity to initial conditions has the following form:
although X1 leads to Y1, it also is true that X2 leads to Y2, where X1 and X2 are very similar,
while Y1 and Y2 are very different. The usual gloss on this claim is that there are two laws:
X1 leads to Y1 and X2 leads to Y2. Thus, the sensitivity thesis does not entail that there are
no biological laws. Notice also that Gould is not claiming that all features of living things
exhibit sensitivity to initial conditions; he is merely calling attention to the importance
of this phenomenon. It is perfectly possible that some features of life are robust (that is,
insensitive to perturbations in initial conditions) while others are not. The frequency of
evolutionary convergence shows that similar outcomes often arise from rather different
starting points. What are sorely needed in this area are biologically well-motivated theo-
ries concerning which features should be robust and which should exhibit sensitivity.
Evolutionary biology very much needs to move this problem beyond the reporting of
piecemeal intuitions about examples.
3 The Philosophical Significance of Evolutionary Theory
3.1 The argument from design
The two parts of Darwin’s theory of evolution by natural selection both conflict with
at least one traditional reading of Genesis. First, the theory postulates a single tree of
life, and thus contradicts the claim that different groups of organisms were separately
created. Second, the theory postulates the mindless process of natural selection as the
cause of life’s adaptive features, and thus contradicts the claim that adaptive features
arose by the intelligent design of a creator. There are additional, though less central,
points of conflict as well; for example, Darwin held that the earth is ancient while one
literal reading of the Bible says that the earth is young.
Not only does the theory of evolution conflict with one set of religious doctrines; it
also conflicts with an influential argument for the existence of God – the ARGUMENT FROM
DESIGN (pp. 478–80). The fifth of THOMAS AQUINAS’s (chapter 24) five ways of proving
the existence of God contends that things that ‘act for an end’ must either have minds
or be produced by an intelligent designer. In the century in Britain preceding Darwin’s
publication of the Origin, this argument was repeatedly elaborated. It was not simply
one argument for the existence of God; it became the fundamental argument.
Perhaps the most famous formulation of the design argument is that given by
William Paley, in his book Natural Theology (1805). Paley proposed an analogy. If you
were walking across a heath and found a stone, you would not dream of concluding
that the stone was produced by an intelligent designer. However, if you found a watch,
you would conclude without hesitation that the watch was produced by a watchmaker.
The difference between the watch and the stone is that the watch exhibits adaptive
complexity. The watch as a whole measures out equal intervals of time, and the parts
of the watch conspire to allow the watch to perform that function; were any of those
parts even modestly different from the way they are, the watch would not keep time.
Paley then argues that what is true of the watch also holds for many features of
organisms. The vertebrate eye, for example, also exhibits adaptive complexity. Paley
concludes that organisms are the result of intelligent design no less than the watch is
the handiwork of a watchmaker.
Philosophers have not always agreed on what the form of the design argument is.
For example, HUME (chapter 31), in his Dialogues Concerning Natural Religion (1779),
published some thirty years before Paley’s Natural Theology appeared, suggests that the
design argument is an analogical argument that has the following form:
Watches are produced by intelligent design.
Organisms are like watches.
==========================================
Organisms were produced by intelligent design.
The double line separating premisses from conclusion indicates that the argument is
not intended to be deductively valid. Hume suggests that the strength of the analogi-
cal argument depends on the degree to which watches and organisms resemble each
other – the more similar they are, the more strongly the premisses support the conclu-
sion. He then points out that there are many dissimilarities – watches are made of metal
and glass, organisms breathe, and so on. Hume concludes that the design argument is
a very weak analogical argument.
A more charitable reading of the argument from design construes it as an inference
to the best explanation (pp. 42–3) (what C. S. PEIRCE (chapter 36) called an abductive argu-
ment), one that does not depend on there being any strong degree of overall
similarity between organisms and artefacts. According to this reading of the argument,
Paley is using the Likelihood Principle to discriminate between two hypotheses:
Od Organisms were created by an intelligent designer.
Or Organisms were produced by random mindless processes.
Paley claims that the observed adaptive complexity of organisms is rendered much
more probable by the former hypothesis than by the latter. Paley discusses the watch
on the heath to make vivid the power of this type of inference in an uncontroversial
example. The adaptive complexity of the watch favours the first of the following
hypotheses over the second:
Wd The watch was created by an intelligent designer.
Wr The watch was produced by random mindless processes.
The point of the analogy is just to explain how the Likelihood Principle works. It does
not matter how similar or dissimilar watches and organisms are overall, so long as they
share the single feature of adaptive complexity (Sober 1993).
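On this reading the two inferences share a single form. Writing A for the observation of adaptive complexity, the likelihood claims are (the symbolization, though not the comparisons themselves, is mine):

    \Pr(A \mid W_d) \gg \Pr(A \mid W_r)
    \qquad\text{and, by parity of reasoning,}\qquad
    \Pr(A \mid O_d) \gg \Pr(A \mid O_r)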
While he studied at Cambridge, Darwin read Paley and greatly admired Paley’s
discussion of adaptive contrivances. Darwin eventually came to reject the argument
from design, and to move towards agnosticism, because he thought he saw so many
features of organisms that conflict with the hypothesis that organisms were created by
a benevolent, omniscient and omnipotent designer. For example, Darwin was revolted
by the parasitic wasps that lay their eggs on paralysed caterpillars; when the eggs hatch,
they eat their hosts alive, leaving the brain until last (Desmond and Moore 1991). If an
intelligent designer built so much suffering into the living world, this designer does not
deserve our reverence. Darwin saw the living world as saturated with pain and death;
this is what one should expect according to the hypothesis of evolution by natural
selection, but not what one should expect according to the hypothesis of intelligent
design as Darwin understood that hypothesis. This is a version of the ARGUMENT FROM
EVIL (pp. 480–3). The argument is not best put by saying that the quantity of evil found
in the world deductively entails that there is no God; rather, the observations are claimed
to play the non-deductive role of favouring atheism over theism.
This Darwinian argument is a likelihood argument, just like Paley’s, but Darwin
considers a hypothesis that Paley does not, and he considers an observation that differs
from the one on which Paley focused. The hypothesis of evolution by natural selection
does not claim that living things change their features at random. A process is random
when its possible outcomes are equiprobable, or nearly so. Tossing a coin, spinning a
roulette wheel, drawing cards from a deck – these are examples of random processes.
However, natural selection is a process in which some traits have much higher proba-
bilities of spreading than others. The rule that governs selection processes is that fitter
traits tend to increase in frequency and less fit traits tend to decline. In addition to
elaborating a hypothesis that Paley did not consider, Darwin also emphasized the
ubiquity of adaptive imperfections. Vestigial organs, for example, are not what one
should expect to find if
living things were created from scratch by a benevolent,
intelligent and powerful engineer. However, since natural selection works on the array
of variation that ancestral populations happen to contain, it is no surprise that these
processes leave traces of ancestral forms that have no adaptive rationale.
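The rule that fitter traits tend to increase in frequency while less fit traits decline can be captured in a few lines of Python. This is a generic discrete-generation selection update of my own devising, offered only to make the contrast with equiprobable 'random' processes vivid:

    # One generation of selection: each trait's frequency is reweighted by
    # its fitness and the result renormalized. Fitter traits gain ground.
    def select(frequencies, fitnesses):
        weighted = [p * w for p, w in zip(frequencies, fitnesses)]
        total = sum(weighted)
        return [x / total for x in weighted]

    # A trait with a 10 per cent fitness advantage, initially at 50 per cent,
    # comes to predominate after repeated rounds of selection.
    freqs = [0.5, 0.5]
    for _ in range(50):
        freqs = select(freqs, [1.1, 1.0])
    print(freqs)  # roughly [0.99, 0.01]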
Paley anticipated one aspect of this Darwinian argument. He says that we should
infer a watchmaker when we see the watch, even if the watch turns out to be imperfect.
Paley emphatically denies that imperfect adaptation undercuts the design argument. To
be sure, Paley delighted in what he took to be the many perfections found in nature.
However, it is clear that he did not think of this as part of his argument for the existence
of a designer. Rather, the ostensible perfection of nature comes in later, as evidence that
bears on the designer’s characteristics. Adaptive complexity, even when imperfect,
proves there is a designer. Further details then show that this designer is benevolent.
What, then, is the logical status of the design argument, and how is Darwin’s theory
related to it? Darwin and his successors have claimed that imperfect adaptations favour
evolutionary theory over intelligent design (see, for example, Gould 1980). Should they
concede that perfect adaptation would favour intelligent design over purely random
mindless processes? I suggest that this interpretation concedes too much to Paley. Even
a perfect watch is no evidence of
intelligent design unless we have independently
attested auxiliary information about the intentions and abilities that the putative
designer would have if he existed. What probability does the design hypothesis confer
on the features of the watch that we observe? We observe, for example, that the watch
is made of metal and glass. How probable is it that an intelligent designer would use
these materials? If intelligent designers never use these materials, the design hypothe-
sis will be less likely than the hypothesis of random mindless processes. However, if we
assume that intelligent designers often use metal and glass, we reach the opposite
verdict. And if we simply do not know what the putative intelligent designer would be
inclined to do, we will be unable to compare the likelihoods of the two hypotheses.
Paley is right that we do not and should not hesitate to infer a watchmaker from the
observed features of the watch; this is because we know a great deal about the incli-
nations and abilities that human designers have. However, when we shift from watch
to organism, the situation changes. What do we know about the desires and abilities of
this putative designer of organisms? I suggest that we do not know enough to compare
the likelihoods of the two hypotheses. Paley’s analogy between watch and organism in
fact conceals a deep disanalogy. As a result, the design argument fails on its own terms;
there is no need to bring in Darwinian theory to see this. It is not true that the observed
features of organisms put the design hypothesis to the test and that what we observe
disconfirms the hypothesis; rather, the present suggestion is that the design hypothesis
is untestable (Sober 1993, 1999d). Defenders of the design argument are not entitled
to assume without argument propositions that describe the features that organisms
would have if they were produced by an intelligent designer. Critics of the argument
are not entitled to do this, either.
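The untestability point can be restated in the terms of the earlier likelihood sketch: without independently attested auxiliary information about the designer, one of the two likelihoods has no value at all. The fragment below (with placeholders again invented) simply dramatizes this.

    # Without auxiliary information about the designer's aims and abilities,
    # no value can be assigned to P(observations | design); the comparison
    # required by the Likelihood Principle cannot even get started.
    p_obs_given_design = None    # undefined, not merely unknown in degree
    p_obs_given_random = 1e-9    # assumed, as before

    if p_obs_given_design is None:
        print("The likelihoods cannot be compared: the hypothesis is untestable.")
    else:
        print("Ratio:", p_obs_given_design / p_obs_given_random)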
3.2 Species, essentialism and human nature
Species have long been a favourite example that philosophers cite when they discuss
natural kinds. For example, MILL (chapter 35) in his System of Logic (1874) claims that
human being is a natural kind, but the class of snub-nosed individuals is not, on the grounds
that ‘Socrates is a human being’ allows one to predict many other characteristics that
Socrates has, but ‘Socrates is snub-nosed’ does not. Aristotelian essentialism burdens the
concept of natural kind with a more substantial characterization. Natural kinds not only
have predictive richness; in addition, they have ESSENCES (pp. 112–13). The essence of a
natural kind is the necessary and sufficient condition that members of the kind must
satisfy in order to be members. An individual’s possessing this essence is what makes it
belong to the kind in question. In addition, it is a necessary truth that all and only the
members of the kind have this essential property. The essence also does a good deal of
explanatory work; the fact that an individual has this species-typical essence explains
many other features that the individual possesses.
Besides citing biological species as examples, philosophers often point to the chemi-
cal elements as paradigm natural kinds. Gold is a kind of substance; its essence is said
to be the atomic number 79. This atomic number is what makes a lump of matter an
instance of gold. And atomic number explains many other properties that gold things
have. Kripke (1972) and Putnam (1975) have suggested that science is in the business
of empirically discovering the essences of natural kinds. Formulated in this way, essen-
tialism is not established by the existence of trivial necessary truths. It is a necessary
truth that all human beings are human beings, but this does not entail that there is an
essence that human beings have. It is also important to separate the claim that kinds
have essences from the claim that individuals in the kind have essential properties (Enç
1986). The fact that gold has one atomic number and lead another does not entail that
an individual cannot persist through time as it changes from being made of lead to
being made of gold.
The example of the chemical elements illustrates a further feature that kind essences
are supposed to have. Gold’s essence is purely qualitative; ‘atomic number 79’ does not
refer to any place, time or individual. In principle, gold could exist at any place or at
any time. What makes two things members of the same natural kind is that they
are similar in the requisite respect. There is no requirement that they be causally or
spatio-temporally related to each other in any way.
Although philosophers who accept this essentialist picture of the chemical elements
usually think that chemistry has already discovered the essences that various chemi-
cal kinds possess, they must admit that biology has to date not delivered the goods with
respect to biological species. If biological species have essences and these essences are
not beyond the ken of science, then the essentialist must claim that biology will even-
tually reveal what these species essences are. However, there are strong reasons to think
that Darwin’s theory of evolution undermines this essentialist picture of biological
species (Hull 1965; Mayr 1975; Sober 1980). It is not just that biology has not dis-
covered these species essences yet; rather, the way species are conceptualized in evolu-
tionary theory suggests that species simply do not have essences. They are not natural
kinds, at least not on the usual essentialist construal of what a natural kind is.
The reasons for this conclusion need to be stated carefully. The fact that species
evolve is, per se, not a conclusive argument against essentialism. Just as the essential-
ist can agree that lead might be transmuted into gold, so the essentialist can agree that
a lineage might be transformed from one species into another. And the fact that there
are vague boundaries between species is not, in itself, a refutation of essentialism,
either. If a piece of matter is transformed from lead to gold, perhaps there will be
intermediate stages of the process in which it is indeterminate whether the matter
belongs to one natural kind or the other (Sober 1980).
Unfortunately, there still is disagreement in evolutionary biology about how the
species category should be understood. Perhaps the most popular definition is Mayr’s
(1963, 1970) biological species concept (for others, see Ereshefsky 1992). Its anti-
essentialist consequences are to a large degree also the consequences that other species
concepts have, so we may examine it as an illustrative example. Mayr’s basic idea is
that a biological species is an ensemble of local populations that are knit together by
gene flow. The individuals within local populations reproduce with each other. And
migration between local populations means that there is reproduction between
individuals in different populations as well. This system of populations is reproductively
isolated from other such systems. Reproductive isolation can be a simple consequence
of geographical barriers, or it can mean that the organisms have behavioural or
physiological features that prevent them from producing viable fertile offspring even
when they are brought together. A consequence of reproductive isolation is that two
species can evolve different characteristics in response to the different selection
pressures imposed by their different environments. However, the different phenotypes
that evolve are not what make the two species two; it is reproductive isolation, not
physical dissimilarity, that is definitive.
Mayr (1963) initially allowed two populations to belong to the same species if there
was actual or potential interbreeding between them, but he later changed the defini-
tion so that actual interbreeding was required (Mayr 1970). This raises the question of
what the time scale is on which interbreeding must take place. How often must indi-
viduals in different local populations reproduce with each other for the two populations
to belong to the same species? Indeed, the same question can be posed about indivi-
duals living in the same local population. Another detail that needs to be added to the
sketch just provided concerns individuals that exist at different times. Human beings
who are alive now are not having babies with human beings who lived thousands
of years ago. Reproduction is something that occurs between contemporaneous
individuals. So what makes human beings now and human beings thousands of years
ago members of the same species? A necessary condition is that human beings now are
descended from human beings thousands of years ago. But this is clearly not sufficient;
otherwise, a present-day species could not be descended from a different, ancestral,
species. Finally, I should note that Mayr's definition excludes the possibility of asexual
species, and this is another of its features that has made it controversial.
The thing to notice about Mayr’s definition is that qualitative similarity is neither
necessary nor sufficient for conspecificity. Members of the same species may have very
different characteristics. And if creatures just like tigers evolved independently in other
galaxies, they would not belong to the species to which earthly tigers belong. What
makes for conspecificity are the causal and historical connections that arise from repro-
ductive interactions. Biological species and chemical elements are very different in this
regard.
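Mayr's definition can be pictured as a graph: local populations are the nodes, episodes of interbreeding supply the edges, and a species is a connected component. The Python sketch below is my own illustration of that picture (the population names and pairs are invented); note that nothing in it consults what the organisms look like.

    # Partition local populations into species by gene-flow connectivity.
    def species_partition(populations, interbreeding_pairs):
        neighbours = {p: set() for p in populations}
        for a, b in interbreeding_pairs:
            neighbours[a].add(b)
            neighbours[b].add(a)
        assigned, species = set(), []
        for start in populations:
            if start in assigned:
                continue
            component, stack = set(), [start]
            while stack:                      # depth-first search
                p = stack.pop()
                if p not in component:
                    component.add(p)
                    stack.extend(neighbours[p])
            assigned |= component
            species.append(component)
        return species

    # Two reproductively isolated systems come out as two species,
    # however similar their members happen to be.
    print(species_partition(["p1", "p2", "p3", "p4"], [("p1", "p2"), ("p3", "p4")]))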
Evolutionary biologists talk about species in the same way they discuss individual
organisms. Just as individual organisms bear genealogical relationships to each
other, so species are genealogically related. Just as individual organisms are born,
develop and die, so individual species come into existence, evolve and go extinct.
These considerations led Hull (1978, 1988) to develop Ghiselin’s (1974) suggestion
that species are individuals, not natural kinds. There is room to doubt, however, that
species are as functionally integrated as individual organisms often are. The parts of
a tiger depend on each other for survival; excise an arbitrary 30 per cent of a tiger,
and the whole tiger dies. However, the extinction of 30 per cent of a species rarely
causes the rest of the species to go extinct. This suggests that individuality (in the sense
of functional interdependence of parts) comes in degrees, and that species are often
less individualistic than organisms often are. Still, Hull and Ghiselin’s main thesis
remains; perhaps it should be stated by saying that species are historical entities (Wiley
1981).
Similar points apply to higher taxonomic categories. Although ordinary language
may suggest that carnivores all eat meat and that mammals all nurse their young, this
is not how biologists understand Carnivora and Mammalia as taxa. These taxa are
understood genealogically; they are monophyletic groups, meaning that they include an
ancestral species and all of its descendants (Sober 1992). Pandas belong to Carnivora
because they are descended from other species that belong to Carnivora; the fact that
pandas are vegetarians does not matter. Superspecific taxa, like species themselves, are
conceptualized as big physical objects; they are chunks of the genealogical nexus. And
just as species are often not very individualistic, superspecific taxa are even less so
(Ereshefsky 1991).
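Because a monophyletic group is just an ancestral species together with all of its descendants, membership is settled by a tree traversal rather than by inspecting traits. Here is a toy Python sketch; the miniature 'tree' is invented for illustration:

    # A monophyletic group: an ancestor plus every species descended from it.
    def monophyletic_group(descendants, ancestor):
        # descendants: dict mapping each species to its immediate daughter species
        group, stack = set(), [ancestor]
        while stack:
            s = stack.pop()
            group.add(s)
            stack.extend(descendants.get(s, []))
        return group

    toy_tree = {"carnivoran ancestor": ["cats", "bears"], "bears": ["pandas"]}
    print(monophyletic_group(toy_tree, "carnivoran ancestor"))
    # pandas are included by descent; their vegetarian diet never enters into it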
The chemical kinds do not comprise an ad hoc list. Rather, there is a theory, codified
in the periodic table of elements, that tells us how to enumerate these chemical kinds
and how they are systematically related to each other. To say what the chemical kinds
are, we can just consult this theory; we do not, in addition, have to go out and do field
work. No such theory exists in biology for species and higher taxa; field work is
basically the only method that science has for assembling a list of species. The terms
‘botanizing’ and ‘beetle collecting’ both allude to this feature of systematic biology.
Species and higher taxa are things that happen to come into existence owing to the
vagaries of what transpires in the branching tree of life.
It does not follow that there are no natural kinds in evolutionary biology. Perhaps
sexual reproduction is a kind; perhaps being a predator is another. What makes it true
that two organisms each reproduce sexually, or that both are predators, is that they are
similar in some respect; it is not required that they be historically connected to each
other. The sexual species do not form a monophyletic group, and neither do the preda-
tors. These kind terms appear in models of different evolutionary processes; there are
models that explain why sex might evolve, and models that describe the dynamics of
predator–prey interactions. Although Darwin’s theory of evolution undermines essen-
tialist interpretations of species and higher taxa, it is another matter whether essen-
tialism is the right way to understand these other, non-taxonomic, theoretical concepts.
If biological species lack essences, then Homo sapiens lacks an essence. If the essen-
tialist is right about chemical kinds, then it makes sense to ask what the nature of gold
is. But what could it mean to talk about ‘human nature’? Taken literally, the idea of
human nature is the idea that there is an essence that all human beings, and only they,
possess. This thought is at the root of ARISTOTLE’s (chapter 23) ethical theory. It also is
at the root of many normative claims to the effect that this or that human behaviour
is ‘unnatural’. This peculiar expression can be given a sensible biological reading, if
‘natural’ is simply equated with the idea of being found in nature. Understood in this way,
everything that human beings do, both the good and the bad, is natural. However, when
‘natural’ and ‘unnatural’ are not used in this way, it is important to demand that an
explanation be given of what the terms are supposed to mean. Normative ethical claims
should not be permitted to masquerade as biology.
3.3 Evolutionary epistemology
Evolutionary epistemology is a field divided into two research programmes (Bradie
1994). First, there are attempts to use ideas from evolutionary biology to explain
various features of human cognition. Second, there is the attempt to develop theories
of cultural and scientific change that are analogous to theories of biological evolution.
These lines of inquiry are independent of each other, so that the success or failure of
the one does not automatically entail the success or failure of the other. Nor is either
research programme particularly unified; this means, for example, that one evolution-
ary model of scientific change might fail miserably, while another succeeds admirably.
Sociobiology (Wilson 1975) and evolutionary psychology (see the papers collected in
Barkow, Cosmides and Tooby 1992) provide examples of the first type of evolutionary
epistemology. Sociobiologists usually focus on different features of human behaviour
and seek to explain them as adaptive responses to ancestral environments. The debate
about adaptationism, discussed earlier, grew out of the vociferous controversy that
engulfed sociobiology. Critics of sociobiology believed that sociobiology’s methodologi-
cal defects were just the tip of the iceberg; the real problem, they thought, was the
pervasive influence of naive adaptationism in evolutionary biology as a whole (Gould
and Lewontin 1979). Evolutionary psychology differs from sociobiology, not by being
less adaptationist, but by shifting its focus from behaviour to cognitive mechanisms as
the items requiring evolutionary explanation. Evolutionary psychologists tend to think
of the mind as a highly modularized set of devices, each having evolved as a solution to
a different adaptive problem; they also tend to think that cognitive adaptations vary very
little among human beings. Both the modularity hypothesis and the universality
hypothesis have been questioned (Sterelny and Griffiths 1999; Wilson 1994).
This first type of evolutionary epistemology has also been pursued by philosophers.
For example, there has been considerable debate about whether, or in what circum-
stances, natural selection can be expected to favour belief acquisition devices that are
reliable – that is, that produce mostly true beliefs (Stephens 2001). There also has been
investigation of when natural selection will favour innate and inflexible characteristics,
and when it will favour traits that are learned and plastic (Godfrey-Smith 1991, 1996;
Sober 1994b). Godfrey-Smith (1996) also explores the hypothesis that mentality
evolved as a response to environmental complexity (Sober 1997b).
Examples of the second kind of evolutionary epistemology may be found in pro-
posals developed by Popper (1973), Campbell (1974) and Hull (1988), which view
scientific change as the result of a competition among theories, with the fittest theory
surviving. This does not mean that theories survive because they enable those who
believe them to have more babies; nor do evolutionary epistemologists propose that
scientific theories are passed from individual to individual by genetic transmission.
Whereas models of biological evolution standardly measure fitness in terms of repro-
ductive output and treat genes as the mechanism of inheritance, evolutionary models
of scientific change replace genetic transmission with teaching and learning, and think
of fitness as an idea’s propensity to spread. Fit ideas are attractive, regardless of whether
they affect biological survival and reproduction (Sober 1993).
There are some obvious disanalogies between biological models of evolution by
natural selection and selectionist models of scientific change. For one thing, novel
biological variation arises by random mutation; this means that what causes a mutation
to occur has nothing to do with whether the mutation will be useful. In contrast, it seems
clear that novel scientific ideas are often invented with the goal of being useful; scientists
don’t make up hypotheses at random. However, this and other disanalogies do not
undermine the claim that the ideas in a scientific community compete with each other;
and that there is a selection process in which some ideas spread while others disappear
from the scene. Thus, it seems clear that scientific change can be described as a selection
process. The substantive task is to say how the concept of fitness should be understood
in this context. What makes one idea more attractive than another in a scientific
community? Surely there are many factors that can influence an idea’s attractiveness,
and the mix of features that make an idea attractive in one scientific context may differ
from the mix that matters in another. Perhaps some episodes of scientific change are
driven largely by observational evidence, while others occur because of religious,
metaphysical or political commitments. Evolutionary epistemologists who want to
model scientific change as a selection process must address the questions that historians
of science debate when they consider ‘internal’ versus ‘external’ factors. Merely saying
that scientific change occurs by a selection process is not enough.
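The following Python sketch makes the worry concrete. It is an invented toy model, not one from the literature: an idea's 'fitness' is collapsed into a single attractiveness number, which is exactly where all the substantive questions about evidence, metaphysics and politics are hiding.

    import random

    # At each step one person drops an idea and adopts another, chosen with
    # probability proportional to (current popularity) x (attractiveness).
    def cultural_selection(counts, attractiveness, steps=10000, seed=0):
        rng = random.Random(seed)
        ideas = list(counts)
        for _ in range(steps):
            adopted = rng.choices(
                ideas, weights=[counts[i] * attractiveness[i] for i in ideas])[0]
            dropped = rng.choices(
                ideas, weights=[counts[i] for i in ideas])[0]
            counts[dropped] -= 1
            counts[adopted] += 1
        return counts

    # A modest edge in attractiveness is enough for one theory to tend to sweep.
    print(cultural_selection({"theory A": 50, "theory B": 50},
                             {"theory A": 1.2, "theory B": 1.0}))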
Another variety of evolutionary epistemology may be found in models of cultural
group selection (Boyd and Richerson 1985). As Darwin observed in his discussion of
human morality, groups compete with each other, and groups with more adaptive
ideologies will out-compete groups whose ideas are less fit. Again, there is no require-
ment that these ideational elements are transmitted genetically; groups may vary simply
because the individuals in one generation faithfully teach their mores to the next. Group
competition may mean that people in one group kill the members of others, or it may
simply mean that the conquerors impose their mores on the conquered. Indeed, the
spread of ideas may not involve ‘conquest’ at all, if people in one group freely adopt ideas
from another.
3.4 Evolutionary ethics
HUME’s (chapter 31) distinction between is and ought – between claims that are purely
descriptive and claims that are normative – must be borne in mind when we ask what
relevance evolutionary theory has for questions about morality. Darwin suggested that
evolutionary theory helps us understand why human morality has some of the features
that it has. It remains to be seen how much of the content of human morality can be
explained in this way, and how much should be thought of as arising by non-adaptive
cultural processes. For example, some cultures impose restrictive norms on the cloth-
ing that men and women may wear, while others encourage personal innovation. Even
if evolutionary theory has nothing to say about why this pattern of variation exists, it
is possible that the theory has an important bearing on other patterns (Sober and
Wilson 1998).
What can evolutionary theory tell us about which normative ethical claims, if any,
are true? Although many social Darwinists thought that evolutionary outcomes are
always good (because they thought that evolution is inherently ‘progressive’), there has
also been a strong tradition that maintains that evolutionary outcomes are often
morally deplorable. Just as Darwin did not rejoice in the behaviour of parasitic wasps,
so Huxley (1997) thought that the point of morality was to combat the instincts we
inherit; more recently, and in the same vein, G. C. Williams (1989) has suggested that
nature is a ‘wicked old witch’.
Another line of thinking in evolutionary ethics holds that evolutionary considera-
tions show that there can be no objective normative truths. The claim is not just that
the moral principles we now happen to hold are mistaken; the claim is stronger – that
no moral convictions can be true (Ruse and Wilson 1986). Evolution has created in us
the illusion that there are objective moral standards; there is an adaptive advantage in
believing this fiction. In reply, it should be noted that the mere fact that our moral beliefs
are produced by evolution does not show that there are no moral truths. After all, our
simple mathematical beliefs may be the result of evolution, but that does not show that
there are no mathematical truths (Kitcher 1994). However, more needs to be said here,
since the subjectivist does not have to maintain that the non-existence of moral truths
deductively follows from the claim that the morality we accept is the product of
evolution.
A better formulation of subjectivism maintains that it is exceedingly improbable that
the morality we accept is true, given that it is produced by evolution. After all, natural
selection causes traits to evolve because they provide reproductive benefits. Why should
a process bent on maximizing reproduction lead us to moral beliefs that are true? The
thing to notice about this line of thought is that it does not entail that there are no
moral truths; rather, it advances the more modest hypothesis that the moral beliefs we
presently have are not true. In addition, this argument fails to attend to the fact that
evolution has given human beings the capacity to reason; if this capacity can lead us
to discover theoretical truths in science that provide no practical benefit in terms of
survival and reproduction, why should reason not also be able to lead us to discoveries
about the good and the right? Perhaps there are reasons to doubt this, but the mere fact
that natural selection aims to maximize reproduction should not lead us to embrace
ethical subjectivism.
A slightly different argument for ethical subjectivism maintains that we can provide
a fully satisfactory explanation of why we behave the way we do, and also explain why
we accept the moral principles we do, without postulating a realm of ethical truths
(Harman 1977; Ruse and Wilson 1986). Objectivism about ethics is unparsimonious; a
simpler explanation of human thought and behaviour traces these phenotypic features
back to the social context of human culture and the biological context of human
evolution. The right reply to this argument for subjectivism is that normative ethical
principles are not in the business of explaining why people think and act as they do.
The point of morality is to regulate behaviour, not to explain it. One might just as well
argue that objective epistemological norms do not exist because they are not needed to
explain why people think as they do. Psychology is in the business of explaining how
people in fact think; epistemology, however, is in a normative line of work (Sober 1993).
Acknowledgements
I am grateful to David Hull and Christopher Stephens for their help in the preparation of this
chapter.
References
Allen, C., Bekoff, M. and Lauder, G. (eds) 1998: Nature’s Purposes: Analyses of Function and Design
in Biology. Cambridge, MA: MIT Press.
Anscombe, G. E. M. 1975: Causality and Determination. In E. Sosa (ed.) Causation and Conditionals.
Oxford: Oxford University Press.
Barkow, J., Cosmides, L. and Tooby, J. (eds) 1992: The Adapted Mind: Evolutionary Psychology and
the Generation of Culture. Oxford: Oxford University Press.
Beatty, J. 1995: The Evolutionary Contingency Thesis. In G. Wolters and J. Lennox (eds) Concepts,
Theories, and Rationality in the Biological Sciences: The Second Pittsburgh–Konstanz Colloquium in
the Philosophy of Science. Pittsburgh: University of Pittsburgh Press.
Boorse, C. 1976: Wright on Functions. Philosophical Review, 85, 70–86.
Boyd, R. and Richerson, P. 1985: Culture and the Evolutionary Process. Chicago: University of
Chicago Press.
Bradie, M. 1994: Epistemology from an Evolutionary Point of View. In E. Sober (ed.) Conceptual
Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT Press.
Brandon, R. 1978: Adaptation and Evolutionary Theory. Studies in the History and Philosophy of
Science, 9, 181–206.
—— 1990: Adaptation and Environment. Princeton, NJ: Princeton University Press.
—— 1996: Concepts and Methods in Evolutionary Biology. Cambridge: Cambridge University Press.
Buller, D. (ed.) 1999: Function, Selection and Design. Series in Philosophy and Biology. New York:
State University of New York Press.
Campbell, D. 1974: Evolutionary Epistemology. In P. Schilpp (ed.) The Philosophy of Karl Popper.
La Salle, IL: Open Court.
Cummins, R. 1975: Functional Analysis. Journal of Philosophy, 72, 741–64. Reprinted in E.
Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT
Press.
Darwin, C. 1981 [1871]: The Descent of Man, and Selection in Relation to Sex. Princeton, NJ:
Princeton University Press.
Dawkins, R. 1976: The Selfish Gene, 2nd edn. Oxford: Oxford University Press.
Desmond, A. and Moore, J. 1991: Darwin – the Life of a Tormented Evolutionist. New York: Time
Warner.
Dobzhansky, T. 1973: Nothing in Biology Makes Sense Except in the Light of Evolution.
American Biology Teacher, 35, 125–9.
Dupré, J. 1993. The Disorder of Things. Cambridge, MA: Harvard University Press.
Edwards, A. and Cavalli-Sforza, L. 1964: Reconstruction of Evolutionary Trees. In V. Heywood
and J. McNeill (eds) Phenetic and Phylogenetic Classification. New York: Systematics Association
Publication, 6, 67–76.
Eldredge, N. and Cracraft, J. 1980: Phylogenetic Patterns and the Evolutionary Process. New York:
Columbia University Press.
Enç, B. 1986: Essentialism Without Individual Essences: Causation, Kinds, Supervenience and
Restricted Identities. Midwest Studies in Philosophy, 11, 403–26.
Ereshefsky, M. 1991: Species, Higher Taxa, and the Units of Evolution. Philosophy of Science, 58,
84–101.
—— (ed.) 1992: The Units of Evolution: Essays on the Nature of Species. Cambridge, MA: MIT Press.
Fisher, R. 1957 [1930]: The Genetical Theory of Natural Selection, 2nd edn. New York: Dover Books.
Fox Keller, E. and Lloyd, E. (eds) 1992: Keywords in Evolutionary Biology. Cambridge, MA: Harvard
University Press.
Ghiselin, M. 1974: A Radical Solution to the Species Problem. Systematic Zoology, 23, 536–44.
Godfrey-Smith, P. 1991: Signal, Decision, Action. Journal of Philosophy, 88 (12), 709–22.
—— 1993: Functions: Consensus Without Unity. Pacific Philosophical Quarterly, 74, 196–208.
—— 1994: A Modern History Theory of Function. Nous, 28 (3), 344–62.
—— 1996: Complexity and the Function of Mind in Nature. Cambridge: Cambridge University
Press.
Gould, S. 1980: The Panda’s Thumb. Harmondsworth: Penguin Books.
—— 1989: Wonderful Life. New York: W. W. Norton.
Gould, S. and Lewontin, R. 1979: The Spandrels of San Marco and the Panglossian Paradigm: A
Critique of the Adaptationist Programme. Proceedings of the Royal Society of London B, 205,
581–98. Reprinted in E. Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn.
Cambridge, MA: MIT Press.
Hamilton, W. 1967: Extraordinary Sex Ratios. Science, 156, 477–88.
Harman, G. 1977: The Nature of Morality: An Introduction to Ethics. Oxford: Oxford University
Press.
Hennig, W. 1966: Phylogenetic Systematics. Urbana: University of Illinois Press. (Revision and
translation of Hennig’s 1950 Grundzuge einer Theorie der phylogenetishen Systematik.)
Hull, D. 1965: The Effect of Essentialism on Taxonomy: 2000 Years of Stasis. British Journal for
the Philosophy of Science, 15, 314–26; 16, 1–18.
—— 1974: Philosophy of Biological Sciences. Englewood Cliffs, NJ: Prentice-Hall.
—— 1978: A Matter of Individuality. Philosophy of Science, 45, 335–60. Reprinted in E. Sober
(ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT Press.
—— 1988: Science as a Process. Chicago: University of Chicago Press.
Hull, D. and Ruse, M. (eds) 1998: Philosophy of Biology. Oxford: Oxford University Press.
Hume, D. 1947 [1779]: Dialogues Concerning Natural Religion. New York: Thomas Nelson and
Sons.
Huxley, T. 1997 [1893]: Evolution and Ethics. In A. Barr (ed.) The Major Prose of Thomas Henry
Huxley. Athens, GA: University of Georgia Press.
Kitcher, P. 1984: 1953 and All That: A Tale of Two Sciences. Philosophical Review, 93, 335–73.
Reprinted in E. Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn.
Cambridge, MA: MIT Press.
—— 1993: The Advancement of Science. New York: Oxford University Press.
—— 1994: Four Ways of Biologicizing Ethics. In E. Sober (ed.) Conceptual Issues in Evolutionary
Biology, 2nd edn. Cambridge, MA: MIT Press.
Kripke, S. 1972: Naming and Necessity. In G. Harman and D. Davidson (eds) The Semantics of
Natural Language. Dordrecht: Reidel.
Lakatos, I. 1978: Falsification and the Methodology of Scientific Research Programmes. In
The Methodology of Scientific Research Programmes: Philosophical Papers, vol. 1. Cambridge:
Cambridge University Press.
Lewontin, R. 1970: The Units of Selection. Annual Review of Ecology and Systematics, 1, 1–14.
Lloyd, E. 1988: The Structure and Confirmation of Evolutionary Theory. Westport, CT: Greenwood
Press.
Maynard-Smith, J. 1988: Evolutionary Progress. In M. Nitecki (ed.) Evolutionary Progress.
Chicago: University of Chicago Press.
Mayr, E. 1963: Animal Species and Evolution. Cambridge, MA: Harvard University Press.
—— 1970: Populations, Species, and Evolution. Cambridge, MA: Harvard University Press.
—— 1975: Typological versus Populational Thinking. In Evolution and the Diversity of Life.
Cambridge, MA: Harvard University Press. Reprinted in E. Sober (ed.) (1994) Conceptual
Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT Press.
Mill, J. S. 1874: A System of Logic, 8th edn. New York: Harper and Brothers.
Mills, S. and Beatty, J. 1979: The Propensity Interpretation of Fitness. Philosophy of Science, 46,
263–88. Reprinted in E. Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn.
Cambridge, MA: MIT Press.
Mitchell, W. and Valone, T. 1990: The Optimization Research Program: Studying Adaptations by
their Function. Quarterly Review of Biology, 65, 43–52.
Nagel, E. 1961: The Structure of Science. London: Routledge and Kegan Paul.
Oppenheim, P. and Putnam, H. 1958: Unity of Science as a Working Hypothesis. In H. Feigl,
G. Maxwell and M. Scriven (eds) Minnesota Studies in the Philosophy of Science, vol. 2.
Minneapolis: University of Minnesota Press.
Orzack, S. and Sober, E. 1994: Optimality Models and the Long-run Test of Adaptationism.
American Naturalist, 143, 361–80.
—— 2001: Adaptationism, Phylogenetic Inertia, and the Method of Controlled Comparisons. In
Adaptationism and Optimality. Cambridge: Cambridge University Press.
Paley, W. 1805: Natural Theology. London: Rivington.
Popper, K. 1973: Objective Knowledge. Oxford: Clarendon Press.
Putnam, H. 1975: The Meaning of Meaning. In K. Gunderson (ed.) Language, Mind and
Knowledge: Minnesota Studies in the Philosophy of Science, vol. 7. Minneapolis: University of
Minnesota Press.
Rosenberg, A. 1978: The Supervenience of Biological Concepts. Philosophy of Science, 45,
368–86.
—— 1985: The Structure of Biological Science. Cambridge: Cambridge University Press.
—— 1994: Instrumental Biology, or the Disunity of Science. Chicago: University of Chicago
Press.
Royall, R. 1997: Statistical Evidence – A Likelihood Paradigm. London: Chapman & Hall.
Ruse, M. 1973: The Philosophy of Biology. London: Hutchinson University Library.
Ruse, M. and Wilson, E. 1986: Moral Philosophy as Applied Science. Philosophy, 61, 173–92.
Reprinted in E. Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn.
Cambridge, MA: MIT Press.
Schaffner, K. 1976: Reductionism in Biology: Prospects and Problems. In R. S. Cohen et al. (eds)
PSA 1974. Dordrecht: Reidel.
Sober, E. 1980: Evolution, Population Thinking, and Essentialism. Philosophy of Science, 47,
350–83. Reprinted in E. Sober (ed.) (1994) Conceptual Issues in Evolutionary Biology, 2nd edn.
Cambridge, MA: MIT Press.
—— 1984: The Nature of Selection. Cambridge, MA: MIT Press. (Second edition (1994) Chicago:
University of Chicago Press.)
—— 1988: Reconstructing the Past: Parsimony, Evolution, and Inference. Cambridge, MA: MIT
Press.
—— 1990: Putting the Function Back Into Functionalism. In W. Lycan (ed.) Mind and Cognition:
A Reader. Oxford: Blackwell.
—— 1992: Monophyly. In E. Fox Keller and E. Lloyd (eds) Keywords in Evolutionary Biology.
Cambridge, MA: Harvard University Press.
—— 1993: Philosophy of Biology. Boulder, CO: Westview Press. (Second edition 1999.)
—— 1994a: Progress and Direction in Evolution. In J. Campbell and W. Schopf (eds) Creative
Evolution?! Boston: Jones and Bartlett.
—— 1994b: The Adaptive Advantage of Learning and a priori Prejudice. In From a Biological Point
of View. Cambridge: Cambridge University Press.
—— 1997a: Two Outbreaks of Lawlessness in Recent Philosophy of Biology. In L. Darden (ed.)
PSA 1996, 458–67.
—— 1997b: Is the Mind an Adaptation for Coping with Environmental Complexity? Biology and
Philosophy, 12, 539–50.
—— 1999a: Physicalism from a Probabilistic Point of View. Philosophical Studies, 95 (1–2),
135–74.
—— 1999b: The Multiple Realizability Argument Against Reductionism. Philosophy of Science,
66 (4).
—— 1999c: Modus Darwin. Biology and Philosophy, 14 (2), 253–78.
—— 1999d: Testability. Proceedings and Addresses of the American Philosophical Association, 73 (2),
47–76.
Sober, E. and Lewontin, R. 1982: Artifact, Cause, and Genic Selection. Philosophy of Science, 47,
157–80.
Sober, E. and Wilson, D. 1994: A Critical Review of Philosophical Work on the Units of Selection
Problem. Philosophy of Science, 61, 534–55. Reprinted in D. Hull and M. Ruse (eds) Philosophy
of Biology. Oxford: Oxford University Press.
—— 1998: Unto Others – the Evolution and Psychology of Unselfish Behaviour. Cambridge, MA:
Harvard University Press.
Stephens, C. 2001: When is it Selectively Advantageous to Have True Beliefs? Sandwiching the
Better Safe than Sorry Argument. Philosophical Studies, 105, 161–89.
Sterelny, K. and Griffiths, P. 1999: Sex and Death: An Introduction to the Philosophy of Biology.
Chicago: University of Chicago Press.
Sterelny, K. and Kitcher, P. 1988: The Return of the Gene. Journal of Philosophy, 85, 339–61.
Wade, M. 1978: A Critical Review of Models of Group Selection. Quarterly Review of Biology, 53,
101–14.
Waters, C. K. 1990: Why the Anti-reductionist Consensus Won’t Survive the Case of Classical
Mendelian Genetics. PSA 1990, vol. 1, 125–39. Reprinted in E. Sober (ed.) (1994) Conceptual
Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT Press.
Wiley, E. 1981: Phylogenetics: The Theory and Practice of Phylogenetic Systematics. London: John
Wiley.
Williams, G. 1966: Adaptation and Natural Selection. Princeton, NJ: Princeton University Press.
—— 1989: A Sociobiological Expansion of Evolution and Ethics. In J. Paradis and G. C. Williams
(eds) T. H. Huxley Evolution and Ethics: With New Essays on its Victorian and Sociobiological
Context. Princeton, NJ: Princeton University Press.
—— 1992: Natural Selection: Domains, Levels and Challenges. Oxford: Oxford University Press.
Wilson, D. 1980: The Natural Selection of Populations and Communities. Menlo Park, CA: Benjamin
Cummings.
—— 1994: Adaptive Genetic Variation and Human Evolutionary Psychology. Ethology and
Sociobiology, 15, 219–35.
Wilson, E. 1975: Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.
Wimsatt, W. 1979: Reduction and Reductionism. In P. Asquith and H. Kyburg (eds) Current
Research in Philosophy of Science. Philosophy of Science Association, 352–77.
—— 1980: Reductionistic Research Strategies and their Biases in the Units of Selection
Controversy. In T. Nickles (ed.) Scientific Discovery, vol. 2. Dordrecht: Reidel.
Wright, L. 1973: Functions. Philosophical Review, 82, 139–68. Reprinted in E. Sober (ed.) (1994)
Conceptual Issues in Evolutionary Biology, 2nd edn. Cambridge, MA: MIT Press.
—— 1976: Teleological Explanations: An Etiological Analysis of Goals and Functions. Berkeley:
University of California Press.
Discussion Questions
1 What constitutes being alive?
2 How should we understand fitness?
3 If biological properties supervene upon physical properties, can biology be reduced
to physics?
4 Is reductionism compatible with the multiple physical realizability of biological
properties?
5 What is required for one theory to be reducible to another?
6 How should we understand the role of function in biology?
7 Must we take account of our explanatory interests to distinguish between the
function and the fortuitous benefit of an organ?
8 Are both Newton’s theory of gravitation and Darwin’s theory of evolution
scientific?
9 How can we determine which species are closely related and which are only more
distantly related?
10 What is the role of parsimony in scientific reasoning?
11 Can we explain why natural selection does not always result in species having
optimal traits?
12 Is natural selection merely one among several important influences on trait
evolution, or is it the only important influence?
13 What is the difference between testing hypotheses and testing research
programmes?
14 Why in biology is it important to recognize that historical origin and present
advantage are not automatically linked?
15 Does more than one unit of selection have a role in the evolutionary process?
16 Is the choice between individuals and groups as the units of selection a matter of
adopting a convention rather than a matter of answering a factual question about the
history of life?
17 Are there any laws of evolution?
18 How can a priori models have a part to play in empirical science?
19 Does the adaptive complexity of organisms justify the argument from design?
20 Could species, as understood in evolutionary theory, have essences?
21 What makes human beings now and human beings thousands of years ago
members of the same species?
22 Are species individuals or natural kinds? What could count as natural kinds in
evolutionary biology?
23 What follows from the claim that species are historical entities?
24 Is evolutionary psychology more securely based than sociobiology?
25 Is biological evolution a good model for explaining social and cultural change?
26 What are the consequences for philosophy of accepting a modularity account of
the mind?
27 Can natural selection help to explain the existence and character of human
mentality?
28 Does an evolutionary theory of the character of human morality help us to assess
moral claims?
29 Do evolutionary considerations undermine the claim that there are objective
moral truths?
11
Philosophy of Mathematics
MARY TILES
Since the time of ancient Greece, mathematics has been intimately tied to philosophy,
both as a model of knowledge and as an object of philosophical reflection. Are numbers
real? What is a proof? Is mathematics more certain than other knowledge? Can finite
minds have knowledge of infinity? How can mathematics apply to the world? In this
chapter, changing philosophical conceptions of mathematics, changes in the historical
context of mathematical thought and changes in mathematics itself are explored in
relation to a basic question: how can theoretical reasoning about non-concrete mathe-
matical objects be both secure and so useful? Platonic, Aristotelian and Kantian
approaches to mathematics are examined in their own right and to place in context the
problems and proposed solutions of the major modern schools of logicism, formalism
and intuitionism. Recent developments which do not seek foundations for mathematics
are also considered. Many of the discussions of great historical figures in this volume,
especially FREGE AND RUSSELL (chapter 37), are relevant to the present chapter. Readers
will also wish to consult chapters on EPISTEMOLOGY (chapter 1), METAPHYSICS (chapter
2), PHILOSOPHY OF LANGUAGE (chapter 3), PHILOSOPHY OF LOGIC (chapter 4) and
PHILOSOPHY OF SCIENCE (chapter 9).
Introductions to the philosophy of mathematics often begin where Körner’s (1960)
influential introduction began, outlining three positions: logicism, formalism and in-
tuitionism. These were the three contending schools to emerge from nineteenth-century
mathematical moves to provide rigorous foundations for mathematical analysis
(including infinitesimal calculus). The problem for the philosophy of mathematics has
been that (1) these seemed to represent all the reasonable positions available, and (2)
in the light of Gödel’s incompleteness theorems and other results proved by Turing,
Church, Skolem and Tarski in the 1930s, neither logicism nor formalism seemed a
philosophically viable position. Intuitionism, whilst having its philosophical credentials
intact, was unacceptable to most mathematicians because it involved discarding parts
of classically accepted mathematics. This apparent impasse partly explains the decline
of interest in the philosophy of mathematics since the first part of the twentieth century,
for a while, with the work of Russell and Whitehead, and the Vienna Circle logical
positivists, it seemed to occupy centre stage.
Hao Wang, in his perceptive retrospective analysis of the philosophy of mathe-
matics of this period (Wang 1968), explains this trajectory in terms of the wider move-
ments of analytic philosophy, which were initially strongly empiricist and which
consequently inherited a hostility toward abstract objects. Mathematics presents a
significant challenge to empiricism, as was made painfully evident by the prominence
of mathematics in Einstein’s relativity theories and in the development of quantum
mechanics. How is empiricism to give an adequate explanation of the certainty, clarity,
universality and applicability of mathematics? Wang’s answer is that the above
mentioned developments in logic and the foundations of mathematics show that
analytic empiricism cannot meet this challenge. It is therefore only as philosophy in
general begins to move beyond analytic empiricism that more fruitful approaches to
philosophy of mathematics can begin to be explored.
But how exactly is such a move to be made? Because the three foundational schools
originated in the work of philosophically minded mathematicians who were seeking to
resolve sophisticated conceptual problems that had arisen in the context of ongoing
mathematical research, much of the work in foundations is itself technical and mathe-
matical in character. There is no sharp line where mathematics ends and philosophy
begins. Indeed, one of the striking features of the development of mathematics in the
twentieth century was the way in which attempts to solve philosophical and con-
ceptual problems generated new mathematics. Thus from the standpoint of the early
twenty-first century, work in the foundations of mathematics can by no means be
ignored, but it may need to be re-evaluated. What is its status? What exactly does it tell
us about contemporary mathematics? These questions must now be added to the philo-
sophical agenda. So one of the ways of moving beyond the presuppositions built into
philosophy of mathematics by analytic empiricists is to recontextualize their work on
logic and foundations in the hope of understanding its motivations, achievements and
shortcomings. A first step is to sketch, using broad strokes, major features in the land-
scape of Western philosophical reflection on mathematics, many of which were already
put in place by the ancient Greeks.
1 Basic Tasks for a Philosophy of Mathematics
The basic puzzle, of which the challenge to empiricism is but a specialized variant, is to
explain how theoretical reasoning about highly abstract objects can be so useful. The
features of mathematics which seemed to mark it off from other disciplines are that its
objects are not encountered in the world of sense experience, and yet, perhaps as a con-
sequence, mathematical claims can be provided with proofs which seem to establish
them with an exactness and certainty unparalleled in other branches of knowledge.
But how is it that people shut away from the outside world, thinking, calculating and
proving theorems about things which we can never hope to see or touch (numbers such
as 2, 1 million, π, or structures such as circles, rings and vector spaces), produce results
which other, very practical people find so useful in their dealings both with the every-
day world and with the esoteric worlds of high-tech science? What right have we to be
so confident of the correctness of these results?
Figure 11.1 Proof that the internal angles of a triangle add up to the sum of two right
angles.
The straightforward answer to this last question is that mathematicians are able to
offer proofs of their results. For example, as geometers before the time of Aristotle real-
ized, measurements of the internal angles of plane triangles do not yield one exact
result, but results that cluster round two right angles, and, because measuring gives
no insight into why this clustering should occur, it affords no assurance that a ‘rogue’
triangle might not some day be found whose internal angles had a markedly different
sum. A proof, on the other hand, that shows why, given what it is to be a triangle, its
internal angles must add up to two right angles, assures us that no rogue triangle exists.
One such proof is shown in figure 11.1.
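The diagram for figure 11.1 does not reproduce well in plain text, so here is a reconstruction of the standard proof in LaTeX. The labelling (triangle ABC, with a line DE drawn through the apex A parallel to BC) follows the letters that survive from the figure, though the exact arrangement is my assumption:

    % Requires amsmath. Let ABC be a triangle, and let DE be the line
    % through A parallel to BC.
    \begin{align*}
      \angle DAB &= \angle ABC && \text{(alternate angles, since } DE \parallel BC\text{)}\\
      \angle EAC &= \angle BCA && \text{(alternate angles)}\\
      \angle DAB + \angle BAC + \angle EAC &= 180^{\circ} && \text{(angles on the straight line } DE\text{)}\\
      \angle ABC + \angle BAC + \angle BCA &= 180^{\circ} && \text{(substituting the first two equations)}
    \end{align*}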
Such a proof is possible only if it can be assumed that all the lines involved are per-
fectly straight and have no thickness (something which is not true of the lines in the
diagram). Lines which have thickness do not meet in points and if they have wriggles
in them there is no sense to be given to an exact measurement of the angle between
them. So even to have internal angles whose sum is exactly equal to two right angles
is possible only for a perfect plane triangle, which we could not encounter in experi-
ence. The theorem proved in figure 11.1 will hold more or less exactly for those triangles we
encounter to the extent that they are more or less perfectly triangular.
There is an important assumption made here about the applicability of results
proved about idealizations: a result proved about an idealization will hold for situations
which approximate to that ideal in direct proportion to the closeness with which they
approximate to the ideal. This assumption has traditionally underwritten applications
of mathematics, but, with recent work in chaos theory, it has been shown not to be
valid for systems exhibiting sensitive dependence on initial conditions (Stewart 1989:
chs 7, 14). This work opens up new questions about the connection between proofs and
the use of results proved.
But how can we tell that a purported proof really is a proof? All proofs have to start
from some assumptions (we assumed above some properties of parallel lines and of
addition of angles). How can we be certain that these are correct? Furthermore, how
can we be sure that all inferences are made correctly? To ask for a proof for every
assumption will lead to an infinite regress. So if there are to be any proofs there must
be some self-evident starting-points, which can be known to be necessarily true and
which are neither capable of, nor stand in need of, further justification.
This means that if mathematics is a discipline in which proofs are required then it
must be one which exhibits a systematic hierarchical order. Some propositions con-
cerning mathematical entities will appear as first principles, accepted without proof:
these are the foundations upon which subsequent knowledge must be built. Basic the-
orems are proved from first principles: these theorems are in turn used to prove further
results, and so on. This is the sense in which, so long as mathematicians demand and
provide proofs, their discipline must necessarily be organized along lines approximat-
ing the pattern already to be found in the Elements of Geometry of Euclid (which dates
from around 300 BC) – a work which for centuries served as an exemplar.
The question of what principles of inference are generally valid (rationally com-
pelling, or truth-preserving) is traditionally assigned to LOGIC (chapter 4). But one may
wonder whether there are not some specifically mathematical principles of inference.
Take, for example, Euclid’s principle that if A = B and C = D, then A + C = B + D. Is this
a logical principle (a specific application of the SUBSTITUTIVITY OF IDENTICALS (p. 800))?
Is it a basic principle of the theory of part and whole? If so, does this belong to logic or
to mathematics? Or is it a specifically mathematical principle governing extensive
magnitudes?
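One reason the first reading is tempting is that the principle can be derived by two applications of the substitutivity of identicals; the short derivation below (my own gloss, in LaTeX) shows how:

    % Requires amsmath. From A = B and C = D:
    \begin{align*}
      A + C &= A + C && \text{(reflexivity of identity)}\\
            &= B + C && \text{(substituting } B \text{ for } A\text{, given } A = B\text{)}\\
            &= B + D && \text{(substituting } D \text{ for } C\text{, given } C = D\text{)}
    \end{align*}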
At this point it is already possible to see how concern for principles of valid deduc-
tive inference and for the nature and status of first principles becomes a natural part
of any philosophy of mathematics that takes proof to be that on the basis of which
mathematical propositions gain their aura of privileged certainty, or necessary
truth. However, to equate the philosophy of mathematics with studies in logic
and foundations would seem to ignore one half of the combination of features that
make mathematics so philosophically puzzling and intriguing. It leaves out questions
concerning the utility of mathematics.
So far we have the following schema of questions structuring the problematic of
philosophical reflections on mathematics:
Basic Problem
How can theoretical reasoning about non-concrete mathematical objects be both so
secure and so useful?
If the question of security is presumed to be answered by the fact that theoretical
reasoning produces proofs of mathematical propositions, then we have to distinguish
between first principles and results proved from them, to get something like the
following subsidiary problems:
1 What is the nature and status of the first principles of mathematics?
1.1 Can its first principles be identified?
1.2 How can these first principles be known with certainty?
2 How is mathematical knowledge acquired?
2.1 Discovery: How are discoveries made? How are results established or proofs
found?
2.2 Justification: What constitutes a proof? What principles of
inference are
available? What is their nature and what grounds have we for thinking them
reliable?
3 Given the answers under (1) and (2), how is the applicability of proved mathemat- | Blackwell |