Naturalism is a worldview that takes human beings and their behavior to be fully embedded within the natural, material world. This worldview gets expressed in a wide range of contexts, from politics to obesity to punishment to spirituality. The contents below are an archive of past commentary. Currents in Naturalism now continues in the form of Memeing Naturalism, a weblog; your comments are invited.
Brooks, reconfigured - the New York Times columnist gets it right about causation, this time
Free will roundup - news stories fret about freedom and responsibility, but not to worry
Spirituality, naturalized? - Owen Flanagan widens the religious horizon at the Templeton Foundation
Time and free will - Robert Krulwich's close encounter with the block universe
Scientism vs. science - to explain all isn't to express all
Childhood's End - outgrowing supernaturalism
Naturalizing goes on apace - the Edge gets edgy on human nature
Retribution - Richard Dawkins applies naturalism to criminal justice
Death of the soul: just what the doctor ordered - John Horgan's undue pessimism
Will naturalism drive us crazy, or undermine an open society? - properly understood, no
A more compassionate libertarianism - Cathy Young makes progress
Ethical Culture challenged on free will - the "third lie" exposed as such
Rocker into naturalism - the founder of Bad Religion gets philosophical
Searching for Ethics in a New America - ethical alternatives to the myth of free will
The Theory of Negligent Design - how we really got started, according to Stanislaw Lem
Sam Harris on free will, from his book The End of Faith
Will Provine on the front lines - speaking out against ultimate responsibility
The Limits of Reason - Cathy Young wants retribution for no good reason
Conservative think tanks explore neuroscience and free will
Discover unvarnished naturalism and live to tell the tale
Mechanism - even if science shows us to be "smart robots," we'd still have excellent reasons to defend human rights
Causes and free will - admitting that behavior has causes doesn't erase the distinction between right and wrong
The Levitation of David Brooks - must we float free of causality to be moral?
The Soul Continues to Evolve: Julian Sanchez reviews Owen Flanagan's The Problem of the Soul
Luck Swallows Everything
A Question for Brights: How Naturalistic Are You?
Free will? Not really
Reason Evolves: Reason magazine publishes an interview with Daniel Dennett
Boston Globe: Neuroscience enters the debate on free will
Free will in the news: neuroscience and freedom (The Economist), causality and capital punishment (New York Times)
Scalia's Scenario: retribution, religion, and the death penalty
Why scientists won't need to do this
Who Wrote the Book of Life?: naturalism's simply more fun than theism
Seeing Drugs as a Choice or a Brain Anomaly: substance abuse and free will
Playing God, Carefully: why biotechnology need not devalue life
The Myth of Willpower, unmasked as such by diet experts
On the Supposed Inscrutability of Evil: sociologists plead ignorance
DNA and Destiny; Smoking is a Choice: radical autonomy in the New York Times
William Provine: Free Will a Myth
Underreporting Anti-Depressant Use Tied to Stigma of Mental Illness
Brooks, reconfigured

In an analysis of the Columbine massacre written two years ago, New York Times columnist David Brooks opined that “My instinct is that Dylan Klebold was a self-initiating moral agent who made his choices and should be condemned for them. Neither his school nor his parents determined his behavior.” According to Brooks back then, Klebold and his behavior aren’t fully traceable to determinants – he created himself as a moral agent, and is condemnable on that basis.
Fast forward to
a May 7, 2006 op-ed “Marshmallows
and Public Policy” in which Brooks presents a thoughtful analysis of the
determinants of self-control in children. Here’s a very different
columnist, deeply interested in understanding behavior and in applying that
knowledge to build communities and schools that allow kids to become responsible
citizens, not killers.
He points out that
self-control in children is positively correlated with better SAT scores,
attending better colleges, less involvement with drugs, and other measures of
adult stability and satisfaction. Given this connection, he asks the right
question about causality: “…how do we get people to master the sort of
self-control that leads to success?” Kids differ in their innate
capacity for delaying gratification, no doubt, but self-discipline is also a
learned skill, and Brooks wants society to pay more attention to teaching it.
Tellingly, and rightly, he
discounts “sheer willpower” as a factor in explaining where self-control comes
from. After all, willpower is just another name for self-control,
and we can’t suppose that it creates itself – a logical impossibility.
Instead, he recommends we follow University of Virginia psychologist Jonathan
Haidt’s suggestion to create “stable, predictable environments for children, in
which good behavior pays off.”*
So in this op-ed Brooks does
not suppose, as he did in discussing Dylan Klebold two years ago, that something
other than environment and heredity determines how kids turn out. If
becoming a self-disciplined adult is fully caused, why should we suppose that
becoming a morally good person, which after all centrally involves control
capacities, is not? The upshot is that present Brooks is implicitly
calling earlier Brooks into question. As his current reasoning suggests, it’s the full complement of causes, not a capacity to rise above one’s circumstances, that explains whether or not self-control and moral virtue are achieved. From a naturalist’s standpoint, this shift in perspective is welcome.
However, my guess is that
Brooks would still resist this conclusion about moral agency, and insist there’s
something about it that transcends causation. After all, this is a central dogma
of our culture, especially for conservatives: we are first causes of ourselves
in a way that makes us really responsible – we are “moral
levitators.” But if we accept that self-control is determined, we have
to say what other aspects of agenthood aren’t, and then provide a plausible
alternative account of them. This is difficult, to put it mildly, once we take the science of behavior seriously.
If Brooks wants to believe
that Klebold was a self-initiating moral agent and that kids are fully
determined in their control capacities, he ends up in implicit
self-contradiction, in which case he has some self-reconciliation to do.
But that’s OK; after all, who doesn’t? Reconfiguring ourselves in the
light of new insights, we all agree, is how we become better moral agents.
*What’s puzzling in Brooks’s analysis is that he pooh-poohs what he calls “structural
reforms” such as smaller class sizes and universal day care.
Clearly, the creation of stable environments for kids in which they are
taught self-control could profitably include such reforms.
A naturalistic understanding of ourselves
challenges some conventional notions of freedom and responsibility, as the
following news stories make clear. But we needn't fall into a moral panic.
Paradoxically enough, seeing that we don't
ultimately create ourselves gives us greater opportunities for
self-control, as the last piece illustrates. The titles in quotes are those of the original articles.
"Free Will: You Only Think You Have It" - There's considerable controversy
among philosophers about whether people think having free will requires us
to be free of determinism, or not (see the
Research page at the CFN). According to Zeeya Merali in the May 6
issue of New Scientist, a new theory of quantum phenomena developed
by Dutch physicist Gerard 't Hooft reveals reality to be fundamentally
deterministic, and "abandoning the uncertainty of quantum physics means we
must give up the cherished notion that we have free will." John Conway
and Simon Kochen, professors of mathematics at Princeton, also believe free
will requires indeterminism, and Kochen is quoted as saying that if 't
Hooft's theory is right, "Our lives could be like the second showing of a
movie - all actions play out as though they are free, but that freedom is
an illusion." It's curious that Merali, Conway and Kochen think
(as do many, perhaps) that indeterminism would somehow give us a free will
worth wanting, to use Daniel Dennett's phrase from his book Elbow Room.
As David Hume saw long ago, indeterminism can't possibly give us authorship
or responsibility for our actions. Whatever sorts of freedom and
responsibility we have (and we
do have some as natural creatures), they don't gain power or
plausibility by denying determinism.
"Far Out, Man. But Is It Quantum Physics?" - Writing in the New York
Times science section, Dennis Overbye also relates physics to free will
in a review of the movie "What the Bleep Do We Know?". He ends
the article saying: "I'd like to believe that, like Galileo, I would
have the courage to see the world clearly, in all its cruelty and beauty,
'without hope or fear,' as the Greek writer Nikos Kazantzakis put it. Take
free will. Everything I know about physics and neuroscience tells me it's a
myth. But I need that illusion to get out of bed in the morning. Of all the
durable and necessary creations of atoms, the evolution of the illusion of
the self and of free will are perhaps the most miraculous. That belief is
necessary to my survival.
But I wouldn't call
it good physics." One wants to know, of course, how an illusion you
know is an illusion gets you out of bed. You can't call something a belief
if you believe it's false, so the free will illusion probably isn't
necessary for Overbye's survival. Yet, like
John Horgan, he persists in claiming it is. Such is the power of
"belief in belief" in (contra-causal) free will, to borrow yet another
phrase from Dennett (Breaking the Spell).
Here's another spell that needs breaking.
"Does eating salmon lower the murder rate?" - As reported by Stephen Mihm in
New York Times Magazine, researchers in Britain discovered a
correlation between consumption of omega-3 fatty acids (found in fish such
as salmon) and lower rates of anti-social behavior among prisoners.
But, Mihm worries, "What would it mean if we found a clear link between diet
and violent behavior? To start with, it might challenge the notion that
violence is a product of free will." And further: "...there's something that
many people may find unnerving about the idea of curing violent behavior by
changing what people eat. It threatens to let criminals evade responsibility
for their actions. Think, for example, of the infamous 'Twinkie defense,' in
which an accused murderer's lawyer suggested that junk food was partly to
blame for his client's compromised mental state." What's operating here is
the idea that free will operates outside of cause and effect, so when we
discover the empirical causes of violence, justifications for holding people
responsible seem to collapse. But of course knowing the real causes of
violence doesn't mean we let criminals go free. It simply means
there's no good justification for
retribution, and that we'll be smarter in preventing crime (safe, healthy
communities and salmon for everyone) and rehabilitating offenders (life
skills education, job training and, of course, salmon).
"The Mind Matters, But So Does Morality" - Interviewed by Carey
Goldberg of the Boston Globe, Harvard psychologist Jerome Kagan warns
against supposing that determinism, biological and environmental, obviates
all ascriptions of responsibility: "It is dangerous to be
lulled into believing that an adolescent who commits a violent act of
aggression 'couldn't help it' because of temperament or life experiences
and, therefore, should not be held responsible. Every adolescent, save the
tiny proportion with serious brain damage, knows that harming another is
wrong and has the ability to inhibit that behavior." The question,
though, is what our responsibility practices should be, given that, as Kagan
understands, the ability to inhibit harmful behavior is fully determined by
the interaction of innate temperament and life experiences. One sort
of responsibility practice often overlooked in discussions of harmful
behavior is to deliberately increase adolescents' powers of self-control
(see below and also "Brooks,
Reconfigured"). To imagine that kids simply choose to misbehave in
a way that transcends the failure to learn self-control is to set them up as
targets for punitive retribution, and retribution need not (and should not)
be part of our responsibility practices.
Better Kids, Naturalistically - Jeffrey Bruns is marketing software to
help children learn to be more successful and responsible. He takes an
unflinchingly causal, Skinnerian view of behavior, which he claims will give
parents more non-punitive leverage in getting their little darlings to shape up. He says: "The law of cause and effect is predictable and irreversible.
Knowing how to use the law, kids can attract success and happiness.
Ignorance of the law can result in boredom, frustration, and failure, which
can lead to fear, drugs and suicide.” And
here's a sure draw for overworked caretakers:
“In about three
weeks, children start to change their strategy from arguing to get what they
want, to looking for ways to earn it. Kids go from not doing their chores
and you having to constantly remind them, to asking for extra chores to help
out. Think of all of the time you will save.” Now, your results may
vary, but at least Bruns is taking the science of human behavior seriously.
New York Times columnist David Brooks should know about this; see his piece advocating more attention to teaching children self-control skills, mentioned above.
Spirituality, naturalized?

Duke philosopher and brain scientist Owen Flanagan recently completed his tenure as John Templeton Foundation Fellow at the University of Southern California, during which he delivered a series of lectures, to be published by MIT Press. In his last lecture, Spirituality Naturalized?, Flanagan says
"Naturalism, as I conceive it, is plenty broad enough to make room for
robust conceptions of the sacred, the spiritual, the sublime, and of moral
excellence." That a Templeton fellow defends a naturalized spirituality is most encouraging, given the Templeton Foundation's aversion (thus far) to what it sees as "the flatness of a purely naturalistic, secularized view of reality."
If Flanagan manages to widen
their conception of what counts as authentically religious,
this will certainly advance Templeton's contribution to the science-religion dialogue.
Time and Free Will
In a Radio Lab production with science reporter Robert Krulwich called "Against Time," the section on "No Special Now" explores the somewhat discomfiting implications of the Einsteinian 4-dimensional
"block universe" for free will. Courtesy of physicist Brian Greene, who
questions the idea that the future is open, and neuroscientist V. S.
Ramachandran, who discusses the famous Libet experiments on the timing of
readiness potentials, Krulwich discovers that he isn't
perhaps quite "in charge" the way he thought he was. Greene is sympathetic
to Krulwich's concerns, but can't honestly reassure
him about free will, and tries to distract him with multi-verse
cosmology. But Krulwich
doesn't buy it; he wants his free will
back. Ramachandran is trenchantly definitive: the
unconscious readiness potential precedes the conscious choice to move one's
finger by .5 seconds, so consciousness can't be in control the way we thought. No
solace for Krulwich.
Greene talks about time in his terrific book The Fabric of the Cosmos, chapter 5, "The Frozen River," and chapter 15, "Teleporters and Time Machines" (see pp. 451-8 re free will).
Each moment we experience as flowing from future to past is actually "an eternal
and immutable feature of spacetime," so past, present, and future co-exist
in the block universe. Time as a dimension is simply there, just as
up/down, left/right and forward/back are all there, laid out in front of
us. This means all our past, present and future actions co-exist as
well, strange as it may seem. But this can be understood as a time neutral
re-statement of what science, from our (illusory) time-bound conscious
perspective, describes as causal relations over time. The
way one moment effortlessly gets transformed into
the next – no hindrance or obstacle, just a smooth transition
– suggests the next moment was (is) simply there, waiting for
the mind to experience. Naturalist attorney Bob Gulack explores time and free will in one of his talks for the Ethical Culture Society of Bergen County, New Jersey.
Scientism vs. science

In his New York Times
review of Dennett's Breaking the Spell, Leon Wieseltier writes: "Scientism,
the view that science can explain all human conditions and expressions, mental
as well as physical, is a superstition, one of the dominant superstitions of our
day...". But are science's explanatory ambitions, over-reaching or
not, rightly called scientism? Responding to
this charge, Duke philosopher Owen Flanagan deftly replies as follows: "First, ‘scientism’, as most intellectuals and philosophers
understand it, is not the tame regulative hypothesis (which is falsifiable) that
science can, in principle, explain ‘all human conditions and expressions,’ but
the incredible view that everything worth expressing can be expressed in a
scientific idiom. Most naturalistic thinkers, including Dennett and myself,
think that science can, in principle, explain the nature and function of art,
music, and religion. But no one, save possibly long dead positivists, ever
thought that science could express whatever is worth expressing. So let’s
accept that what Bach, Mozart, Coltrane, Michelangelo, Buddha, Jesus, and
Mohammed expressed cannot be expressed scientifically. This leaves open the
possibility that science can shed light on their musical, artistic, and
spiritual productions, including what is expressed and why. This is all
Dennett’s important project assumes, not ‘scientism.’" See also Flanagan's
further critique of scientism and what he calls "global metaphysical scientism" in Science for Monks: Buddhism and Science, pp. 15-17.
Naturalizing goes on apace

The threat of creeping naturalism (as some might call it) is
on display at Edge.Org, which features a
forum of noted thinkers
on their candidate for the world's most dangerous idea. Ideas can be
"dangerous" simply by upsetting conventional wisdom, but they might also pose a
danger by undermining what some suppose are psychologically or socially
necessary assumptions about human nature. As many contributors to
the forum show, we are being naturalized at a terrific pace, in that we
can increasingly understand ourselves without appealing to anything immaterial
or supernatural. The biological and cognitive sciences are compiling
explanations of human behavior which leave no role for the soul or contra-causal
free will, the power of the person to choose and act without her and her choices
being fully caused in turn. Not surprisingly, the challenge to such
bedrock assumptions can be perceived as dangerous in both the superficial and
deeper senses. As Australian art critic Miriam Cosic says in a
piece about the Edge forum (emphasis added):
The other idea suffusing answers to the Edge's 2006 question is
that evolutionary psychology may have explanations for behaviours,
thoughts even, that will dismantle the edifice that holds up our
idea of what it is to be human. The most appalling ramification
has little to do with why men don't listen and why women can't read
maps. Rather it calls into question the very existence of reason and
of free will: the assumption of which has lain at the heart of every
culture's moral system.
But it's also quite possible to understand these developments
- on the assumption that science gets it right - as our coming of age as a
sentient species. The increasing awareness of naturalism is childhood's
end, to borrow the title of Arthur C. Clarke's novel. We're gradually
growing up, getting too big, cognitively, for our supernatural attire. The
danger of science to our conventional understandings of human nature is
undeniable, but whether the end of our illusions - in particular the illusion
that we are causally privileged over nature - is dangerous to us and our
culture, is an open question. Daniel Dennett thinks Darwinism a
dangerous idea in the first sense, but certainly not the second, since he
believes we can live in the light of the truth of natural (and artificial)
selection. Likewise, it might be the case that we can live, even flourish,
while understanding ourselves as entirely natural creatures, without souls or
contra-causal freedom. Indeed, organizations which champion science and
naturalism, such as the Center for Naturalism, the Center for Inquiry, and the
American Humanist Association, are betting that an empirical understanding of
ourselves is the best way forward, and that supernaturalism, not naturalism,
poses the greater threat to psychological health, social stability, and the common good.
To grow up and grow old gracefully as a species, we have to
get a clear understanding of the implications of naturalism, otherwise we might
fall into a moral panic, in particular a free will
panic. A few contributors to the Edge unfortunately misrepresent to a
greater or lesser degree these implications, abetting unnecessary fears, for
instance that we are now "merely" physical creatures, or that naturalism
undercuts all viable notions of responsibility, rationality, and political
liberty. These misrepresentations tend to hold onto normative criteria
rooted in supernaturalism (e.g., that the non-physical is somehow more dignified
than the physical, that we must be contra-causally free to be rational or be
moral agents), when in fact there are naturalistic alternatives that fill the
bill quite nicely. These misrepresentations, some of which are critiqued
below, also tend to assume that our social practices, for instance our criminal
justice system, are somehow immutable or optimal, when in fact the
naturalization of human nature suggests there's considerable room for improvement.
On the other hand, some contributors tout the marvelous possibilities of naturalism, for instance one who speaks to a naturalistic spirituality. Not
only must we defuse fears, we must display
the ethical and practical viability of taking on a
science-based view of who we are. This will make growing up positively
attractive, not a bitter pill or a fall from grace. More on this below.
A fair number of the 119
responses at the Edge
on dangerous ideas have to do with the looming
naturalization of human nature, which takes us off
our pedestal in the tradition of Copernicus, Freud, and
Darwin. It's a fascinating ride to browse
through them, great stuff on a wide range of topics, not just
naturalization. Five members of the Center for Naturalism advisory
board have posted responses. Below are further entries of note, mostly regarding the impact of naturalism, in no particular order except the first. This is
not to suggest that the other contributions aren't equally worth looking at.
Not all these hyperlinks work properly, so you may have to search some of them manually.

Writing at the Edge forum on dangerous ideas, Richard Dawkins comes out nicely against retribution, saying that "Retribution as a moral principle is incompatible with a scientific view of human behaviour."
Just as we wouldn't rationally "punish" an old jalopy for not running right,
so too it doesn't make good sense to inflict pain and suffering on offenders
just for their suffering's sake, without the prospect of achieving any
consequential benefit. This is the essence of retribution: that punishment need not entail any benefits. But it's difficult to defend retribution if we dispense with the freely willing, self-made self that simply deserves to suffer. So
Dawkins has done us a huge favor by drawing out one of the primary ethical
and practical implications of a naturalism that denies contra-causal free
will. On the other hand, it isn't the case,
as he puts it, that
"a truly scientific, mechanistic view of
the nervous system make[s] nonsense of the very
idea of responsibility." Even if we are fully determined creatures, as
science tends to show, we must still continue to hold each other
responsible - as compassionately and as non-punitively as possible - since
that's partially how we learn to behave responsibly. We are not
ultimately responsible, of course, but we are nevertheless properly subject
to moral evaluation, rewards and sanctions. Seeing that we can
naturalize moral responsibility, that we need not abandon it, is one of the reassurances we can offer to those
fearful that a scientific understanding of ourselves undermines the basis
for ethics and the social order. If we don't present naturalism
accurately, we'll end up like
David Honigmann of the Financial Times, who thinks that in
abolishing free will, Dawkins and other naturalists show that "Holding
people responsible for their behaviour is... completely irrational."
Death of the soul: just what the doctor ordered

In the Edge forum on the world's most dangerous ideas, science writer John Horgan's candidate for that honor is that we have no souls.
As he points out, neuroscience is rapidly closing the explanatory gaps that
leave something for the immaterial soul to do. That the brain might do
everything he calls the "depressing hypothesis." After all, doesn't
the soul give us "a fundamental
autonomy, privacy and dignity"? And wouldn't a full understanding of
the "neural code" allow unprecedented manipulation via brain control, and
unlimited self-modification, threatening the very notion of an innate human
nature? Perhaps, but Horgan's concerns can best be allayed by coming
to terms with what science has to say about ourselves, and realizing that
the "fundamental autonomy, privacy and dignity" conferred by the soul is not
only non-existent, but unnecessary. After all, there are vital
naturalistic sorts of autonomy and dignity which, if we're lucky, we
enjoy in spades. And these stem from freedoms, rights (e.g., to
privacy), and responsibilities that are social and political, not
metaphysical. There may indeed be no human soul-essence, but that's
another sort of freedom to explore. Besides, seeing that
consciousness, choice and all our higher capacities arise out of the "mere"
matter of the brain helps re-enchant the physical world. So all's
well without the soul and its companion myth, contra-causal free will.
We just need to remain vigilant about our civil liberties, but we were doing that anyway.
Will determinism drive us crazy, or
undermine an open society?
Writing at the Edge forum on dangerous ideas, philosopher Thomas Metzinger (scroll down after clicking) worries we might go
literally insane believing in determinism: we won’t be able to
integrate our conceptual
understanding that we are determined creatures with our phenomenal
self-models. But these don't conflict, precisely because the former is conceptual, the latter phenomenal. How does it
feel to be a perfectly determined creature (on the assumption we are)?
Just as we presently do, even if that feeling
might involve what we conceptually know is the illusion of being
undetermined or ultimately self-caused in some respect. We stay sane since
the conscious self-model, as Metzinger himself shows in
his tour de force Being No One: The Self-Model Theory of Subjectivity,
is an extremely robust phenomenal construction of the brain,
generally impervious to mere concepts. And
besides, it’s not clear that the feeling of being a contra-causal agent is
essential to the self-model anyway. There’s
probably cultural variability in the contra-causal
agent illusion, in that the feeling of being a self may
not always be interpreted as having contra-causal freedom. And
some people (such as
Susan Blackmore) have gotten
rid of it; they deny feeling as if they’re ultimately self-caused or
uncaused in any respect, and they get around in the world just fine.
So there’s no insurmountable problem here.
Metzinger also worries about the anti-democratic implications of determinism: “Making
a complex society work implies controlling the behavior of millions of
people; if individual human beings can control their own behavior to
a much lesser degree than we have thought in the past, if bottom-up
doesn't work, then it becomes tempting to control it top-down, by the
state.” But hold the phone. I control my behavior in that it’s my bottom-up and top-down systems that result in what I do, no one else’s (one of Daniel Dennett's favorite points against free will panic).
I’m not "out of control"
just because determinism might be the case. It’s just that there’s no
separate uncaused or indeterministic libertarian self pulling the strings.
Since I’m not out of control, the state has no good
justification to encroach on my liberty to act voluntarily within the
law. So there’s no
implication from determinism, or from losing the
“robust conscious experience of free will,” to totalitarianism.
Properly understood, then, the challenge to
contra-causal free will posed by determinism
isn't a danger, either psychologically or politically. There
might in fact be personal and social benefits in challenging the myth of the
self-made self, and besides, it’s more interesting and honest to live in the
light of what neuroscience shows to be the case about ourselves.
Childhood’s end, right?
A more compassionate libertarianism
Cathy Young, syndicated columnist and
contributor at Reason, recently took an enlightened
view of poverty - for a
libertarian. She comes across as reasonably compassionate, compared
for instance to Randian Objectivists, the
radical me-firsters, some of whom advocated withholding aid for hurricane
victims. Young disavows such cold-blooded reliance on "personal
responsibility," acknowledging that people can't simply bootstrap themselves out
of poverty: "Most of us, if born into bad
circumstances, would have likely ended up trapped in the same self-defeating
patterns." Of course she still takes a small government position,
saying that "spending more money won't cure poverty," when progressives would
argue that more money, intelligently allocated, can make quite a difference.
Nevertheless, overall Young models a more altruistic libertarianism that takes a
causal understanding of the culture of poverty seriously. This is
progress, even if Young isn't yet a progressive, as evidenced by her views on retribution.

Ethical Culture challenged on free will

The Ethical Culture Society of Bergen County, NJ was regaled with a
hard-hitting and very entertaining talk by Robert Gulack on what he calls
the "third lie" of contra-causal free will, the first two being god and
immortality. He cites a host of luminaries, all of whom were skeptics
about such freedom (Spinoza, Hume, Mill, Jefferson, Lincoln, Twain, Einstein,
Darrow), and draws out the progressive implications of seeing ourselves as fully
caused participants in the natural order. And he reassures us that, just
as we don't need god to be good, "In just the same way, ethics can exist
without free will. We can make ethical commitments even though we are not,
in some ultimate sense, free to choose what those commitments will be. In
fact, we do make ethical commitments when and only when we are caused to make
them." By all means read the rest of what Gulack has to say - it's an
excellent example of how naturalists can, and should, challenge what Alan Watts
called the taboo against knowing who we are.
Rocker into naturalism
Turns out that Greg Graffin, founder of the punk rock band Bad Religion, is a full-fledged naturalist. He studied at Cornell with ally-of-naturalism Will Provine, doing his Ph.D. thesis on "Evolution, Monism, Atheism and the Naturalist Worldview: Perspectives from Evolutionary Biology."
He's also running the
Cornell Evolution Project, and he's got a summary that explains the main findings, well worth a look. Stay tuned for more
from Graffin about naturalism in the next few years.
Searching for Ethics in
a New America
A professor of religion is working on a Ford Foundation project,
Searching for Ethics in a New America,
in which she exposes the roots of our common cultural misunderstanding of the
human person as free and self-originating. She's
conducting interviews with immigrant Buddhists, Muslims, and native Navajos
to search for more realistic ways to understand human action and ethics.
Regarding which, she has a paper
here on Spinoza and naturalizing ethics just out in
Cognitive, Emotive, and Ethical Aspects of Decision Making in
Humans and in Artificial Intelligence,
Volume III. In it she writes: "The
doctrine of the freedom of the will is problematic because it both
mis-describes the human person and also has negative personal, social, and
public policy consequences. Assigning to the individual complete
responsibility for his or her triumphs or failures aggrandizes the
privileged and blames the poor and needy for their situation. It suggests
that all solutions are individual rather than primarily social and systemic."
The Theory of
Negligent Design, according to Stanislaw Lem
Scene: The Rhohchians have sponsored a motion
to accept Earth as a member of the Galactic Council, but the Iridian
representative challenges the motion by relating the true story of humankind's
origins:
"I shall now put a few final questions to the honorable
delegation from Rhohchia! Is it not true that many years ago there
landed on the then dead planet of Earth a ship carrying your flag, and that,
due to a refrigerator malfunction, a portion of its perishables had gone
bad? Is it not true that on this ship there were two spacehands,
afterwards stricken from all the registers for unconscionable dealing with
duckweed liverworts, and that this pair of arrant knaves, these Milky Way
ne'er-do-wells, were named Gorrd and Lod? Is it not true that
Gorrd and Lod decided, in their drunkenness, not to content themselves with
the usual pollution of a defenseless, uninhabited planet, that their notion
was to set off, in a manner vicious and vile, a biological evolution the
likes of which the world had never seen before? Is it not true that
both these Rhohches, with malice aforethought, devised a way to make of
Earth - on a truly galactic scale - a breeding ground for freaks, a cosmic
side show, a panopticum, an exhibit of grisly prodigies and curios, a
display whose living specimens would one day become the butt of jokes told
even in the outermost Nebulae? Is it not true that, bereft of all
sense of decency and ethical restraint, both these miscreants then emptied
on the rocks of lifeless Earth six barrels of gelatinous glue, rancid, plus
two cans of albuminous paste, spoiled, and that to this ooze they added some
curdled ribose, pentose, and levulose, and - as though that filth were not
enough - they poured upon it three large jugs of a mildewed solution of
amino acids, then stirred the seething swill with a coal shovel twisted to
the left, and also used a poker, likewise bent in the same direction, as a
consequence of which the proteins of all future organisms on Earth were
Left-handed?! And finally, is it not true that Lod, suffering at the time
from a runny nose and - moreover - egged on by Gorrd, who was reeling
from an excessive intake of intoxicants, did willfully and knowingly sneeze
into that protoplasmal matter, and, having infected it thereby with the most
virulent viruses, guffawed that he had thus breathed 'the bloody breath of
life' into those miserable evolutionary beginnings?! And is it not
true that this leftwardness and virulence were thereafter transmitted and
handed down from organism to organism, and now afflict with their continuing
presence the innocent representatives of the race Artefactum Abhorrens,
who gave themselves the name of 'homo sapiens' purely out of
simple-minded ignorance? And therefore is it not true that the
Rhohches must not only pay the Earthling's initiation fee, to the tune of a
billion tons of platinum, but also compensate the unfortunate victims of
their planetary incontinence - in the form of Cosmic Alimony?!"
- from Stanislaw Lem, The Star Diaries, "The Eighth
Voyage," 1976 Avon Press paperback, pp. 42-43.
Sam Harris writes about contra-causal free will in a footnote
from his book The End of Faith,
and pretty much nails it as a morally harmful, logically incoherent illusion.
Just one quibble about agency at the end....
The belief that human
beings are endowed with freedom of will underwrites both our religious
conception of “sin” and our judicial ideal of
“retributive justice.” This makes free will a
problem of more than passing philosophical interest. Without freedom of
will, sinners would just be poorly calibrated clockwork, and any notion of
justice that emphasized their punishment (rather than their
rehabilitation or mere containment) would seem deeply incongruous. Happily,
we will find that we need no illusions about a person’s place in the causal
order to hold him accountable for his actions, or to take action ourselves.
We can find secure foundations for ethics and the rule of law without
succumbing to any obvious cognitive illusions.
Free will is actually
more than an illusion (or less) in that it cannot even be rendered coherent
conceptually, since no one has ever described a manner in which mental and
physical events could arise that would attest to its existence. Surely, most
illusions are made of sterner stuff than this. If, for instance, a man
believes that his dental fillings are receiving radio broadcasts, or that
his sister has been replaced by an alien who looks exactly like her, we
would have no difficulty specifying what would have to be true of the world
for his beliefs to be, likewise, true. Strangely, our notion of “free
will” achieves no such intelligibility. As a concept, it simply has no
descriptive, or even logical, moorings. Like some perverse, malodorous rose,
however we might attempt to enjoy its beauty up close, it offers up its own
The idea of free will
is an ancient artifact of philosophy, of course, as well as a subject of
occasional, if guilty, interest among scientists—e.g., M. Planck, Where
Is Science Going? trans. and ed. J. Murphy (1933; reprint,
Woodbridge, Conn.: Ox Bow Press, 1981); B. Libet, “Do We Have Free
Will?” Journal of Consciousness Studies 6, nos. 8–9 (1999): 47–57; S.
A. Spence and C. D. Frith, “Towards a Functional Anatomy of
Volition,” ibid., 11–29; A. L. Roskies, “Yes, But Am I Free?”
Nature Neuroscience 4 (2001): 1161; and D. M.
Wegner, The Illusion of Conscious Will (Cambridge: MIT Press, 2002).
It has long been obvious, however, that any description of the will in terms
of causes and effects sets us sliding toward a moral and logical crevasse,
for either our wills are determined by prior causes, and we are not
responsible for them, or they are the product of chance, and we are not
responsible for them. The notion of free will seems particularly suspect
once we begin thinking about the brain. If a man’s “choice”
to shoot the president is determined by a certain pattern of neural
activity, and this neural activity is in turn the product of prior
causes—perhaps an unfortunate coincidence of an unhappy childhood, bad
genes, and cosmic-ray bombardment—what can it possibly mean to say that his
will is “free”? Despite the clever exertions of
many philosophers who have sought to render free will
“compatible” with both deterministic and
indeterministic accounts of mind and brain, the project appears to be
hopeless. The endurance of free will, as a problem in need of analysis, is
attributable to the fact that most of us feel that we freely author
our own actions and acts of attention (however difficult it may be to make
sense of this notion in logical or scientific terms). It is safe to say that
no one was ever moved to entertain the existence of free will because it
holds great promise as an abstract idea.
In physical terms,
every action is clearly reducible to a totality of impersonal events merely
propagating their influence: genes are transcribed, neurotransmitters bind
to their receptors, muscle fibers contract, and John Doe pulls the trigger
on his gun. For our commonsense notions of agency to hold, our actions
cannot be merely lawful products of our biology, our conditioning, or
anything else that might lead others to predict them—and yet, were our
actions to be actually divorced from such a causal network, they would be
precisely those for which we could claim no responsibility. It has been
fashionable, for several decades now, to speculate about the manner in which
the indeterminacy of quantum processes, at the level of the neuron or its
constituents, could yield a form of mental life that might stand free of the
causal order; but such speculation is entirely oblique to the matter at
hand—for an indeterminate world, governed by chance or quantum
probabilities, would grant no more autonomy to human agents than would the
incessant drawing of lots. In the face of any real independence from prior
causes, every gesture would seem to merit the statement “I
don’t know what came over me.” Upon the horns of this dilemma, fanciers of
free will can often be heard making shrewd use of philosophical language, in
an attempt to render our intuitions about a person’s moral responsibility
immune to worries about causation. (See Ayer, Chisholm, Strawson, Frankfurt,
Dennett, and Watson—all in G. Watson, ed.,
Free Will [Oxford: Oxford Univ. Press, 1982].) Although we can find no
room for it in the causal order, the notion of free will is still accorded a
remarkable deference in philosophical and scientific literature, even by
scientists who believe that the mind is entirely dependent upon the workings
of the brain.
What most people
overlook is that free will does not even correspond to any subjective
fact about us. Consequently, even rigorous introspection soon grows as hostile
to the idea of free will as the equations of physics have, because apparent
acts of volition merely arise, spontaneously (whether caused, uncaused, or
probabilistically inclined, it makes no difference), and cannot be traced to
a point of origin in the stream of consciousness. A moment or two of serious
self-scrutiny and the reader might observe that he no more authors the next
thought he thinks than the next thought I write.
- The End of Faith, pp. 262-4
Here's the quibble: We can still talk about human
agents and agency in a deterministic context, since when I act freely - that is,
without being coerced - I do author my actions, since no one else does.
Put another way, I am, partially, my actions. Human agents,
although fully caused, don't disappear under naturalism, about which see
here. But such naturalized freedom, agency and authorship don't
support the ultimate sort of praise and blame that accrues to the contra-causal,
self-made self, as Harris makes clear.
Will Provine on the
front lines, again
Cornell biology professor Will Provine continues to fight the
good fight against contra-causal free will. Most recently on August 29
(2005) he gave a
lecture for the Bioethics Society of Cornell. As the Cornell Sun reported:
He added that if society
recognized the absence of free will, society would ultimately be much kinder
to its less fortunate.
“I hated the idea of human
free will,” Provine added. He also argued that humans mostly provide their
own moral guidance, and that “ultimate moral responsibility is nonexistent.”
He admitted, “Free will is the hardest [preconception] … to give up.”
The lecture received mixed
reactions from the crowd.
Such reactions are no surprise when challenging centuries of received wisdom
about human agency. Although many academics recognize the incoherence
of libertarian free will, few are willing to come out and say so in a public
forum, or suggest the significant consequences of giving up the idea of
contra-causal freedom for our attitudes and behavior. Provine is to be
congratulated for taking a strong, explicit stand on a matter of such
controversy and importance. And he's been at this a long time, see
The Limits of
Survival at Reason magazine, Cathy Young considers the conference on
The New Neuromorality hosted by the American Enterprise Institute.
Since Naturalism.Org has two takes on this conference, one
here (directly below) and one here, and since
Young's pro-retribution views have been critiqued
here, what follows is just a brief rejoinder to a few questionable
claims. Young writes:
"[Joshua] Greene’s 'dirty
little secret' was that the soul does not exist,
[Stephen] Morse’s was that we still have no clue
'how the brain enables the mind'
and produces mental states or moral judgments.
That there is no immaterial soul, he argued, doesn’t mean that
'we are not the kind of creatures we think we are—conscious,
rational, intentional beings'; science or no
science, the physicalist model must be resisted for the sake of human
dignity and 'the good life we can live together.'"
Morse is wrong to think we have no clue about how the brain enables mind, since clues are
mounting daily, some of which Greene is discovering in MRI scans of brains
during moral decision-making. Morse is also wrong to suppose we must
resist physicalism, since physicalism is no threat to personhood or dignity or
the good life. No one supposes that persons can be understood at
the physical level of neurons and neurotransmitters, but they are nevertheless
composed of such sub-personal, material elements. That we are fully
physical creatures is simply testament to the amazing (but not miraculous)
powers of matter, properly organized. Considerably more about Morse's
presentation is here.
Young continues: "[T]o do away with the soul is not exactly a prescription for no more
squabbling. Nor is doing away with retributive justice. [Steven]
Pinker noted, somewhat ambivalently, that
'the thirst for retribution'—punishment
as 'just deserts' and a
way to right the moral balance—may be inherent in human nature, and a legal
system that does not satisfy this need may never command enough respect to
be effective. Confirming this point, Greene acknowledged that in a host of
studies people evaluating hypothetical crimes assess punishment based on
their notions of just deserts, not deterrence."
That the thirst for retribution
might be inherent in human nature is of course not an argument in its favor,
since there are many natural impulses worth resisting so long as they have no
moral justification, for instance to cheat, dominate, enslave, or kill. A
legal system that instead appealed to our capacity to understand causality,
which in turn undercuts the assumption of the self-caused self that deserves
retributive punishment, is not an impossibility. True, for it to command
respect requires that we marginalize the retributive impulse, but that's exactly
what Greene's dismantling of the soul helps us to do. That his research
shows the prevalence of desert-based responses argues for public education, not
resignation to retribution.
Young concludes by saying:
"[I]n the big philosophical picture, perhaps Morse’s advice—to simply go on
treating each other as autonomous and rational creatures—makes the most
sense, even if
rationality may be his code word for soul. I’m not sure even
traditional ethics ever treated the autonomous human self as completely
exempt from external causes. And one need not be a believer in immaterial
souls to think that, just maybe, the rational and moral consciousness packed
inside our brains is something more than the sum of our neurons."
Young and Morse are right: we have to treat each other as
rational and autonomous creatures, but in the light of naturalism there's no
longer any good reason to treat each other as first causes deserving of
retributive punishment. That traditional commonsense ethics admits
we are caused in some respects doesn't negate the fact that it still clings to
the myth of contra-causal agency, which is the usual justification for punishing
people without regard to consequences. Young is also right that our
rational and moral consciousness is more than the sum of our neurons: it's one
of the higher level emergent properties of our socialized brains.
But again, there's nothing in such emergence that justifies our retributive
punishment practices (about which see the Criminal
page). That Young, Morse and other retributivists nevertheless countenance
such practices shows the limits of reason in the face of an entrenched and
irrational commitment to our punitive legal tradition.
sans soul, courtesy of conservatives
The American Enterprise Institute (AEI),
a conservative think-tank, hosted a one day conference in June, 2005 on
The New Neuromorality, a meditation on the impact of neuroscience on our
conceptions of self, responsibility, free will, ethics and the law.
The entire proceedings are available
here, and they're well worth a look. Speakers included Harvard
cognitive scientist Steven Pinker, UPenn law professor Stephen Morse, and
Princeton neurophilosopher Joshua Greene, among others.
What's striking about the presentations is the general acceptance of neural
materialism, or more broadly, a naturalistic determinism. In their
talks on how neuroscience might influence our thinking about moral
responsibility and criminal justice, Morse describes himself as "a
good-enough-for-government-work determinist" and both Pinker and Greene
explicitly debunk contra-causal free will. This means, necessarily,
that all three favor conceptions of responsibility, moral and criminal, that
are brain-based, not soul-based. Pinker suggests that when assessing
culpability we shouldn't ask any longer whether someone has free will, only
whether or not they are deterrable. Similarly, Greene argues that,
having put the soul out of a job, we should move from a retributive model of
punishment toward a more humane deterrence-based system, in which we stop
supposing people deeply deserve to suffer for their crimes. Morse,
equally the materialist and determinist, nevertheless holds out for
retributivism, even though he concedes the function of the law is to guide
behavior (why he does so will be food for thought in a forthcoming analysis,
now available here).
The tone of this affair contrasts markedly with a 1998 conference on more or less the
same theme, Neuroscience and the Human Spirit, hosted by another conservative
think tank, the Ethics and Public Policy Center (EPPC).
There, many were concerned that neuroscience threatens widely held beliefs
about free will and the human "spirit" (soul), and some presenters did their
best to defend dualism (although this proved difficult, since most were
scientists). That those meeting in 2005 weren't worried about the
death of the soul and its special freedom might reflect a growing acceptance
of naturalism, at least among the intelligentsia. Or it might be a
matter of the particular speakers at each event, since the AEI panel was
overall pretty liberal (which speaks to the open-mindedness of Sally Satel,
organizer of the conference). In any case, both the AEI and the EPPC
are to be congratulated for providing forums in which the implications of
scientific naturalism for our self-concept and for policy were thoughtfully
explored.
No Offense Taken
Naturalists and supernaturalists are equally
standard issue human beings with largely the same complement of needs, but
they seem to inhabit very different epistemic and metaphysical universes, at
least according to what they say. To the supernaturalist, the project
of naturalizing such things as rationality and ethics seems absurd, since
there's no external guarantor of truth or moral principles. Without
god and a causally privileged free will, what's to prevent us from being
systematically misguided? How, without certain foundations, or a
causally uncorrupted point of view, can we certify our beliefs? For
the naturalist, these are admittedly tough problems, but resorting to
supernatural justifications seems too easy an out - tennis without a net, as
Daniel Dennett puts it. There's got to be independent evidence for
something special outside or above natural causality, otherwise we're simply
positing the backup we need - how convenient. And really, having all
these problems solved in one fell swoop is simply too dull a prospect.
Better a wild universe than tame, naturalists think.
Such differences came vividly into focus recently as the Center for
Naturalism was discovered by Christian evangelicals. They were
delighted to have found, at last, actual unabashed proponents of naturalism
incautious enough to reveal that crazy worldview in all its illogicality.
Joe Carter of the Evangelical Outpost got the ball rolling with a nice post,
Naturalism for Dummies, which sparked a good deal of additional comment at
other religious blogs, and then more at Joe's place, including a
roundup of posts from fellow religionists and a meditation on the
absurdity of naturalist ethics.
It's good occasionally to see yourself through the opposition's eyes just to
understand their concerns, so I recommend naturalists have a look (and it's
not unamusing to witness such incredulity). The denial of
contra-causal free will, not surprisingly, catches a good deal of flack,
since this seems to undercut choice, moral responsibility and ethics.
And how can we be merely collections of molecules without souls? After
all, molecules can't create meaning, or understand anything, or make free
choices. How can an authentic spiritual response to existence arise if
we don't have literal spirits residing in us? Since we obviously do
understand, make choices, behave ethically, and have spiritual lives,
naturalism must be false.
So you get the essentialist picture, and there's no help for it,
reassurances about naturalism notwithstanding. Between the
naturalist and supernaturalist there are very different cognitive
commitments and very different tastes in what a universe should look like.
There are desires for security, comfort, specialness, and scripted meaning
on the one side vs. excitement, questioning, perplexity, and
astonishment on the other. They pity our unmoored floundering, and we
their staid incuriosity (to generalize unfairly about both sides just for
effect). But would we have it any other way? Imagine there were
no opponents to poke fun at us, and none for us to generalize unfairly
about: that would be a dull universe. So thanks Joe, and keep up the good
work.
John Silber (“Are we really just smart robots?” in Reason,
April, 2005) is worried about the encroaching scientific understanding of our
brains and behavior. If science shows us to be simply smart biological
machines, he believes this undermines liberal democracy, human rights, moral
responsibility, and self-worth; all is permitted and authoritarian regimes will
prevail.
Fortunately, he argues, John Searle (Mind: A Brief Introduction) and Jeff
Hawkins (On Intelligence) have shown the mechanistic thesis is false, so
we needn’t worry. Human beings, although part of nature, nevertheless have
a special something that grounds our dignity and value.
The difficulty is that Silber doesn’t quite specify what this special something
might be. Is it consciousness? Nothing in Searle’s biological
naturalism or in Hawkins’ account of intelligence requires that our capacity for
consciousness couldn’t be computable and thus a property of a machine, once we
understand the functions of the neural processes subserving consciousness.
Could it be free will? But even Searle admits that the experience of free
will might be an illusion, perhaps an adaptive illusion at that (although it’s
more likely the result of not being able to see the causal workings of our own
brains). Could it be personhood? But personhood rests on physically
instantiated capacities for sentience and self-concern, and complex though these
are, there’s no reason in principle why intelligent machines might not someday
have moral claims on us, were they given such capacities (on this point, see
I, Robot, and Benjamin
Soskis’ article “Man
and the machines” in Legal Affairs).
Although he doesn’t establish the existence of a special human something (a soul, perhaps?),
Silber needn’t worry that the mechanistic thesis poses a threat. Even if
it turns out that we’re amazingly complex biological machines, we nevertheless
remain persons, and our desire to be treated as ends in ourselves won’t
diminish. After all, that’s “hard-wired” into the very neural architecture
of our brains, as are the rest of our basic motives and desires. We’d
still love and protect our families, fear death, abhor tyranny, enjoy a good
meal, and generally life would go on, minus the belief in the soul. So we
can relax: there’s no moral or political threat stemming from science, should it
unmask us as “mere” machines. Even if we are, we’ll continue to defend our
freedoms with all the resources nature has given us.
1. This is also Paul Davies' worry
about the scientific attack on contra-causal free will, see "Davies'
Really Dangerous Idea."
Liberals, evil, and free will
Tibor Machan, writing in the
Desert Dispatch (and reprinted in Free Inquiry,
Oct-Nov, 2005), inveighs against liberals, claiming that "Liberals
tend to excuse all evil with stories about bad luck and disease and a bunch
of other impersonal forces that make people do bad things."
He goes on to say that "The
basic philosophical thesis behind the liberal mentality...is the denial of
free will."
So according to Machan, by
accepting that evil has causes,
liberals deny free will, and in so doing deny the basis for moral judgments.
But is it true that if
everything is caused, everything is excused?
It's hardly the case that liberals deny free will. Liberals, like most
people of all political persuasions, tend to suppose that we have
contra-causal freedom. True, they are more likely to look for causes,
since they are less likely than conservatives to suppose that people are
self-made (see George Lakoff's book
Moral Politics on this). But most liberals, regrettably, are not
yet full-fledged naturalists in their understanding of persons and their
relationship to the world.
But even if
they did deny free will, would that make liberals the dangerous deniers of
morality, as Machan seems to think? No. First, we don't lose our
moral compass when we acknowledge that persons and their behavior, like
everything else in nature, are entirely caused phenomena. After all,
we still retain our deeply held desires to protect ourselves and our loved
ones, and to promote a more flourishing, humane society. Second, we
still have all our causal powers available to bring to bear in defending
these values, so we don't lose our efficacy as agents. In short,
we don't need to suppose, as Machan thinks we must, that there's something
self-caused within each person to justify moral judgments and enforce
standards of right and wrong. For more on this see "Materialism
and free will."
Machan insists liberals must "toss their derisive
attitude toward the rest of us who think it is perfectly sensible to
distinguish between good and evil, right and wrong." But liberals
aren't derisive of such distinctions, and to say so is a calumny. They
simply are more likely to think, justifiably, that such distinctions are
compatible with admitting that behavior, including
evil, has causes.
Machan is very much like David
Brooks (see immediately below on "moral levitation")
in supposing we must be causally privileged over nature in some respect to
be moral agents. But there's no evidence that we are thus privileged,
or that such exalted status is necessary to ground our moral practices.
The Moral Levitation of David Brooks
- must we float free of causality to count as moral agents?
In his latest
book, Freedom Evolves, Tufts University philosopher Daniel Dennett coins
the wonderful term “moral levitation” – you’ll even find it in the index.
It names what some philosophers and many lay people
think is required for morally responsible choices: “Real autonomy, real freedom,
requires the chooser be somehow suspended, isolated from the push and pull
of…causes, so that when decisions are made, nothing causes them except you!”
(p.101-2, original emphasis).
New York Times regular David Brooks expresses this view perfectly, writing in his May
15, 2004 column, “Columbine:
Parents of a Killer,” that “My instinct is that Dylan Klebold was a
self-initiating moral agent who made his choices and should be condemned for
them. Neither his school nor his parents determined his behavior.”
By asserting that Klebold was self-initiating, Brooks isolates him from the causal push and
pull of school and parents, disconnecting him from the world so that he can
count as a “real” moral agent. Brooks seems to think that Klebold’s
choices are morally condemnable only if he wasn’t determined to make
them. But as Dennett, myself, and
others continue to point out, such supernatural moral levitation isn’t in
the least necessary to sustain judgments of right and wrong, or to justify
holding persons responsible. Causal determinism – being fully caused to be
who you are, and do what you do – isn’t a threat to moral agency, although it
undermines certain justifications for punishment which Brooks and other
conservatives may not want to give up.
Moral agency survives under determinism because most people, having capacities
of rationality and anticipation, can legitimately be held responsible in
order to “guide
goodness,” as University of Pennsylvania law professor Stephen Morse
succinctly puts it. Those who are insane and those children who haven’t
yet reached the age of reason don’t count as moral agents, because the prospect
of being held accountable simply doesn’t work to shape their behavior.
Rationality and reasons-responsiveness are causal, deterministic functions of
our complex but fully physical brains, and if such functions weren’t
deterministic, they wouldn’t be reliable. Likewise, the processes of
moral, legal, and criminal accountability that shape good behavior (or not, if
the agent or the processes are defective) are causal, not magical or
supernatural in their operations. Dennett explores these themes at length
in Freedom Evolves and his other book on free will, Elbow Room, as
does Duke philosopher Owen Flanagan in his book The Problem of the Soul.
Klebold, an adolescent having reached the age of reason, and undoubtedly knowing that
what he and Eric Harris were contemplating was wrong, counts as a moral agent.
But he was determined – by his biological endowment, parents, school, bullies,
peer influences, Harris, the availability of guns, and other factors unknown –
to commit mayhem just as certainly as objects fall to earth. To suppose
otherwise is to imagine that human behavior is supernatural in some respect,
magically self-initiated in a way that owes nothing to one’s history or genetic
endowment or current circumstances. We are not causally privileged moral
levitators, and don’t need to be to be judged and held responsible.
Indeed, if we were in some respect independent of causality, then our
responsibility and accountability practices wouldn’t work.
It's important that our hard-won, scientific understanding of behavior should be
reflected in these practices, and in this instance it should modulate our
condemnation of Klebold. Seeing the determinants of his character and
actions, we can no longer demonize him in the way Brooks does – we can no longer
suppose his atrocity had no roots beyond him. The naturalistic
appreciation of causality forces us to acknowledge that Klebold was not
self-initiated in his depravity, but a product of his biology, his parenting,
his friends, his town, and his culture. This doesn’t in the least undercut
the judgment that what he did was depraved, but it illuminates the factors that
made him who he was and therefore materially contributed to the fatal
outcome. This means that retributive justifications for punishment based
on the traditional notion of contra-causal, libertarian free will
– that the agent before us is a
causa sui, the ultimate source of himself and his evil
– lose their footing. Not a happy prospect for
those who relish the imposition of just deserts. (Of course, this is not to say
that we don’t have other very good reasons for detaining dangerous individuals.)
The explanatory stance – to acknowledge that there is indeed a full causal
explanation of human behavior, albeit partially hidden to us – is strikingly
absent in Brooks’ analysis of the Columbine massacre (both here and in an
earlier column on Harris), possibly because it conflicts with claiming
retributive satisfactions. According to Brooks, Klebold’s parents,
although they cite the “toxic culture” of the school as a possible contributing
factor, “confess that in the main, they have no explanation.” But not
having a complete explanation in hand is quite different from supposing that no
real-world explanation is conceivable. The latter supposition feeds the
assumption of moral levitation: that morally consequential behavior, whether
good or bad, must somehow arise independently of the push and pull of causality.
It also legitimizes the supposed
inscrutability of evil: the pernicious doctrine that horrific behavior is in
a realm apart, beyond our understanding or control.
An interesting and important question is whether Brooks and the legions committed
to the assumption of libertarian free will can be persuaded to examine this
assumption, or first, even see it as an assumption. Despite the
logical and empirical implausibility of contra-causal agency, and despite
Dennett’s and others’ explicit attack on libertarian free will, there are
considerable forces arrayed in its defense. We love our retribution, we love
taking ultimate credit and assigning ultimate blame, and we don’t particularly
like the hard work of figuring out causal explanations. But if we can
demonstrate that moral responsibility survives determinism, and moreover
requires it, then perhaps the fear-based objections to a naturalistic
understanding of ourselves can be overcome. In any case, showing that
David Brooks is committed to something as implausible as moral levitation –
thank you Dr. Dennett – might be a good start.
See also this
letter published in the Times on Brooks'
column, and Brian Leiter's
trenchant critique, quoting Nietzsche to good effect.
Reason Continues to Evolve
As someone involved in promoting naturalism, I was pleased to see Julian
Sanchez's review in Reason of Owen Flanagan's
excellent book, The Problem of the Soul (I’ve reviewed it for
Human Nature Review). Like his colleagues senior editor Jacob Sullum and
science correspondent Ronald Bailey, Sanchez seems willing to take science
seriously regarding ourselves and to more or less accept the implications, which
as he notes do not leave things untouched. We don’t, as Flanagan says,
have Cartesian, contra-causal, libertarian free will,
and this fact has major personal and social consequences,
explored at Naturalism.Org.
Sanchez says that "Perhaps the case for retributive punishment is weakened, but
it would surely be a mistake to conclude that only radical freedom would make it
appropriate to hold people responsible for their actions." Actually, the case
for retribution is very much weakened by naturalism;
see for instance “Against
Retribution”. And Sanchez is certainly right that other sorts of freedom –
the sorts compatible with determinism – are sufficient for moral responsibility,
although they don't support retributive punishment (see "Science and Free Will").
But not everything changes. Among other things, I particularly appreciated
Sanchez’s rebuttal of libertarian alarmist Sheldon Richman, to whom I've
responded (see point 5 of my commentary). Being fully caused creatures does
not, as Richman supposes, mean losing a necessary condition for rationality.
As Daniel Dennett among others has pointed out, it’s only our deterministic
connections to the world that make reliable prediction and control possible.
Any causal unlinking of the mind from its surroundings would make us less
rational, not more.
On one major point, however, I think Sanchez gets it wrong. He sees
no particular implication from naturalism to any necessary rethinking of social
inequality. But there is an implication: vast differences in
material well-being and opportunities are often justified by appeals to
metaphysical desert based in free will, and once that justification is
subtracted via naturalism, then it's a good deal more difficult to make the case
for such differences. He writes:
“Similarly, critics of liberalism – and some liberals as well – believe that
disparities of wealth and income are justified only if the well off ‘deserve’
what they have in some deep sense. But as the late philosopher Robert Nozick
observed, there are many things to which we are entitled, even though they are
not deserved ‘all the way down.’ Being born with two working eyes is an accident
of fate, not something the sighted have done anything to ‘deserve.’ It
does not follow that our eyes are up for grabs, subject to political
reallocation. Our decisions – our capacities and the uses we make of them – are
as much a constitutive part of us as our bodies. Respect for embodied persons
still requires deference to our ‘unfree’ choices and their consequences.”
The analogy between having eyes and having great wealth or talent is weak, since
virtually all of us are born with eyes, while only a small minority have the
luck to be born into the ranks of the well-off, or to be
endowed with superior mental and physical capacities. Offsetting such
luck with progressive social policies is not to redistribute or rob anyone of
anything essential, but it would be to improve the lot of millions. And
although "respect for embodied persons" is an important value, it doesn't imply
that each of us has a moral right to all our lucky advantages. John Rawls
made this point in A Theory of Justice; see the
Social Policy page, note 1. It's simply to
recognize that personal liberty (for instance, to amass unlimited wealth)
can’t be supposed to trump all other values, all the time, in the ordering of a
liberal society.
This caveat and a few other minor quibbles aside, Sanchez assesses Flanagan’s
book, and the naturalistic picture of ourselves, fairly and positively.
Although libertarians often tend to be vociferous
defenders of radical freedom (after all, they style themselves rugged
individualists, beholden to no one and to no thing), Reason counters this
stereotype with Sanchez’ review and Ronald Bailey's earlier
interview with Daniel Dennett, both of which explicitly challenge
contra-causal free will. Reason thus evinces a commendable courage to
question one of our culture's most cherished beliefs,
something that few newsstand publications dare to do (other exceptions are the
Humanist, Free Inquiry, and
New Scientist). I hope Reason continues to evolve in a
naturalistic direction under the enlightened supervision of Sanchez, Sullum, and Bailey.
Luck Swallows Everything*
On 9/28/03, the Boston Sunday Globe published an essay
by Matthew Miller, "The
Wages of Luck," in which he draws out the policy implications of the fact
that none of us chooses our parents, innate abilities, or social status at
birth. He suggests that since the social inequalities that result
from such luck aren't deserved, they shouldn't be left unremedied.
Concerning the genesis of such inequalities, conservative economist Milton
Friedman is quoted as saying, remarkably enough, "What you're really
talking about is determinism vs. free will...In a
sense we are determinists and in another sense we can't let ourselves be.
But you can't really justify free will.''
Indeed. I'd only offer the suggestion that we can, and should, permit
ourselves to be determinists, or at least disavow libertarian free will.
All this is in line with what John Rawls wrote some time ago
in his book, A Theory of Justice:
"It seems to be one of the fixed points of our considered
judgments that no one deserves his place in the distribution of native
endowments, any more than one deserves one's initial starting place in society.
The assertion that a man deserves the superior character that enables him to
make the effort to cultivate his abilities is equally problematic, for his
character depends in large part upon fortunate family and social circumstances
for which he can claim no credit. The notion of desert seems not to apply
to these cases" (p. 104).
The upshot is that by accepting Rawls' view of luck and
desert, Friedman agrees with Miller that more should be done to provide equal
opportunity for education and an improved standard of living, including a
negative income tax. Such an agenda is one of the main policy goals of the
Center for Naturalism.
It's encouraging that Miller and Friedman are not only making
the connection between determinism and lack of metaphysical desert, but
understand and accept the egalitarian policy implications as well.
*I've borrowed this title from Galen
Strawson's piece on free will.
A Question for Brights:
How Naturalistic Are You?
On July 12, 2003, the New York Times published an op-ed piece by Tufts
philosopher Daniel Dennett, "The
Bright Stuff," on the newly minted term for philosophical naturalists:
"brights." Dennett defines brights as those who hold “a naturalist as
opposed to a supernaturalist world view,” exactly what the coiners of the term
have in mind (see
www.the-brights.net). But as the coiners also point out, there are
many varieties of brights, from hard-boiled confrontational atheists to more
relaxed, irenic humanists. It’s also clear that brights will vary
considerably in their versions of naturalism, both in explicitness and
completeness. In particular, many of those who will end up calling
themselves brights, or naturalists, will still hold that human beings are causal
exceptions to nature by virtue of possessing what philosophers call libertarian
free will. This is the power to cause without oneself being fully at the
effect of prior or surrounding conditions. Most secular humanists,
free-thinkers, atheists, agnostics and other varieties of brights have not yet
seen that this traditional sort of free will, with its causal exceptionalism, is
just as supernatural as any of the attributes traditionally ascribed to god.
In short, most brights are not yet thorough-going naturalists in their world
view, since they reserve for themselves a special human power to transcend
cause and effect.
So, one question to ask self-proclaimed brights is "how much of a naturalist are
you?". Have you thought through the implications of a consistent
naturalism for yourself, for understanding human behavior, and therefore for
your attitudes and for social policy? What would it mean to live in the
light of understanding that each and every aspect of ourselves has its origins
in what has come before, and in what surrounds us? Can we, perhaps, learn
to live without the meme of contra-causal free will? To contemplate that
possibility is to challenge some deeply held beliefs about the presumptive
foundations of morality and social order, and to question the legitimacy of
social institutions that impose retributive punishment and take for granted
notions of supernaturalistic, “causa sui”
(self-originated) merit and desert. It is to invite a revolution in our
traditional self-concept that may have effects far beyond the commonplace
rejection, in secular circles, of standard supernatural entities and attributes.
It’s unlikely that many brights will anytime soon come out of the closet to
question free will, since most aren’t yet consistent naturalists. But the
term “bright,” if it stays explicitly connected to the coiners’ original
distinction between naturalism vs. supernaturalism, will have the beneficial
effect of increasing awareness of naturalism itself. It’s of course quite
possible that bright will simply become synonymous with atheist or non-believer,
in which case this consciousness-raising effect will be lost. For
instance, the subtitle in the print version of Dennett’s op-ed was
“Atheists, agnostics, and nonbelievers unite,” and note that as the piece
proceeds, bright more and more comes to signify nonbeliever, not naturalist.
To keep the root meaning of bright – someone holding a naturalist world view –
maximally salient, those introducing and using the term should state this
definition, or consider using “naturalist” as a synonym at some point in the
conversation or discussion. Naturalism, unlike atheism, is a positive
philosophy of human nature and the world based in a commitment to science as a
mode of knowing. The term bright may well serve as a useful umbrella
designation for naturalists of all stripes, but if it ends up synonymous with
nonbeliever, this would be to miss an important opportunity to make naturalism
and its implications known.
Free will? Not really
More and more about free will is making it into the news as naturalism, albeit
obliquely, finds a voice in recently published books and articles. On May
24, 2003, the New Scientist carried an
interview with Daniel Dennett about his book, Freedom Evolves, "Free
will, but not as we know it." Dennett takes the line that our
freedom is a matter of having the rational and cognitive abilities to anticipate
and avoid damaging consequences and bring about positive consequences.
This sort of freedom, the capacity for control, is consistent with determinism
and indeed depends on deterministic connections between events, as
Dennett points out. Because it avoids
fatalism - the erroneous notion that our actions make no difference to
outcomes - Dennett suggests that our worries about freedom and dignity in the
face of determinism are misplaced. There is no need to slide into what
might be called free will panic.
What's largely missing in Dennett's account, however, in both the interview and
the book, is any suggestion that attitudes and social practices based in the
dualistic notion of contra-causal free will - what philosophers call libertarian
free will - might or should change. This sort of free will, Dennett
acknowledges, doesn't exist, so one would think that heralding its
non-existence would be an opportunity to explore how our lives might improve
were we to drop the idea of such freedom (the mission of Naturalism.Org and the
Center for Naturalism). Although he admits that "there have been
advances which have shown us that people we used to hold fully responsible for
their actions are not," these same advances show that the traditional rationale
for holding people responsible is untenable, in which case the attitudes and
policies predicated on this rationale need reassessment as well. This
complaint is aired further in the item directly below.
Reason's cover story for May,
"Pulling Our Own
Strings: How Evolution Generates Free Will", an interview with philosopher
Daniel Dennett, who's just published his second book on free will,
Freedom Evolves (the first was Elbow Room). Of considerable
interest is that Reason, a libertarian publication, chose to present
Dennett's case against traditional contra-causal free will (and for a
compatibilist understanding of freedom)
so forthrightly and so prominently. After all, in libertarian circles,
it's often supposed that the basis for political liberty is some sort of
metaphysical, ultimate freedom - the power to choose without oneself being
entirely determined to choose. Being fully caused creatures
logically threatens our status as ultimately autonomous, self-made individuals
who can lay claim to resources and privileges (e.g., unlimited financial
rewards) because we pulled ourselves up by our bootstraps and thus deeply
deserve what's coming to us (the same goes for punishment, of course).
That libertarians do in fact often presume they have such freedom (what
philosophers have called, not entirely coincidentally, "libertarian" free will)
is evidenced by some exchanges posted at Nat.Org,
including a recent letter Reason kindly published in response to Thomas
Szasz. For further evidence, see also a
commentary on a recent op-ed piece by the
fiercely libertarian Sheldon Richman of the Future of Freedom Foundation.
He says, "If we come to believe that metaphysical freedom is impossible, we will
hardly be in a position to complain when our political freedom is taken away."
Reason, by giving Dennett airtime, and by virtue of the fact that
interviewer Ronald Bailey offers no rebuttal to Dennett's claims for
determinism, materialism, and naturalism, is an encouraging counter-example to
the stereotype I exploit above. Bailey, in fact, seems rather comfortable
with determinism, both here and in another excellent piece on
The Battle for Your Brain,
in which he concedes we don't need to be immaterial essences with contra-causal
free will in order to be held responsible (see the section "Authenticity and
Responsibility"). Compared to 1998, when it published Brian
Doherty's "Blame Society First", Reason
has indeed evolved away from the knee-jerk defense of this sort of freedom, for
which it is to be congratulated. This progress seems to be driven by
Bailey's (and perhaps editor Jacob Sullum's) respect for science, as opposed to
tradition, as the arbiter of what we should believe about human nature. (I
realize to speak of evolution isn't strictly to speak of progress, of course,
but it makes for a nice title.)
Further evolution is in order, however. Both Reason (Bailey) and
Dennett (in both his books) studiously avoid articulating one obvious conclusion
that follows from rejecting libertarian free will: that the justifications
for rewards and punishments based on ultimate moral desert,
justifications so beloved by the right, go by the boards. As philosopher
Galen Strawson puts it in an interview
posted on Nat.Org, "These desires for revenge and retribution are just not going
to be the normal human thing if they don’t involve the belief that the hated
person has DMR [deep moral responsibility]. They’re going to be unusual."
How does one, after all, justify retribution
if you're a naturalist?
Since it goes to the very heart of our assumptions about who deserves what and
why, it's no surprise that this conclusion goes significantly unmentioned by
those who are otherwise naturalists: they may not want to rock the boat quite
this much. After all, it calls into question such things as our
extremely punitive criminal justice system and the absurdly high disparities in
access to a decent standard of living. Some incentives and
disincentives are necessary, of course, to properly order a liberal society, but
the extremes at play in our culture can only be justified on grounds of ultimate
desert, something that Dennett and Bailey understand is an illusion.
Furthermore, the conclusion that we are fully determined creatures prompts us to
look at the actual causes of crime and success, instead of crediting the
individual as the sole source of good and evil. Ignorance is
therefore no longer an excuse for inaction, nor can we any longer blame the
victim. Although it's unlikely that anyone soon will publicly make
the case for social policies consistent with inclusive
naturalism (although see Derk Pereboom's
work for a heartening exception, thus far still confined to the academy), the
conclusion and its implications are there, waiting for their time to come.
Neuroscience enters the debate on free will
The October 15th 2002 issue of the Boston Globe science section ran a
lead story, "A Question of Will," on recent advances in neuroscience and their
implications for our conceptions of self and freedom. To my knowledge,
this is among the first articles to appear on this topic in a mainstream US news
outlet. Although hard-line skeptics about free will weren't consulted
(e.g., Owen Flanagan,
Stephen Morse), the piece nevertheless suggests that traditional
contra-causal free will is under considerable pressure. It ends with
reassurance from neuroscientist Michael Gazzaniga that "brains are automatic and
people are free." But what sort of freedom is this? Not that of the
uncaused soul, clearly. My letter
in response makes the crucial point that, despite determinism, we can and must
still hold people accountable - compassionately.
A Question of Will
The issue of free will has
perplexed theologians and philosophers for centuries - now neuroscience enters
the age-old debate
by Carey Goldberg, Globe Staff
Try this: At a moment of your
choosing, flick your right wrist. A bit later, whenever you feel like it, flick
that wrist again.
Most likely, you'd swear that
you, the conscious you, chose to initiate that action, that the flickings of
your wrist were manifestations of your will.
But there is powerful evidence
from brain research that you would be wrong. That, in fact, the signal that
launched your wrist motion went out before you consciously decided to flick.
''But, but, but,'' you'd
probably like to argue, ''but it doesn't feel that way!''
With that protest, you would be
joining a great debate among neuroscientists, philosophers and psychologists
that is a modern-day version of the age-old wrangling over free will.
The traditional conundrum went:
''How can God be all-knowing and all-powerful and yet humans still have free
will?'' And later: ''How can everything be governed by the determinist forces of
physics and biology and society, and yet humans still have free will?''
Those questions still concern
many, but the new neuro-flavored debate over free will goes more like this: Is
the feeling of will an illusion, a wily trick of the brain, an after-the-fact
construct? Is much of our volition based on automatic, unconscious processes
rather than conscious ones?
When Daniel M. Wegner, a
Harvard psychology professor and author of a new book, The Illusion
of Conscious Will, gives talks about his work, audience members sometimes
tell him that if people are not seen as the authors of their actions, it means
anarchy, the end of civilization. And worse. Some theologies, they tell him,
hold that if there is no free will, believers cannot earn a ticket to heaven for
their good deeds.
In reality, neuroscience is not
generally tackling the sweeping philosophical issue of free will, but something
much narrower, said Chris Frith, a neuroscientist at University College London.
''There has been much recent
work addressing the question of how it is that we experience having free will,
i.e., why and when we feel that we are in control of our actions,'' he wrote.
That is not to say that
neuroscience will never enter the philosophical fray.
It could even be that, once the
physiological basis of will becomes better understood, ''You'll get a more
mature, larger view of what's going on and the question of free will might
vanish,'' speculated V. S. Ramachandran, director of the Center for Brain and
Cognition at the University of California at San Diego. No one argues about
''vital spirits'' now that we know about DNA, he noted.
Meanwhile, the debate is still
on, and near its center is an 86-year-old University of California professor
emeritus of physiology, Benjamin Libet.
His seminal experiments on
brain timing and will came out back in the mid-1980s, and the results are still
reverberating loudly today.
Just this summer, the journal
Consciousness and Cognition put out a special issue on ''Timing relations
between brain and world'' that prominently featured Libet's work. And, at a
conference, titled ''The Self: from Soul to Brain,'' held by the New York
Academy of Sciences last month, ''Libet'' rolled off more tongues than Descartes
or Kant or Hume or the other philosophers whose names usually come up when the
subject is will.
What Libet did was to measure
electrical changes in people's brains as they flicked their wrists. And what he
found was that a subject's ''readiness potential'' - the brain signal that
precedes voluntary actions - showed up about one-third of a second before the
subject felt the conscious urge to act.
The result was so surprising
that it still had the power to elicit an exclamation point from him in a 1999
paper: ''The initiation of the freely voluntary act appears to begin in the
brain unconsciously, well before the person consciously knows he wants to act!''
Libet's experiments continue to
be criticized from every which angle. At the New York conference, for example,
Tufts philosopher Daniel C. Dennett argued that it could be that the experience
of will simply enters our consciousness with a delay, and thus only seems to
follow the initiation of the action.
But, though controversial, the
Libet experiments still stand and have been replicated. And they have been
joined by a growing body of research that indicates, at the very least, that the
feeling of will is fallible.
Among that research is
the following experiment by Dr. Alvaro Pascual-Leone, director of the Laboratory
for Magnetic Brain Stimulation at the Beth Israel Deaconess Medical Center.
A subject, he said, would be
repeatedly prompted to choose to move either his right or his left hand.
Normally, right-handed people would move their right hands about 60 percent of
the time.
Then the experimenters would
use magnetic stimulation in certain parts of the brain just at the moment when
the subject was prompted to make the choice. They found that the magnets, which
influence electrical activity in the brain, had an enormous effect: On average,
subjects whose brains were stimulated on their right-hand side started choosing
their left hands 80 percent of the time.
And, in the spookiest aspect of
the experiment, the subjects still felt as if they were choosing freely.
''What is clear is that our
brain has the interpretive capacity to call free will things that weren't,'' he
said.
Wegner's book discusses a
variety of other mistakes of will. Among them is the ''alien-hand'' syndrome, in
which brain damage leaves people with the sense that their hand no longer
belongs to them, and that it is acting - say, unbuttoning their shirt - of its
own accord.
Another recent book, ''The
Volitional Brain: Toward a Neuroscience of Free Will,'' includes a
psychiatrist's description of a German patient who felt compelled to stand at
the window all day, willing the sun across the sky.
Wegner argues that ''the
feeling of will is our mind's way of estimating what it thinks it did.'' And
that, he said, ''is not necessarily a perfect estimate.'' It is ''a kind of
accounting system rather than a direct read-out of how the causal process is
working.''
In Libet's interpretation, free
will could still exist as a kind of veto power, in the fractions of a second
between the time you unconsciously initiate an action and the time you actually
carry it out.
For example, he said in a
telephone interview, ''The guy who killed the mayor of San Francisco, he was
obviously deliberating in advance, but then when he gets to the mayor, there's
still the process of, does he now pull the trigger? That's the final act now.
That is initiated unconsciously, but he's still aware a couple of hundred
milliseconds before he does it and he could control it, but he doesn't.''
''That is where the free will
is,'' Libet said.
Such veto power is not enough
for many people, however. ''I want more free will than that,'' Dennett
complained at the conference.
He may not get it, but he will
almost surely get more data about it. Some neuroscientists are using new brain
imaging technology to try to pinpoint what happens in the brain when a person
wills something. With its help, and further work being done on patients with
abnormal volition, more progress appears likely.
''I think,'' Frith wrote, that
''in the next few years we will have quite a good understanding of the brain
mechanisms that underlie our feeling of being in control of our actions.'' But
that, he hastened to add, ''does not in any way eliminate free will.''
Further comfort comes from
Michael S. Gazzaniga, director of the Center for Cognitive Neuroscience at
Dartmouth College.
There is no need, he said,
''for depressing nihilistic views that we're all robots walking around on
someone else's agenda. It's the agenda we build through experience, and the
system is making choices.''
And just because some processes
in the brain are automatic does not mean they all are, he said. ''My take,''
Gazzaniga said, ''is that brains are automatic and people are free.''
See here for TWC's letter published in response, "Accountability is still in play."
Free Will in the News
Neuroscience and freedom
- The May 25th, 2002 issue of the
The Economist ran a story, "Open Your Mind," on the ethics and implications
of brain science. The introduction said: "Genetics may yet threaten
privacy, kill autonomy, make society homogeneous and gut the concept of human
nature. But neuroscience could do all of these things first." This
sentiment lines up with Tom Wolfe's 1996 prediction in "Sorry,
but your soul just died," that neuroscience, not genetics, will pose the
most immediate peril to human freedom and dignity. After a review of the
growing capabilities of brain scans and other sorts of "neurotechnology" to
diagnose disease, screen for neural and cognitive defects, and perhaps even
enhance brain-based capacities, the article gets to the main issue: the threat
of neuroscientific understanding to our concept of free will. "Although
some philosophers see free will as an illusion that helps people to interact
with one another [e.g., see "Is Free Will a Necessary
Fiction?"], others think that it is genuine - in other words, that an
individual faced with a particular set of circumstances really could take any
one of a range of actions. That, however, sits uncomfortably with the idea
that mental decisions are purely the consequence of electrochemical interactions
in the brain, since the output of such interactions might be expected to
be an inevitable consequence of the input."
Here, starkly, is the contrast between what philosophers call "libertarian" free
will - the capacity to have done otherwise in the exact same situation were it
to arise again - and determinism, which denies that such a capacity exists.
Neuroscience reveals that the brain is a more or less deterministic mechanism,
and if mental states somehow just are this mechanism, then libertarian
free will goes by the boards (for a good deal more about this concern, see
Neuroscience and the Human Spirit). After admitting that
neuroscientific understanding of propensities for aggression and violence might
indeed have an impact on assessing criminal responsibility, the article
concludes, rather lamely, by suggesting there's still hope for the human soul.
After all, science hasn't yet found it! Indeed, science never will, but
that can only give comfort to those who put stock in non-scientific claims to
knowledge. For those of us committed to empirical grounds for
knowledge, living without the soul is the only viable option, since not to find
it is good evidence it doesn't exist.
Causality and capital punishment -
On June 22, the New York Times ran an op-ed piece, "The Changing Debate
Over the Death Penalty," by Stuart Banner, law professor at UCLA and author of
The Death Penalty: An American History. The fifth and sixth paragraphs
read (emphasis added):
"Throughout American history, support for the death penalty has risen and fallen with the
times. In periods when Americans have tended to think of crime as the product of
the criminal's free will, the criminal justice system tilts toward
retribution, and capital punishment has grown more popular. In periods when they
have paid more attention to causes other than the criminal's free will — the
criminal's social context, for example — the system has emphasized
rehabilitation, and the popularity of the death penalty has waned.
"The past 30 years
were a period of strong support for capital punishment, as part of a trend
toward retribution in criminal sentencing. (This trend is also evident in other
sentencing measures like "three strikes" laws.) For the past 250 years, however,
such periods have always been followed by times of growing opposition to the
death penalty."
Banner suggests here that support for the death penalty and other harsh
punishments hinges in part on our perceptions and beliefs about causality and
human behavior. If we come to believe that individuals are
not ultimately self-originating - that they don't have libertarian free
will - then of course they don't ultimately deserve the death penalty.
We must seek outside the offender for the causes of crime, and rehabilitation,
or at least non-punitive detention, seems more justifiable than the retributive
"just deserts" of harsh prison conditions or capital punishment.
Banner stays agnostic on the question of whether free will exists, but there's
little doubt that the growing understanding of the causes of human behavior
(e.g., see above re neuroscience) must undercut the idea that individuals are
ultimately self-caused. Although it's unlikely that an op-ed writer will
suggest any time soon, in an influential public forum such as the Times,
that free will doesn't exist, and that therefore we should rethink our beliefs
and policies regarding punishment, this conclusion is implicit as science
continues to naturalize the self. Education about causality and
crime will eventually help undermine support for capital punishment, but it's a
long haul, since the belief in human causal exceptionalism is deeply entrenched,
and widely thought necessary to ground responsibility and morality. To see
why it isn't, click
here and here.
Legislating Naturalism (3/01)
An op-ed about why science and naturalism can't be separated, even though
intelligent design theorists are trying. For a very detailed, cogent, and
persuasive account of these issues, see Stephen D. Schafersman's essay
"Naturalism is an Essential Part of Science and Critical Inquiry."
At the close of Carl Sagan’s
novel, Contact, astronomer protagonist Ellie Arroway discovers within the
structure of mathematics a sign that the universe is the intentional creation of
superhuman intelligence. Far along into the otherwise random sequence of digits
generated by the expansion of pi (the ratio of a circle’s circumference to its
diameter), a patterned series of ones and zeros appears.
When arranged in a two-dimensional
grid, this series forms a geometrically perfect circle of ones
against a background of zeros. What could this be if not the mark of design?
Sagan writes: "In the fabric of space and in the nature of matter, as in a great
work of art, there is, written small, the artist’s signature. Standing over
humans, gods, and demons…there is an intelligence that antedates the universe."
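Sagan's fictional "signature" is easy to picture. Here is a purely illustrative sketch (not from the novel; the grid size and radius are arbitrary choices) of how a run of binary digits, laid out row by row in a square grid, could trace a circle of ones against a background of zeros:

```python
# Illustrative only: build the kind of binary grid Sagan describes,
# marking "1" for cells lying (approximately) on a circle, "0" elsewhere.
N = 21                       # grid side length (arbitrary)
cx = cy = N // 2             # center of the grid
r = 8                        # circle radius (arbitrary)

grid = []
for y in range(N):
    row = ""
    for x in range(N):
        # squared distance from the center; keep cells near the circle
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        row += "1" if abs(d2 - r * r) <= r else "0"
    grid.append(row)

for row in grid:
    print(row)
```

Read back as a single stream of N×N digits, such a sequence would look random until arranged in the right grid, which is what makes the fictional discovery so striking.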
Earlier this year, the Kansas
Board of Education adopted standards for teaching evolution in public schools
that give short shrift to such a possibility. The Darwinian story endorsed by
these standards says we are the unintentional products of strictly material
processes. But in the ongoing Kansas creationism-evolution debate, one
side claims to see evidence of intelligent design, not in the expansion of pi,
but in the form and function of our very bodies.
Jody Sjogren, the founder of
the Intelligent Design Network (IDN), said that to rule out the possibility of a
purposive agent in explaining how we came to be is "legislating naturalism into
the public school curriculum," not the unbiased pursuit of science. Science,
says the IDN, is perverted by the philosophy of naturalism into assuming that
evolution is necessarily Darwinian - mechanical and unpurposive - when in fact
the hypothesis of intelligent design is a better reading of the fossil and
biological evidence.
Even though he obviously
understood the emotional and spiritual attractions of the design hypothesis,
there’s no question that were he alive, Sagan would stand firmly on the other
side of this debate, with those who see science as inevitably allied with, not
biased by, naturalism. Naturalism - the assumption that the universe is of a
piece, with nothing that escapes being constrained by physical laws or being
constituted by the basic building blocks of matter and energy – seems essential
to the scientific method.
To develop explanations which
unify our understanding of the cosmos, which is after all the purpose of
science, its practitioners can hardly rest content with postulating an
intervening intelligence which in turn is not subject to explanation. To do so
splits the universe into two parts, one part natural and potentially explicable,
the other supernatural and out of bounds for investigation. But it is precisely
the possibility of overcoming such bounds in the quest for understanding that
drives science.
This is why, despite their
protestations of being true to science, those who champion intelligent design
cannot really play the science game, at least not at any deep level. It is also
why it is highly unlikely that design "theorists" will ever be published in
prestigious peer-reviewed journals such as Science, Nature, or
Physical Review Letters. Any so-called explanation of our origins which
includes the equivalent of "and then a miracle occurs" won’t cut any ice within
the scientific community, nor should it.
It is conceivable that Sagan’s
fantasy in Contact might come true, that scientists discover a creator’s
"signature" embedded in pi, or in super-string theory, or in some as yet
undreamed-of mathematical description of nature that fits the empirical facts.
But for any scientist worth her salt the immediate questions raised by such a
discovery would concern the characteristics and origins of this creator. In
other words, how does this designer fit into the universal scheme of
things, considered as a whole?
Unfortunately for intelligent
design theorists, this shows that scientists wishing to teach Darwinian
evolution have no need to legislate naturalism into public school science
standards, since naturalism resides in the very motives and practice of science
itself. As even those in Kansas might say, when contemplating the vast,
interconnected web of nature as it stretches from molecules to galaxies, "we’re
not just in Kansas anymore."
Who Wrote the Book of Life?
Just in case you missed Clinton's public piety at the press conference on the
human genome, here's an op-ed piece which suggests that crediting evolution, not
God, for our "book of life" is a far more interesting context for this
scientific milestone. This appeared in The Humanist.
Who, or what,
wrote the book of life? That was the unasked question that nevertheless got
answered at the press conference announcing the near-completion of the
sequencing of the human genome. President Clinton preempted any doubts about his
religiosity, saying that "Today we are learning the language in which God
created life." Not to be outdone, Francis Collins, head of the public Human
Genome Project, added that "It is …awe-inspiring to realize that we have caught
the first look of our own instruction book, previously known only to God." In
refreshing contrast, Craig Venter of the privately held Celera Genomics remarked
on the "beauty of science" as a collective human endeavor, and the wonder of
discovering that "we are all connected to the commonality of the genetic code."
It is both
dispiriting and disquieting that public officials feel the need to pay lip
service to God while celebrating a scientific achievement. Although I’m not
suggesting a lawsuit, Clinton’s piety vaults the wall of church-state
separation. The policy implications are worrisome as well: if God wrote our
instruction book, are we then permitted to rewrite it, or is it an unalterable
sacred text? It would seem that learning the "language in which God created
life" puts us in a better position to play god, but according to Tony
Blair (also at the conference) we have a duty "To ensure that the powerful
information at our disposal is…not abused, to make man his own creator…".
Beyond politics and policy, what’s most galling is that giving God credit for authoring
our genome is such a cramped, safe, and utterly uninteresting context for this
discovery compared to the naturalistic view that Venter hints at. The really
marvelous, intriguing thing about the "language" of DNA is that it evolved on
its own, without supervision or purpose, and that we are here on this
earth as a natural contingency, not an outcome of intentional design. Of course
what I and other humanists find enchanting is for many a monstrosity: the
possibility that the universe is a vast unsupervised unfolding of material
properties without an externally imposed purpose is an affront to their desire
that humans play a starring role in a drama with ultimate significance.
Citing God as our
creator functions as a buck-stopping explanation, a paternalistic reassurance
for those looking for a privileged place in the scheme of things. It’s designed
to ward off embarrassing inquiries, very much like a parent who shuts up a
doggedly curious child by saying "Because I say so, that’s why!" Those content
with such tactics transparently fail to ask the next, obvious question about our
purported designer: from whence cometh this creator? Not to ask this question,
or to deem it unreasonable, reveals a basic antipathy towards unrestricted
inquiry, driven by a rather shallow, adolescent fear of the cosmic dark.
At least that’s
what I and other humanists might say, implying that theists should really grow
up and bite the bullet of naturalistic contingency. Don’t you see, we’d
continue, that life’s far more interesting when shorn of any imposed purpose,
however grand or noble? What’s more fascinating, after all, than the questions
of existence, sentience, and the parameters and laws of natural phenomena? Add
God and all these become his possibly arbitrary creations, subtract him and they
become the eternal mysteries themselves, possibly endless in their unraveling,
possibly impenetrable. But for Clinton (if we take him at his word, not always
recommended) et al. such engagement with the unfathomable goes by the boards,
tamed by traditional theism.
Underlying these opposing stances, perhaps, are the sometimes conflicting desires for
security and exploration, each of which has a claim on us. Those driven more by
security tell the conventional, comforting story: scientific understanding
decodes the text of a God who makes everything work out in the end, as we join
him after death. Those driven more by the exploratory impulse disdain a designed
universe, preferring the uncertain but exhilarating quest for further
understanding. Security, exemplified by the church, wants an end to questions,
while exploration, exemplified by science, prizes questions that may indeed
have no end.
So who’s right
here? Humanists by nature tend towards curiosity and impolitic questioning, so
we’ll judge Clinton wrong not just because he implicitly endorsed theism in a
public capacity (hence marginalizing non-believers) but also on aesthetic
grounds: giving God credit for creation takes some of the fun out of life. And
besides, there’s no good evidence God exists, by our lights.
If, in rendering
this judgment, we remember that it originates in our preferences for empiricism
and exploration, we’ll stay true to these preferences and likely show more
compassion toward those (the majority in this country, it seems) with greater
needs for psychological security. We can patiently tolerate their belief that
the genome is the word of God, since we understand they haven’t yet outgrown the
need for a cosmic hand to hold. Eventually, with our gentle, self-critical
proddings and suggestions, former theists will discover that they can walk
unassisted and with excitement into the unknown.
Seeing Drugs as a Choice or a Brain Anomaly
(Editor's note: This appeared in the New York Times on June 24, 2000 in
the Arts and Ideas section. Massing does a nice job of bringing
together various strands of the debate on the disease model of addiction, in
which free will is the unstated subtext. Some responses follow the
article, and have a look at the Addictions
page as well.)
Dr. Alan I.
Leshner, the director of the National Institute on Drug Abuse, a division of the
National Institutes of Health, is known for his slide shows. Two or three times
a week he gives a speech -- to treatment counselors and prevention specialists,
physicians and policymakers -- and almost all feature slides culled from the
work of the 1,200 researchers supported by his institute. The slides are of
brain scans, and they usually come in pairs. The "before" slides show the
activity of a normal brain; the "after" ones depict a brain that has had
prolonged exposure to drugs. Lacing his presentation with jokes and Yiddish
expressions -- as a youth, Dr. Leshner summered at a Catskills hotel owned by
his grandparents, and he has a bit of Alan King in him -- he tries to translate
the science into plain English.
What the science
shows, he says, is that the brain of an addict is fundamentally different from
that of a nonaddict. Initially, when a person uses hard drugs like heroin or
cocaine, the chemistry of the brain is not much affected, and the decision to
take the drugs remains voluntary. But at a certain point, he says, a
"metaphorical switch in the brain" gets thrown, and the individual moves into a
state of addiction characterized by compulsive drug use. These brain changes,
Dr. Leshner says, persist long after addicts stop using drugs, which is why, he
continues, relapse is so common. Addiction, Dr. Leshner declares, should be
approached more like other chronic illnesses, like diabetes and hypertension.
Going further, he says that drugs so alter the brain that addiction can be
compared to mental disorders like Alzheimer's disease and schizophrenia. It is,
he says, a "brain disease."
In promoting this
concept, Dr. Leshner has stepped forthrightly into a debate that has smoldered
for decades: are drug addicts responsible for their behavior? Should they be
treated as sick people in need of help, or as bad people in need of punishment?
Dr. Leshner has come down squarely on the side of illness. And he is winning
many people over. Today the brain-disease model is widely accepted in the
addiction field, and Barry R. McCaffrey, the White House drug adviser, routinely
endorses it.
Others are not
convinced. "I reject the notion that addicts fall under the spell of drugs and
become a zombie and so are not responsible for anything they do," says Dr. Sally
L. Satel, a senior associate at the Ethics and Public Policy Center in
Washington and a practicing psychiatrist at a methadone clinic. To her and other
critics, the brain-disease model is a new orthodoxy based less on science than
on a desire to soften the stigma attached to addiction.
The idea that
addiction is a disease is not new. In the 1960's Alcoholics Anonymous began
speaking of alcoholism as a disease. But, initially at least, A.A. used the term
figuratively to suggest the tenacious hold drinking has on alcoholics. Over the
last decade or so, however, advances in brain-imaging technology have allowed
researchers to measure the impact of psychoactive substances on the brain with
increasing precision. Investigators have found that drugs like cocaine, heroin
and alcohol increase the brain's production of dopamine, the neurotransmitter
that regulates pleasure, among other things. This helps account for the euphoric
high drug users feel. But these drugs deplete the dopamine pathway, disrupting
the individual's ability to function.
At the Brookhaven
National Laboratory on Long Island, for instance, Dr. Nora D. Volkow has found
that even 100 days after a cocaine addict's last dose, there is significant
disruption in the brain's frontal cortical area, which governs such attributes
as impulse, motivation and drive. Dr. Volkow says that "the disruption of the
dopamine pathways leads to a decrease in the reinforcing value of normal things,
and this pushes the individual to take drugs to compensate." Other researchers
have found a physiological basis for the craving so many addicts experience, but
it is not yet clear how long such physiological changes remain.
Dr. Herbert D.
Kleber, the medical director of the National Center on Addiction and Substance
Abuse in New York, says that the brain-disease concept fits with his experience
with thousands of addicts over the years. "No one wants to be an addict," he
says. "All anyone wants to be able to do is knock back a few drinks with the
guys on Friday or have a cigarette with coffee or take a toke on a crack pipe.
But very few addicts can do this. When someone goes from being able to control
their habit to mugging their grandmother to get money for their next fix, that
convinces me that something has changed in their brain."
But does causing
changes in the brain qualify addiction as a brain disease? Not according to Dr.
Gene M. Heyman, a lecturer at the Harvard Medical School and a research
psychologist at McLean Hospital in Boston. "Since we can visualize the brain of
someone who's craving, people say, 'Ah hah, addiction is a brain disease,' " he
remarks. "But when someone sees a McDonald's hamburger, things are going on in
the brain, too, but that doesn't tell you whether their behavior is involuntary
or not." While acknowledging that addiction does induce compulsive behavior, Dr.
Heyman says that addicts still retain a degree of volition, as evidenced by the
many who stop using drugs.
"Smoking meets the
criteria for addiction, but 50 percent of smokers have quit," he says. This
change, he goes on, is "demonstrably related" to the data about the hazards of
smoking that have emerged since the surgeon general's report on the subject in
1964. By contrast, Dr. Heyman says, "information about schizophrenia hasn't
reduced the frequency of that illness." Dr. Heyman also cites a well-known study
of Vietnam veterans who were dependent on heroin while overseas. Within three
years of their return to the United States, the study found, nearly 90 percent
were no longer using it -- strong evidence, Dr. Heyman says, that the addictive
state is not permanent.
Sally Satel first
became skeptical about the brain-disease model in 1997, when she attended a
conference of the drug-abuse institute on the medical treatment of heroin
addiction. "So pervasive was the idea that a dysfunctional brain is the root of
addiction that I was able to sit through the entire two-and-a-half-day meeting
without once hearing such words as 'responsibility,' 'choice,' 'character' --
the vocabulary of personhood," Dr. Satel wrote in a paper called "Is Drug
Addiction a Brain Disease?"
Written with Dr.
Frederick K. Goodwin and published as a booklet by the Ethics and Public Policy
Center, the paper offers a blistering attack on the drug-abuse institute and its
brain-disease terminology. "Dramatic visuals are seductive and lend scientific
credibility to NIDA's position," the paper states, but politicians "should
resist this medicalized portrait for at least two reasons. First, it appears to
reduce a complex human activity to a slice of damaged brain tissue. Second, and
most important, it vastly underplays the reality that much of addictive behavior
is voluntary."
To support that
claim, Dr. Satel cited the results of the Epidemiologic Catchment Area study,
paid for by the National Institute of Mental Health, which asked 20,300 adults
about their psychological history. Of the 1,300 people who were found to have
been dependent on or abusing drugs, 59 percent said they had not been users for
at least a year before the interview; the average time of remission was 2.7
years. "The fact that many, perhaps most addicts are in control of their actions
and appetites for circumscribed periods of time shows that they are not
perpetually helpless victims of a chronic disease," Dr. Satel said.
At the mention of
Dr. Satel, Dr. Leshner bristles. "Simplistic and polarizing," he says of her
writing. More generally, Dr. Leshner maintains that his views have been
distorted and misinterpreted. Still, he says, he has lately modified his
message, giving more recognition to the role of volition in addiction. "Today's
version," he says, is that addiction is "a brain disease expressed as compulsive
behavior; both its development and the recovery from it depend on the
individual's behavior."
But where does
choice end and compulsion begin? The slide showing that has not yet appeared.
Deciphering Addiction (letter to the Times)
by Mary M.
Cleveland, research director for the Partnership for Responsible Drug Information.
A June 24 Arts and
Ideas pages article describes the debate between two camps of anti-drug
crusaders: those who say drug addiction is an immoral choice and others who see
it as a "brain disease." But it is also possible to see addiction as
an obsessive-compulsive behavioral disorder, akin to compulsive gambling or
repetitive hand-washing. Treatment for such disorders emphasizes helping people
understand and manage their behavior. That includes identifying false
assumptions ("I just have no self-control") and avoiding circumstances that set
off compulsive behavior (hanging out with the guys).
Labeling addicts as immoral or diseased makes them view themselves as bad or helpless, and makes
it harder for them to gain self-knowledge or self-control.
Editorial Note: This letter pretty much nails it, as long as we
understand "self-control" to mean behaving in a responsible, socially sanctioned
manner, not some sort of magical control exerted by a self independent in some
respect of environment or heredity. Heyman's work in behavioral choice
theory is all about how the voluntary aspects of addictive behavior - what gets
talked about as self-control (or its absence) - are determined by the addict's
social environment. For more on this see my reply to Massing below, and
also see the Addiction page.
Editor's Reply to Massing:
Dear Mr. Massing,
I read with great
interest your June 24 New York Times piece, "Seeing Drugs as a Choice or as a
Brain Anomaly." Underlying this debate, but not usually made explicit, are
assumptions about volition and free will. Until these assumptions themselves are
openly debated I don’t think we’re going to make much headway in resolving the
controversy over the disease model of addiction.
For instance, you
quote Gene Heyman as saying that "addicts still retain a degree of volition."
"Volition" suggests to many people a free choice independent of environment and
heredity, but what Heyman actually means by volition is quite different. It’s
the voluntary component of addictive behavior, that which is sensitive to
consequences, as exemplified by the higher quit rate of smokers exposed to
information about the risks of cancer. Heyman believes (as do I) that voluntary
behavior is just as caused, or determined, as involuntary behavior, but that its
causes lie in transactions between persons and their environments; it’s not
driven directly by brain anomalies (personal communication). There is no role
for free will here, understood as some sort of self-originated choice that’s
independent, in some respect, of a person’s biology or social circumstances.
Like Heyman, Satel
certainly understands the power of environmental contingencies in shaping
addiction, but she consistently reinterprets this sort of causality as a matter
of the addict’s self-control, suggesting to the unwary that there might be a
freely willing self that chooses addiction (or not). For instance, you quoted
her from her (and Frederick Goodwin’s) booklet, "Is Drug Addiction a Brain
Disease?" saying "The fact that many, perhaps most addicts are in control of
their actions and appetites for circumscribed periods of time shows that they
are not perpetually helpless victims of a chronic disease." But being "in
control" of one’s actions and appetites is nothing over and above having one’s
behavior shaped to conform to social norms by one’s social and interpersonal
situation, perhaps with the help of pharmaceutical interventions. It’s not a
matter of free will.
Satel is aware of
this issue, since in the preface to her pamphlet she writes: "Among the
questions raised by this essay is whether the traditional concept of free will
can be sustained in the face of new knowledge about biological and environmental
forces that shape human behavior." Curiously, however, nowhere in this booklet
does free will get discussed (I hope it will be in future publications from the
Ethics and Public Policy Center, Satel's organization). Instead, Satel gives
plenty of examples of how addictive behavior is a function of various factors
(e.g., it can be influenced by Contingency Management, is exacerbated by
"boredom, depression, stress, anger, and loneliness") but always ends up
ultimately blaming the addict, as in the following: "They are instigators of
their addiction, just as they are agents of their own recovery…or non-recovery.
The potential for self-control should allow society to endorse expectations and
demands of addicts that would never be made of someone with a true involuntary
illness." The second sentence of this quote makes clear the connection between self-control
and reinforcement contingencies, since addicts only have *potential* for
self-control, which gets *realized* by placing expectations and demands on
addicts (or by having them grow up in better social circumstances in which good
behavior is the norm, not the exception, thus avoiding addiction in the first
place). If such is the case, then how can addicts be the "instigators" of their
addiction or recovery?
Satel doesn’t seem
to want to face the implications of a scientific understanding of addiction, or
behavior generally: there is no originative, freely willing agent to praise or
blame for choices. Perhaps this is because she supposes, as do many, that having
got rid of free will there are no longer grounds for holding addicts (or the
rest of us) accountable. But of course this is wrong. The same grounds exist as
before: we want to conform addicts’ behavior to social norms so that they become
responsible adults. Therefore we are justified in arranging social contingencies
which can shape their behavior, or, to use that highly misleading expression,
give them "self-control."
To the extent that
punitive attitudes (some of which I detect in Satel) are based in the notion
that we are the instigators of our own faults, seeing through the myth of free
will constitutes the ultimate destigmatization of addicts. This means that in
choosing contingencies to shape behavior, we can no longer justify punitive
contingencies on the grounds that people could have done otherwise in the
biological, psychological, and social conditions they were faced with growing
up, and that therefore they deserve to suffer. (They might have done otherwise
if conditions had been different, but they weren’t, which is why we have to
change those conditions.) Knowing that choices are not willed independently of
circumstances, our attitudes towards addicts might change to become more
compassionate; as a result we might pay more attention to preventing the
formative conditions of addiction than to after-the-fact sanctions.
However, this is
not to say that "tough love" isn’t sometimes necessary. The threat of losing
privileges as a result of bad behavior does work to instill "self-control". But
the default position differs from Satel’s: it is to minimize the addict’s
suffering in the process of recovery, and head off problems before they start
with all the resources we can muster directed at the conditions which generate
addiction. This is not, as you can imagine, the libertarian laissez-faire
prescription I suspect Satel might endorse.
Once those championing the disease model of addiction (such as Alan Leshner) understand
that voluntary behavior is just as determined as any disease process, they won’t
any longer have to deny the voluntary component of addiction in order to
destigmatize addicts. But they will have to face the fact that a certain number
(one hopes a bare minimum) of negative contingencies may be necessary as a last
resort to restore dignity and responsibility to an addict. So the anti-stigma
folks have to concede something here, as well as the libertarians.
I want to thank
you again for raising this issue, but I think you’ve only hit the proverbial tip
of the iceberg. Whether its depths will be plumbed, given the social reticence
about exploring the issue of free will, is an open question.
...and a letter to
the Times, unpublished:
To the Editors:
Massing's article usefully examines the disease model of substance abuse, pointing out that there
are voluntary choices involved in drug seeking behavior, even in the late stages
of addiction (Arts and Ideas, June 24). But just as physiological
abnormalities in the addict’s brain can be traced to the chemical effects of
drugs, so too the voluntary aspects of drug use can be traced to the addict’s
social and psychological milieu.
This means that to
treat and prevent addiction effectively we must pay as much attention to the
environment of potential addicts as to their brains. It also suggests that in
destigmatizing addiction, discovering the environmental determinants of choice
is just as important as finding the genetic and physiological mechanisms of
addiction.
In the light of a
scientific understanding of voluntary behavior, social stigma might still play a
role in helping to reduce drug abuse, but it should be applied only when other,
less punitive means are proved ineffective. Seeing why addicts behave as they do
will force us to acknowledge that were we handed the same genetic and
environmental lot in life, our choices would have been much the same.
Playing God, Carefully
This is a response to Cho, et al., "Ethical Considerations in
Synthesizing a Minimal Genome." Their concern - that biotech might devalue
life - is a good example of what might be called "fear of mechanism."
The December 10, 1999 issue of Science
reported that microbiologists may eventually pin down a "minimum genome": the
bare bones, molecularly speaking, of what it takes to make a living organism.
The interplay of DNA, proteins, and other sub-cellular components in supporting
the necessary functions of life – in this case a very simple bacterium – would
be completely understood. Nothing mysterious or "protoplasmic" would remain: the
very mechanism of life would stand revealed in all its complexity.
The same issue also carried a
companion piece by a group of bioethicists on "Ethical Considerations in
Synthesizing a Minimal Genome," in which they grapple with what they believe are
the worrisome implications of such knowledge. "The attempt to model and create a
minimal genome," they say, "represents the culmination of a reductionist
research agenda about the meaning and origin of life that has spanned the 20th
century." This agenda is far from benign, according to these ethicists, since it
challenges the tradition which holds that life is valuable because it is more
than "merely physical." Their worry, in essence, is that "The special status of
living things and the value we ascribe to life may …be undermined by…"
This is a serious charge, one
that might well tend to foster prejudice against science. If a thorough
understanding of the mechanics of life necessarily devalues it, then shouldn’t
we pull back from the pursuit of biological knowledge? One might expect that the
supposed threat of reductionism would be made clear, but in fact the authors
don’t sustain their indictment. Rather, their article suggests that
reductionism, properly understood, poses little danger. Even with a minimum
genome in hand, science simply isn’t in a position to offer definitive
pronouncements on the meaning of life.
Their worries rest on a
confusion between materialism, the thesis that we essentially are
physical creatures, and what might be called strong reductionism, the claim that
higher-level phenomena, such as human behavior, can be completely explained
in terms of their underlying physical mechanisms. Now, some indeed are threatened
by materialism, since being "merely" physical undercuts the traditional
reassurance that the soul might outlive the body. But it’s not clear that anyone
should be worried about strong reductionism, since it’s patently false, and must
be distinguished from the bread and butter science of analyzing biological
processes, which is what work on the minimal genome consists of.
The ethicists point out that "a
reductionist understanding of …human life is not satisfying to those who believe
that dimensions of the human experience cannot be explained by an exclusively
physiological analysis." True enough, but does anyone really suppose that
physiological analysis is even relevant to most human experience? Such strong
reductionism is simply a straw man, not an encroaching scientific agenda.
For instance, a thorough
understanding of the brain at the neural level, while often necessary for
tracing specific mental functions and pathologies, is simply inappropriate for
dealing with the everyday psychodrama of our motives, aspirations,
disappointments and interpersonal interactions. Even though our having
experiences at all may depend on our having properly wired brains, the meaning
of experience derives from its social context, not its substrate in physiology.
In short, since analytical physical science is irrelevant to domains in which it
is useless for explanatory or predictive purposes (which is to say, in much of
our lives), its success cannot threaten our dignity.
The ethicists also suggest that
extensions of minimal genome research, by specifying the genetic definition of
the human organism and its beginnings in utero, will have implications
for the abortion debate. Although they don’t tell us precisely what these
implications are, they do conclude that "the complex metaphysical issues about
the status of human beings cannot be discussed in terms of the presence or
absence of a particular set of genes". Quite true, but this is yet another
illustration of how physiological analysis is not about to rule our
ethical intuitions. Even if we agreed on a definition of human life at the DNA
level, all the contentious issues of fetal viability, birth defects, quality of
life, and the sometimes conflicting interests of mother and potential newborn,
remain to be decided at the social level. Science simply isn’t in competition
with social policy debates, although it can help inform them.
But beyond abortion, the most
pressing issue, they say, is whether identifying minimal genomes, or perhaps
even creating artificial organisms from such blueprints, "constitutes
unwarranted intrusion into matters best left to nature; that is, whether work on
minimal genomes constitutes ‘playing God’." How much should we intervene in the
mechanics of life to suit our desires? An analytical understanding of life’s
mechanisms is the key to genetic engineering, both of other creatures and
ourselves. If we decide we should play God, then we’ll use the key; if not, we
should throw it away.
The authors point out that a
spectrum of views exists on playing God. Many of religious persuasion reject it
as arrogant hubris; others believe that it should be the no-holds-barred
culmination of our capacity for self-design. They themselves recommend a middle
path of careful biotechnological stewardship that "would move forward with
caution into genomic research and with insights from value traditions as to the
proper purposes and uses of new knowledge." They also state that "while there
are reasons for caution, there is nothing in the research agenda for creating a
minimal genome that is automatically prohibited by legitimate religious …".
If, as these ethicists
conclude, there is no deep moral objection to our playing God - carefully - then
a detailed analysis of life’s mechanisms is simply a means to an end, not an
intrinsic threat to the specialness of life or our attachment to human beings
and other creatures. And it is these attachments that will shape the ends we
seek, and that must channel the use of biotechnology in humane, not monstrous, directions.
Were we to conclude that
playing God is wrong, then advanced biology does pose a threat, and we might
seek to limit research into what once were the mysteries of life. Indeed, the
success of science in showing that simple life forms are mechanisms,
albeit astoundingly complex, lends power to what some feel is a deflationary
materialism: we no longer need mysterious, non-physical explanations for what
life does. The sheer ability to play God, therefore, threatens those who think
God is, or should be, a necessary hypothesis at the physical level. They would
prefer science to fail, even in its proper arena, and one sure way to ensure
failure is to limit biological research.
But in reality, of course, it’s
too late not to play God. By knowing that we have the power to know, even
a decision to "let nature run its course" is yet another God-like choice, albeit
one that renounces domains of understanding and control. Such a choice would
make us a God of the Deists, a passive onlooker of unfolding creation, rather
than an active participant in shaping our destiny.
The question, therefore, is not
whether we should play God, but what sorts of local gods we will, or should,
become. Will materialism (not the straw man of strong reductionism) demoralize
us, or will we continue to find meaning in our personal and social lives, even
though life itself is understood to be a mechanism? The latter outcome becomes
possible if we grasp that our lives’ meaning need not depend on our being
ethereal, as opposed to purely physical, creatures. Either way, our response to
the success of science will help determine how we play the leading role in which
nature has cast us on this planet.
TWC, January 2000
Scientists Unmask Diet Myth: Willpower
By Jane Fritch
New York Times, Science Times,
October 5, 1999
(Note: Below is a telling exposé of will in
an everyday context. With one exception, the scientists quoted by Jane
Fritch pull no punches in dismissing willpower as a fraud. They
recommend instead the Skinnerian solution to overeating: set up your environment
to make it less likely you'll want to, or be able to, eat. If you do a good
job at this, one says, it will even seem as if you've got willpower!
The trick is to generalize this message into other realms of behavior, but of
course that gets controversial...)
A thin person, the kind who has always been thin, is confronted by a
chocolate cake with dark fudge icing and chopped pecans. Unmoved, he goes about
his business as if nothing has happened.
A fat person, the kind who has
always struggled with weight, is confronted by the same cake. He feels a little
surge of adrenaline. He cuts a slice and eats it. Then he eats another, and
feels guilty for the rest of the day.
The simplest -- and most
judgmental -- explanation for the difference in behavior is willpower. Some
people seem to have it but others do not, and the common wisdom is that they
ought to get some.
But to weight-loss researchers,
willpower is an outdated and largely discredited concept, about as relevant to
dieting as cod liver oil. And many question whether there is such a thing as willpower at all.
"There is no magical stuff
inside of you called willpower that should somehow override nature," said Dr.
James C. Rosen, a professor of psychology at the University of Vermont. "It's a
metaphor that most chronically overweight dieters buy into."
To attribute dieting success or
failure to willpower, researchers say, is to ignore the complex interaction of
brain chemicals, behavioral conditioning, hormones, heredity and the powerful
influence of habits. Telling an overweight person to use willpower is, in many
ways, like telling a clinically depressed person to "snap out of it."
It is possible, of course, to
recover from depression and to lose weight, but neither is likely to happen
simply because a person wills it to be so, researchers say. There must be some
intervention, either chemical or psychological.
The study of weight loss began
in earnest in the early 1950's, a time when doctors and nutritionists treated
overweight people by telling them to eat less and sending them on their way.
"Willpower was a kind of
all-embracing theory that was used all the time to make doctors feel good and
make patients feel bad," said Dr. Albert Stunkard, a professor of psychiatry at
the University of Pennsylvania who has been studying weight loss for five decades.
"Most people think that
willpower is just a pejorative way of describing your failures," he said.
"Willpower really doesn't have any meaning."
The role of willpower in weight
loss was a major issue among scientists about 30 years ago, when the behavior
modification movement began, Dr. Stunkard said. Until then, the existence -- and
importance -- of willpower had been an article of faith on which most diets were
founded, he said.
The behavior modification
approach had its roots in a 1967 study called "Behavioral Control of
Overeating," which tried to analyze the elements of "self-control" and apply
them to weight loss. The study, by Richard B. Stuart of the University of
Michigan, showed that eight overweight women treated with behavior modification
techniques lost from 26 to 47 pounds over a year. They had frequent sessions
with a therapist and recorded their food intake and moods in diaries. And the
therapists helped them develop lists of alternatives to eating, like reading a
newspaper or calling a friend. One cultivated an interest in caged birds;
another grew violets.
"No effort is made to
distinguish the historical antecedents of the problem and no assumptions are
made about the personality of the overeater," Mr. Stuart wrote in his article,
published in the journal Behaviour Research and Therapy.
After that, the focus of weight
loss programs shifted toward behavioral steps a dieter takes regarding eating,
said Dr. Michael R. Lowe, a professor of clinical psychology at the MCP
Hahnemann University in Philadelphia, and away from "something you search for…".
Behavior modification is now
the most widely accepted approach to long-term weight loss. Practically, that
means changing eating habits -- and making new habits -- by performing new
behaviors. Most programs now recommend things like pausing before eating to
write down what is about to be eaten, keeping a journal describing a mood just
before eating and eating before a trip to the grocery store to head off impulse buying.
There is also mounting evidence
that behavior affects the chemical balance in the brain, and vice versa. Drugs
like fenfluramine, half of the now-banned fen-phen combination, reduced a
dieter's interest in eating, thereby making willpower either irrelevant or
seemingly available in pill form. And Dr. Stunkard has just completed a study
that showed that people with "night-eating syndrome" -- who overeat in the late
evening, have trouble sleeping and get up in the middle of the night to eat --
have lower-than-normal levels of the hormones melatonin, leptin and cortisol in their blood.
Still, to deny the importance
of willpower is to attack a fundamental notion about human character.
"The concept of willpower is
something that is very widely embedded in our view of ourselves," said Dr. Lowe
of MCP Hahnemann University. "It is a major explanatory mechanism that people
use to account for behavior."
But Dr. Lowe said he and others
viewed willpower as "essentially an explanatory fiction." Saying that someone
lacks willpower "leaves people with the sense they understand why the behavior
occurred, when in reality all they've done is label the behavior, not explain
it," he said.
"Willpower as an independent
cause of behavior is a myth," Dr. Lowe said. In his clinical practice, he takes
a behavioral approach to weight control. In part, that involves counseling
dieters to take a more positive attitude about their ability to lose weight. It
also involves some practical steps. "Most importantly," he said, "you need to
learn what behavioral steps you can take before you get in the situation where
you're in the chair in front of the television with a bowl of potato chips."
And, he said, it is important
for dieters to keep in mind that there are formidable forces working against
them and their so-called willpower. "We live in about the most toxic environment
for weight control that you can imagine," Dr. Lowe said. "There is ready, easy
availability of high-fat, high-calorie fast foods that are relatively
affordable, combined with the fact that our society has become about as
sedentary as a society can be."
Not all experts, however,
reject the notion of willpower. Dr. Kelly D. Brownell, director of the Yale
Center for Eating and Weight Disorders, said that this was the most difficult
time in history for dieters, and that it would be a mistake to dismiss the
concept of willpower. "A person's ability to control their eating varies over
time, and you cannot attribute that to biology," he said.
"There's a collective public
loss of willpower because of this terrible food environment that challenges us
beyond what we can tolerate," Dr. Brownell said. "One needs much more willpower
now than ever before just to stay even."
All the temptations
notwithstanding, thousands of people find a way to lose weight and keep it off,
a fact demonstrated by the National Weight Control Registry, a research project
that keeps tabs on people who have lost at least 30 pounds and kept the weight
off for more than a year.
" A lot of times in weight loss
programs patients will say to me that they need to learn to be able to live with
an apple pie in the refrigerator and not eat it," said Dr. Rena Wing, a
professor of psychiatry at the University of Pittsburgh and the Brown University
School of Medicine, who is collaborating on the registry project. Most
behaviorists think dieters should instead arrange their lives so that they
rarely have to confront such temptations.
"If I were to put an apple pie
in front of everybody every minute of the day, I could probably break down
everybody's quote-unquote willpower," she said. "We really are trying to get
away from this notion of willpower. If you make certain plans, you will be able
to engineer your behavior in such a way that you will look as if you have willpower."
The Supposed Inscrutability of Evil
by the Currents editor
In the wake of the Columbine
High School shootings in Littleton, Colorado, several newspaper articles,
including two by Massachusetts sociologists, downplayed the possibility that we
can ever truly understand the evil behind such events. Evil, it seems, is
not fully explicable by causal factors, but is instead the product of an
intrinsic, self-chosen malevolence within individuals. Alan Wolfe of
Boston University concluded his piece defending suburban culture ("Littleton
Takes the Blame," New York Times Op-Ed, 5/2/1999) with a telling
tautology: "We ourselves should not try to find an explanation for all of life's
mysteries. Not everything requires a sociological analysis. The evil
that was Columbine was not about franchise outlets, cell phones or cliques.
It was about evil." The Boston Globe ran a piece by
University of Massachusetts sociologist John Hewitt ("Trying to make sense of
the senseless," Op-Ed 4/29/99) which went even further in mystifying evil:
"More ominously, this tragedy
might have an explanation that we are not prepared to accept. Science has
taught us to look for peculiar social or psychological circumstances that cause
people to do what they otherwise would not do. The mind does not rest easy with
the idea that seemingly ordinary people who are a bit odd but generally keep to
themselves might quietly be forming awful plans. We would rather think of bad
acts as the unfortunate consequences of discoverable and remediable social and
personal conditions. Yet it is precisely the account we do not wish to believe
that might best capture what happened in Littleton.
"The two dead members of the
'Trench Coat Mafia,' together with their fellows, might simply have chosen evil
in circumstances where others choose to play football or to crave membership in
the National Honor Society."
Last, but not least, the
New York Times ran a Sunday Week in Review article ("Science Looks at
Littleton and Shrugs," 5/5/99) in which Dr. Jeffrey Fagan, director of the
Center for Violence Research and Prevention at Columbia University, was quoted
as saying that "a much more hard-headed approach [to explaining Littleton] says
'Sometimes bad things happen and we can't always explain it.'"
It speaks to the power of the
myth of radical autonomy (see next section) that even experts in the art of
explanation throw up their hands when confronted with extreme acts of violence,
as if these were somehow beyond nature, or culture, and have to be chalked up to
an incomprehensible Evil residing within the person. Such a stance not
only misplaces the person outside of nature and culture, it suggests that evil
is beyond our reach to contain or control: it relieves us of
responsibility for creating a less punitive culture in which retaliatory
episodes such as Littleton (and many, many more unpublicized killings) become
less frequent. Don't blame or try to change suburbia, says Wolfe, evil is
simply evil. Don't look to science to illuminate the killings, says
Hewitt, it's just a personal choice, like choosing to play football. If we
are hardheaded realists about such tragedies, says Fagan, we will admit that
they are ultimately inexplicable. This last bit seems simply wrongheaded.
Such defeatist nonsense is
driven by the deep cultural assumption that individuals are in some sense freely
willing first causes, insulated from biological, psychological and social
factors. On the contrary, only by naturalizing evil - that is, showing how
it arises from such factors - do we stand a chance of conquering it.
Clinging to the myth of autonomy, far from giving us power, guarantees that the
recent history of Littleton will repeat itself, perhaps in a community near you.
For a response to Wolfe published in the Times see the
Letters section, "The End of the Suburban Dream?".
Radical Autonomy in the New York Times
DNA and Destiny
An excerpt from an 11/16/98
New York Times op-ed piece by David P. Barash, a professor of psychology at
the University of Washington at Seattle:
The existentialists had it right. From a religious thinker like Kierkegaard to an
atheist like Nietzsche, the existentialists recognized that all human beings
define themselves as unique, responsible individuals. As Simone de Beauvoir put
it, a human being is a being whose essence is having no essence. Or, in
Jean-Paul Sartre’s famous phrase, "existence precedes essence."
In other words, our essence is ours to choose, depending on how we direct ourselves with
all our baggage, DNA included.
This is not to minimize our gene-based, Darwinian heritage. It is, rather, a reminder
that within the vast remaining range of human possibility left us by our genes
and our evolutionary past, each of us is remarkably, terrifyingly free.
comment: Yeah, right. Like so many responding to the threat of genetic
determinism, Barash ignores the other, complementary set of determining factors
which shape the self - the environment. Beyond genes and environment, there are
no other factors involved in the development of individuals, certainly not a
self free from such influences. And on what basis would a radically free self
choose to shape itself in one direction over another? On the assumption of
radical autonomy - the existentialist assumption Barash champions - no
explanation could ever be forthcoming. This sort of ill-considered flight to
freedom is typical of those who see determinism as a threat to human dignity.
Smoking is a Choice
In a 11/21/98 letter to the
New York Times, Marc Beauchamp of Falls Church, VA writes:
Your Nov. 19 front-page article
about Jan Binder, a smoker who has been unable to quit, is emblematic of today’s
culture of victimization. Ms. Binder is an addict and outcast not by chance but
because of choices she’s made.
I was a pack-a-day smoker from
my late teens until my late 30’s. I took my last puff eight years ago. What I
learned from my struggle to kick the habit was that you can’t quit for the wrong
reasons: because it costs too much, because your family thinks it’s a vile
habit, because a friend or relative smoked themselves to death. You can only
quit when you decide that’s what you want to do. Period.
It’s a smoker’s choice to be a smoker.
comment: Note the startling lack of explanation for why a smoker would
choose to quit: "You can only quit when you decide that’s what you want to do.
Period." The smoker’s decision is imagined to be in isolation from any and
all influences, making it inexplicable except as an act of sheer will, which
itself comes out of the blue. This sets up the agent to take all the blame, and
suggests that outside factors are insignificant in helping smokers to quit. This
is the core of the defense that tobacco companies mount in lawsuits to avoid
accountability. Unfortunately, most juries buy it, wedded as they are to the
fiction of free will.
Dr. William Provine: Free Will a Cultural Myth
American Atheist Conference
Dr. William Provine, Cornell
University Professor, addressed the convention on the theological concept of
"free will." He began with a detailed discussion on recent developments in
evolutionary biology as formulated by Charles Darwin. He noted that the idea of
a universe created by a deity - "intelligent design" - was refuted by the
findings of science, specifically the doctrine of natural selection.
Provine asserted that "when
you're dead, you're dead," and that in looking at life from an evolutionary
perspective, one sees that there is no ultimate, absolute reference in
formulating an ethical system. "No, human beings on this planet are alone, and
we exist in a world which was made by processes that don't care one whit about
us... we live in a universe that will probably continue to expand for some time..."
"The meaning we seek in our
lives cannot be ultimate meaning, but a meaning which we create."
Dr. Provine then continued that
"free will is a terrible cultural myth." He added that "Giving up the idea of
God is great for a rational mind."
Provine also made a strong plea
for competing viewpoints to be aired throughout the academy, and society; he
called upon Atheists to actively debate creationists. "Some of my best friends
are creationists, and they like me, because they know I want to see their ideas
presented and contested in class."
American Atheist Magazine
Underreporting Antidepressant Use Tied to Stigma of Mental Illness
(excerpt chosen and emphasis
added by Currents editor)
...Rost and associates reported
a study of coding practices among primary care physicians treating depression
that may help explain our findings. These researchers used a structured survey
of primary care physicians. They found that 50.3% of respondents who had seen a
patient meeting the DSM-III-R criteria for major depression in the previous 2
weeks admitted that they deliberately miscoded the diagnosis. The most
common diagnoses substituted for depression included fatigue/malaise, insomnia,
headache, anxiety, adjustment/grief reaction, and anorexia, as well as somatic
syndromes such as fibromyalgia and irritable bowel syndrome. These results
correlate closely with the pattern of diagnoses seen in our primary care records.
The reasons physicians cited for using alternative diagnostic codes are
intriguing and have interesting clinical, societal, and ethical implications.
"Uncertainty about the diagnosis" was reported by 46.0% of respondents. This
highlights the need for both objective screening and diagnostic tools for common
mental illnesses in primary care practice as well as improved provider
education. A variety of studies have concluded that primary care physicians
significantly under-report and undertreat depression. Paradoxically, it appears
that in many cases they suspect the correct diagnosis but fail to record it due
to uncertainty or other factors described here.
Other reasons reported for miscoding the diagnosis have health system and
societal implications. "Problems with reimbursement for services if depression
is coded" was reported by 44.4% of respondents; this demonstrates that
reimbursement bias also affects the accuracy of the outpatient record. This
finding also reflects the incongruities of the health plan design, which
reimburses treatment for "physical" illnesses but not brain diseases with
demonstrated organic, neurobiologic basis.[40, reference below]
All of these results reflect the stigma of mental illness that continues to
be explicitly reinforced, as is evident in the list of other reasons given for
not identifying depression as the diagnosis. These reasons include
"jeopardize future ability to obtain health insurance" (29.4%), life insurance
(12.8%), employment (10.2%), or disability (6.4%). Moreover, 20.9% of physicians
reported that the "stigma associated with depression was likely to delay
recovery," and 12.3% reported that the stigma would "negatively influence future
care from other providers." Finally, 11.8% of physicians reported that patients
were "unwilling to accept the diagnosis," and 11.2% of patients specifically
requested that depression not be recorded.
40. Shannon, BD:
The brain gets sick, too—The case for equal insurance coverage for serious
mental illness. St Mary's Law J 24:365-398, 1993.
Using Physician-Reported Anti-depressant Claims," Medline, 6/22/98.
comment: The "incongruities of the health plan design" mentioned
above is a good example of how the mental/physical conceptual split controls
both how coverage for illness is allocated and attitudes about
depression. As long as depression is thought of as "mental," it can be
safely excluded from standard medical coverage of physical illnesses, and
it can be chalked up to personal shortcomings. As we move towards
understanding ourselves as strictly physical creatures, whose "mental" problems
are entirely rooted in the brain as it responds to physiological and
environmental influences, the stigma of depression should lessen and medical
coverage should become more equitable. TWC