Experience and Autonomy: Why Consciousness Does and Doesn’t Matter

A chapter for Exploring the Illusion of Free Will and Moral Responsibility, Greg Caruso, editor.

Human freedom, responsibility and autonomy have traditionally been linked to or even identified with conscious control of behavior, where consciousness is widely understood as possibly non-identical with its neural correlates. But the rise of neuroscience strongly suggests that brain processes alone are sufficient for behavior control, and indeed that nothing non-physical plays a role in scientific explanations of behavior. In this chapter I address three worries that arise in response to investigations of consciousness: the philosophical worry about mental causation; the practical worry about the influence of unconscious processes; and the existential worry that, absent a contra-causal conscious controller, we lack true freedom, responsibility and autonomy. These worries can be defused, I suggest, by

  1. acknowledging the causal powers of brain-based conscious capacities, those associated with (but perhaps not identical to) conscious experience,
  2. expanding the reach of conscious capacities by understanding their limitations, and
  3. naturalizing our conceptions of freedom and autonomy.

Introduction and Overview

Consciousness seems a central requirement for responsibility and autonomy. We ordinarily don’t hold each other responsible for behavior which occurs unconsciously, as when sleepwalking, or which happens despite our best conscious intentions, such as an inopportune sneeze. On the other hand, conscious control – acting in light of reasons and beliefs that, if asked, I could articulate on the spot – generally confers responsibility. I’m held accountable as a source of action if I’m awake, aware and acting voluntarily.[1] But what’s so special about consciousness that it confers responsibility? The same question arises with respect to personal autonomy: what is it about consciousness, if anything, that makes me a genuine author of action?

Neuroscientific studies of consciousness and conscious capacities have brought these questions to the fore. Consciousness, in the sense of having phenomenal experience (paradigmatically, experiences of pain, red and other sensory qualities), is not involved in all phases of generating intentional and voluntary action. For example, the reportable experience of having made a decision may lag behind the unconscious initiation of action in certain simple choice scenarios. In more complex, real world situations, I may be unaware of significant influences on my behavior. Further, there’s no clear demarcation, in terms of causal efficacy in behavior control, between brain processes correlated with conscious experience – what I will call conscious processes – and those not. From a neuro-behavioral standpoint, it’s neurons all the way up and all the way down (or in and out, if you prefer), whether or not conscious experience is involved.

The evidence strongly suggests that neural operations correlated with having conscious experience – the neural correlates of consciousness, or NCC – have critically important cybernetic (action guiding) functions, such as integrating multimodal information and subserving memory, learning and flexible behavior (Kanwisher 2001; Dehaene and Naccache 2001; Jack and Shallice 2001; Parvizi and Damasio 2001; Crick and Koch 2003; Dehaene and Changeux 2011). But phenomenal experience itself isn’t needed to explain these functions; rather, it accompanies them, for reasons still obscure to the philo-scientific community; it’s the functions themselves that get the behavioral job done.

Appreciating the role of unconscious processes in behavior control, the functional continuity of unconscious and conscious processes, and the fact that the brain accomplishes behavior control on its own may have some initially discomfiting implications, depending on one’s pre- and post-theoretical commitments. First, it sparks a philosophical worry about mental causation: consciousness might be epiphenomenal with respect to behavior control when viewed from a third-person, neuro-behavioral perspective. Appeals to subjective experience may not be needed in scientific accounts of voluntary and intentional action. Second, it suggests that, whatever the causal role of conscious experience, conscious processes (the NCC) may not always be in charge of behavior to the extent we normally suppose; this threatens everyday, practical notions of responsibility and autonomy. Third, it poses an existential threat to a folk dualist conception of human agency: that at the core we exist as immaterial conscious controllers separate from our brains – for instance, souls – that make contra-causally free choices.

I will argue that these threats to conscious control, properly understood, need not discomfit us. Indeed, by distinguishing the possible behavior-guiding role of conscious experience from that of conscious processes, and by understanding their respective limitations, we might gain in personal autonomy, naturalistically conceived. Understanding how consciousness does, and doesn’t, matter promises to give us greater real world self-efficacy and control.

Before proceeding, let me reiterate what I mean by consciousness for the purposes of this chapter. What I have in mind is phenomenal consciousness, that is, experienced qualitative states such as pain, the sensation of red, the “what it is like” to undergo sensations, emotions, occurrent thoughts and phenomenally diffuse states such as inclinations and hunches.[2] The basic qualitative elements of experience – qualia – are notoriously resistant to assimilation by physicalist explanation precisely because subjective qualitative character isn’t (yet) the obvious result, or cause, of any physical process. Moreover, while the physical processes associated with experience are all in principle observationally available, experience itself is not: phenomenal consciousness is a categorically private affair.

Part 1: Defusing the Threat of Epiphenomenalism

The problem of mental causation, or more precisely, phenomenal causation, can be stated as follows: How does conscious experience, as possibly distinct from the neural processes empirically found to be associated with it – conscious processes – contribute to behavior control? One solution to this problem would be to show an identity at some level, say physical or functional, between conscious experience and conscious processes. Since the central behavior controlling role of the neurally instantiated cognitive functions associated with conscious experience is not in doubt, such an identity would automatically confer on consciousness the same causal powers as those functions, solving the problem of phenomenal causation. But, despite the plethora of proposals on offer (Block, Flanagan and Guzeldere 1997; Chalmers 2010; Tononi 2012; Koch 2012; Dennett 1992; Metzinger 2003), there is no canonical theory of such an identity, and no consensus about whether it’s even a live possibility.

Just as there is no accepted identity theory, there is no theory on offer to account for how, if consciousness isn’t identical to its neuro-functional correlates, it adds its own essential contribution to behavior control beyond what those correlates accomplish. No credible dualist interactionist theory seems forthcoming, in which case, short of an identity claim, conscious experience seems simply along for the ride. Epiphenomenalists are happy, or at least resigned, to bite this bullet: phenomenal feels are somehow produced by neural goings-on, but they don’t in turn affect those goings-on, so are behaviorally inefficacious (Robinson 2010, 2012). In which case so much the worse for consciousness: it gets caused, but doesn’t get to cause in turn.

In the absence of viable identity or interactionist theories, must we accede to epiphenomenalism? I would suggest not. First, epiphenomenalists (along with the rest of us) have precisely no story about how conscious experience is produced or generated by conscious processes. All we have are observed dependency relations between conscious processes – the neural correlates of consciousness, whatever they turn out to be – and conscious experience: when certain processes are active, then we’re phenomenally conscious. Such relations don’t necessarily entail that phenomenal states are caused or produced by neural processes as a further effect which then, according to epiphenomenalists, fails to play any subsequent causal role in behavior control. As Dennett puts it, there’s no “double transduction” by which neural activity produces something more (Dennett 1998). Unless there’s an account on offer of how conscious experience is generated as an effect, worries about whether or not this effect could in turn be a further cause of behavior seem misplaced or premature. Consciousness may not be an effect at all, in any standard sense, and therefore not the sort of thing that could be reasonably expected to play a causal role in addition to its neural correlates. [To see how consciousness might be a non-causal entailment of neurally instantiated representational states, see The appearance of reality.]

Next, I want to suggest that although conscious experience likely can’t figure in third person accounts of behavior, we shouldn’t conclude from this that experience is epiphenomenal. In support of the first part of this claim, I’d propose that naturalistic explanations of a phenomenon normally require that all elements playing a role in the explanation be, in principle, observable. In particular, when explaining your behavior, we can appeal to your brain processes as they are observed to control bodily movements, and we can appeal to your intentional states, plausibly construed as being realized by those (potentially observable) processes. [3] However, we don’t and can’t observe your pain, your sensation of red, or any other of your experiences. Your experience only exists for you as an up-and-running cybernetic system; it isn’t the sort of thing that can be observed at all, not even by you.[4] Consciousness is categorically, existentially private, never publicly accessible to external observers in the way its neural correlates are. Remember also from above that we can’t unproblematically assume an identity between conscious experience and conscious processes, such that we can claim that in observing the neural correlates of experience we are observing experience itself. If, therefore, conscious experience is not an observable, we can’t appeal to it in third person explanations of behavior, any more than we can appeal to invisible ghosts, spirits or souls. Experience doesn’t appear in what we might call third person explanatory space: the space inhabited by observables such as brains, bodies and behavior itself.

If this is so, then, interestingly enough, it’s a mistake to think that experience could be epiphenomenal with respect to behavior. To be epiphenomenal requires at least the logical possibility of exerting a causal influence, but this possibility is foreclosed by the fact that consciousness doesn’t appear in third person explanatory space; it simply isn’t in a position to play a causal role in observation-based explanations of behavior. By contrast, the whistle on Huxley’s steam engine (Huxley 1874) does appear in this space, along with the engine itself, so can fairly be characterized as epiphenomenal with respect to the operation of the engine. The same applies to your appendix: it sits in physical, abdominal space but is vestigial and non-functional, hence epiphenomenal with respect to your bodily economy. Not so for experience: we don’t see it sitting in the brain (or anywhere else, for that matter), so it can’t be epiphenomenal with respect to the brain, the body or behavior.

My third point against epiphenomenalist worries is to note that even if, as I suggest, consciousness can’t figure in third person accounts of behavior, the brain and body get the job done just fine on their own. From an external observational perspective, neural processes are all that’s going on and are sufficient for smart cybernetics at whatever level we care to take, physical, functional or intentional. In the movie Blade Runner, the Tyrell Corporation bio-engineers so-called “replicants” to perform dangerous work for which it’s necessary they be as cognitively and behaviorally adept as their human masters. Why, in creating the artificial neural structures necessary for human-level perception, memory and cognition, should Tyrell concern itself with consciousness? Phenomenal states wouldn’t appear in its design specifications for replicants, even though the specifications somehow entail the existence of such states.[5] The same is true of our design via evolution: we needn’t suppose that conscious experience per se was selected for, only the behavior-controlling, neurally instantiated functions with which it is associated.

Since conscious processes are, in conjunction with unconscious processes, sufficient for action control, it shouldn’t worry us that their associated phenomenal states aren’t in a position to play a role in scientific accounts of behavior, or that they weren’t directly selected for in evolution. True, we’ve ended up conscious, but from a functional, cybernetic perspective experience is neither here nor there; it doesn’t matter for purposes of producing intelligent behavior.  Your zombie twin, if such could exist (a question beyond the scope of this chapter), would be just as capable as you.

But there’s an important caveat here. Even if consciousness doesn’t matter from third person explanatory or design standpoints, it matters tremendously to you as a locus of consciousness. As the host of an ineluctable phenomenal reality, you have very strong preferences about the sorts of experiences you want to undergo. This grounds your claim to be a subject of moral concern, someone capable of suffering, and it grounds our granting (the problem of other minds aside) the same moral status to others; it’s also why it matters whether future generations of AIs achieve consciousness.[6] Moreover, from the subjective standpoint of first person explanatory space, it strongly seems as if conscious states do play a causal role: it’s a phenomenally given subjective fact that we wince because we’re in pain and consume chocolate because it tastes so good; this will continue to be our experienced reality no matter what scientists and philosophers tell us. And since conscious experiences co-occur so reliably with behavior controlling conscious processes, citing experiences as causes of behavior is a very good explanatory proxy for the actual neuro-functional control story. Indeed, for many intra- and interpersonal purposes they are very useful, perhaps even necessary, explanatory fictions – subjective shorthand, one might say. For these reasons, we won’t become skeptics about the importance of conscious experience just because it might not play a causal role in objective explanatory contexts. Subjectively and interpersonally, consciousness will continue to matter. [Points made in this section are elaborated in Respecting privacy: why consciousness isn’t even epiphenomenal.]

Part 2: Expanding Conscious Control

Epiphenomenalism about experience aside, the practical worry about conscious processes, spurred by research in behavioral and social psychology, is that they may not be as central to behavior control as ordinarily assumed. If not, then perhaps we’re not as responsible or as autonomous as we suppose. In this section I’ll consider two research paradigms on threats to conscious control. One, which I will call the precursor paradigm, purports to show that unconscious brain processes – precursors to an experienced choice – determine that choice, so consciousness itself might be bypassed (Nahmias 2011) or just a non-causal addendum to action. The other, what I will call the influence paradigm, suggests that we are unconscious of situational factors and internal biases influencing our behavior, some of which might induce us to act against our endorsed values. It’s the latter paradigm, not the former, that I think should worry us.

First, in light of part 1 above, I’ll reiterate that it’s the role of conscious processes, not conscious experience, that’s at issue here. Although I’m usually accorded responsible authorship of action only when I’m awake and having experiences, it isn’t experience per se that makes me responsible. As we’ve seen, experience is the subjective indicator of the operation of certain behavior-guiding, information-integrating functions, and it’s these functions, as opposed to those associated with unconscious processes, that matter most for responsibility and autonomy. It’s our neurally instantiated capacities for being behaviorally flexible, foresightful and corrigible that justify our responsibility practices as means to guide action toward the good (Morse 2004). We don’t hold those without such capacities (young children, the insane) responsible, even though they are conscious. But as a matter of practical necessity we would have to hold a sufficiently sophisticated robot responsible (allocate it behavior-guiding rewards and sanctions) in order to produce acceptable behavior, whether or not we suppose it hosts phenomenal states (Clark 2006).

When acting under the control of conscious processes, we’re able to behave appropriately in novel situations by simulating multiple possible courses of action in light of memories and the anticipated consequences of those actions (Baumeister, Masicampo and Vohs 2011). We’re also in a position to state reasons and justifications for action to our peers, such that, if our reasons are considered plausible, we gain their trust, respect and cooperation in matters of moral or practical consequence. It’s this process of consciously mediated reason-giving that defines our endorsed self, that which we would defend in public or that wins our acceptance in private deliberations. Conscious processes, therefore, are essential to what we ordinarily identify with as our most capable and most justifiable selves; they permit the formulation of consistent, publicly expressible commitments and values and enable action on behalf of such commitments over a wide range of possible situations. In short, they are essential to the development and actualization of personal integrity and autonomy.

Given this sketch of the importance of conscious processes, let’s consider the precursor paradigm as a threat to conscious control. Besides the pioneering work of Benjamin Libet (1982), a widely cited finding is that of Soon et al. (2008). They were able to extract information from fMRI scans that was predictive, at a rate greater than chance, of subjects’ simple choices (choose the left or right hand to push a button) up to 10 seconds in advance of the subjects’ consciously experienced decision and behavior. Imagine that this rate eventually approaches 100% and that the predictive information becomes available in real time (which it was not in the reported experiments): researchers might then reliably know, before the subjects do, what their behavior will be under such conditions.

Should this worry us? I think not. First, unless we suppose consciousness is something separate from brain function, for instance an immaterial contra-causal controller (the topic of part 3), it shouldn’t surprise us that the experience of having decided, dependent as it is on neural processes, should be preceded by other processes contributing to the choice that don’t give rise to that experience. We already knew that not all brain activity supporting behavior, simple or complex, constitutes conscious processing.

Second, the choices predicted in such experiments are simple, binary and spontaneous – they lend themselves to neural prediction precisely because the experimental situation is stripped of the subject’s need to consider reasons for action and the consequences of alternative possible behaviors, which are the raison d’être of conscious processes. I don’t need to, and probably can’t, provide reasons for the spontaneous choosing of left over right, and of course I’m not privy to its neural determinants – that’s why it’s spontaneous. If I play by the experimenter’s rules, I don’t know in advance which way I’ll choose, or exactly when, but the experimenter might, given enough trials. But of course this is a far cry from showing that most choices are either predictable via brain scans or solely the result of prior unconscious processing.

Third, note that conscious processes are engaged throughout the experimental situation: subjects are continuously aware of what they are being asked to do, of the apparatus, and of their behavior; without their engagement the experiment couldn’t take place. All that’s been shown by Libet, Soon et al. and other contributors to the precursor paradigm is that a particular conscious process – that associated with the experienced decision and concomitant behavior – is part of a sequence of neural operations, some of which are predictive of the conscious choice. That we aren’t aware of the neural precursors of a choice in its early stages doesn’t itself impugn the causal role of conscious processes, those necessary for learning and for flexible, situationally appropriate, and reasons-responsive behavior, for instance the behavior of those cooperating in a psychology experiment. It only shows that conscious processes aren’t all there is to behavior control. But again, we already knew that.

Given the simplicity of the behaviors involved, the precursor paradigm doesn’t cast doubt on the efficacy of the conscious capacities often deployed when making morally and practically significant choices in the real world. It is thus a minor threat to conscious control compared to the influence paradigm: the possibility that unconscious factors affecting behavior might subvert personal integrity and autonomy. Researchers such as Bargh (1997), Wilson (2002) and Wegner (2003) have documented the ubiquity and importance of unconscious influences in behavior control, some innocuous, some consequential. Some examples of the latter: research on implicit bias (Hardin and Banaji in press) suggests that many of us unconsciously discriminate against those not of our age, gender, ethnicity, or sexual orientation; the probability of behaving altruistically can be a function of situational factors that we don’t realize are affecting us (Darley and Batson 1973, Macrae and Johnston 1998); our voting preferences might be determined in part by an unconscious assessment of a candidate’s trustworthiness or competence based on physical appearance (Todorov et al. 2005, Lawson et al. 2010); and when it comes to assessing a candidate’s policies, it turns out that we’re often subject to confirmation bias, the sometimes unconscious tendency to systematically ignore or downplay counter-arguments and disconfirming evidence (Nickerson 1998).

These are just a few instances of how morally and practically consequential behavior can be shaped by influences outside of awareness.[7] Their net effect is that we might end up acting in ways – prejudicial, uncaring, unprincipled, uninformed – that fall short of our consciously endorsed values. Unconscious influences therefore pose a substantial threat to our becoming ideally integrated, publicly justifiable and maximally autonomous selves.

Of course, as much as I might not endorse my unconsciously driven or situationally influenced behavior, it’s still me that’s acting; I’m not just my endorsed self. On a naturalistic understanding, persons consist of relatively stable constellations of physically embodied traits, susceptibilities and dispositions, some of which aren’t always mediated by conscious processes. I can’t therefore conveniently “externalize” the less than admirable unconscious parts of me and their behavioral expression (Dennett 2003, p. 122). Unless I’m acting under duress or other standard excusing conditions, I have to take (reluctant) ownership of and responsibility for actions that might reflect the influence of situational factors and unconscious biases.

However, once informed of these influences – once they become objects of conscious consideration – I can take steps to reduce their effect on my behavior. If it’s demonstrated to me that my voting preferences are mostly a matter of candidates’ physiognomy, I can second-guess my superficial initial judgments and look for more objective indicators of a candidate’s competence. Knowing that advertisers and political campaigns are using every trick in the book to sway my preferences, I can monitor my behavior with a skeptical eye: to what extent am I the target of manipulation, and how can I counteract it? Knowing that situations have a material influence on my morally consequential actions, I can work to avoid or modify situations that put my integrity at risk, or if that’s not possible, work on strengthening my resolve to do the right thing (Baumeister and Tierney 2011) whatever the situation.

In short, although the threat of unconscious factors to integrity and autonomy is very real, conscious processes, once engaged, have the capacity to counteract their influence. Even as the work of social psychologists increasingly reveals the role of the unconscious in controlling behavior, that very knowledge ends up expanding the scope of conscious control on behalf of our endorsed selves. The more we understand the extent to which we are unconsciously motivated and influenced, and understand the mechanisms of influence, the better equipped we are to become better integrated, more autonomous agents, less at risk of betraying our core values.

At the same time, however, those with commercial or ideological agendas can use expertise in situational and unconscious control to better manipulate us. We’re therefore caught up in an arms race of control and counter-control, in which it behooves us - the potential unwitting marks of subliminal influence - to keep up with the latest developments in behavioral and neuro-economics and their practical applications, for instance in marketing and political persuasion. In constructing ourselves as morally and cognitively virtuous agents, and in defending against threats to our hard-won autonomy, deploying our conscious capacities in light of science is key.

Understanding the limits and vulnerabilities of conscious control also has implications for responsibility ascriptions. Responsibility for deliberately manipulated behavior is distributed between the agent who acts and those who consciously set out to influence the behavior in question. So although we’re not absolved of responsibility when acting under such influences (we’re still rationally responsive to rewards, sanctions and information), it’s fair to hold others responsible as well, for instance the food, alcohol and tobacco industries that are bent on inducing us to consume products that we’re often better off avoiding. Understanding that an individual’s conscious control capacities are targets of deliberate subversion and circumvention helps to justify regulatory policies that restrict attempts at manipulation, and it highlights the need for educational campaigns that increase awareness of our vulnerabilities and enable counter-control.

Part 3: From Soul Control to Naturalized Autonomy

As much as many philosophers and psychologists might be comfortable with a neuro-behavioral view of why conscious processes matter, many lay folk might not be. As possibly “natural born” dualists (Bloom 2004), and under the influence of long-standing religious beliefs in an immortal soul, they might suppose we exist as some sort of immaterial conscious controller, a categorically mental supervisor of action. This essential, non-physical self isn’t bound by deterministic laws in making choices; instead, consciousness gives it libertarian, contra-causal free will, and it, not the brain, is ultimately in charge of action. But on a naturalistic understanding, such folk dualism has it wrong: consciousness can’t matter in this way since it isn’t independent of, and doesn’t add to, what the brain does in controlling behavior.

Of course, the experience of being a core mental self is robust for many of us: for most waking moments, it feels as if I’m a simple, non-extended “I” located somewhere behind my eyes. If I don’t take correction from the mind sciences, I might suppose that this conscious me is a non-physical substance in charge of action, e.g., a soul or mental controller, not just a phenomenal self-model (Metzinger 2003) that accompanies the neural processes that, from an empirical, third person perspective, are actually controlling  behavior. Here’s a description, gleaned from an online discussion, of what I’ll colloquially call “soul control,” that exerted by the immaterial conscious self, connected to but distinct from the brain:

We have evolved consciousness which gives us the ability to make completely uncaused choices…Though we are tethered to our experiences and our brains, our reasons for our choices are never the causes of our actions. Our choices and actions happen independently of our reasons, i.e., our reasons do not compel us to act in any way. Nothing compels our choices. We're always free going forward. In every moment choice is open to us. That is what consciousness has enabled - free will among living beings.[8]

It’s an empirical question as to what proportion of laypersons subscribe to something along the lines of the view just quoted. Cross-cultural research suggests that belief in something akin to libertarian free will is widely held (Sarkissian et al. 2010), and a survey of a sample of the U.S. general population (not college students) found that majorities evinced beliefs in a non-physical soul that governs behavior (Nadelhoffer in press). A 2009 Harris poll found that 71% of respondents reported belief in a soul that survives death (Harris Poll 2009).

To those of dualistic persuasions, freedom, dignity and autonomy might be at risk should it turn out that the conscious self isn’t the controller they suppose it to be. If the brain’s action guiding capacities, even those associated with consciousness, are simply the working out of complex causes and effects, unsupervised by a self that transcends causal laws, then maybe we aren’t really authors of action but just pass-throughs for impersonal causation. And if we’re not personally responsible in the way contra-causal control makes possible – that is, ultimately, buck-stoppingly responsible – perhaps we’re not responsible at all and can’t be held responsible.  In which case, why behave morally? The naturalistic understanding of consciousness-as-accompaniment, not independent controller, might provoke demoralization by determinism.

This possibility seems borne out by research in which subjects exposed to statements challenging free will, compared to controls not thus exposed, cheated more (Vohs and Schooler 2008) and behaved more aggressively (Baumeister, Masicampo and DeWall 2009) in experimental situations. Such findings seem to support the recommendation, floated by philosopher Saul Smilansky (2005), that we should conceal the truth that we’re likely not exceptions to natural cause and effect. In particular, we can’t responsibly let non-academics know that, because the conscious self isn’t an immaterial contra-causal controller, consciousness doesn’t matter in the way they suppose it does.

But of course it will be difficult to keep the science-based debunking of soul control under wraps. Indeed, the trend seems to be toward more publicly expressed skepticism about libertarian freedom, exemplified by atheist Sam Harris’s monograph Free Will (Harris 2012), critiques from established scientists (Snyder 2012), and a spate of articles in the news and popular press (Clark 2010, Nadelhoffer 2011, Ortega 2012). Moreover, any attempt to suppress public awareness and discussion of naturalistic understandings of the self violates a cardinal value of science and the open society: the free exchange of information and opinion.

Still, as Dennett reminds us in his books on free will (1984, 2003), there are better and worse ways to let this particular cat out of the bag. What might help in addressing worries about losing soul control (not that we ever had it) is to show why an immaterial, causally exempt controller couldn’t possibly contribute to making choices. This is for a very simple reason, suggested by the quote above: a controller not under the influence of anything, in particular one’s own motives and reasons for action, isn’t in a position to guide behavior. Were consciousness an uninfluenced arbiter hovering above neurally encoded motivation, perception and cognition, it would literally have no reason to add its (immaterial) weight to any alternative under consideration. Any decision process with an unmotivated decider having the final say would of necessity fail to decide. Moreover, as we’ve seen in part 1, there’s no plausible account of how an immaterial controller would get a grip on the brain and body. So not only is there no scientifically respectable evidence for the radically libertarian agent, such an agent would have neither the motives nor the means to contribute to practical decision-making. The ultimate control, authorship and responsibility conferred by the soul are not only naturalistic impossibilities, but not worth wanting, as Dennett (1984) would put it, for actual choice-making.[9]

This leads to a second, crucially important reminder to put front and center when debunking soul control: human persons, by virtue of their neurally instantiated perceptual, cognitive and behavior control capacities, are real agents that exert robust local control. As much as science suggests we might be fully determined in our wants, character and abilities, we’re just as causally effective – more so, in many cases – as the genetic and environmental factors that shape us. Determinism isn’t fatalism, the idea that our actions don’t make a difference in determining our fates (Holton in press).

What’s needed for authentic authorship and autonomy is what we already are: very sensitive informavores, endowed by nature and nurture with physically embodied capacities to select between alternatives, often well in advance of action, as determined by our needs, desires and best guesses about the consequences of behavior. Dennett (2012) expresses this nicely:

When the ‘control’ by the environment runs through your well-working perceptual systems and your undeluded brain, it is nothing to dread; in fact, nothing is more desirable than being caused by the things and events around us to generate true beliefs about them that we can then use in modulating our behavior to our advantage!

In contrast to Dennett’s optimistic take on determinist agency, Sam Harris says in his monograph that we are “bio-chemical puppets” (Harris 2012, 47), and the book’s cover tells the same story: the letters spelling “Free Will” dangle from marionette strings. Regrettably, this suggests that without soul control we end up as passive victims – puppets – of cause and effect.[10] In comparison with the ultimate (but impossible) control exerted by an immaterial conscious decider, we might be tempted to conclude that a naturalistic understanding of agency disempowers us. But if we abandon dualism, see the impossibility of soul control, and acknowledge the causal contribution of our behavior-guiding capacities (conscious and unconscious), there’s no justification for thinking of ourselves as puppets.

Getting this right is essential, since the puppet view could well prove demoralizing in just the ways suggested by the findings of Vohs and Schooler (2008) and Baumeister, Masicampo and DeWall (2009). Further, if a science-based, naturalistic view of the self is thought to imply the end of effective agency, this will impede the acceptance of science and naturalism: no one wants to be proved a puppet. In showing that consciousness doesn’t matter in the way many folk might suppose, scientists and naturalists must take care not to give the impression that our conscious capacities don’t matter, or that we cease being agents who can be held responsible, at least in the consequentialist sense of being answerable to a moral community (Nadelhoffer 2011).

On the positive side, debunking soul control, done properly, can increase our autonomy, paradoxically enough, by revealing that there’s no core self that’s immune to influence. Knowing that we are fully caused underscores the importance of becoming aware, as our time and energy permit, of all the behaviorally significant factors impinging on us, including attempts at manipulation via situational and subliminal determinants. As discussed in part 2, such awareness widens the potential scope of individual conscious control. And seeing the self as completely embedded in a causal matrix draws attention to the full range of conditions – genetic and environmental, and intentionally mediated or not – that shape the person. Although individuals are local controllers who must be held responsible in order to guide their action toward the good, the buck no longer stops with them (more precisely, it never did). Dismantling beliefs in the immaterial conscious controller and ultimate personal responsibility therefore promises to empower us, make our responsibility ascriptions more realistic, and make our responsibility practices less retributively punitive (Greene and Cohen 2004, Clark 2005b, Waller 2011, Shariff et al. 2012). On the other hand, protecting these beliefs, as some recommend, keeps us in the dark about the actual causes of behavior and abets regressive criminal justice and social policies premised on the myth of the ultimately responsible self (Clark 2004).

Conclusion

An adequate response to folk worries about the role of the conscious self will demonstrate the reality and viability of a naturalized autonomy (Waller 1998). Such autonomy, expressed in voluntary behavior in a social context, establishes the basis for naturalized responsibility: although we are entirely products of conditions we didn’t choose, we are nevertheless identifiable sources of action, justifiably subject to behavior-guiding and norms-reinforcing interventions – our responsibility practices. To dethrone consciousness as a contra-causal controller is therefore not a threat to either autonomy or responsibility, suitably naturalized, and may well help to improve our responsibility practices as judged from a progressive humanistic standpoint.[11]

What is a threat, as we’ve seen in part 2, is the role of unconscious situational and motivational influences that drive behavior in opposition to our endorsed values. We are not just our conscious processes, those that play central roles in guiding complex, flexible behavior and that somehow give rise to conscious experience; we are, for better or worse, our unconscious susceptibilities and biases as well. Fortunately, the science that brings the role of unconscious processes to our (conscious) attention also makes them potential targets for intervention, potentially increasing the scope of reflective control on behalf of our endorsed values, and better integrating and strengthening the self we aspire to be. Unfortunately, the same knowledge, deployed by skillful manipulators, can be used against us, so the fulfillment of naturalized autonomy is by no means assured. Conscious vigilance, informed by science, will always be in order, and we must hold would-be manipulators responsible when their agendas conflict with our own best interests.

As we’ve seen, both conscious and unconscious processes matter in their respective behavior controlling roles. But so does consciousness – phenomenal experience itself – even if it turns out not to be the contra-causal controller many folk might suppose (part 3), and even if it doesn’t play a causal role in third person explanations (part 1). Whether or not naturalized conceptions of autonomy and agency ever replace the folk dualism of soul control, and whatever the ultimate impact of understanding unconscious processes might be on our autonomy, the subjective significance of phenomenal experience will stand unchallenged. And the puzzle of explaining consciousness, which matters considerably to scientists and philosophers, still stands as well. The first person reality of phenomenal experience, bringing with it the reality of moral concern, still awaits a conceptually and empirically transparent integration into a naturalized worldview.

TWC, July 2013

References

Baars, Bernard J. 1997. “Contrastive Phenomenology: A Thoroughly Empirical Approach to Consciousness.” In The Nature of Consciousness: Philosophical Debates, edited by Ned Block, Owen Flanagan and Guven Guzeldere, 187-202. Cambridge: MIT Press/Bradford Books.

Bargh, John. A. 1997. “The Automaticity of Everyday Life.”  In The Automaticity of Everyday Life: Advances in Social Cognition, edited by Robert S. Wyer, Jr., 1-61. Mahwah, NJ: Erlbaum.

Baumeister, Roy F., E. J. Masicampo, and C. Nathan DeWall. 2009. “Prosocial Benefits of Feeling Free: Disbelief in Free Will Increases Aggression and Reduces Helpfulness.” Personality and Social Psychology Bulletin 35(2): 260–268.

Baumeister, Roy F., E. J. Masicampo, and Kathleen D. Vohs. 2011. “Do Conscious Thoughts Cause Behavior?” Annual Review of Psychology 62: 331-361.

Baumeister, Roy F. and John Tierney. 2011. Willpower: Rediscovering the Greatest Human Strength. New York: Penguin Press.

Block, Ned. 1995. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18: 227-287.

Block, Ned, Owen J. Flanagan and Guven Guzeldere, editors. 1997. The Nature of Consciousness: Philosophical Debates. Cambridge, MA: MIT Press/Bradford Books.

Bloom, Paul. 2004. Descartes’ Baby. New York: Basic Books.

Chalmers, David J. 2010. The Character of Consciousness. Oxford: Oxford University Press.

Clark, Thomas W. 2004. “Facing Facts: Policy Implications of the Humanist Commitment to Science.” In Toward a New Political Humanism, edited by Barry F. Seidman and Neil J. Murphy, 343-354, Amherst, NY: Prometheus Press.

Clark, Thomas W. 2005a. “Killing the Observer.” Journal of Consciousness Studies 12 (4–5): 38–59.

Clark, Thomas W. 2005b. “Crime and Causality: Do Killers Deserve to Die?” Free Inquiry 25(2): 34-37.

Clark, Thomas W. 2006. “Holding Mechanisms Responsible.” Lahey Clinic Medical Ethics Journal 13(3): 10-11.

Clark, Thomas W. 2010. “Free Will Roundup.” Accessed October 26, 2012. Free Will Roundup.

Crick, Francis and Christof Koch. 2003. “A Framework for Consciousness.” Nature Neuroscience 6(2): 119-126.

Darley, John M. and Daniel C. Batson. 1973. “‘From Jerusalem to Jericho’: A Study of Situational and Dispositional Variables in Helping Behavior.” Journal of Personality and Social Psychology 27 (1): 100-108.

Dehaene, Stanislas and Lionel Naccache. 2001. “Towards a Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework.” Cognition 79: 1-37.

Dehaene, Stanislas and Jean-Pierre Changeux. 2011. “Experimental and Theoretical Approaches to Conscious Processing.” Neuron 70, April 28, 2011.

Dennett, Daniel C. 1984. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1992. Consciousness Explained. Boston: Little, Brown and Company.

Dennett, Daniel C. 1998. “The Myth of Double Transduction.” In Toward a Science of Consciousness II: The Second Tucson Discussions and Debates, edited by Stuart R. Hameroff, Alfred W. Kaszniak, and Alwyn C. Scott, 97-107. Cambridge, MA: MIT Press.

Dennett, Daniel C. 2003. Freedom Evolves. New York: Viking Press.

Dennett, Daniel C. 2012. “Erasmus: Sometimes a Spin Doctor is Right.” Amsterdam: Erasmus Prize Institution, forthcoming.

Greene, Joshua and Jonathan Cohen. 2004. “For the Law, Neuroscience Changes Nothing, and Everything.” Philosophical Transactions of the Royal Society of London, B 359: 1775–1785. Accessed October 26, 2012. doi:10.1098/rstb.2004.1546.

Hardin, Curtis D. and Mahzarin R. Banaji. In press. “The Nature of Implicit Prejudice: Implications for Personal and Public Policy.” In The Behavioral Foundations of Policy, edited by Eldar Shafir, 13-31. Princeton: Princeton University Press.

Harris, Sam. 2012. Free Will. New York: Free Press.

Harris Poll. 2009. “What People Do and Do Not Believe In.” New York: Harris Interactive, Inc. Accessed on October 26, 2012. http://www.harrisinteractive.com/vault/Harris_Poll_2009_12_15.pdf.

Holton, Richard. In press. “From Determinism to Resignation and How to Stop It.” In Decomposing the Will, edited by Andy Clark, Julian Kiverstein and Tillman Vierkant. Oxford: Oxford University Press.

Huxley, Thomas. 1874. "On the Hypothesis that Animals are Automata, and its History." The Fortnightly Review (n.s.) 16: 555–580.

Jack, Anthony I. and Tim Shallice. 2001. “Introspective Physicalism as an Approach to the Science of Consciousness.” Cognition 79: 161-196.

Kanwisher, Nancy. 2001. “Neural Events and Perceptual Awareness.” Cognition 79: 89-113.

Koch, Christof. 2012. Consciousness: Confessions of a Romantic Reductionist. Cambridge, MA: MIT Press. 

Lawson, Chappell, Gabriel S. Lenz, Michael Myers, and Andy Baker. 2010. “Candidate Appearance, Electability, and Political Institutions: Findings from Two Studies of Candidate Appearance.” World Politics 62(4): 561–93.

Libet, Benjamin, E. W. Wright and C. A. Gleason. 1982. “Readiness Potentials Preceding Unrestricted ‘Spontaneous’ vs. Pre-planned Voluntary Acts.” Electroencephalography and Clinical Neurophysiology 54:322-335.

Macrae, C. Neil and Lucy Johnston. 1998. “Help, I Need Somebody: Automatic Action and Inaction.” Social Cognition 16: 400-417.

Metzinger, Thomas. 2003. Being No One. Cambridge, MA: MIT Press/Bradford Books.

Morse, Stephen J. 2004. "Reason, Results, and Criminal Responsibility." University of Illinois Law Review 2: 363-444. 

Nadelhoffer, Thomas. 2011. “The Threat of Shrinking Agency and Free Will Disillusionism.” In Conscious Will and Responsibility: A Tribute to Benjamin Libet, edited by Lynn Nadel and Walter Sinnott-Armstrong, 173-188. Oxford: Oxford University Press.

Nadelhoffer, Thomas. In press. “Dualism, Libertarianism, and Scientific Skepticism about Free Will.” In Moral Psychology: Neuroscience, Free Will, and Responsibility (Vol. 4), edited by Walter Sinnott-Armstrong. Cambridge, MA: MIT Press.

Nahmias, Eddy. 2011. “Intuitions about Free Will, Determinism, and Bypassing.” In The Oxford Handbook of Free Will, 2nd Edition, edited by Robert Kane, 555-576. Oxford: Oxford University Press.

Nickerson, Raymond S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2(2): 175-220.

Ortega, George. 2012. “Free Will Refuted in the News: An Explosion of Coverage since 2010.” Accessed October 26, 2012. http://causalconsciousness.com/free%20will%20in%20the%20news.htm.

Parvizi, Josef and Antonio Damasio. 2001. “Consciousness and the Brainstem.” Cognition 79(1-2): 135-159.

Robinson, William S. 2010. “Epiphenomenalism.” Wiley Interdisciplinary Reviews: Cognitive Science 1(4):539–547.

Robinson, William S. "Epiphenomenalism." 2012. In The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), edited by Edward N. Zalta. Accessed October 26, 2012. http://plato.stanford.edu/archives/sum2012/entries/epiphenomenalism/.

Sarkissian, Hagop, Amita Chatterjee, Felipe De Brigard, Joshua Knobe, Shaun Nichols, and Smita Sirker. 2010. “Is Belief in Free Will a Cultural Universal?” Mind and Language 25(3):346-358.

Shariff, Azim F., Johan Karremans, Joshua D. Greene, Corey Clark, Jamie Luguri, Roy F. Baumeister, Peter Ditto, Jonathan W. Schooler and Kathleen D. Vohs. 2012. “Diminished Belief in Free Will Increases Forgiveness and Reduces Retributive Punishment.” Manuscript submitted for publication.

Smilansky, Saul. 2005. “Free Will, Fundamental Dualism, and the Centrality of Illusion.” In The Oxford Handbook of Free Will, 2nd Edition, edited by Robert Kane, 425-441. Oxford: Oxford University Press.

Snyder, Sam. 2012. “The End of Free Will.” Accessed October 26, 2012. http://samsnyder.com/free-will/

Sommers, Tamler. 2011. Relative Justice: Cultural Diversity, Free Will, and Moral Responsibility. Princeton, NJ: Princeton University Press.

Soon, Chung Siong, Marcel Brass, Hans-Jochen Heinze and John-Dylan Haynes. 2008.  “Unconscious Determinants of Free Decisions in the Human Brain.” Nature Neuroscience 11:543-545.

Todorov, Alexander, Anesu N. Mandisodza, Amir Goren and Crystal C. Hall. 2005. “Inferences of Competence from Faces Predict Election Outcomes.” Science 308(5728):1623–1626.

Tononi, Giulio. 2012. Phi: A Voyage from the Brain to the Soul. New York: Pantheon.

Vohs, Kathleen. D. and Jonathan W. Schooler. 2008. “The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating.” Psychological Science 19:49–54.

Waller, Bruce N. 1998. The Natural Selection of Autonomy (SUNY Series in Philosophy and Biology). Albany: State University of New York Press. 

Waller, Bruce N. 2011. Against Moral Responsibility. Cambridge, MA: MIT Press.

Wegner, Daniel M. 2003. The Illusion of Conscious Will. Cambridge, MA: MIT Press/Bradford Books.

Wilson, Timothy D. 2002. Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Harvard University Press.

Notes

[1] In this chapter I put aside, for the most part, questions concerning moral responsibility, credit and blame. Although I’m sympathetic with Bruce Waller (2011), Tamler Sommers (2011) and other moral responsibility skeptics, as a practical matter we can and must be held responsible in the consequentialist sense described by Thomas Nadelhoffer (2011).

[2] Phenomenal consciousness contrasts with so-called access consciousness as described originally by Ned Block: information is access conscious if it’s accessible for action control, whether or not there’s any concurrent phenomenology (Block 1995).

[3] Intentional states such as beliefs and desires are attributed on the basis of behavioral criteria, so the claim that certain behavior controlling neural processes realize these states seems unproblematic, even if their neural realizers can’t be precisely specified.

[4]  As experiencing subjects, we consist of our qualitative states, including the feeling of being an experiencing subject; we’re not in an observational relationship to experience itself (Clark 2005a).

[5] Replicants are portrayed as fully sentient beings, with a desire to live as ends in themselves, hence the poignant moral dimension of Blade Runner.

[6] For an enlightening discussion of the ethical problems posed by creating smart AIs, see the final chapter of Metzinger (2003).

[7] For reviews of literature on unconscious influences on behavior, see Baumeister, Masicampo and Vohs (2011) and Nadelhoffer (2011).

[8] Posted by Rose-ellen Caminer in a discussion at the Facebook Naturalism group.

[9] Soul control is worth wanting, however, if one wants to preserve strong (e.g., Kantian) desert-based conceptions of responsibility. Dropping libertarian free will may make it more difficult to sustain such conceptions, in contrast to consequentialist conceptions (Greene and Cohen 2004, Clark 2005b, Nadelhoffer 2011, Waller 2011, Shariff et al. 2012).

[10] To be fair to Harris, he also says that we are puppets that can pull their own strings. So although “we are ultimately being steered” (p. 47), it seems we also have some local control on his view.

[11] On improving our responsibility practices, see the references cited in note 9.
