The strange situation of being conscious
Perhaps the most startling revelation in neuroscientist Anil Seth’s wonderfully written book on consciousness is that the world you so concretely see, hear, and touch is in fact an experiential, brain-based construction, not the world itself. What you’ve got in experience is an appearance of reality that your brain somehow cooks up. We naively take the appearance to be reality itself, when in fact, as Seth puts it, waking consciousness is like a brain-induced hallucination, albeit one that’s constrained by sensory input, thus a controlled hallucination. Making the same point in his book Being No One, philosopher Thomas Metzinger likens consciousness to “vigorously dreaming at the world.” As conscious subjects we are in the Matrix: we inhabit, we experientially consist of, a simulation that the brain constantly updates as we go about our business. This is a very strange situation indeed, given that the world in experience seems untranscendably physical, not at all a mental construction.
Why, one wonders, would conscious experience – a virtual reality – interpose itself between the physical brain and the physical world? Why does consciousness exist at all, and how does the brain cook it up? Why is it “something to be like” a human being, someone who experiences the myriad sensory qualities (“qualia”) usually present in consciousness – the phenomenology of colors, sounds, tastes, pains, pleasures, and emotions? (Presumably there is nothing it is like to be a stone.) Is phenomenology something physical, non-physical, or what? Who or what is this you, apparently sitting in the middle of experience, to whom the world appears? And for what sorts of systems, besides us humans, does consciousness arise?
Being You offers deep and entertaining insight into many facets of these questions, primarily in the context of predictive processing: the idea that the brain instantiates a predictive model of the possible causes of sensory input, a model that allows us to successfully navigate the world. Our being conscious subjects is thus inextricably tied to our being complex biological systems that represent reality to stay alive – we are “beast machines” bent on survival (“beast” meaning animal, not monster).
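The core predictive-processing idea can be illustrated with a toy calculation (my own sketch, not anything from the book): a percept as a Bayesian best guess, a precision-weighted compromise between the brain’s prior prediction and noisy sensory evidence. All the numbers and names here are hypothetical, chosen only to show the shape of the inference.

```python
# Toy "controlled hallucination": the percept is the posterior over a
# hidden cause, combining a prior prediction with noisy sensory input.
# This is an illustrative sketch, not Seth's actual model.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Gaussian conjugate update: the posterior mean is a
    precision-weighted average of prediction and observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_precision = prior_precision + obs_precision
    post_mean = (prior_mean * prior_precision + obs * obs_precision) / post_precision
    return post_mean, 1.0 / post_precision  # posterior mean and variance

# Hypothetical numbers: the brain predicts a surface reflectance of 0.5,
# the senses report a noisy 0.8. With equal precisions the "experienced"
# value lands at the midpoint; a sharper prior would pull it toward 0.5.
percept, uncertainty = bayes_update(prior_mean=0.5, prior_var=0.04,
                                    obs=0.8, obs_var=0.04)
print(round(percept, 3))
```

The point of the sketch is that the output is neither the prediction nor the raw input: the “hallucination” is controlled by sensory evidence in proportion to how reliable that evidence is, which is the sense in which the brain’s model both constructs and tracks the world.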
But Seth warns us off the big question – the so-called “hard problem” of consciousness – of precisely how the existence of phenomenology is entailed by our being the sorts of creatures we are. He says we should put that aside, at least for now, in favor of exploring the “real problem” of consciousness, of how various brain-instantiated mechanisms might explain the major structural and variational properties or dimensions of conscious experience: its level (e.g., dreaming vs. waking), the different types of conscious contents (red, pain, sweet, fear), and the sense of being a conscious self (you as you now experience being someone). If and when these properties of consciousness are fully explained, perhaps in terms of predictive processing, the hard problem might dissolve; we’ll likely have closed the “explanatory gap” between neural goings-on and the existence of phenomenology. Consciousness will have been naturalized by, and within, the physical sciences, just as life has been naturalized.
No surprise, then, that Seth is a self-described, but properly circumspect (22), physicalist, who thinks the best working (but defeasible) assumption when researching consciousness is that it’s either identical to, or something emergent from, the purely physical stuff that science finds in spacetime (18). The hunt for neural mechanisms subserving the properties of consciousness he targets (level, content, self) makes perfect sense on this assumption. But as I think Seth realizes, physicalism as a metaphysical thesis about reality, including phenomenal experience, is not central to science, an explanatory undertaking which is, or should be, metaphysically agnostic. Depending on how we define the physical (fundamental “stuff”? what’s observable? what physics says exists?), to naturalize is not necessarily to physicalize.
To keep things physicalism-friendly, Seth does his best to limit his investigation of consciousness to its structure and variability, not the existence of phenomenology per se. This gives physicalist neuroscience its traction: it can investigate the neural and functional correlates of the structural and compositional variation we find when comparing conscious experiences, building explanatory links from the physical to the phenomenal. But the inconvenient fact intrudes that all this structure and composition involves phenomenal particulars, the non-decomposable basic qualitative feels of experience such as red, pain, and sweet. Seth does an admirable job adducing possible explanations of experiential structure and content from a predictive processing perspective, which alone is worth the price of admission. But the question of qualia – why phenomenology at all? – hovers, and won’t go away.
Qualia, where are you?
Although qualia notoriously present problems for physicalism, Seth in my view correctly holds that phenomenology is real, not, as some physicalists argue when attempting to evade the hard problem, an illusion. This means that in explaining consciousness we’re obliged, eventually, to account for the two fundamental characteristics of all phenomenal experience: that it’s qualitative, and that it’s subjective – experiences only exist for the conscious system.
Seth acknowledges that subjectivity is a fact about consciousness: “…the explanatory targets of consciousness science are subjective – they exist only in the first person” (33). Right away this puts pressure on physicalism, at least construed as the thesis that all that’s real exists objectively, out there and potentially observable in the spatio-temporal world. For an experience to exist only for the mind system in question makes it mind-dependent and private to the system; and indeed, we don’t see experiences sitting in spacetime.
One essential explanatory target for a science of consciousness is therefore subjectivity itself: how does what’s existentially subjective and unobservable end up attached to – arise for – certain physically objective, observable systems like ourselves? How does physicalism handle that? As one aspect of the supposedly deferrable hard problem, this question doesn’t get raised in Being You, at least not directly.
But it comes up implicitly in Seth’s discussion of one frequently cited example of qualitative experience, good old sensory red (original emphasis is in italics, my emphasis is underlined):
When I have the subjective experience of seeing a red chair in the corner of the room, this doesn’t mean that the chair actually is red – because what would it mean for the chair to possess a phenomenological property like redness?... Does this mean that the chair’s redness has moved from being “out there” in the world to “in here” inside the brain? In one sense the answer is clearly no. There’s no red in the brain in the naïve sense of there being some kind of red pigment – or “figment” – inside the head… The only sense in which one could locate redness “in the brain” is simply because that’s where the mechanisms underlying perceptual experience are to be found. These mechanisms are, of course, not red.
When I look at a red chair, the redness I experience depends both on properties of the chair and properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light. There is no redness-as-such in the world or in the brain. (90-1)
What we have here is a rather extraordinary proposition, namely that the experience of red, and all conscious experiences, are not locatable in the spatio-temporal world, even though the neural mechanisms that somehow subserve phenomenology are in the head, and thus in the world. How then are we to construe experiences as physical? What sort of thing or property, if any, could be material but not locatable in spacetime?
I think a possible answer lurks in Being You, but (understandably) is not made explicit. Seth says in the second paragraph of the quote above that the redness I experience corresponds to the content of perceptual predictions as carried out by certain neural processes in my head. These mechanisms, engaged in representing reality by means of predictive models constrained by sensory input (explained in chapters 5 and 6), are observables, clearly, but what about their content?
I’ve suggested that as a general rule representational content (e.g., concepts, propositions, numbers), although real, is indeed not an observable, not locatable in the world that it participates in representing. We shouldn’t try to reify or objectify it. Now, if we were to hypothesize that the qualities of experience don’t merely correspond to perceptual content but indeed are a type of content, then this might account for the existential privacy – the subjectivity, non-objectivity – of consciousness. The content of my perceptual mechanisms, unlike the mechanisms themselves, is not an observable, but only available to me as a conscious, experiencing subject. It’s available in the sense that experientially I consist of said content, that of a self-in-the-world reality model. If (an open question) phenomenology is a variety of representational content – qualitative content – this might explain why we’re not going to find redness in the head nor out in the world. If someone asks you where to find dear old sensory red, I think the correct reply is: nowhere.
An open question for physicalism is whether experiential content, as something real but on the face of it not an observable, can be cashed out as identical to, or an emergent property of, its physical vehicles (neural ensembles in our case). And if content as a property or phenomenon isn’t locatable in spacetime, is it fair to call it physical, or would representational be more appropriate? I’ll leave these questions to fend for themselves since I want to consider three further points about consciousness and one about free will that Seth raises. But whatever additional caveats I come up with, do not let them deter you from Being You.
A causal role for consciousness?
A perennial worry is that explanations of consciousness might show it to be causally irrelevant to behavior, that is, epiphenomenal, even though we commonsensically suppose that it plays a causal role: my pain causes me to wince and avoid hot stoves henceforth. Ok, but isn’t the brain doing all the causal work necessary to move my body out of harm’s way? If experience is identical to something that neurons are doing, then its qualities don’t causally contribute anything over and above neuronal activity; we needn’t appeal to phenomenology per se when explaining behavior. If it’s not identical, then the problem of mental causation looms: how does consciousness as some additional phenomenon or property causally contribute to behavior control? Seth doesn’t tackle this issue head on, at least not that I could see, but when discussing the possibility of an embodied conscious robot, he seems to suggest that experience is causal:
The signals flowing through its circuits implement a generative model of its environment, and of its own body. It is constantly using this model to make Bayesian best guesses about the causes of its sensory inputs. These synthetic controlled (and controlling) hallucinations are geared, by design, toward keeping the robot in an optimal functional state – to keep it, by its own lights, “alive.” (262)
From this it seems that hallucinations – that is, experiences – are tasked with behavior control. But given that experiences are not, according to Seth, occupants of spacetime (see the section above), it’s hard to see how they can gain purchase on the behaving body, whether biological or artificial. I’d suggest the solution is to give experiences, construed as representational content, an explanatory, but not causal, role in behavior. I can legitimately cite pain to explain why I wince since as content it reliably arises in parallel with the neural vehicles carrying said content and, ultimately, those that cause muscular contractions. So we needn’t worry that experience, conceived as qualitative content, doesn’t operate as a causal controller since its associated neural processes – the content vehicles – are already reliably in control.
Is life necessary for consciousness?
Consistent with his “beast machine” theory, Seth doubts that artificial non-biological systems, even the elaborate, embodied robot mentioned above, are likely candidates for consciousness, but it isn’t clear to me why. If experiences supervene on the operation of a predictive self-in-the-world, behavior-controlling model, perhaps as its representational content, then it would seem that endowing an artificial system with a human-equivalent model would entail consciousness. Seth’s view makes consciousness out to be a system property having to do with maintaining bodily integrity, so why would the system’s physical substrate matter when its continued organization is primarily at stake? That it does would mean the physical characteristics of system components at some level contribute to consciousness, not just their organization into a predictive model, but nothing like that figures in his account. Seth acknowledges that his intuition that the naturally evolved “materiality of life” is necessary for experience is just that, an intuition (263), so validation of his (tentative) biological chauvinism must await a settled theory of consciousness. That theory might show precisely why it is that “life, rather than information processing, breathes fire into the equations” (264). Or it might not.
Is anesthesia, or death, oblivion?
Having a few times been fully anesthetized, Seth in the book’s prologue likens the state to oblivion, what we might anticipate at death, and something even to look forward to:
Under general anesthesia…I could have been under for five minutes, five hours, five years, or even fifty. And “under” doesn’t quite express it. I was simply not there, a premonition of the total oblivion of death, and in its absence of anything, a strangely comforting one. (2)
Our conscious experiences are part of nature just as our bodies are, just as our world is. And when life ends, consciousness will end too. When I think about this, I am transported back to my experience – my non-experience – of anesthesia. To its oblivion, perhaps comforting, but oblivion nevertheless… When the end of consciousness comes, there is nothing – really nothing – to be frightened of. (8-9, original emphasis)
But of course we don’t ever experience oblivion, or nothing, so it’s not something we can look forward to as a “non-experience” to be enjoyed (“rest in peace”), or perhaps to be frightened of. What we have in anesthesia is an instantaneous transition from one experience to the next, with no subjective interruption, even after 50 years of being under. If such is the case, then the argument can be put that death, too, is not the onset of oblivion for the subject who dies. In which case, what should we anticipate at death? Not nothingness or oblivion, in any case.
Could you have done otherwise?
As a long-time skeptic about contra-causal, libertarian free will, I think Seth does us an important service in debunking the idea that we are, or need be, causal exceptions to nature. There’s no reason to think that consciousness as a natural phenomenon transcends the workings of physical law. On the beast machine theory, there’s no immaterial self or soul calling the shots when deciding or acting, only the material brain working hard to discern the best course of action, given one’s beliefs, desires, memories, and one’s current and projected environments. The agent is fully naturalized and has considerable causal, determinative powers, but it operates within the constraints of its biological nature and the world it must negotiate.
I made tea. Could I have done otherwise? In one sense, yes. There’s coffee in the kitchen too, so I could have made coffee. And when making the tea, it certainly seemed to me that I could have made coffee instead. But I didn’t want coffee, I wanted tea, and since I can’t choose my wants, I made tea. Given the precise state of the universe at the time, which includes the state of my body and brain, all of which have prior causes, whether deterministic or not, stretching all the way back to my origin as a tea-drinking semi-Englishman and beyond, I could not have done otherwise. You can’t replay the same tape and expect a different outcome, apart from uninteresting differences due to randomness. The relevant phenomenology – the feeling that I could have done otherwise – is not a transparent window on how causality operates in the physical world. (225, original emphasis)
Seth will be taken to task by philosopher Daniel Dennett for supposing it’s the unconditional, not conditional (counterfactual) sense of “could have done otherwise” that matters when thinking about causal determinism and human agency. But since many, perhaps most, folks suppose they could have done otherwise in an actual situation, or in a replay with all conditions the same (so supposedly have the unconditional ability to do otherwise), it really matters that they be set straight on this. Although apologists for libertarian free will aim to establish us as contra-causal controllers, there’s no evidence that we have the capacity – the ability – to transcend causation when deciding and acting. That said, Seth rightly observes that
The counterfactual aspect of the experience of volition is particularly important for its future-oriented function. The feeling that I could have done differently does not mean I actually could have done differently. Rather, the phenomenology of alternative possibilities is useful because in a future similar, but not identical, situation I might indeed do differently. (228, original emphasis)
With luck this will satisfy Dennett: we can acknowledge the importance of having the conditional ability to do otherwise, while also acknowledging the importance of seeing we don’t have the unconditional ability. We are not, as Dennett once put it so nicely, “moral levitators” who do right or wrong independently of our causal histories. To imagine we do is a very destructive bit of magical thinking.
Despite any disagreements that might arise with Seth’s approach or conclusions, some of which I’ve aired above, those interested in the quest to naturalize consciousness should not miss Being You. Seth’s presentation is lively throughout, with candid vignettes from his career that illustrate the very strangeness of what might be the nature of subjective experience. It isn’t clear where the science of consciousness will ultimately take us, but if and when science (kept honest by philosophy) reaches consensus, it will necessarily unite our understanding of phenomenology with that of its physical and, dare I say, representational basis. Such unification will help cement a coherent, systematic, and comprehensive naturalism that situates us in the natural world as one instance of its knowers. Consciousness is the untranscendable medium through which reality appears to each of us as individuals, and getting a grip on its nature via the collective philo-scientific enterprise is yet another step in situating ourselves. In portraying us as living modelers of reality, caught up in the model itself, Being You is an important and original contribution to understanding how conscious minds arise in the natural world.
 Being No One, 2003, p 52: “…a fruitful way of looking at the human brain…is as a system that constantly lets its internal autonomous simulational dynamics collide with the ongoing flow of sensory input, vigorously dreaming at the world and thereby generating the content of phenomenal experience.” One direct, experiential way to see that consciousness is a virtual reality is to have a lucid dream, and with a bit of practice just about anyone can have one.
 Daniel Dennett and Keith Frankish are notable illusionists; see their respective Journal of Consciousness Studies (JCS) papers on illusionism here and here, and my realist reply to them is in section 3 of Locating consciousness, also in JCS.
 Seth writes on the mind-dependence of experience and other mental phenomena, such as the concept of chairness: “It is because our perceptions have the phenomenological character of ‘being real’ that it is extraordinarily difficult to appreciate that, in fact, perceptual experiences do not necessarily – or ever – directly correspond to things that have a mind-independent existence. A chair has mind-independent existence; chairness does not.” (144)
 On 186-7 he says “In the same way that ‘redness’ is the subjective aspect of brain-based prediction about how some surfaces reflect light, emotions and moods are the subjective aspects of predictions about the causes of interoceptive signals. They are internally driven forms of controlled hallucination.”
 The other big “hard problem” question is why experiential content is not only private, but qualitative. Following Thomas Metzinger’s work, I suggest a possible answer in section 6.3 of Locating consciousness.
 About the status of content as a natural phenomenon, and something that might play a role in explaining consciousness, see Nicholas Shea’s 2018 prize-winning open access book, Representation in Cognitive Science.
 See also p. 193, where Seth says “The forms of self-perception [emotions and moods] are not merely about registering the state of the body, whether from the outside or from the inside. They are intimately and causally bound up with how well we are doing, and how well we are likely to do in the future, at the business of staying alive.” (my emphasis)
 On this see section 7 of Locating consciousness and Respecting privacy: why experience isn’t even epiphenomenal.
 In the preface to his book Elbow Room, Dennett issues this challenge: “It simply does not matter whether or not in precisely the same circumstances you would always do the same thing, and those who continue to suppose that it matters greatly…owe the rest of us an argument showing why.”
 Nadelhoffer, Thomas, et al. 2020. Folk intuitions and the conditional ability to do otherwise. Philosophical Psychology 33 (7): 968-996. doi: 10.1080/09515089.2020.1817884.
 Dennett on “moral levitation,” what some suppose is required for us to be moral agents: “Real autonomy, real freedom, requires the chooser be somehow suspended, isolated from the push and pull of…causes, so that when decisions are made, nothing causes them except you!” (Freedom Evolves, pp. 101-2, original emphasis).
 Supposing we have contra-causal free will deflects attention from the actual causes of behavior, thus undercutting effective efforts to ameliorate the human and planetary condition. It also sets us up as ultimate first causes, thus deserving of deep credit and blame, a belief that underwrites demonization, punitive social and criminal justice policies, and the claim that extreme economic inequalities simply reflect a just distribution of wealth.