Anil Seth’s Being You is hard to review, because he covers astonishingly wide territory in just a few hundred pages. Rather than compiling a systematic summary of everything he says (you can read the book for that – and I highly recommend you do), I’ll just share the important bits that come to mind.
First, we have the hard problem of consciousness, which is where Seth begins. There are many ways to formulate it, but one I’ve taken a liking to lately is this: why is it that when we crack open someone’s skull and look at their brain, we don’t see their inner world?
If we lived in a less mysterious universe, we would be able to open up their skull, peer into the layers of their brain, and get direct access to all the stuff they experience. We’d be able to see shades of red, or the sound of their inner monologue, or the sensations of their hands and feet. We’d see the inner universe of their thoughts, questions, emotions, and memories. But instead, when we open up their skull all we see is a lump of pink goo. When we look closer, we still don’t see what we’d expect—experiences, feelings, memories—but instead a vast network of tendrils that bears no obvious relation to what they’re feeling “on the inside”. What’s going on here?
People have wondered about this question for hundreds (thousands?) of years, and we still can’t agree on even a rough outline of an answer. Seth’s approach is to try to sidestep the hard problem by tackling what he calls the “real problem”. Instead of taking consciousness to be this one mysterious thing, he breaks it down into pieces, and tries to study each of them empirically.
In order to study consciousness, Seth breaks it down into three separate things: conscious level (the extent to which someone is awake and aware of their experience, on the spectrum between full concentrated awareness and complete unconsciousness under anesthesia), conscious content (the actual phenomenology of the person’s experience: the colors, sounds, smells, and internal sensations they experience), and conscious selfhood (the particular experience of being a person that is aware and has a body).
I’ll admit that throughout the course of reading the book, and even now as I review it, I never became fully convinced that yes, we can ignore the hard problem and just focus on the real problem. Perhaps I’ve been too Bernardo Kastrup-pilled into thinking that consciousness is indeed a hard problem—a mistake in our fundamental philosophical orientation towards the world—and that ignoring it is a kind of self-deception. But I picked up the book in an act of open-mindedness, and I continue to have exactly that: an open-mindedness towards Seth’s view. He might be right; I’m not thoroughly convinced, but I do think his line of questioning is well worth exploring further.
Seth expounds on a few core threads in the book: perception as a controlled hallucination, active inference, the free energy principle, and the continuity of life and mind. Let’s tackle each of these in turn.
Perception is a controlled hallucination
The naive view of perception is that we register the world exactly as it is. I see a table in front of me because there is, as a matter of straightforward fact, an actual table in front of me, exactly as it appears to me, independent of me. There are a number of ways we can break free of this naive realism:
- Color is not intrinsic to objects—because the color of an object will vary depending on the light shining on it—but is instead generated by our brains, so it’s not true that everything we perceive is exactly as it is in the world.
- Optical illusions make things look a certain way (e.g. there appears to be motion) when in reality they are not that way.
- There are many wavelengths of electromagnetic radiation—from infrared and radio waves to ultraviolet, X-rays, and gamma rays—that we cannot see, but which nonetheless exist.
- When you zoom in really closely on the objects around you, you find that their boundaries are ill-defined—at the level of atoms, it’s very difficult to discern what is and isn’t part of the table you’re looking at.
So, it’s obvious that we don’t perceive the world “exactly as it is”: we get a narrow window into it, shaped by the particulars of our perceptual and cognitive faculties. Seth describes our experience of the world as a “controlled hallucination”: we are hallucinating an external reality, but that hallucination is kept in check by sensory input from the external world.
It would be silly to suggest that reality is entirely made up by our minds, just as it’s silly to believe that we transparently perceive reality exactly as it is. So it’s obviously something in between. What the controlled hallucination idea does is add some detail to this middle-of-the-road view: our brain generates predictions of what the world is like (and these predictions are our conscious experiences), and then it makes modifications to these predictions based on sensory input. The brain says “I expect to see a table” and then compares that expectation with input from our eyes, ears, nose, and skin. If there is sufficient contradictory evidence from our senses, the brain will update its prediction to expect something else – a dog, perhaps. The brain is in the business of minimizing prediction error.
This is where things get interesting, because there are two ways to minimize the error between our brain’s expectations and the sensory signals it receives. One way is to change the prediction, as mentioned above, but the other way is to change the world itself, so that our sensory signals change. In Seth’s view, this is what our activity in the world is all about. When we pick up a coffee mug, the brain is continuously generating predictions of the form “the coffee mug will be lifted from the table”, “the coffee mug will be pulled slightly closer”, “the coffee mug will approach my mouth”, and it is recruiting our body to turn these predictions into reality. It is engaged in a sequence of self-fulfilling prophecies to make the world closer to what we want it to be. This is called active inference, a term coined by Karl Friston.
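To make the two error-minimizing routes concrete, here’s a toy sketch in Python—my own illustration, not Seth’s or Friston’s actual formalism. The agent tracks a single scalar (standing in for, say, the position of the coffee mug); the threshold and learning rate are invented parameters purely for illustration:

```python
import random

random.seed(0)  # for reproducibility of this toy example

def step(belief, world, error_tolerance=0.5, learning_rate=0.5):
    """One cycle of prediction-error minimization.

    `belief` is the brain's prediction; `world` is the actual state.
    """
    sensory_input = world + random.gauss(0, 0.05)  # noisy observation
    error = sensory_input - belief
    if abs(error) > error_tolerance:
        # Large surprise: revise the prediction (perceptual inference).
        belief += learning_rate * error
    else:
        # Small discrepancy: act on the world so it matches the
        # prediction (active inference, the self-fulfilling prophecy).
        world -= learning_rate * error
    return belief, world

belief, world = 0.0, 0.3
for _ in range(20):
    belief, world = step(belief, world)
# After a few cycles, belief and world have converged on each other.
```

The interesting design question—which this sketch papers over with a hard-coded threshold—is exactly the one raised below: how does the brain decide which of the two routes to take in any given case?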
Something about this that’s still fuzzy for me, though, is how the brain “decides” in any given case whether to update its top-level prediction based on contradicting sensory input, or to take action in the world to change the sensory input itself. It seems like the brain needs to make a discrete choice about whether to do one or the other, and I wonder how and when this decision takes place.1
The free energy principle
One reason I’m glad I read this book is that I finally got an explanation of this idea that my friends on Twitter can’t seem to stop talking about. Seth does a great job of intuitively explaining Karl Friston’s free energy principle without getting too deep into jargon or statistics.2
Here’s a rough sketch of the principle. What constitutes a living system? In order to be considered “alive”, an organism must continue to exist in a stable form for some period of time (its lifetime). To do so, it must maintain its separation from the world by way of a distinct boundary. In the case of a single-celled organism, we have the cell membrane; for mammals like us we have the skin. Not only does the organism need to maintain its boundary with the world, but it also needs to participate in an endless give-and-take, in which it absorbs nutrients, releases waste products, and keeps its internal variables (like body temperature, acidity, salt concentration) within some homeostatic balance.
For each variable that the body wants to track, we can imagine a “distribution” of values that this variable takes over the organism’s lifetime. So for humans, one variable is our internal temperature, and the temperature needs to remain at approximately 37 °C for the body to function. Given that this is the most common temperature for the body to be in during its lifetime, we can call this the “most expected temperature”. So, it follows almost trivially that in order to survive, the body needs to keep its temperature close to the “most expected temperature”. This is pretty much all that the free energy principle is saying. Here’s Seth:
According to the FEP, for a living system to resist the pull of the second law, it must occupy states which it expects to be in. Being a Good Bayesian, I’m using “expect” in a statistical sense, not in a psychological sense. It is a very simple, almost trivial idea. A fish in water is in a state it statistically expects to be in, because most fish are indeed in water most of the time. It is statistically unexpected to find a fish out of water, unless that fish is beginning to turn to mush. My body temperature being roughly 37 ºC is also a statistically expected state, compatible with my continued survival, with my not dissolving into mush.
It turns out that by some complicated mathematics that Seth doesn’t get into, staying close to these “statistically expected states” is equivalent to “minimizing free energy”. The reason why all of this is important is that it provides a unifying principle for biology at all levels of emergence—what an organism does is minimize its free energy, not just at the level of its individual cells, but also at the level of its organs and the nervous system. Minimizing prediction error, at the level of the brain’s perceptions, is an instance of the brain minimizing free energy for the organism as a whole.3
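The “statistically expected state” idea has a standard quantitative counterpart: surprisal, the negative log-probability of a state. Here’s a toy sketch (the numbers are mine, purely illustrative—the book doesn’t give them) treating body temperature as Gaussian-distributed over a lifetime:

```python
import math

MEAN_TEMP = 37.0   # the "most expected" temperature, in °C
STD_TEMP = 0.5     # assumed spread over a lifetime; illustrative only

def surprisal(temp):
    """Negative log-probability density under the Gaussian:
    how statistically unexpected a given temperature is."""
    z = (temp - MEAN_TEMP) / STD_TEMP
    return 0.5 * z**2 + math.log(STD_TEMP * math.sqrt(2 * math.pi))

healthy = surprisal(37.0)   # minimal surprisal: the expected state
feverish = surprisal(40.0)  # far higher: statistically unexpected
```

Surviving, on this picture, means staying in low-surprisal states; free energy, roughly, is a quantity the organism can actually evaluate that upper-bounds this surprisal, so minimizing it keeps the organism near its expected states.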
Octopuses are wild
In one of my favorite chapters in the book, Seth talks about animal consciousness. His personal guess is that all mammals are conscious—based on a number of anatomical and functional similarities to human brains—but he’s not sure about the rest of the animal kingdom. One animal in particular that he writes at length about is the octopus.
Octopuses are strange on many levels. First of all, they don’t have bones: they’re basically a blob of flesh with one hard beak—a “liquid animal” as Seth puts it. Pertinent to our discussion of minds, they have a very distributed nervous system: they have neurons throughout their tentacles. Further, their axons lack the myelin insulation that speeds conduction in vertebrate neurons, which makes the conduction of action potentials much slower, which in turn suggests that much of the computation their nervous systems do must be localized to the tentacles themselves. In fact, a tentacle can be separated from the octopus’s body and still display complex behaviors, like grasping food, for some time afterward. Their nervous systems—and by inference, their minds—are about as distinct from ours as we can find on this earth.
Octopus minds are not aquatic spin-offs from our own, or indeed from any other species with a backbone, past or present. The mind of an octopus is an independently created evolutionary experiment, as close to the mind of an alien as we are likely to encounter on this planet.
Not only do they differ from us at the level of the brain, but they even have distinct processes for transcribing DNA:
Octopuses do things differently even at the level of genes. In most organisms, genetic information in DNA is transcribed directly into shorter sequences of RNA (ribonucleic acid) which are then used to make proteins—the molecular workhorses of life. This is a well-established, textbook-level principle of molecular biology. But in 2017, this principle was upended by the discovery that RNA sequences in octopuses—and in a few other cephalopods—can undergo significant editing before being translated into protein. It’s as if the octopus is able to rewrite parts of its own genome on the fly. (RNA editing had been previously identified in other species, but in those instances, it plays only a relatively minor role.) What’s more, for the octopus, much of this RNA editing seems to be related to the nervous system. Some researchers have suggested that this prolific genome rewriting ability may partly underlie the impressive cognitive abilities of octopuses.
Functionalism and philosophy of mind
In the final chapter, Seth talks about machine intelligence and the question of functionalism in the philosophy of mind.
Here’s what functionalism states: consciousness is purely a matter of information processing. Put another way, what makes an entity conscious is the mathematical relationship between its inputs and outputs—not the actual substrate that those inputs exist in, or the substrate that the mathematical processing takes place in. In this view, we can abstract away all the information processing that the brain does and simulate that on a computer (or even with dominoes and pipes of water), and that would result in a conscious experience much (exactly?) like ours.
Seth maintains a “suspicious agnosticism” towards functionalism, while acknowledging that there are no hard knockdown arguments against it. I share this suspicion. In the past it made sense to me that consciousness should be substrate independent, but this intuition has shifted over time to a belief that the substrate in which our minds operate must have some important relationship to what our minds are like. One of the reasons I hold this view is a claim made by Andres Gomez Emilsson in this interview, where he states that the main argument against functionalism comes from the binding problem.
The binding problem is the question of how a system like the brain, whose constituents (neurons, atoms) are widely distributed in space, gives rise to one unified conscious experience. If consciousness is purely about information processing, we would need to establish that the information processing that the brain does—distributed across time and space—has some intrinsic cohesiveness to it. But Emilsson claims that there is no way to describe a system’s information processing that is frame-invariant (i.e. that looks the same regardless of the frame you’re viewing it from). Instead, he points out that there are other properties of physical systems that are frame-invariant, and these could be the actual basis of consciousness. Emilsson is a dual-aspect monist, meaning that he believes that the universe is one thing, and when viewed from a certain angle that thing looks like matter, and when viewed from another angle that thing looks like consciousness.
In any case, all of this is relevant for the question of mind uploading and machine consciousness. If functionalism is false, then it’s harder to imagine that we’d be able to upload our consciousness onto computers, because in that case our consciousness would be intrinsically coupled with the physical processes taking place in our brains. If functionalism is true, then there is hope for mind-uploading.
And if functionalism is true, it’s also easier to imagine that computers (and programs like ChatGPT) might be conscious. Here again, Seth doesn’t make a strong claim, but he does reiterate his tentative hypothesis that life and mind have a deeper continuity than most people think.
The beast machine theory proposes that consciousness in humans and other animals arose in evolution, emerges in each of us during development, and operates from moment to moment in ways intimately connected with our status as living systems. All of our experiences and perceptions stem from our nature as self-sustaining living machines that care about their own persistence. My intuition—and again it’s only an intuition—is that the materiality of life will turn out to be important for all manifestations of consciousness. One reason for this is that the imperative for regulation and self-maintenance in living systems isn’t restricted to just one level, such as the integrity of the whole body. Self-maintenance for living systems goes all the way down, even down to the level of individual cells. Every cell in your body—in any body—is continually regenerating the conditions necessary for its own integrity over time. The same cannot be said for any current or near-future computer, and would not be true even for a silicon beast machine of the sort I just described.
This shouldn’t be taken to imply that individual cells are conscious, or that all living organisms are conscious. The point is that the processes of physiological regulation that underpin consciousness and selfhood in the beast machine theory are bootstrapped from fundamental life processes that apply “all the way down.” In this view, it is life, rather than information processing, that breathes the fire into the equations.
1. Here’s one proposal. Perhaps these are different categories of perception: when it comes to perceptions about objects and their movement in the world, the brain takes the “active” approach of changing the world, and when it comes to everything else, the brain takes the “passive” approach of updating its predictions. ↩︎
2. Granted, I would’ve personally preferred a more rigorous treatment, and that’s what I’m looking for now. Let me know if you have suggestions. ↩︎
3. I recommend this video for more details from Friston himself. ↩︎