The idea that the hard problem of consciousness can be dissolved by flipping our metaphysics has been gelling in my mind for quite a while now. There are three threads I’m exploring that are all landing in the same place.
First, there’s Bernardo Kastrup’s work. In his theory of analytic idealism, reality is ultimately made of a single, primitive, instinctive mind, and this mind dissociates into a bunch of more complex minds (these are the individual conscious beings that we’re familiar with). “Matter is what consciousness looks like from across a dissociative boundary.” In the same way that an individual can dissociate, at which point their various selves are “across” some kind of impenetrable “boundary” from each other while being housed within the same brain, so can the “primitive mind” of reality dissociate.
In Kastrup’s view, the hard problem is nothing but a demonstration of how bad our metaphysics has become. He claims that up until the time of Galileo, it was fairly obvious to people that reality was mental, and that the physical was secondary. It was only when scientists in the seventeenth century had to appease the Church that they created this distinction between the mental and the physical, claiming that science only pertained to the physical. This allowed science to work in a contained little box without upsetting the Church, while leaving the mental (the important stuff, the real stuff) untouched and firmly within the Church’s grasp.
The second thread I’m exploring is the work of the Qualia Research Institute. I’m currently watching an interview (highly recommend) between Tasshin and Andrés Gómez Emilsson in which Emilsson says that his view is very close to dual-aspect monism. The universe is one thing, and from a certain angle it looks like physical systems, while from a different angle it looks like consciousness. QRI’s full position is “non-materialist physicalism with topological segmentation for binding”, which I’m still unpacking, but their general direction seems close to that of analytic idealism.
The third thread that leads me to some form of idealism is the fact that the papers I’ve read on consciousness so far all seem to sidestep solving the actual hard problem. For example, in Homing in on consciousness, the authors talk about how consciousness seems to be necessary for integrating sensory information in the service of musculoskeletal movement. They argue that we need consciousness for action selection, i.e. voluntary behavior:
Just as a prism combines different colors to yield a single hue, the conscious field permits for multiple response tendencies to yield a single, integrated action. Absent consciousness, skeletomotor behavior can be influenced by only one of the efference streams, leading to unintegrated actions. (6)
That’s all well and good, but also, why? Why would we actually need consciousness for integrated action? As the authors themselves point out, we conduct many complex behaviors without conscious involvement (see: reflexes, as well as weird disorders like alien hand syndrome and utilization behavior syndrome).
The authors do make this interesting point that in order to properly integrate sensory information and the effects of actions, all of these disparate streams of information (sights, smells, actions) need to be converted into the same kind of thing – i.e. conscious contents (emphasis mine):
Again, as with the case of (a) anticipated action effects, (b) actual action effects, and (c) information about the immediate environment, adaptive action selection requires that the conscious contents associated with both Stage 1 (e.g., the percept of the opening and the warmth) and Stage 2 (e.g., the smell and the inclination to stay in the cave) be, in terms of their functional consequences for action selection, the same kind of thing – comparable tokens existing in the same decision space. Thus, the conscious field permits for the contents about the smell and about the opening to influence action collectively. (14-15)
(Context to help understand the paragraph above: to elucidate their theory with a concrete example, they describe a hypothetical “creature in a cave”: a very primitive being that feels warm inside the cave, but also experiences a toxic smell that makes it inclined to leave the cave via an opening.)
It makes sense that the various percepts and anticipated action effects need to be “tokens in the same space”, but it’s unclear why that space has to feel like something rather than not feeling like anything.
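To make that nagging question concrete: the “comparable tokens in the same decision space” requirement is easy to state in purely computational terms. Here’s a minimal toy sketch of the cave example (my own illustration, not anything from the paper; the names and numbers are invented) in which disparate signals get converted into a common currency and a single action falls out. Nothing in it feels like anything, which is exactly the gap left open.

```python
# Toy rendering of "comparable tokens in a common decision space".
# My own illustration of the cave example; names and numbers are invented.

# Disparate inputs, each arriving through its own channel.
percepts = {
    "warmth_inside_cave": 0.6,  # bodily comfort signal
    "toxic_smell": 0.9,         # chemosensory alarm signal
    "opening_percept": 1.0,     # visual percept of the exit
}

# Convert every signal into the same kind of token: a weight on a candidate action.
# This is the "same decision space" move the authors describe.
action_values = {
    "stay_in_cave": percepts["warmth_inside_cave"],
    "leave_via_opening": percepts["toxic_smell"] * percepts["opening_percept"],
}

# A single, integrated action is just the best-scoring token.
selected = max(action_values, key=action_values.get)
print(selected)  # -> leave_via_opening
```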
In addition to the above paper, I’m also currently reading What insects can tell us about the origins of consciousness, and I’m getting a similar impression. The authors posit that consciousness is enabled by structures in the midbrain, which integrate multisensory information about the body in space with homeostatic information about the body’s internal state in order to select actions and targets (e.g. chasing prey). Sounds familiar.
We will adopt and expand the account given by Merker (8), who argues that subjective experience arises from interacting midbrain and basal ganglia structures creating an integrated simulation of the state of the animal’s own mobile body within the environment.
Merker suggests that an important function of the midbrain is to combine interoceptive (stimuli arising from within the body) and exteroceptive (stimuli external to the body) sensory information. Information on the environment, and the location and movement of the animal within it, is processed within the roof of the midbrain (the tectum, or colliculus in mammals). Information about homeostatic needs is processed within the floor of the midbrain (the hypothalamus and associated structures). Nuclei located between these poles integrate this information to produce a unified multimodal neural model of the spatial location of resources relative to the animal, which is coupled to, and weighted by, the extent to which different resources are needed by the animal. (4901)
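The computational story here can also be caricatured in a few lines: spatial information about resources gets coupled to, and weighted by, current homeostatic needs, and a target drops out. The sketch below is again my own toy illustration (not the authors’ model; every name and number is invented), just to show that the mechanism, as described, is a weighting-and-selection scheme.

```python
# Toy caricature of a needs-weighted model of resource locations.
# My own illustration, not the authors' model; names and numbers are invented.

resources = {   # exteroceptive: how far away each resource is
    "water": 5.0,
    "prey": 2.0,
    "shade": 8.0,
}

needs = {       # interoceptive: how badly each resource is needed (0..1)
    "water": 0.9,
    "prey": 0.3,
    "shade": 0.1,
}

# "Nuclei located between these poles": couple location to need, then pick a target.
def priority(name):
    return needs[name] / (1.0 + resources[name])

target = max(resources, key=priority)
print(target)  # -> water (high need outweighs the greater distance)
```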
Anyway, you get the point: these theories, while providing fascinating ideas about how the brain integrates information and selects actions, don’t really solve the hard problem, unless I’m missing something. (Not that they necessarily intended to solve the hard problem, as much as I hoped they would.)
But when we flip our metaphysics, there is no hard problem to begin with, and all of these papers make more sense. Mentation is fundamental to reality, and we’re just trying to figure out some of the details of how organisms build rich models of the world and make decisions. There are still important and interesting problems in this metaphysics, like: why does consciousness look like matter from a certain angle, rather than always just looking like consciousness? In what way is a rock made of conscious stuff? How do the boundaries between conscious beings arise? What the heck is matter anyway? But the problem of why there is any conscious experience at all is no longer a problem, because consciousness is where you start.
I don’t mean to claim that this is the answer to the hard problem, or even that this is an especially good answer. It’s just the answer that makes the most sense to me at the moment, given what I’ve seen so far.