The space of possible minds 🧠
Close your eyes. No wait, don’t do that. You still have to read the rest of this thing. So, keep your eyes open, but imagine a ‘mind’.
Did you think of a brain? A human-type of brain? Yeah, me too.
But does a mind have to look like that?
No. We don’t have too many issues with granting (some) other animals a mind - a thing that thinks and feels and has subjective experiences. If you think that’s vague, I agree. Sometimes, though, that’s where the fun bits are hiding.
Anyway, mammal minds, bird minds, reptile minds. I think we can agree on those. Why stop there, though? Anthill minds? What about AI? Or aliens?
One interesting way to think about this is using the ‘space of possible minds’. In this Aeon essay, cognitive robotics professor Murray Shanahan asks what kinds of minds are possible. He rightly points out that ‘mind’ is not a singular on-off phenomenon. It has different parts and those parts can all come in degrees.
Shanahan keeps it conceptually simple by choosing two main axes: human-likeness and capacity for consciousness. We, humans, score high on both. That’s the prerogative of designing metrics based on yourself. As we map animals onto this graph, we see that they fall on a nice diagonal: moving away from human-likeness also seems to correspond to a lower capacity for consciousness in the animal kingdom. A cat is more human-like and more capable of complex consciousness than a single bee, for example. Of course, we can’t exactly measure capacity for consciousness and human-likeness is arbitrary at best, but it still works (for most cases) if we use sufficiently broad strokes, even if bee-lovers might disagree.
(Do check out the essay for a visual representation. I can’t reproduce the images without copyright issues.)
What about AI? Today, it would be a line hugging the human-likeness axis. Capacity for consciousness is (probably?) close to zero. Human-likeness: depends on the model, and sometimes it's perhaps even human-like enough to fool people. If we ever develop ‘conscious AI’, that could change, as it would push AI territory further along the capacity-for-consciousness axis.
So, humans and other animals: human-likeness and capacity for consciousness move together.
AI: capacity for consciousness very low, human-likeness can differ. Might change in the future.
This leaves us with one more option. Low human-likeness and high capacity for consciousness. Shanahan puts weird AI and ET here as possible candidates, but we have no real-life current candidates. (Although I think swarm intelligence has potential here…)
Finally, a neat thing Shanahan does is notice that the human capacity for consciousness is not the limit. Is it possible to be super-humanly conscious? You tell me. He puts our ‘mind children’ (future humans or whatever replaces us) and ‘conscious exotica’ there.
If all this sounds like thought experiment hocus-pocus, that’s because it is. It does not mean that it’s pointless, though. As Shanahan puts it:
But even if none of these science-fiction scenarios comes about, to situate human consciousness within a larger space of possibilities strikes me as one of the most profound philosophical projects we can undertake. It is also a neglected one. With no giants upon whose shoulders to stand, the best we can do is cast a few flares into the darkness.
Mind(s of) the gaps 🤯
Let’s light another flare and let’s get a bit more physical to do so.
A recent paper by complexity scientists (yes, that’s a thing) Ricard Solé and Luis Seoane looks at the ‘hardware’ requirements of neural systems, biological and artificial, to try to tease out at least some potential requirements for minds - you need the right hardware to run the software, or so the idea goes.
(The metaphor is a bit simplistic, I admit. There’s a whole subfield of philosophy that revolves around bickering about how apt the hardware-software metaphor actually is.)
So, what do Solé & Seoane see as the key hardware components of a mind?
Threshold units. Consider the neuron. It doesn’t simply fire when it gets an input, but only when it gets enough of the right inputs (and not too many inhibitory ones) to pass a threshold.
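In code, that idea is tiny. Here’s a toy threshold unit in Python - the weights, threshold, and inputs are made up for illustration, not taken from the paper:

```python
# A minimal threshold unit: it fires (outputs 1) only when the weighted
# sum of its inputs crosses a threshold. Negative weights play the role
# of inhibitory inputs. All numbers here are illustrative.

def threshold_unit(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three excitatory inputs and one inhibitory one:
weights = [0.6, 0.5, 0.4, -1.0]

print(threshold_unit([1, 1, 0, 0], weights, threshold=1.0))  # 1 -> fires
print(threshold_unit([1, 0, 0, 0], weights, threshold=1.0))  # 0 -> not enough input
print(threshold_unit([1, 1, 1, 1], weights, threshold=1.0))  # 0 -> inhibition wins
```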
Hierarchical processing. Take the visual system. Some cells respond to specific colors, others to light/dark intensity or gradients, and so on. All that info gets bumped up to the visual centers of the brain where it’s put together step-by-step. Edges, gradients, colors… Hey, it’s a cat!
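Same game, stacked: simple detectors at the bottom, a unit on top that only cares about the combination. A toy sketch (the ‘edge’ patterns and thresholds are invented; this is not how the visual cortex actually works):

```python
# Toy hierarchy: first-stage units each respond to a simple local pattern
# (here, a dark-to-light step in a 1D "image"); a second-stage unit fires
# only when enough first-stage units agree. Purely illustrative.

def edge_detector(patch):
    # Responds to a dark -> light transition within a 2-pixel patch.
    return 1 if patch[1] - patch[0] > 0.5 else 0

def second_stage(features, threshold=2):
    # Combines the low-level responses into a higher-level decision.
    return 1 if sum(features) >= threshold else 0

image = [0.0, 0.9, 0.1, 0.8, 0.2, 0.9]          # a striped toy "image"
patches = [image[i:i + 2] for i in range(len(image) - 1)]
features = [edge_detector(p) for p in patches]

print(features)                # [1, 0, 1, 0, 1]
print(second_stage(features))  # 1 -> the higher unit "sees" the stripes
```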
Wiring costs. You can’t keep stacking basic units to infinity. A skull only has limited space, after all. Constraints of space, latency, and energy push network architectures toward layouts that keep ‘wiring costs’ down. One empirical signature of this is Rent’s rule, a scaling law relating a block’s external connections to the number of components inside it, and it seems to hold for both biological brains and technological networks.
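Rent’s rule is compact enough to play with: the number of external connections T of a block of g components scales as T = t·g^p, where t is roughly the connections of a single component and the Rent exponent p sits below 1. Plugging in some made-up numbers shows why that matters:

```python
# Rent's rule: T = t * g**p, with T the number of external connections
# ("terminals") of a block containing g components. The values of t and p
# below are illustrative, not measurements from any brain or chip.

def rent_terminals(g, t=3.0, p=0.75):
    return t * g ** p

for g in [10, 100, 1_000, 10_000]:
    print(f"{g:>6} components -> ~{rent_terminals(g):,.0f} external connections")

# Because p < 1, the wiring leaving a block grows slower than the block
# itself - the kind of sublinear scaling you get from modular, hierarchical
# layouts rather than all-to-all wiring.
```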
Simple units, dynamic systems. It’s not about the complexity of the units, it’s how you put them together. A neuron is not a brain, but put enough of them together in the right way, with the proper balance of stability and flexibility, and they’ll be ‘cogito ergo summing’ like there’s no tomorrow.
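If you want to see that balance of stability and flexibility in action, here’s a little numerical toy (my example, not Solé & Seoane’s): a random recurrent network of simple units where a single gain knob decides whether activity fizzles out, stays richly alive, or saturates into noise.

```python
import numpy as np

# A tiny recurrent network of simple units. The "gain" scales the random
# connection weights: too low and activity fades away, too high and it
# saturates into noise, in between it stays lively. Illustrative only.

rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n)) / np.sqrt(n)   # random connections

def run(gain, steps=200):
    x = rng.standard_normal(n) * 0.1
    for _ in range(steps):
        x = np.tanh(gain * W @ x)              # simple units, shared dynamics
    return np.std(x)                           # how much activity remains

for gain in [0.5, 1.2, 5.0]:
    print(f"gain {gain}: activity spread ~{run(gain):.3f}")
```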
Meta-learning. Minds do not only learn, they learn how to learn. At the most basic level, we encounter the Hebbian rule that ‘cells that fire together, wire together’. It’s an oversimplification of reality, sure, but the point stands: the brain rewires itself in response to learning, which lets it learn more swiftly later on. At a higher level, things like reinforcement and associative learning adapt our behavior (and ultimately our brain) through feedback loops.
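The basic Hebbian rule fits in one line of code: nudge a connection upward whenever the cells on either end are active at the same time. A toy version (learning rate and activity patterns invented for illustration):

```python
# Toy Hebbian learning: the weight between two units grows in proportion
# to how often they are active together. Numbers are illustrative.

def hebbian_update(w, pre, post, learning_rate=0.1):
    return w + learning_rate * pre * post      # "fire together, wire together"

w = 0.0
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]   # (pre, post) pairs

for pre, post in activity:
    w = hebbian_update(w, pre, post)

print(f"final weight: {w:.1f}")   # 0.3 -> strengthened by the three co-activations
```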
Solé & Seoane go on to discuss a few ‘gaps’ in current machine learning efforts that (so far?) have precluded the rise of true artificial minds, including language, mental time travel, mind reading, morality, the extended mind, and social learning. However, what interests us most (if you’ve read this far, you’re definitely one of us) is that they also end up building a space of cognitive complexity, much like Shanahan did. They conceptualize two such spaces, each with three axes. Yay, Creative Commons license, so here they are:
Without going into too much detail, there are two points I’d like to emphasize:
Unlike Shanahan, they don’t extend their axes beyond the human. I can’t help but wonder what types of mind might be hiding in a larger cube…
There are pretty substantial gaps. That could be an anthropocentric bias. Maybe we underestimate other beings’ capacities or overestimate our own. Humans are known to do that. Also, humans picked the axes. Was the game rigged from the start?
They end their paper with a fun little bout of speculation:
Is it possible to create artificial minds using completely different design principles, without threshold units, multilayer architectures or sensory systems like those that we know? Since millions of years of evolution have led, through independent trajectories, to diverse brain architectures and yet not really different minds, we need to ask if the convergent designs are just accidents or perhaps the result of our constrained potential for engineering designs.
Let me know what’s on your mind…