Strange Loops Part 2: You are a Strange Loop
In the last post
I broke down the components of strange loops. In this instalment, I go through the components one by one to see whether or not you are a strange loop.
Being a Loop
The loop that we are part of is an action-perception loop.
We receive sensory data from the environment. Light hits our eyes, sound waves enter our ears, nerve endings are stimulated across the body. We use this information to help work out what is around us so that we can best achieve our goals. We perform actions to achieve those goals. Those actions change the sensory data we receive. Perhaps we move our head so that different light hits our eyes. Perhaps we reach for an object, picking it up with exactly the right amount of pressure based on the tactile feedback from the nervous system. When we act, we change ourselves or the world so that we get new sensory data as input. The loop repeats.
The action-perception loop is perhaps the most fundamental component of being a living agent. It is more dynamic than the basic loops of the previous instalment, where you had to go all the way around to get back to the start. New sensory data is constantly hitting the eyes and we are continuously acting. There is no pause in the bombardment of ever changing sensory data while we decide what to do. The loop is dynamic and continuous and messy, and we are living through it on many different time-scales at once. It is not just a loop. It is an incredibly loopy loop.
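The skeleton of such a loop can be sketched in a few lines. Everything below is a toy illustration of the idea rather than a model of any real organism; the names (`World`, `Thermostat`, and so on) are invented for this example:

```python
# A minimal, illustrative action-perception loop. The agent never sees the
# world directly: it receives observations, updates an internal estimate,
# and acts -- and acting changes what it will observe next.

def run_loop(world, agent, steps):
    for _ in range(steps):
        observation = world.sense()          # light, sound, touch...
        agent.update_estimate(observation)   # best guess at what is out there
        action = agent.choose_action()       # act in pursuit of a goal
        world.apply(action)                  # acting changes future input

class Thermostat:
    """A deliberately trivial 'agent': sense temperature, act to change it."""
    def __init__(self, target):
        self.target = target
        self.estimate = None
    def update_estimate(self, observation):
        self.estimate = observation
    def choose_action(self):
        return "heat" if self.estimate < self.target else "off"

class Room:
    """A deliberately trivial 'world' the agent is coupled to."""
    def __init__(self, temp):
        self.temp = temp
    def sense(self):
        return self.temp
    def apply(self, action):
        self.temp += 1.0 if action == "heat" else -0.5

room = Room(temp=15.0)
run_loop(room, Thermostat(target=20.0), steps=20)
print(round(room.temp, 1))  # settles near the target
```

A thermostat is nowhere near a strange loop, of course; the point is only that even the simplest agent is already closed into this sense-act-sense cycle.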
Moving Up In A Hierarchy
The patterns of light hit the eyes as low level information. The packed rod and cone cells of the retina respond to different intensities of light to send uninterpreted information forwards. The back of the retina does not contain a picture of the world that the brain just has to watch. It contains millions of different intensity readouts that need to be combined and analysed to work out what is actually out there. It doesn't really make sense to visualise uninterpreted sensory information, but I like to think of it as looking something like this:
Mouse over to animate.
...except that the eye has over 100,000,000 photoreceptors to deal with, instead of the paltry 15,000 visualised here. You'll just have to imagine the missing 99.985%. If somehow you manage that, then next would be to add in the other eye, followed by other forms of sensory input.
How does a mess like this get turned into meaningful information? As the information traverses the pathways in the brain, it is abstracted into higher level concepts. The brain's task is to use this information to update its previous best educated guess as to what is out there. At lower levels, guesses might be about simple properties of the inputs, like large differences between adjacent values. The brain predicts edges. Later, with edges already guessed, it may be possible to guess shapes. Later still, objects, complete with an understanding of how they will behave. Each step in this process is a shift up in hierarchy.
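These shifts up the hierarchy can be caricatured in code. The stages and thresholds below are invented purely for illustration; real visual processing is vastly more layered and probabilistic than this:

```python
# Caricature of hierarchical abstraction: raw intensities -> edges -> a shape guess.
# The threshold and the 'shape' rule are made up for illustration only.

def find_edges(intensities, threshold=50):
    """Level 1: guess edges where adjacent intensity readouts differ sharply."""
    return [i for i in range(len(intensities) - 1)
            if abs(intensities[i + 1] - intensities[i]) > threshold]

def guess_shape(edges):
    """Level 2: with edges already guessed, make a (toy) higher-level guess."""
    if len(edges) == 2:
        return "single bright bar"
    return "unknown"

raw = [10, 12, 11, 200, 205, 198, 202, 15, 13]  # one bright region in the middle
edges = find_edges(raw)
print(edges)               # → [2, 6], the boundaries of the bright region
print(guess_shape(edges))  # → "single bright bar"
```

Note that "edge" and "bright bar" exist nowhere in the raw list of intensities; they only appear once the lower-level output is re-described at the next level up.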
At the bottom of the hierarchy is the unfiltered sensory input. There are no objects in the original data, yet we perceive a world populated by objects. It takes multiple shifts up in the hierarchy to turn the patterns in the data into the world we perceive.
Again, the brain turns out to be a perfect example. There's not just one jump up in hierarchy, but many.
Considering the Entire System
So, do any of these higher level representations of the world have the ability to 'be about' the entire system as was necessary for Gödel's proof? I have a concept of 'self'. The concept is vague and I don't exactly know what it means. But it corresponds to the left part of the action-perception loop expertly illustrated above.
Perhaps it feels like the entirety of your model, everything that you perceive, is you. Maybe you see yourself roughly in terms of the bodily boundary, defined by the parts that are directly controlled by actions. Perhaps you see yourself as inside your head, somehow witness to the unfolding of thoughts, events, and actions. Perhaps, like me, you alternate between these different perspectives. Whatever the case, there is something, no matter how nebulous, that you call you.
But it isn't enough for the system to just contain a concept of its entirety. In order to count, it needs to be used as part of the loop. In our case, the concept of the self must be involved in a decision to act. Then, and only then, will the concept change the sensory input and close the loop.
If an action was chosen because it was reasoned to lead to your future survival or future benefit (yours, as an entity), then the concept of self was instrumental in the choice of action. This clearly occurs in deliberative reasoning about our possible futures. We have the capacity to reason about future selves in a multitude of scenarios, and to pick the course of action that we expect to lead to the best scenario.
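One way to picture this kind of deliberation is an agent whose world model contains an entry for itself, and whose action selection works by scoring imagined futures of that entry. The dictionary-based self-model and the scoring rule below are toy assumptions, not a claim about how brains implement it:

```python
# Toy deliberation: the agent's model includes a representation of *itself*,
# and actions are chosen by simulating how each action affects that self-model.

def simulate(self_model, action):
    """Predict the future state of the self after taking an action (toy rules)."""
    future = dict(self_model)
    if action == "save money":
        future["savings"] += 10
        future["comfort"] -= 2
    elif action == "spend money":
        future["savings"] -= 5
        future["comfort"] += 5
    return future

def benefit(self_model):
    """Score a (hypothetical) future self."""
    return self_model["savings"] + self_model["comfort"]

def deliberate(self_model, actions):
    # The self-concept is instrumental here: each candidate action is scored
    # by the predicted benefit to the imagined future self.
    return max(actions, key=lambda a: benefit(simulate(self_model, a)))

me = {"savings": 0, "comfort": 0}
print(deliberate(me, ["save money", "spend money"]))  # → "save money"
```

The same structure covers the next section's requirement too: the possible actions are explicitly represented, and the choice is made by reasoning over those representations.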
Humans are still on track for being strange loops.
Considering Components of the System
The individual components of the human system that contribute to the loop are the possible actions. We need to know whether there are high-level encodings of the system's possible actions that are used in determining which action is performed. As with the previous case, this seems like an obvious yes.
This occurs for every problem with multiple solutions where there is explicit reasoning over the different possible actions that could be used. Again, this is clear in the deliberative case. It happens when you choose which hand to hold an umbrella with, when you choose to pay attention to a specific instrument in an orchestra, or when you choose whether to peel off the plaster slowly, or go for the quick rip.
Again, humans are acing the test for strange loopiness.
Uncovering Properties at the Low-Level
The final component is that properties of the low-level system are uncovered. This has already partially been dealt with by ensuring that the two types of self-reference are actually used in the loop. Additionally though, it is becoming increasingly clear that we learn to model the world by constantly interacting with it. Properties of the world are uncovered by interaction and exploration.
In his book Other Minds, Peter Godfrey-Smith discusses the action-perception loop (pp. 80-83) using the example of tactile vision substitution systems (TVSS). These systems can give a blind person limited sight by substituting a camera for normal vision, relaying the visual scene through tactile stimulation (for example, on the back or on the tongue). Over time, the person can learn to see objects located in space. They do not experience patterns of tactile stimulation, but begin to perceive surrounding objects. Crucially though, this happens "only when the wearer is able to control the camera, to act and influence the incoming stream of stimulation" (p. 80).
It seems humans easily meet every component. You are a strange loop. Great! But now there are even more questions:
- How exactly does strange loopiness correspond with consciousness? I don't know exactly how to answer this yet. My feeling is that all of the components discussed above are necessary, as they are in Gödel's proof. But, until it can be expanded, it's just a feeling. I will return to this question in the future. In the meantime, I hope that the discussion sparked some intuitions about the connection between strange loops and consciousness.
- What other entities/organisms are strange loops (and therefore potentially conscious)? The examples for the two types of self-reference above focused on slow, deliberative decision making, involving concepts brought to life by our language. However, a pre-language concept of self and a pre-language concept of possible actions also exist. These would also qualify if they were used to determine actions (in such a way that fed back into the meaning of the concepts). Our big, obviously conscious deliberations meet the requirements for consciousness. That's not hugely interesting. A pessimist would say it points to the vacuity (or potentially even circularity) of the theory. An optimist would say it means we're on the right track. The really interesting question is: what are the minimal structures needed for the right types of self-reference, and what systems have them?
- What is the difference between our strange loop and that in Gödel's proof? The former is conscious, while the latter, presumably, is not. This seems like an obvious point to raise given the last two posts. A good initial answer to this seems to be that, in our case, there is an ongoing interaction between us and our environment such that the loop is continually playing out over time. Gödel's proof has no such dynamic existence.
In the next instalment I will look at how we can build strange loop systems with current AI technology, and, by extension, how close we might be to potentially conscious AI.