November 11, 2016, John Æonid
I have no experimental evidence for the claims I write below. They are purely a philosophical proposition, one that I've not heard from any of those discussing the nature of consciousness. This is primarily the result of my own thinking, particularly on the topic of free will in the context of evolution. What I describe are the minimal conditions for consciousness, hinting at how it might have evolved.
The point I'm making is that self reflection does not come until there is a self to reflect upon, and there is no self without others, which requires a sense that others make decisions regarding their own self interest. That means that self awareness had to co-evolve with a sense that others are self aware. And, as such, theory of mind needed to be well established in a being before there could be a concept of self—and thus, consciousness.
There is a bit of a side point about this, one that tends to freak people out a bit. It's the idea that the self is an illusion. In terms of what I'm writing here, this is essentially the idea that the self is a virtual actor in a virtual reality within our consciousness (and non-conscious brain). And, we can observe that actor from the outside within that virtual reality—though, most of the time we are in that actor, looking out at the actors we see as others within that virtual reality. But, there's no need to be freaked out. That virtual actor we call the self is still a meaningful part of our lives. It's the one that looks out for our own self interests and that experiences a sense of relationship to others. And, oddly enough (speaking facetiously), that actor has some manner of connection with the physical person1.
The difficult part is in deciding what the co-evolution of self awareness and theory of mind should look like in other species. And, the hindrance is that our thinking about consciousness is heavily tied to complex language and symbolic reasoning, including that of this writing. So, we need to imagine what it would look like before complex language and symbolic reasoning.
It likely can't go all the way back to the reptilian brain, as we are not likely to find theory of mind at that level. The only non-mammals that have been identified as having self awareness are magpies (by way of a mirror test, video), and possibly several other birds. However, I'm not familiar enough with the studies to know how complex that self awareness is or what level of theory of mind there might be.
For us recreational thinkers, more insight is possible by considering these ideas in regards to apes, as they have both self awareness and theory of mind. The experiments generally focus on tasks associated with basic survival, but I would be more interested in those that involve social connection and self reflection. Only, I don't know the extent of what this would show if there were any experimental evidence. What I imagine is that what there is in the way of ape consciousness is very limited in regards to self reflection.
Still, I wonder what goes through an ape's mind when it finds itself ostracized from its troop. There must be thoughts of getting back in, getting back at others, or making up. That last one is the one that interests me. When an ape tries to make up with another, would there have been a realization that a mistake was made, that a previous decision was a bad one? Or, is it still more primal than that?
So, imagine what this means for artificial intelligence and artificial consciousness. There is a lot of doubt about how soon we will have technology that is genuinely autonomous. But, it's pretty clear to me that there will need to be something in the technology that provides the conditions that I describe.
Something needs to be able to work out scenarios in a model of the environment, and the actors need to be recognized as being actors. Then the engine that works out the scenarios can't simply work out all the possible actions of the actors. The behaviors of the actors must be seen as reflecting their thinking and their distinctive personalities, the traits that would characterize their likely thinking.
That means that an artificial consciousness would require the computing capacity to model the behavior of actors as well as the kinds of thinking that would lead to those behaviors. And yes, one of those actors needs to be the artificial consciousness itself, in a way that allows comparing its own thinking to that of the other actors. It would then need to be able to modify that thinking and develop its own algorithms. An artificial consciousness would have to have its own theory of how thinking works.
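To make the requirement concrete, here is a minimal sketch of the structure described above: an agent that holds theories of mind for other actors, includes itself among the actors it models, and compares its own predicted thinking to theirs. All class names, dispositions, and the situation string are hypothetical illustrations, not a proposed implementation.

```python
# Illustrative sketch only: an agent that models other actors as thinkers
# rather than as mere objects, and includes *itself* among the actors it
# models -- the minimal condition for self-comparison described above.

class ActorModel:
    """A crude stand-in for 'a theory of how this actor thinks'."""
    def __init__(self, name, disposition):
        self.name = name
        self.disposition = disposition  # e.g. "cautious", "aggressive"

    def predict_action(self, situation):
        # A real system would simulate the actor's reasoning; here we
        # just map a disposition to a stereotyped response.
        if self.disposition == "cautious":
            return "wait"
        return "act"


class ReflectiveAgent:
    def __init__(self):
        self.models = {}  # theories of mind for the other actors
        # One of the modeled actors is the agent itself.
        self.self_model = ActorModel("self", "cautious")

    def observe(self, name, disposition):
        """Form (or revise) a model of another actor."""
        self.models[name] = ActorModel(name, disposition)

    def compare_self_to(self, other_name, situation):
        """Compare my predicted behavior with another actor's -- the
        comparison from which, on this view, a sense of self emerges."""
        mine = self.self_model.predict_action(situation)
        theirs = self.models[other_name].predict_action(situation)
        return mine == theirs


agent = ReflectiveAgent()
agent.observe("rival", "aggressive")
print(agent.compare_self_to("rival", "food nearby"))  # False: we would act differently
```

The point of the sketch is structural: the self is not privileged machinery but one more `ActorModel`, distinguished only by being the one the agent consults for its own predictions.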
The mind-body problem is where notions of consciousness become really difficult. There are many perspectives on this, but what I present here grew out of thoughts about evolution and free will. And, such an exploration would inevitably have to deal with causal determinism and physics.
My approach to consciousness is very much one that is tied to survival and being able to work out scenarios involving multiple actors and the possible actions of those actors. This does not answer the question of how consciousness is connected to physical reality. But, the questions I have about free will and physical reality, as well as why I think those questions are important, will come later.
The conditions I give for consciousness do present problems in another area. And, that is the notion of a God consciousness or primordial consciousness. In particular, this is the question of why God would create man, or why a primordial consciousness would result in individual and relatively independent consciousnesses. My first thoughts on this emerged around the year 1982, but I've heard others express similar sentiments. And, that is this: we must know others to know ourselves.
My thoughts in 1982 were that God would want to create others with the same fundamental essence of consciousness, and the reason was that having others provides an experience that one cannot have alone. Others have similarly expressed this kind of need, in that a primordial consciousness would only expand by finding a way to have contrasting experiences, and those experiences must involve interaction with others—as there is no other way to know the self.
Only, now we have a chicken-or-egg problem. The conditions of consciousness that I give first require others, and we are effectively defining a primordial consciousness as being utterly alone.
Well, what I've written here ties our mortal consciousness to evolution and free will. I would be reaching to claim that these conditions would necessarily apply to a primordial consciousness. But, we can look at the implications.
So, God would have had to have self knowledge, both in terms that God was conscious and that God was a being. And, this works for the ontological argument, in which God was a perfect being, having all the qualities that a perfect being would have (assuming you can ignore the challenges to the ontological argument). So, God knowing that God was a being allowed God to create other beings before they existed. God did not have to learn this (again, if you assume there's an acceptable ontological argument).
Given the philosophical challenges to the ontological argument, we don't have the luxury of attributing such traits to some other primordial consciousness. So, the chicken-or-egg problem is not so easy. We need there to be a way for a single solitary being to gain self knowledge and a concept of individual actors or beings. In that this would have been before life as we know it, this would almost certainly be before the existence of genetic evolution. As a side point, I have spent time pondering the metaphysical essence of evolution, but I don't know that it would add anything meaningful here.
Could this chicken-or-egg problem involve a mistaken assumption? Must we define a primordial consciousness as a single solitary consciousness? I imagine that for a primordial essence to be a thinking essence, there must be some manner of organization that arises, not pure chaos. And, if there was not yet a concept of self—yet there was thinking—then what manner of organization can we expect or allow?
Could primordial reasoning involve ideas that were in competition for becoming an answer? Could ideas organize into factions, essentially a category of like thinking? Would a faction of one type of ideas win over another?
For any of this to matter, this would have to result in a concept of a collection of ideas as a thing, that would eventually lead to a concept of a thinking thing. I can't say how much organized complexity would have to arise before achieving this, as this is all speculation. It's just a perspective on what kind of thinking is needed to address the question2.
Ultimately, self and a concept of thinking beings are two essential aspects of consciousness. The one thing to keep in mind is that there is only one consciousness that we know intimately, and that is our own—human consciousness. And, the assumption that God or a primordial consciousness would be like our own is the place we start from. If those beings or essences are not like us, we won't really get to know them without something changing in us.
TODO: TED Talk with Donald Hoffman
My motivation for getting this written at this time comes from following the series Closer to Truth, where I've been listening to many who speak of the problems of consciousness. In particular, I just watched: Donald Hoffman — Does Evolutionary Psychology Explain Mind? In this video, he explains why consciousness does not need the capacity to perceive objective truth and would actually be hampered by it. Imagine that!
March 4, 2017 — Further Thoughts on The Mind's Emergence
I want to cover the concept of self further, which I've hinted upon here. Part of what I'm responding to is a gap between the spiritual no-self concept and the old philosophical problem of the homunculus argument. The latter is the idea that there is a little person inside of us that is the self that drives the real person—but this is fraught with problems.
What I present is this view that one of the most powerful aspects of the human mind is that it models reality—and potential realities—most often in a way that we can experience as a virtual reality. And within that virtual reality, we can change our focus to take on whatever role interests us. But, what is the thing that decides what focus is of interest?
One key element is working memory. This is where we work with the ideas that are foremost in our conscious activities. It is limited in what it can hold. It is the place where we move from idea to idea, the place where we work out new ideas. And when we ponder questions, it is the place where we compare the possible conclusions or outcomes for those questions. It's that moving from idea to idea that makes this a sequential process, a single, serial train of thought.
As someone who works with computers, I find myself using computer terms, and this working memory is not a parallel processor, which leaves us with the conclusion that it is essentially a serial processor. However, I do not think it would be a mistake to regard other areas of the brain as a kind of parallel processing. This includes both the modular nature of the brain (having functional regions) and the concept of the neural network. Whatever the specifics, there is a lot of processing going on just beyond our conscious thought processes, layers of processing. And, those non-conscious processing elements form an extremely powerful computer, one that can transport us into entire virtual realities on a whim or a familiar scent.
What brings me back to this topic this morning is a book that was loaned to me: The Body Keeps The Score: Brain, Mind, and Body in the Healing of Trauma, by Bessel van der Kolk, M.D. And, while I've only gotten a few chapters in, skimming more than I should, I'm fascinated by the states of mind that overpower those afflicted with PTSD. It's in chapter four that I'm learning that PTSD can present in two ways: flashbacks and dissociation/depersonalization (the mind going blank).
But even more intriguing (for my way of thinking) is a diagram titled: “The emotional brain has first dibs on interpreting incoming information.” It presents the following processing sequence:
The three middle portions in that diagram (2, 3, 4 above) are grouped as the Hypothalamus, which is labeled as affected by stress hormones. The full paragraph for the diagram is a long one and fills the middle of the page.
One of the directions that I want to go with this is describing the importance of the non-conscious brain in presenting our emotions, as I feel quite strongly that our emotions are the key determiners of what is of value to human life. This includes our sense of connectedness. And, a society that is stressed by divisive forces is one that will act less rationally and tend to “other” those to whom its members do not feel connected.
Enough for now. This is more than I meant to write, considering I only wanted to outline something for further elaboration.
March 24, 2018 — Further Thoughts on the Mind's Emergence
In the comparison between the primordial consciousness and the human consciousness created by evolution, I've come to the conclusion that theory of mind was necessary for humans to develop a perception of a self, but I should be clear (and remember to be clear) that this does not mean that theory of mind had to be fully developed at an advanced level before a perception of a self could arise. In truth, they had to co-evolve. As theory of mind emerged, a perception of actors emerged; and as the perception of actors emerged, so did the sense that one of those actors was the self. And, since the emergence of theory of mind is about the notion that actors are making decisions, and that one of those actors is the self, what emerges is the comparison of the self's thinking to that of others.
What comes to mind is that cliché phrase, “I wouldn't do that if I were you,” an expression that has shown up in so many crime dramas. One might call it empathy in conflict. We can imagine how less evolved creatures would deal with the threat of conflict, then evolve to anticipate action, and eventually attach logic to that threat of action. I imagine that the perception of actors came before the full development of theory of mind. It seems that basic consciousness would need basic social behaviors in order to deal with reproduction cycles and competition for mates. This is where the ability to distinguish between inanimate objects and actors would have begun to form: it is in the perception that beings are in competition (or collaboration) with others that it becomes necessary to distinguish between other actors and inanimate objects. There is simply no evolutionary reason to develop competitive behaviors with regard to inanimate objects. So, this is where the distinction between thinking actors and dumb objects arises. It's then in the emergence of the comparison of one's own thoughts to those of other actors that we see the emergence of the self.