It appears that saying “I am my mind” brings a surprising amount of controversy, even among people who don’t believe in the supernatural. This didn’t really occur to me until yesterday’s Second Life Thinkers meeting, where we had a rather lengthy discussion, a discussion that started with a problem – can a strong AI feel emotions if it doesn’t have a (human / organic) nervous system? Or, if you prefer, is an emulation of emotions real? The problem is very interesting, but can it be answered without defining what emotions actually are? Well, let me try!
I’m going to use the term “abstract” a lot. What is an abstract? Wikipedia has a definition of an “abstract object”, which reads:
“An abstract object is an object which does not exist at any particular time or place, but rather exists as a type of thing (as an idea, or abstraction). In philosophy, an important distinction is whether an object is considered abstract or concrete. Abstract objects are sometimes called abstracta (sing. abstractum) and concrete objects are sometimes called concreta (sing. concretum).”
Which is very interesting but, for our discussion, useless, because I – my entity, my being – most definitely exist in both time and place! Let’s not stop there, though; there is another definition, a definition of an abstraction, which reads:
“Abstraction is a process by which higher concepts are derived from the usage and classification of literal (“real” or “concrete”) concepts, first principles, or other methods. An “abstraction” (noun) is a concept that acts as super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category.”
This, the highlighted part at least, is much better, but I still feel it only scratches the surface. Is there any better encyclopedic definition we can get? Well, let’s try with “Abstraction (computer science)” – I know, you are thinking “she is not going to try to explain human existence with computer science, is she!?” – well, that’s actually precisely what I’m going to do, but only a bit later in this text! The definition reads:
“In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.”
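To make that definition a little more tangible, here is a tiny Python sketch of my own invention (the `Counter` class and its names are purely illustrative): the caller works with the meaning – “count things” – while the representation stays hidden.

```python
# Abstraction as defined above: callers see the meaning ("increment",
# "value"), while the implementation details stay hidden underneath.

class Counter:
    """High-level abstraction: users of this class never need to know
    how the count is represented internally."""

    def __init__(self):
        self._count = 0  # implementation detail, hidden from callers

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value())  # prints 2
```

The point of the hidden `_count` is exactly the “hiding away the implementation details” from the definition: we could swap the integer for any other representation without the caller ever noticing.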
We are getting somewhere, but again, this is a definition of the act of abstraction; we are interested, however, in abstraction the thing, which is what I mean when I say “an abstract”. What I am trying to capture is not the concept / type of a thing – as the first definition of an abstract object attempts – but rather a particular instance of that concept. My mind is a mind; it has a series of properties that makes it a mind, and that series of properties (even if we can’t define them, we know they have to be there!) is what makes up the concept of a mind – my mind, your mind, any mind. That’s all nice, but useless for our topic. I’m not trying to understand what the concept of a mind is (which is fascinating in itself), but rather – what is my mind, the instance of the mind concept that’s unique to me, the essence of me that makes me who I am?
If we were to print a replica of my physical self, atom by atom, perfectly placed to create exactly the same body in exactly the same moment, with the same energy (electrical, kinetic or any other there is) of that moment being applied to all of its particles, would it be me? The mere fact that my replica is standing somewhere else is enough to say – no. The context of my replica is different from mine, so even if at the very instant of such “forking” our minds were identical, a fraction of a second later they would cease to be so, because my replica would see, hear, think and feel something different than me. The concept of Ivy Sunkiller would have been the same, but a different context makes it a different instance, another entity, separate from mine. Just like a car that leaves a factory is a different car than the one that leaves it a few seconds later, even if both are conceptually identical.
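The concept-versus-instance distinction above has a direct analogue in object-oriented programming. A minimal Python sketch (the `Car` class is my own illustrative example, echoing the factory analogy):

```python
# Two instances built from the same "concept" (class), with identical
# properties: conceptually equal, yet distinct entities.

class Car:
    def __init__(self, model, color):
        self.model = model
        self.color = color

    def __eq__(self, other):
        # Conceptual identity: same blueprint, same properties.
        return (self.model, self.color) == (other.model, other.color)

car_a = Car("Roadster", "red")
car_b = Car("Roadster", "red")

print(car_a == car_b)   # True  -- identical concepts
print(car_a is car_b)   # False -- different instances, different "context"
```

Python’s `==` compares what the objects are (the concept), while `is` compares which objects they are (the instance) – exactly the distinction between the concept of Ivy Sunkiller and this particular Ivy Sunkiller.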
Now let me pause a bit and try to define another term that I’ve used – context. The context of me is not just the room I am in and the chair I sit on, which would be the natural way to understand the word. Of all the particles in the universe, there is only a limited selection that can create my body at any given time – that selection, as well as its form, is my context. The replica of myself, as described before, would have to use different particles, hence be a different context. It’s critical to understand that the context can change over time (just as parts of my body decay and get recreated from different atoms all the time) without changing the abstract, but this can only happen as long as the changes that occur at any given time do not affect the context’s ability to sustain the abstract, because the abstract cannot exist without its context.
And it gets even more interesting than that, because the context of an abstract can be an abstract itself! This is easiest to explain with a computer, because – unlike our minds – we know how computer software works, at least conceptually! A computer – not just any computer, but one specific computer – is a physical, or concrete if you wish, context. Just like my body, it’s made of a limited number of very specific particles available in our universe, and any different selection of particles, even constructed into an identical concept, will create a different computer. Within the context of that computer – and not just at any given stage, but only within the context of a turned-on computer (with all the electrons running through it) – there can exist the abstract of machine code. Actually, “within” is a bit misleading and I’d argue that “on top of” is, metaphysically, more accurate, but let’s leave linguistics for now, we have enough definitions to work with!

When I say that “the machine code exists”, I don’t mean the concept of machine code, I mean the very specific instance of machine code that is being processed by our computer. On top of that specific processed machine code there can exist a series of assembler instructions, on top of which can exist some ANSI C code, on top of which can exist an operating system, on top of which can exist some higher-level code such as a Python script. This uses a lot of mental shortcuts, and were I to say it at an IT conference it would probably make me look like an idiot, but for our understanding of an abstract existing within the context of another abstract it is good enough.

Note how at no step is the abstract separate from the actual physical computer: should I pull the power cord out, my context of a computer would no longer be able to sustain the abstract of machine code, and all of the abstract contexts that existed within it would, like dominoes, cease to exist. The software would die.
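The layering above can be sketched as a toy model in Python (all names here are my own invention, a deliberately simplified illustration, not real systems programming): a “machine” is the context, the program running on it is the abstract, and once the machine loses power the abstract can no longer be sustained.

```python
# A toy stack of contexts: a program (the abstract) only exists while
# its machine (the context) is powered and able to sustain it.

class Machine:
    def __init__(self):
        self.powered = True

    def step(self, instruction):
        if not self.powered:
            # The context can no longer sustain the abstract.
            raise RuntimeError("context gone: the abstract ceased to exist")
        return instruction()

machine = Machine()
program = [lambda: 1 + 1, lambda: 2 * 3]  # the "abstract" running on top

results = [machine.step(instr) for instr in program]
print(results)  # [2, 6]

machine.powered = False  # pull the power cord
try:
    machine.step(lambda: 0)
except RuntimeError as err:
    print(err)  # the software dies with its context
```

Note that the program’s instructions are unchanged after the power is cut; what vanished is the context’s ability to run them – which is the whole point of the domino metaphor.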
Ironically, we are very lucky that we don’t live forever, because living forever would make our attempts to understand ourselves much, much harder. People die all the time; in this very moment someone died – *snap*, gone – forever. *Snap* – another one, like flies! We understand death, the moment of dying, as the moment when we, our beings, cease to exist (well, we the normal people, that is, not those nutcases believing in the supernatural). Our bodies, most of the time at least, do not cease to exist when we die, nor are they, by and large, formed from any different atoms than before death, so what changed? Just like the moment I pull out the power cord of a computer its context stops being able to sustain the abstract of machine code, the moment we die the context of our body, for whatever reason, stops being able to sustain the abstract we call mind – not just any mind, but a very specific mind of a very specific person – or rather, the very specific abstract of a very specific person. Ready for the bomb?
A mind is the lowest-level abstract construct, possible within any context, that has the series of properties (defined or not) we associate with a sentient being; thus any particular instance of a mind is someone.
We can name a lot of other abstracts that can exist within the context of a mind – consciousness, thoughts and emotions, to name a few. We might have problems with their definitions, or even grasping what those things really are, but we can agree about one thing – all of those cease to exist when the mind, their context, ceases to exist. I am my mind, therefore I am an abstract.
I’m not going to risk saying that emotions are software, but emotions are definitely an abstract concept, one that exists within the context of a mind – another abstract. What makes emotions what they are is not the chemical processes that happen in a body – we know the same chemical processes can happen in a dead body yet not produce any emotions at all – emotions are felt within the context of a mind: no mind, no context, no emotions. If we assume – and I don’t like to assume things, but if we do assume – that having emotions is one of the properties of a sentient being (and simultaneously, of the mind), then creating a strong AI is impossible unless we can create, abstract or not, a context in which the emotions can operate. Whether that is possible or not I do not know, but what I do know – what I can conclude – is that if it is possible, then those emotions of the strong AI will not be conceptually different from mine or yours, dear reader! (And what if some strong AI finds this text one day and reads it, oh my!)