Down the rabbit hole of creative computation 18 Aug 2013

Cyborg cat

Brief summary

Owing to the length and depth of this post, I’ll start with a short summary of what follows. I argue that what we perceive as human-level intelligence and creativity is impossible to replicate (not emulate, replicate) on classical computing systems. I am also aware of a scenario in which I might be wrong, and I outline it at the end of my rant. Given my lack of expertise in the fields of AI, neurology, and psychology, comments and critiques are more than welcome.

Setting the stage

There was an interesting link on HN this morning pointing to a recently published paper titled “On our best behavior”. The paper argues that current AI behavioral tests (specifically, the Turing test) are flawed, in that they reward systems that rely on deception to answer questions rather than on actual intelligence. In his paper, Mr. Levesque defines the overarching question for the science of AI as the following:

“How is it possible for something physical (like people, for instance) to actually do X?”

Followed by:

“Can we engineer a computer system to do something that is vaguely X-ish?”

I have not studied the field of Artificial Intelligence to any extent, so I am not qualified to make broad statements about the feasibility of creating intelligent systems. I have, however, been working with computers for some time now, and understand them on an extremely deep level. Furthermore, I’ve spent quite some time pondering what consciousness is, and what separates us from the supercomputers attempting to mimic our behavior.

I have come to the conclusion that, on purely classical (non-quantum) computing systems, true intelligence is and will forevermore be impossible to replicate. There is much that I do not know, a fact that will always be true, but I understand enough about the mechanics of classical computers and those of our brains and neural networks to wholeheartedly make that statement. Based on the little that I know about quantum physics and physics in general, there are signs that my arguments are potentially flawed, and I will cover these cases at the end of my rant.

Thinking cat

Divergent and Convergent thinking

I’ll start by breaking down human intelligence to one underlying ability: creativity. I believe that because pure creativity is impossible on linear, classical computing systems, so too is what we know as human intelligence.

Convergent thinking is what we are taught in schools: the ability to search for the single correct answer to any given question. Our minds are said to converge upon the correct answer. Divergent thinking is the opposite of this, namely the process of exploring and drawing connections between multiple possible solutions. In his book “Outliers”, Malcolm Gladwell goes over these two styles of thinking and makes the point that those we view as creative are simply better at divergent thinking. Creativity requires one to think “out of the box” and make unlikely connections between otherwise unrelated concepts. Uses for a single paperclip? One could fashion a pitiful frying pan handle. So crazy it just might work.

So, what does it mean to be creative? How exactly do we perform divergent thinking? While I have no idea about the technicalities of the latter, I’ve isolated an interesting property of our brain that appears to aid creativity: the ability of our subconscious to directly influence our conscious stream of thought. Epiphanies are described as “enlightening realizations” which spontaneously occur to us. All epiphanies and creative concepts emerge in our minds and slowly rise up to our conscious awareness. This occurs while we are actively performing other tasks, and is a result of the human brain’s distributed nature. Our minds process things in parallel, at the level of individual neurons and the connections between them, and this allows those parallel processes to affect each other without being asked.

So when you are brainstorming potential names for your next startup, or designing that perfect logo, your mind is constantly serving up novel ideas for you to consider. These ideas collectively emerge as a result of your stored memories and current sensory input. One can think of our brains as extremely complex computers, with input provided through both our senses and our memories.

A short thought experiment

I want to make the point that our minds contain two systems existing in unison: our consciousness and our sub-consciousness. My argument is that our sub-consciousness is the source of our creativity, and is something we cannot replicate on linear systems.

Pretend you were born in a black box. Worse yet, you are blind, deaf, anosmic, perpetually numb, and ageusic (unable to taste). Essentially, you lack all five senses (let’s include balance as well), and have no previous experiences. If a chess set materialized in front of you, you would not even know it was a game. If you were tasked with playing it, you would be met with extreme difficulty.

On the other hand, a chess grandmaster plays chess with ease, since his experiences and memories sub-consciously direct his actions. Although conscious thought is tragically linear, experienced chess players have their thoughts externally influenced by their own recollections while they “do their thing”. The source of this effect is also the source of our creativity. Thoughts are injected into our stream of consciousness without us asking for them, and they are not bound by the realm of reason the way our consciousness is.

Keep going cat

What does this have to do with AI?

Right, I’ve strayed off topic a bit. The point of this post is to make the argument that Strong AI is impossible on classical computers. Why do I keep saying classical? Well, I haven’t thought too much about what effects quantum computers would have on the conclusions I’ve drawn here, and I know close to nothing about how they operate, so I need to leave that path open as a possibility.

“Classical” computers are what we know of as computers today (as opposed to quantum computers). They execute pre-stored instructions in a linear fashion, and in a relative vacuum compared to our brains. What I mean by this is that while a CPU is executing a single instruction, no outside impulses can affect it. Bar sunspots and gamma rays (a bit too exotic to consider, eh?), instructions are executed with no knowledge of each other.

Of course, modern processors execute as many instructions in parallel as possible in a pipeline, but each instruction is really only aware of itself and performs one concrete action. A CPU can be interrupted via what is known as an “interrupt”, but that merely sets a flag for the CPU to check once the current instruction completes. It does not reach in and supply new, novel instructions.
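To make that flag-checking model concrete, here is a toy sketch (in Python, and very much not how real silicon works) of a “CPU” that runs pre-stored instructions one at a time and only notices an interrupt between them; the program and handler are made up for illustration:

```python
# Toy illustration of the flag-based interrupt model described above.
# The "CPU" executes one pre-stored instruction at a time and only polls
# the interrupt flag between instructions -- never mid-instruction.

interrupt_flag = False

def raise_interrupt():
    # An external event can only set a flag; it cannot inject novel instructions.
    global interrupt_flag
    interrupt_flag = True

def run(program):
    global interrupt_flag
    for instruction in program:      # strictly linear execution
        instruction()                # each instruction knows nothing of the others
        if interrupt_flag:           # the flag is only checked *after* the instruction
            interrupt_flag = False
            print("-> handling interrupt with a pre-written handler (nothing novel)")

program = [
    lambda: print("instruction 1"),
    lambda: raise_interrupt() or print("instruction 2 (interrupt raised mid-stream)"),
    lambda: print("instruction 3"),
]
run(program)
```

Note that the interrupt handler itself is just more pre-written code: the flag changes when it runs, never what exists to be run.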

As such, computers cannot come up with novel ideas on their own. They are entirely pre-programmed, as is their behavior. What is known as an “intelligent” program is really anything but. Software appears intelligent, but under the hood it is just executing pre-written instructions with no chance of novelty.

Consciousness vs sub-consciousness

This one-instruction-at-a-time flow is how our conscious mind operates. Although we feel like we are multi-tasking, we only ever perform one thought at a time. I am writing this post while chatting on Skype, thinking of what album to play next, and considering changing my wallpaper. Each thought fully occupies my attention for an instant and is then replaced by the next. Each is offered a roughly equal slice of time for processing, much like the time-slicing our multi-tasking operating systems perform.
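For what it’s worth, here is a minimal sketch of the round-robin time-slicing I’m comparing conscious attention to (the “thoughts” and their slice counts are obviously made up):

```python
from collections import deque

# Minimal round-robin "scheduler": each thought gets one time slice, is
# suspended, and the next thought takes over. Only one thing is ever
# actually being attended to at any given instant.

thoughts = deque([
    ("write blog post", 3),   # (task, remaining time slices)
    ("chat on Skype", 2),
    ("pick next album", 1),
])

while thoughts:
    name, slices_left = thoughts.popleft()
    print(f"attending to: {name}")                  # the single active thought
    if slices_left > 1:
        thoughts.append((name, slices_left - 1))    # goes to the back of the queue
```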

Your computer’s CPU can only ever perform one action per core, with each core operating independently. This is not like our brains, where neurons can affect each other without asking for permission. Yes, cores can communicate with each other, but they must do so explicitly. No novel instructions or unplanned behavior can occur. Computers can even program themselves, but even then, the resulting program can be predicted with perfect accuracy, since the process by which the new program is made is linear. Our computers are slaves to their own linearity, which prevents creative thought.
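As a rough sketch of what “explicit” communication looks like (Python processes standing in for cores; the doubling task is arbitrary), note that nothing crosses from one side to the other unless it is deliberately sent and deliberately received:

```python
from multiprocessing import Process, Queue

# Two processes standing in for two cores. They share nothing by default;
# the only way one can influence the other is an explicit, planned hand-off.

def worker(inbox, outbox):
    value = inbox.get()       # explicitly wait for a message
    outbox.put(value * 2)     # explicitly send the result back

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(21)             # the "other core" must deliberately send this
    print(outbox.get())       # 42 -- perfectly predictable, every run
    p.join()
```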

Our conscious thought is akin to classical computers, but our sub-conscious is entirely different. It is the source of our creativity (pure opinion, but it seems to make sense), and is what separates me from my laptop.

Winograd Schemas

I mentioned Mr. Levesque’s paper to speak about the new style of tests for intelligent systems. He presents Winograd schemas as superior (sets of) questions, as opposed to the classic, google-able “What was Einstein’s most important contribution?”. An example Winograd schema is:

“The large ball crashed right through the table because it was made of styrofoam. What was made of styrofoam?”

With the matching question being: “The large ball crashed right through the table because it was made of steel. What was made of steel?”

Better yet, let’s take out the dynamic element and make it highly ambiguous: “The large ball crashed right through the table because it was made of X. What was made of X?”

Those are easy; consider: “The sack of potatoes had been placed below the bag of flour, so it had to be moved first. What had to be moved first?”

Our sub-consciousness serves up answers to these questions almost instantly, without any conscious mental effort on our part to recall the specific properties of styrofoam or steel (well, for most of us). Any “intelligent” program would need to look up the properties of the various objects in the question and attempt to guess whether we are referring to the table or the ball. Not to mention that the sizes of the table and the ball are not stated anywhere. If both were made of the same type of wood, a giant ball and a miniature table would probably lead to a surprising answer. The computer would have to explicitly guess at the size of both, while we have a “feeling” for what size the table and ball should be. At any rate, we expect the table to be broken, so we automatically imagine it as larger but less resilient than the ball. Computers don’t have these sub-conscious expectations.
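To make the contrast concrete, here is a hedged sketch of the kind of explicit lookup such a program would have to fall back on; the property table and the guessing rule are entirely my own invention:

```python
# A naive, explicitly pre-programmed attempt at the styrofoam/steel schema.
# Every fact and every rule has to be spelled out in advance -- nothing here
# resembles the instant "feeling" we get for the answer.

properties = {
    "styrofoam": {"sturdy": False},
    "steel":     {"sturdy": True},
}

def resolve_referent(material):
    # Hand-written rule: a flimsy material explains the thing that broke (the table),
    # a sturdy material explains the thing that broke through (the ball).
    return "the ball" if properties[material]["sturdy"] else "the table"

for material in ("styrofoam", "steel"):
    print(f"made of {material}: 'it' refers to {resolve_referent(material)}")
```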

Possibility

A glimmer of classical hope

So, I’ve made the point that, as opposed to the distributed nature of our brains, computers are linear in nature and therefore lack any sort of emergent sub-conscious. As a result, they lack creativity and the ability to “feel” their way to answers. They are exceptionally good at convergent thinking, but literally cannot partake in divergent thinking.

What scientists in the AI field have been doing for some time now is simulating neural networks using classical computers. Although the computers are still entirely deterministic, they simulate distributed reactions that are susceptible to simulated interference. I believe that, thanks to the simulated distribution and fully parallel processing, we can simulate what we perceive as consciousness, but not sub-consciousness/creativity. Creativity relies (or so it appears to me) upon true novelty and randomness, something which is impossible for computers to exhibit.
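To illustrate the determinism I mean, here is a tiny sketch of a simulated “network” (just one layer of random weights, nothing like a serious model): seed it identically and every run is identical, bit for bit:

```python
import random

# A toy simulated "neural network": one layer of random weights applied to a
# fixed input. With the same seed, the whole simulation is exactly reproducible:
# distributed in form, but with no true novelty anywhere.

def simulate(seed):
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    inputs = [0.5, -0.2, 0.9]
    # forward pass: plain weighted sums
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

print(simulate(42))
print(simulate(42))   # identical output, every single time
```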

This, of course, leads us to an interesting paradox and a fault in my argument, which I don’t know how to address. From a physical standpoint, our brains ought to be fully simulatable. At their lowest levels they are nothing but deterministic chemical reactions! That is, of course, above the quantum level. Since creativity appears to arise from true novelty, I postulate that our intelligence, what separates us from linear computers, arises as a result of quantum uncertainty. It’s the only source of non-determinism in nature that I know of, and it makes sense as long as there are components of our neurons that are small enough to be affected by quantum effects.

The glimmer of hope stems from the fact that the transistors in our processors are getting small enough to be affected (not yet, but soon) by quantum uncertainty.

Oh yeah, and quantum computers! So crazy it just might work.