Can Computers Think?

The big ol’ question! I must say that as a Computer Science major I get asked this question more times than you’d expect.

I also have to confess that I cringe a bit every time I hear it.

That’s because this is a classic example of an ill-posed question: one without a definitive answer, mainly because any answer depends on our own personal philosophical views.

Understanding the Question

Let’s start by getting on the same page about what this question actually means. The mere wording already poses a problem: more than half of the words don’t have a clear definition.

So let’s take it step by step:

  • “Can”: simply means that an actuator has the capacity to perform an action. It’s also the easiest word in the sentence. No surprises so far.

  • “Computers”: most people would think this word is self-evident, that it means the device I’m writing this on. However, that definition depends on what someone in the marketing department decided to call a Computer.

Which isn’t very helpful, since it excludes your smartphone, your smart watch, your smart espresso machine, and so on. So, for now, let’s define a computer as any machine that can be programmed through a Turing-complete interface.

Your laptop, smartphone and “Minecraft”: all Computers!

And finally the hardest one:

  • “Think”: you, the reader, will notice immediately that the word “think” has some fuzzy edges to it. Can dogs think? Of course! Can plants think? It doesn’t feel like they do; however, we can’t rule it out definitively. Sunflowers do move to face the sun and catch more light. Who can say that this is not the result of thinking?

What we are trying to capture with the word “think” is not necessarily a biological process but simply the existence of conscious “thought”. And I do appreciate the irony of defining a word with itself.

But what we are looking for is “self-awareness”.

So for now, we’ve reduced our question to the following:

Is a computer capable of being self-aware?

Self-awareness in a Computer

Now of course when we say that a computer is self-aware, we are not saying that the hardware itself is self-aware, but rather the program running on it.

With this in mind, I would like to ask you a simple question:

print("I think, therefore I am")

Is this simple Python program self-aware? It is, after all, stating its existence. Is that not enough to be considered thoughtful?

Now, you may be thinking:

“Hold on. I don’t care if a print statement is thinking. Answer the real question: is Skynet possible? Is artificial intelligence gonna take over the world?”

If what you fear is a program that will seek to destroy you as soon as it notices you fear it, you don’t even need artificial intelligence.

You’re picturing a closed-loop control system. Just like an air conditioner:

while True:
    if humanFearIndex > 0.3:  # 0.3 is considered to be a safe amount of fear
        killHumans()
    else:
        print("All Hail Skynet!")

But of course, no programmer would ever think of writing anything like that, so we’re safe…

“But couldn’t an artificial intelligence write that snippet itself?”

No.

“But what if it could?”

Good point! So let’s talk about the state of the art in artificial intelligence.

Artificial Intelligence and Machine Learning

All jokes aside: we’ve actually arrived at a serious conclusion already. For a machine to think freely like a human being, it would need to be able to create new behavior from scratch, simply by existing.

A program like this would not only have to understand what it needs, but also come up with a solution and modify itself to be able to carry it out.

“And can we do this using Machine Learning?”

Well, not really. You see, Machine Learning, while sounding very awe-inspiring and complicated, has a simple goal: recognizing patterns. And we do this through classification.

The idea is this: if I want to know, for example, whether a piece of fruit is an Apple or a Banana based on some information, I can simply feed the computer examples and let an algorithm recognize the pattern.

For example, after feeding the following table:

Color   | Width  | Height  | Fruit
--------|--------|---------|-------
#f46542 | 9.5 cm | 9.3 cm  | Apple
#f45641 | 9.7 cm | 9.5 cm  | Apple
#f1f441 | 4.2 cm | 25.3 cm | Banana
#eef441 | 3.9 cm | 23.1 cm | Banana

to an implementation of the k-nearest neighbors algorithm, it would treat each example as a point in a 5-dimensional space (red, green, blue, width and height) and, in that coordinate system, decide which category fits a new point best by calculating how close it lies to the known examples (using the Euclidean norm).

In this case, it would probably classify anything taller than 15 cm as a Banana.
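To make that concrete, here’s a minimal sketch of a 1-nearest-neighbor classifier in plain Python, built on the little fruit table above. The helper names are mine, and a real implementation would normalize the features so the color channels (0 to 255) don’t drown out the centimeters:

import math

def hex_to_rgb(color):
    # "#f46542" -> (244, 101, 66)
    return tuple(int(color[i:i + 2], 16) for i in (1, 3, 5))

# The table from above, as (color, width, height, fruit)
examples = [
    ("#f46542", 9.5, 9.3, "Apple"),
    ("#f45641", 9.7, 9.5, "Apple"),
    ("#f1f441", 4.2, 25.3, "Banana"),
    ("#eef441", 3.9, 23.1, "Banana"),
]

# Each example becomes a point in the 5-dimensional (r, g, b, width, height) space
training = [(hex_to_rgb(c) + (w, h), fruit) for c, w, h, fruit in examples]

def distance(a, b):
    # The Euclidean norm mentioned above
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(color, width, height):
    # 1-nearest neighbor: take the label of the closest known example
    point = hex_to_rgb(color) + (width, height)
    return min(training, key=lambda example: distance(example[0], point))[1]

print(classify("#f2f243", 4.0, 24.0))  # -> Banana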

So now, with enough data, this program should be able to successfully distinguish between two pieces of fruit. But is this enough for a computer to understand what it needs?

Yes and no. Machine learning is great at recognizing patterns because it’s great at interpolating data based on what it was trained with.

If you gave the previous example a Grapefruit, it would stubbornly call it an Apple, because it doesn’t understand what a Grapefruit is. Extrapolation is still a serious problem in numerics, not to mention in learning.

So we have to modify the machine learning algorithm so that it recognizes when it has encountered something it doesn’t know. This can be done by classifying anything whose distance to the nearest neighbor is above a certain threshold as an unknown case.
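Bolted onto the sketch from before, that might look like this (the threshold of 50 is completely made up for illustration):

def classify_or_unknown(color, width, height, threshold=50.0):
    point = hex_to_rgb(color) + (width, height)
    # Distance to the single nearest training example
    nearest, label = min(
        (distance(example, point), fruit) for example, fruit in training
    )
    # If even the closest known fruit is far away, admit ignorance
    return label if nearest <= threshold else "unknown"

print(classify_or_unknown("#fa8072", 12.0, 11.5))  # a Grapefruit -> "unknown"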

And by applying reinforcement learning, this algorithm should be able to learn new cases: the computer is rewarded for favorable outcomes and will, therefore, learn to replicate them.
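To give a feel for the idea, here is the simplest flavor of reward-driven learning I can sketch: a two-armed bandit with an epsilon-greedy strategy. The environment and reward function are entirely made up, and real reinforcement learning also deals with states and long-term returns, but the core loop is the same: try things, get rewarded, and repeat what paid off.

import random

actions = ["press_button", "do_nothing"]
values = {a: 0.0 for a in actions}  # current estimate of each action's value
counts = {a: 0 for a in actions}

def reward(action):
    # Made-up environment: pressing the button is secretly the favorable outcome
    return 1.0 if action == "press_button" else 0.0

for step in range(1000):
    # Mostly exploit the best-looking action, but explore 10% of the time
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward
    values[action] += (reward(action) - values[action]) / counts[action]

print(values)  # "press_button" ends up with a value close to 1.0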

Where’s Skynet then?

It all sounds very plausible, but we have arrived at the true problem of creating a self-aware AI. Who determines what’s a favorable outcome? Who will explain to the algorithm whether being disconnected is a good thing or a bad thing?

Should we then apply an extra layer of learning to determine which outcomes should be replicated? It becomes clear that, going this way, we will always be adding more and more layers of learning without ever actually giving this machine a purpose.

How can a machine extrapolate what it should do in cases it can’t foresee and possibly modify itself during runtime if it doesn’t know what it wants?

So? Can Computers Think?

The answer is yes, no, maybe. I don’t know. But if they can, it’s not the way you “think”.

TL;DR?

Can Computers Think? Probably not.

Fin~

Written by:

Mathias Quintero

03 Aug 2017