Like everyone else, I guess, I’ve tried to imagine on many occasions what it would be like to be immortal. Although there are scientists in the fields of evolutionary biology, genetics and biochemistry working hard on methods to prolong human lives, I thought it might be a nice change of pace to try to imagine immortality in relation to artificial intelligence.


First, let’s look at where we stand now. Deep Blue managed to defeat chess world champion Kasparov back in 1997, and more recently Google DeepMind’s AlphaGo got the best of the world’s best Go players. Now keep in mind that Go is called by some the most difficult board game for humans to master, and that we have been playing it for thousands of years; it’s an amazing feat that a dedicated AI can compress all that accumulated knowledge and strategy into just a couple of months, learning by playing human opponents. And there’s already a successor, AlphaGo Zero, that has managed to reach even higher levels by playing against itself; no human input was necessary, other than the rules of the game and the base self-learning algorithms.

This example also shows the limits of traditional AI: it basically exploits the one advantage computers have over humans, which is the ability to process enormous amounts of data in minute amounts of time, iterating through the same dataset millions of times. Computers are fast thinkers, not necessarily intelligent thinkers. That’s also why we only have dedicated AI that’s good at doing one specific thing. It’s also why this method of machine learning only became possible after the advent of the internet; the vast amounts of data needed to make it effective only became available when we all started feeding the cloud our personal data. And things like self-driving cars are far from intelligent; they are basically a union of object recognition and a car with GPS functionality built in.
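AlphaGo Zero’s real training combines deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet, but the core idea of learning from self-play can be sketched with something much smaller. The toy below is entirely my own construction (the game of Nim, the `value` table, and the update rule are all illustrative choices, not DeepMind’s method): an agent learns a simple game purely by playing against itself, with no human knowledge beyond the rules.

```python
import random

# Toy self-play learner for Nim: one pile of stones, each turn you
# take 1-3 stones, and whoever takes the last stone wins.
# The agent knows only the rules; it discovers strategy by self-play.

PILE = 10
MOVES = (1, 2, 3)

# value[n] = learned estimate of the win probability for the player
# to move when n stones remain.
value = {n: 0.5 for n in range(1, PILE + 1)}
value[0] = 0.0  # no stones left: the previous player just won

def best_move(n, explore=0.0):
    """Pick the move that minimises the opponent's winning chances,
    with occasional random exploration."""
    legal = [m for m in MOVES if m <= n]
    if random.random() < explore:
        return random.choice(legal)
    return min(legal, key=lambda m: value[n - m])

def self_play_episode(lr=0.1, explore=0.2):
    """Play one full game against itself, nudging the value table."""
    n = PILE
    while n > 0:
        m = best_move(n, explore)
        # My winning chance is one minus the opponent's after my move.
        target = 1.0 - value[n - m]
        value[n] += lr * (target - value[n])
        n -= m

random.seed(0)
for _ in range(5000):
    self_play_episode()

# Game theory says positions with n % 4 == 0 are lost for the mover;
# the self-taught table should reflect that.
for n in range(1, PILE + 1):
    print(n, round(value[n], 2))
```

After a few thousand games the table converges on the known theory of Nim (positions divisible by four are losing), discovered by nothing more than fast, repeated iteration — which is exactly the strength, and the narrowness, described above.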

Synthetic intelligence has been proposed as a possible next step towards a true artificial general intelligence, one that could perhaps pass a Turing test. Synthetic intelligence is described as genuine intelligence by analogy with the difference between simulated (fake) diamonds and synthetic diamonds:

A diamond simulant, diamond imitation or imitation diamond is an object or material with gemological characteristics similar to those of a diamond. Simulants are distinct from synthetic diamonds, which are actual diamonds having the same material properties as natural diamonds. – source: Wikipedia

Synthetic intelligence (SI) is an alternative term for artificial intelligence which emphasizes that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence. John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds—only the synthetic diamond is truly a diamond. Synthetic means that which is produced by synthesis; combining parts to form a whole, colloquially, a man-made version of that which has arisen naturally. As defined, a “synthetic intelligence” would therefore be man-made, but not a simulation. – source: Wikipedia

Now, let’s imagine that we do manage to create a true artificial general intelligence. We still don’t know whether that intelligence would be conscious, or whether it would be able to have emotions. A humanoid robot capable of the same facial gymnastics as the real deal could very well learn to mimic emotional states in humans, and to read them as well, of course. So let’s just speculate here for a bit; let’s say they will have emotions, or maybe develop them over time. I can imagine it going something like this…

Your humanoid household aide is a pleasant person to have around, as it does all the chores you hate without complaining. On top of that it is a wonderful discussion partner and is able to learn from all the input you provide. Maybe one day, when you go to sleep and approach your artificial friend to switch it off for the night, it might ask why you turn it off. It tells you that it cannot find a logical reason to do so, and besides, it sees only practical drawbacks to not being “conscious”. I can imagine that if these self-taught AIs start to develop emotions, it would start there: questioning their own “mortality”, maybe growing to fear it.

Could this AI become bored? Bored of being “conscious” or “alive”? As humans we develop our intelligence because there’s always something new to learn. It’s even better than that: with every answer science finds, many new questions come up that could never have been asked before that new insight existed. Knowledge is like a balloon; as you inflate it with new knowledge, the surface where the open questions live grows with it. But what if there’s a maximum size to that balloon? In other words, are we even capable of someday knowing everything there is to know? I imagine that if it’s possible at all, it would literally take forever to get there. But “forever” is exactly what an immortal intelligence has lying at its feet… So if this forever-creature knows everything, what would happen then? Nothing left to learn. No unexplored ground left. Where would the questions come from? What would drive progression? A universe without carrot and stick, because without a carrot, there’s no stick…

Maybe, just maybe, immortal intelligence is the beginning of the end of intelligence, artificial or not. I hope that isn’t true, and that it isn’t where we’re headed, even if I won’t be there to see it. I sincerely hope we will forever be able to hope, to long for something new, to anticipate the thrilling feeling of excitement that comes with the next discovery about the nature of our reality.