That was 12 years ago. Yes, we had a computer that could beat a human at chess. And we had some algorithms that could learn faces. And even today we have IBM’s Watson, which can beat geniuses at Jeopardy. But… none of these computers are smart. They have no intelligence.
Intelligence, as Merriam-Webster defines it, is "the ability to learn or understand or to deal with new or trying situations."
Let’s look at the three factors in the above definition of intelligence.
Watson can improve the accuracy of its knowledge through absorbing more information, weighing the accuracy of that information against information it already has, and making connections between that and related information.
Is this learning? Maybe in very loose terms. It’s learning in the same way that Netflix might learn over time that you’ll probably like Inception if you liked both The Matrix and Catch Me If You Can. And it’s only mildly better than my couch “learning” the impression of my rear end and developing a depression over time.
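To make the Netflix analogy concrete, here is a toy sketch of the kind of pattern-matching such a recommender does: item-based collaborative filtering over a tiny, made-up ratings table. The users, ratings, and exact method are hypothetical illustrations, not Netflix's actual system.

```python
# Toy item-based collaborative filtering: "learning" that people who liked
# The Matrix and Catch Me If You Can tend to like Inception.
# All users and ratings below are invented for illustration.
from math import sqrt

# user -> {movie: rating on a 1-5 scale}
ratings = {
    "alice": {"The Matrix": 5, "Catch Me If You Can": 4, "Inception": 5},
    "bob":   {"The Matrix": 4, "Catch Me If You Can": 5, "Inception": 4},
    "carol": {"The Matrix": 1, "Catch Me If You Can": 2, "Inception": 1},
}

def similarity(movie_a, movie_b):
    """Cosine similarity between two movies' rating vectors,
    computed over users who rated both."""
    users = [u for u in ratings if movie_a in ratings[u] and movie_b in ratings[u]]
    dot = sum(ratings[u][movie_a] * ratings[u][movie_b] for u in users)
    norm_a = sqrt(sum(ratings[u][movie_a] ** 2 for u in users))
    norm_b = sqrt(sum(ratings[u][movie_b] ** 2 for u in users))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def predict(user, movie):
    """Predict a rating as a similarity-weighted average of the
    user's ratings for other movies."""
    num = den = 0.0
    for seen, r in ratings[user].items():
        if seen == movie:
            continue
        s = similarity(seen, movie)
        num += s * r
        den += s
    return num / den if den else 0.0

# A new user who liked The Matrix and Catch Me If You Can:
ratings["dave"] = {"The Matrix": 5, "Catch Me If You Can": 5}
print(round(predict("dave", "Inception"), 2))
```

No understanding is involved: the program just weights known ratings by statistical similarity, which is the point of the couch comparison above.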
But you can’t give Watson a calculus textbook and say, “Learn this from scratch,” even though it could read the entire book in one-millionth of a second. Watson would be able to make plenty of assertions about calculus afterwards, but it wouldn’t be able to perform calculus unless someone specifically programmed that capacity into it. If Watson could do that, then it would truly be learning.
Can Watson, or any computer, grasp the meaning of things? This could be a huge philosophical argument with a maddening chain of tautological reasoning reminiscent of Louis CK’s hilarious comedy skit “Why?”
Instead, let me pose a hypothetical question. Say you told Watson the following: “If you beat my husband on Jeopardy, I’ll be upset at you.” Do you think Watson would grasp the simple meaning of your statement? And do you think Watson would wonder to itself, “Maybe I should consider losing because I don’t want this person to be mad… she might disassemble me.”
Assume, for a second, that Watson was Woody-Allen-sized and had a battery pack but was still immobile. If you dropped Watson off outside Grand Central Station near a pretzel cart, do you think it would learn to cope with its new situation? Would it figure out how to get someone to recharge it? Would Watson be able to ask for a ride back to IBM, perhaps promising its driver a monetary reward from IBM?
The broader the environment that an AI can understand and react meaningfully to, the closer it gets to being intelligent. A robot might be superb at learning to stack a few blocks, but if that’s all it can do, it’s not intelligent. Humans are the most adaptable. And although we may not be able to adapt to, say, suddenly being pushed off a cliff, we’re intelligent enough to scream on the way down.
Artificial Intelligence still doesn’t exist. What we have are artificial creations that appear intelligent in tightly constrained situations in which they have been programmed to excel (and even then, not always). We code algorithms for computers to learn specific tasks, and that’s all they’ll be able to do until we update their algorithms. Just because we have cars that can drive themselves and avoid accidents doesn’t mean that we’ve discovered a brilliant new approach to AI. (And if they do hit you, they won’t feel bad about it.) This quote pulled from Wikipedia’s history of AI sums it up nicely:
“…but mostly on the tedious application of engineering skill and on the tremendous power of computers today.”
This TED talk by Henry Markram about building a brain in a supercomputer describes the best approach I’ve seen. It doesn’t produce any intelligent results yet, and it’s the type of approach that will feel meandering and vague… until one day when it all comes together and a computer makes a sound of its own accord – not a sound that was programmed into it, but rather a sound born of its electrical signals swimming around in the womb of its malleable mental structure, and that sound may be reminiscent of the crying of a newborn baby.
Why would it make a sound? Not because it was programmed. But because it could.
So how does this relate to my techno-thriller novels? I asked myself, if I had to make an intelligent computer – a truly intelligent computer – how would I do it?
I decided that I would grow one.
We don’t know enough about the structure of the brain to model it entirely yet. But why start with the most complex part of human biology? Why not start much simpler? Start with DNA.
DNA is nature’s compression algorithm. If we could just unzip it, we would have all we’d need to create truly intelligent programs. And guess what? Nature already knows how to unzip DNA into a human. So let’s simulate that.
Let’s build an incredibly detailed simulation of a human egg. Simulate conception, and then… just keep the simulation running. A human will grow. It will have a brain. Given the proper inputs and care, it will be intelligent.
There are myriad reasons why this isn’t possible today… but… what if we were wrong? What would it take? Who could do it? How much would it cost?
And then, what if this intelligence became unfathomably smart, but its human side remained? And what if you were the only one to realize the danger it would put the world in? How would you stop it?
This is what The Day Eight Series is about. I wanted to write fast and fun thriller novels, but I also wanted to explore questions about unfathomable intelligence, about existence, and about our universe. If these topics excite you, or if you’ve enjoyed novels by Michael Crichton or Dan Brown, or even if you’re just looking for a fun read, check out my novels or see what people are saying about them.