Stephen Hawking sees the danger of artificial intelligence. So does Elon Musk. Oxford professor Nick Bostrom, head of the Future of Humanity Institute, has written a whole book about it. Even the scientists at Google DeepMind, who are developing artificial intelligence, seem a little spooked about it.

What they all say, in one form or another, is that AI could be dangerous. Some go further and say that if or when true artificial intelligence emerges, humans will face extinction. We will simply be outcompeted by a superior new species, one that we ourselves have created.

A few years ago Shane Legg, a co-founder of DeepMind, said pretty much exactly that: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this." Legg added a caveat after that, but it was still kind of scary.

More recently, Hawking said that AI "could end the human race." Nice!

Since it's a new year, and since it's the weekend, why don't we ponder the possibility that sometime in the (relatively speaking) not-too-distant future, our miserable species will vanish.

The timeline

When will this all happen? Ray Kurzweil (now at Google) thinks we'll achieve AI by 2029. Others in computer science put the date closer to 2050. Then again, computer scientists always believe they can get things done faster than they actually can, and researchers who work in artificial intelligence are the worst offenders. The joke about AI is that "full artificial intelligence is 20 years away, and always will be."

Some people believe we will never get there, that AI is a quixotic quest. You can choose to believe this, and if so, there's no point in reading further.

However, a lot of very smart and well-funded people (think: Google) are putting a tremendous effort into creating super-intelligence. I'd say they will get there.

Maybe they have the timeline wrong. Maybe it will take until 2080, 2100, or even 2200. The point is that the timeline doesn't matter as much as the fact that this seems to be where we are headed.

But achieving artificial general intelligence is just the starting point. Once machines achieve intelligence they will continue to evolve, and at a breathtaking pace. Each generation will create ever-more-intelligent descendants.

In a relatively short time, computers could be millions or even trillions of times smarter than we are. These machines will seem to us like a new species — not a biological species, but a species nonetheless.

What happens to us when this evolutionary leap takes place? Those of us who are just plain old humans, with our limited brains, our weak bodies prone to illness and injury?

According to people like Kurzweil, our descendants will simply enhance themselves with technology in order to keep up. Humans will merge with the machines.

But if those semi-biological hybrid creatures emerge — will they really be the same species as us? Will they be human?

You can get into a game of semantics and say that the definition of "human" will change. But for the sake of argument let's assume that we define "human" as the species as it exists today.

What happens to that species? The one that we are all a part of? It seems to me that there is a grim but inescapable conclusion: That species will become extinct.

March of the machines

Kevin Warwick, a professor of cybernetics at the University of Reading in the UK, lays out the following three-step logic chain:

1. Humans are the dominant species on earth.

2. It is possible that machines will become more intelligent than humans.

3. Machines will then be the dominant form of life on earth.

That's not a new idea. Those lines come from Warwick's 1997 book, March of the Machines: Why the New Race of Robots Will Rule the World, in which he concludes that there will be no escaping our fate.

"If asked what possibility we have of avoiding point 3, perhaps I can misquote boxing promoter Don King by saying that humans have two chances, slim and a dog's, and Slim is out of town," Warwick writes.

Should we turn back?

Maybe we should, but we won't. Three reasons:

1. The potential benefits of new technologies are so compelling that we can't just set them aside. Machine intelligence will enable us to solve problems that we otherwise couldn't with our puny brains. The machines could discover new forms of energy. They could gain a comprehension of biology and genetics so advanced that they could synthesize new life forms. They could build tools to study the universe and unlock the secrets of the cosmos. They could, in the wildest theories, break the constraints of Earth's orbit and go off exploring outer space. The machines might also harness the power of nanotechnology to fashion new forms for themselves.

2. There are competitive reasons, both military and economic. Turning away from AI would mean falling hopelessly behind. In fact, whoever dominates AI probably ends up dominating everything else.

3. The last reason we won't turn back is that the urge to discover and invent is so powerfully hard-wired into our DNA that we can't resist taking one step, then another, then another, even if it leads us over a cliff. We never have turned back, not with any technology, no matter how dangerous. In fact there has been a little burst of investment in AI companies recently, as the FT reported.

Eric Schmidt says there's nothing to fear

Which actually makes me more afraid.

Schmidt insists AI will only make the world better, and there's nothing to worry about. Then again, Schmidt would say that, right? He's the guy leading the company that's doing more than any other to develop AI.

People keep thinking of Google as a search engine company. Its real goal, I believe, is to become the dominant AI company in the world.

Google CEO Larry Page grew up with a dad who was a computer science professor at Michigan State University and an AI researcher.

I believe AI has been Larry Page's ultimate goal all along. Search was just a way to foot the bill for the really interesting stuff.

Also, ask yourself this. Stephen Hawking says AI "could end the human race." Eric Schmidt says that's not the case.

Which one of those two people do you think is (a) smarter; and (b) more trustworthy?

This might have been the point all along

A few years ago I befriended a computer scientist named Hugo de Garis. He's Australian but lives in China. He was recruited there to develop AI for the Chinese government, but turned away from his work after becoming worried about its implications.

The scary scenario is not that the machines will rise up and slaughter us all, but that we simply won't be able to keep up with them. They will outcompete us. We'll get left behind.

De Garis believed that the work of engineering our own successors has been our purpose all along, that we humans are merely a stepping stone to a higher, more advanced species.

Of course this seems unbearably cruel. After 200,000 years on the planet, we humans are just getting close to understanding some fundamental things about ourselves and the universe around us. We're unraveling our genetic code. We're figuring out how our brain works. We've taken baby steps in nanotechnology and quantum computing. We've dipped our toes into outer space.

Now it's time for us to go? Just as things are getting good? Well, maybe so. That's not so unusual, after all. Millions of species have come and gone, destroyed by the same genetic dice-rolling that eventually created us.

Plenty of species before us have succumbed to evolution's equation favoring the fittest. The difference with us would be that we humans would be the first species to do this to ourselves.

Better yet, we would do this while understanding the consequences — which is, when you think about it, both brilliant and phenomenally stupid at the same time.

We humans like to think we're unique, that somehow we are exempt from the rules of natural selection. We want to believe that we are not like every other species, that we're the ultimate creation and the best that evolution can do. But maybe the big truth we're hurtling toward is that in fact we're not so special after all.