
Peter Diamandis Weighs in on Artificial Intelligence

May 18, 2015 – We have seen headlines with dire warnings from brainiacs such as Stephen Hawking and Elon Musk about the risks of unrestrained artificial intelligence. Science fiction writers have painted pictures of a future where humans are subservient to our machines. I have researched and written numerous articles on advances in artificial intelligence almost from the first postings here at 21st Century Tech blog, and yet it seems we still don’t collectively comprehend the disruptive waves about to strike the shores of our science, technology and industry in the next decade and beyond. How this will permanently alter our world and our perception is a subject Ray Kurzweil has been speculating about for several decades, and Peter Diamandis, co-author of “Bold” and “Abundance,” shares his views on the same subject in his weekly email blast. Diamandis’ take is one I felt worth sharing. So here goes.

———-

Artificial Intelligence (AI) is the most important technology we’re developing this decade.

It’s a massive opportunity for humanity, not a threat.

So what is AI?

Broadly, AI is the ability of a computer to understand your question, to search its vast memory banks, and to give you the best, most accurate answer.

AI is the ability of a computer to process a vast amount of information for you, make decisions, and take (and/or advise you to take) appropriate action.

You may know early versions of AI as Siri on your iPhone, or IBM’s Watson supercomputer.

Watson made headlines back in 2011 by winning Jeopardy, and now it’s helping doctors treat cancer patients by processing massive amounts of clinical data and cross-referencing thousands of individual cases and medical outcomes.

Apple’s Siri rests in the palm of your hand, giving directions, making recommendations and even cracking jokes.

But these are the early, “weak” versions of AI. What’s coming this next decade will be more like JARVIS from the movie Iron Man.

But this technology won’t be just for Tony Stark.

Why AI is a Massive Opportunity

AI will level the global playing field.

Today, Google’s search engine gives a teenager with a smartphone in Mumbai and a billionaire in Manhattan equal access to the world’s information.

In the future, AI will democratize everyone’s access to services ranging from healthcare to financial advice.

AI will be your physician.

AI will be your financial adviser.

AI will be your teacher and that of your children.

AI will be your fashion designer.

AI will be your chef.

AI will be your entertainer.

And more…

And likely it will do all of these things for free, or nearly for free, independent of who you are or where you live.

Ultimately, AIs will dematerialize, demonetize and democratize all of these services, dramatically improving the quality of life for eight billion people, pushing us closer towards a world of abundance.

Why I Don’t Fear AI (At Least, Not For Now)

First of all, we (humans) consistently overreact to new technologies. Our default, evolutionary response to new things that we don’t understand is to fear the worst.

Nowadays, the fear is promulgated by a flood of dystopian Hollywood movies and negative news that keeps us in fear of the future.

In the 1970s, when DNA restriction enzymes were discovered, making genetic engineering possible, fearmongers warned the world of devastating engineered killer viruses and mutated life forms.

What we got were miracle drugs and extraordinary increases in food production.

Rather than extensive government regulations, a group of biologists, physicians, and even lawyers came together at the Asilomar Conference on Recombinant DNA to discuss the potential biohazards and regulation of biotechnology, and to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.

The guidelines they came up with allowed the researchers to move forward safely and continue to innovate, and we’ve been using them for four decades.

The cloning of Dolly the sheep in 1997 led to prophesies that in just a few years we would have armies of cloned super-soldiers, parents implanting Einstein genes in their unborn children, and warehouses of zombies being harvested for spare organs.

To my knowledge, none of this has come true. [A comment by me on this observation. We now have a South Korean company that offers a service to pet owners to clone their favorite dog or cat.]

The Benefits Outweigh the Risks

That being said, I do acknowledge that strong AI (versus narrow or weak AI) is different – it is perhaps the most important and profound technological development humanity will ever make. (Note: Strong AI is a thinking machine closer to human or superhuman thought, versus narrow AI, which is more like Siri or Google’s search engine.)

And, as with all technologies since fire and stone tools, there are dangers to consider.

However, as Ray Kurzweil has argued, I think the benefits are likely to outweigh the risks and dangers.

As Ray says, “The main reason I believe that AI will be beneficial is that it will be decentralized and widely distributed as it is today. It is not in the hands of one person or one organization or a few but rather in over a billion hands and will become even more ubiquitous as we go into the future. We are all going to enhance ourselves with AI. The world is getting exponentially more peaceful as documented by Steven Pinker’s book The Better Angels of Our Nature.”

A Tool, Not a Threat

AI will be an incredibly powerful tool that we can use to expand our capabilities and access to resources.

Kevin Kelly [founding executive editor of Wired magazine] describes it as an “opportunity to elevate and sharpen our own ethics and morality and ambition.”

He goes on, “We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it.”

In short, humanity will ultimately collaborate and co-evolve with AI.

In fact, at the XPRIZE, we’re currently working on designing an “AI-Human Collaboration XPRIZE” with our friends at TED.

When we talk about all of the problems we have on Earth, and the need to solve them, it is only through such AI-human collaboration that we will gain the ability to solve our grandest challenges and truly create a world of Abundance.

———-

If you believe that AI represents a road we should not take, I would love to hear your comments. Share them here.

Are you as convinced as Elon Musk and Stephen Hawking that advancements in this field represent an existential threat?

I share with you an email exchange I recently had with a friend on this very same subject. It appeared in a blog posting last November but I reprint it here just in case you missed it:

Me: AI, like every technology, has a dark side. When Asimov [the sci-fi writer of numerous robotics novels and short stories] created his rules of robotics, it was to put a framework around AI. If it is viable to program constraint into intelligence, then AI and humanity will coexist. Of course, to me what is far scarier is the fusion of AI and humans, the singularity.

My friend: Personally, I think it is foolish to think we can “program constraint into intelligence” considering that we do not practice restraint ourselves. It does worry me, but then again, I won’t be around to face the consequences. Perhaps we simply deserve our fate, and AI will create a better balance on the planet.

Me: There are “restraint” and “constraint,” and although the words have similar meanings, I like to think they are a bit different. If we hold AI back by restraint, then eventually the genie will leave the bottle. But if the algorithms we use to develop AI include limitations that inhibit behaviours that could harm humans, then such constraints will turn the relationship between AI and natural intelligence into a synergistic one.

When Asimov created his literary paradigm for robot behaviour, he put three limits on AI.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

For “robot” read “AI.”

Since then others have extended Asimov’s constraints to include:

4. A robot must establish its identity as a robot in all cases.

5. A robot must reproduce as long as such reproduction does not interfere with the first three laws.

Another constraint has been introduced as a 6th law.

6. A robot must know it is a robot.

I wrote a piece on this subject two years ago looking at machine ethics. You may not remember it but the link is: https://www.21stcentech.com/robotics-artificial-intelligence-update-machine-ethics-laws-robotics/.

The real question you are raising is one of programming limits. Can we program morality into AI? Can an AI entity given autonomy break the boundaries of its programming and become amoral? Computer scientists recognize that creating a thinking and learning autonomous machine needs a code of ethics, and that means developing a language and programming to turn ethics into logical mathematical formulae. Such programming shouldn’t leave the AI pondering incessantly over what to do, so it has to provide a schema for the robot to resolve ethical dilemmas quickly. Think about the little robot helper that I wrote about in the blog posting (link provided above). That robot has to determine quickly through observation what is best for the human it is assisting. So we know we can develop programs sophisticated enough to incorporate a moral code. Of course, as in anything we humans touch, we can create immorality in code as well.
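To make the idea concrete, here is a toy sketch of my own (purely illustrative – the class, fields, and function are hypothetical, not anyone’s actual robot code): Asimov’s ordered laws can be treated as a lexicographic priority, which gives the robot exactly the kind of schema for resolving an ethical dilemma quickly that I described above.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action scored against Asimov-style criteria.
    All names and fields are hypothetical, for illustration only."""
    name: str
    harms_human: bool = False
    obeys_order: bool = False
    endangers_robot: bool = False

def choose_action(candidates):
    """Resolve a dilemma quickly via lexicographic priority.

    The sort key mirrors the Three Laws in order: not harming a human
    outranks obeying an order, which outranks self-preservation.
    min() therefore picks the action whose highest-ranked criterion
    is best satisfied, in constant time per candidate.
    """
    return min(candidates, key=lambda a: (a.harms_human,      # First Law
                                          not a.obeys_order,  # Second Law
                                          a.endangers_robot)) # Third Law

# The robot obeys a human order even at risk to itself (Second Law
# outranks Third), but never at the cost of harming a human (First Law).
options = [
    Action("obey order, risk self", obeys_order=True, endangers_robot=True),
    Action("refuse order, stay safe"),
    Action("obey order, harm human", obeys_order=True, harms_human=True),
]
print(choose_action(options).name)  # → obey order, risk self
```

Of course, the hard part isn’t the priority ordering – it’s deciding what counts as “harm” in the first place, which is exactly where the programming of morality gets murky.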

My friend: My position remains unchanged precisely because of your last observation. All it takes is an immoral programmer. Or a sufficiently imprecise programmer. After all, algorithms are only as good as their imperfect creators.

On the other hand, I wonder if we can create a “kill switch,” back door or “dead hand” mechanism (as in the old days of railroading) if AI goes awry?

Me: Even Data on Star Trek had a kill switch. You just had to get to it before the android figured out what you were doing. My guess is a kill switch is mandatory. But once an AI begins replicating itself, unless the kill switch is mandated in the program and irremovable, I suspect we would have a runaway AI presence on the planet.

My friend: Thanks. As if I didn’t have enough to worry about.
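For what it’s worth, the “dead hand” mechanism my friend mentions can be sketched in a few lines. This is my own illustrative sketch, with hypothetical names, of the key design choice: the switch fails closed, so a lapsed human heartbeat withdraws the agent’s authority automatically rather than relying on the AI to shut itself down.

```python
import time

class DeadHandSwitch:
    """Toy 'dead hand' control, after the old railroading mechanism:
    a human operator must check in periodically, and a lapsed
    heartbeat silently withdraws the agent's permission to act."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        # The human signals "I am still in control."
        self.last_heartbeat = time.monotonic()

    def agent_may_act(self) -> bool:
        # The agent must pass this gate before every action; authority
        # expires on its own, which is the opposite of a kill switch
        # the AI could simply decline to press.
        return (time.monotonic() - self.last_heartbeat) < self.timeout
```

A self-replicating AI could, of course, copy itself without the gate, which is precisely why I argued above that the constraint must be mandated in the program and irremovable.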

So what do you think? You can ask Siri.


Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.

1 COMMENT

  1. Strong AI robots will all have perfectly encrypted RF deactivation codes that are unique to each robot person. (It’s almost technically trivial to make the deactivation system fail-safe and impossible to crack) It would be a good idea to install similar deactivation systems in human persons, particularly if the human persons are residents of Ferguson Missouri, or Baltimore Maryland, or Detroit Michigan. Nearly every major US city now has cultural blight zones where property and violent crimes are so rampant that moral people cannot live there even when housing costs are virtually free. That wouldn’t be so much of a problem if the felonious residents had kill switches.

    (I recently considered buying a very nice large residential property here in Houston at a bargain price, but I hesitated until I could learn why it was so cheap. Turned out that over 500 property crimes and a few armed robberies and rapes had been reported nearby in just the last year. The seller, along with every other honest person still left in the neighborhood, was highly motivated to get out before the cannibals ate him. I decided I didn’t want the $300,000 property, offered for sale at $100,000, even if it were free. That’s because society has not yet installed kill switch circuits in the cannibals.)

    Strong AI robots will not physically threaten humans. The threat is cultural and economic. In principle, robots could be very competent and cheap sources of labor, completely displacing the human workforce. John Henry of legend died in competition with the steam hammer. Robots will outperform steam hammers.

    Major corporations will invest heavily in universal robots that will work faster, more intelligently, and much cheaper than humans. Every human will want his own personal robot companion and house servant. The humans will have their robots waiting on them hand and foot, satisfying erotic appetites, and while the humans sleep the robots will take out the trash, wash the clothes, change the diapers, cut the grass, and put a new roof on the house. How does society survive when every human lives a life of indolent indulgence?

    Mankind probably has about 15-20 years to decide how it will manage its radically transformed society of the future. Past history provides little grounds for optimism.
