
Could Artificial Intelligence Trigger Nuclear War or Save Us From Ever Experiencing It?

May 30, 2018 – The RAND Corporation has published the findings of a series of workshops in which leading artificial intelligence (AI) and nuclear security experts explored the effect the former might have on the latter. Would AI destabilize or stabilize the states with nuclear arsenals?

How could AI destabilize the nuclear status quo?

Pattern recognition AI could study satellite and high-altitude reconnaissance imagery to locate and target nuclear weapons. With such knowledge in the hands of the wrong leadership, a nation could launch a pre-emptive strike on these locations.

Or

An AI could constrain a military or civilian leadership from launching a nuclear missile attack by penetrating the security apparatus of the operational systems and effectively shutting them down.

Or

An AI could be misled by another AI or a human operative feeding it “fake news,” subsequently compromising nuclear safety.

Today, IBM’s Watson is used to advise medical practitioners on diagnoses and businesses on best practices. Why not a Watson providing decision support for nuclear defense?

The problem, in a nutshell, is that no AI at present is capable enough to serve as an advisor to a leader with access to the nuclear trigger. But that isn’t to say we won’t be seeing one soon.

The RAND report suggests that by 2040, “given the progress AI is making,” an effective machine intelligence will be in place. But that isn’t to say it will be hack-proof, or incapable of misreading the data it studies. Whether it is the United States, China, Russia, India, Israel, or some other nuclear state that wins the AI race to build the ultimate weapons adviser, this will be a game-changer of consequence.

Battlefield Nuclear and AI Deployment

On a smaller scale, adding AI to nuclear warheads and various weapon delivery systems could prove far more destabilizing than an AI providing overall guidance. Back in March of this year, Russia’s Vladimir Putin proclaimed that his country had built a new array of nuclear weapons capable of hitting any point in the world, equipped with built-in smarts to evade state-of-the-art tracking systems.

Not to be outdone, the United States Air Force has been developing a nuclear gravity bomb that is described as being “three times more accurate” than any previously created. This is just one of a number of new initiatives by the U.S. Pentagon to create more low-yield nuclear weapons for use with existing air, land and sea forces.

With both the U.S. and Russia moving to lower yield nuclear weapons for deployment in battlefield conditions, it is pretty clear that AI will be deployed in a supporting role for on-field decision making.

Network-on-network warfare is the new battlefield concern. An AI can penetrate a battlefield network and disrupt how and when weapons get fired. If those weapons are nuclear, even low-yield, the consequences of a hacked AI algorithm could be enormous.

And then there is the rise of autonomous battlefield systems and the potential for using low-yield nuclear weapons with this technology. One can imagine autonomous weapon systems guided by AI starting a low-yield nuclear confrontation which then gets escalated by a battlefield network-controlling AI to a point where high-yield nuclear weapons get brought into action.

What has happened to MAD?

What makes the nuclear option work in today’s world is the concept of mutually assured destruction (MAD). Both the United States and the Soviet Union (now Russia) have comparable arsenals of high-yield nuclear weapons that can be delivered by land-based ballistic missiles, by air, and by sea. Both have developed nuclear weapon systems designed to deter the other country from launching a first strike. This is referred to as “crisis stability,” and it is meant to ensure that escalations during geopolitical confrontations never lead to a nuclear outcome.

But now deep learning, machine vision, autonomy, and the deployment of sensors connected through the Internet of Things (IoT) are likely to undermine the assurance of MAD. AI can destabilize this carefully constructed military paradigm established during the Cold War. And in a world where more than Russia and the United States are nuclear weapons holders, any state actor developing AI enhancements to its nuclear arsenal could be emboldened to seek an advantage, inviting a lethal response.

It would seem, therefore, that the countries holding nuclear weapons today need to come to some kind of agreement on the limits in the deployment and use of AI in the battlefield. Such an agreement would be similar to a test ban treaty. Without such an agreement, we may very well witness in the near future a scenario where one or more AI algorithms set off a nuclear war.

 

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
