
Russia May Have Used an Artificial Intelligence Killer Drone in Ukraine According to the Bulletin of the Atomic Scientists

This Vladimir Putin-inspired war against Ukraine has set far too many precedents. The unprovoked aggression, now entering its fourth week, has featured Putin’s threats to deploy nuclear weapons. It has included Russian misinformation about Ukraine deploying biological and chemical weapons, seen by many as a pretext for Russia itself to begin using such weapons in the conflict.

Russian forces have indiscriminately shelled and fired rockets at hospitals, residential buildings, theatres, places of worship, and other civilian targets with no purpose other than to create terror. And now, a report in the Bulletin of the Atomic Scientists describes recovered images of a drone that crashed near Kyiv and is a known artificial intelligence (AI) weapon system. How many more lines will Putin give the Russian military license to cross?

The wrecked drone is a KUB-BLA, manufactured by the Kalashnikov Group. A similar drone was downed in Syria back in 2019; at the time it was not operating in autonomous mode. It should be noted that the investigation of this latest incident has yet to determine whether the drone was using AI to select and fire on targets.

The United Nations has been working towards a ban on autonomous weapons that target people. Agreement on a treaty is being held up by resistance from the United States, Russia, Israel, and a few other countries, which appear to object to a blanket ban. Nonetheless, the discussion continues as many raise ethical objections on principle to AI-directed weapons in general.

How can an AI-directed weapon understand human rights or conform to humanitarian law? What algorithms can be built into the software to ensure that agreed-upon laws are followed? What types of military deployment could justify the use of an AI weapon system? What happens if an AI weapon develops a bug or is hacked? Who would be responsible should such a weapon end up killing people? Is there any acceptable error or accident rate for an AI-directed weapon that wouldn’t involve criminal prosecution of those deploying it?

An obvious solution is an international agreement ensuring that all AI-guided weapons remain under human control at all times. Without one, AI-directed weapons could start a world war, a scenario that becomes far more plausible if the development of AI-directed weapon swarms becomes a reality. But wait, it already has.

In May 2021, when Israel and Hamas fought another border skirmish along the Gaza Strip, the former deployed AI-guided drone swarms to geolocate and strike Hamas targets. Was the deployment justified? That question remains unanswered and may explain why Israel has been reluctant to agree to a human-control requirement for AI-directed weapon systems.

When Isaac Asimov, the noted science fiction author and scientist, created his three laws governing the behaviour of robots, the first stated that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” The second stated that “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” Asimov’s third law gave a robot the right to protect itself as long as doing so did not conflict with the first two laws. What Asimov didn’t cover was the interaction among robots, such as in swarms, and the ethics that would govern their relationships.

Substitute AI for robots and you have guiding principles that no longer appear to be under consideration by those developing autonomous weapon systems.

Asimov’s three laws inspired others to add to them. Back in 2012, I ran across one such effort which I believe includes the kind of governing constraints all nations need to put in place when dealing with AI. Instead of three laws, its author proposed the five that follow. I have added my comments on each in brackets:

  1. Robots should not be designed solely or primarily to kill or harm humans. [That law has already been broken.]
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals. [Accountability for the use of AI lies with the humans who created the technology.]
  3. Robots should be designed in ways that assure their safety and security. [The definition of what is safe and secure remains fuzzy.]
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human. [The question of whether AI should at some point in the future be treated as equal to living creatures or humans needs resolution.]
  5. It should always be possible to find out who is legally responsible for a robot. [If Israel, which has used AI-directed weapons, or Russia, whose use is now in question with this latest discovery, deploy such systems, then each is accountable for the resulting destruction and loss of human life.]

The world needs unanimous agreement on banning AI-directed weapon systems, because one day one or more nations may add the word nuclear to the equation. That would be an ominous development. And when you consider Putin’s recent threats to deploy nuclear weapons in this conflict, we could be moving one step closer to such an outcome.

 
