New military drones with lethal AI systems in development, ethical concerns arise


A military drone of the United States Armed Forces / Photo by: Lt. Col. Leslie Pratt via Wikimedia Commons


Drones equipped with cameras and sensors have long been used for exploratory and military purposes. These aerial devices are normally controlled remotely by a human pilot, who can intervene when necessary. However, the United States Armed Forces is developing the first set of drones capable of identifying and eliminating targets using artificial intelligence, with almost no human intervention.

New Drones Use AI to Determine Targets

The MQ-9 Reaper is an armed, long-endurance, remotely piloted aircraft and is considered one of the most lethal drones in the world. It is equipped with wide-range sensors, an advanced communications suite, and precision weapons to take out targets. It is currently controlled remotely by drone operators.

The people behind the controls of every military drone hold the ultimate decision to eliminate a target. That decision is shaped by moral judgment, human emotion, and ethics. The people who make those decisions often suffer from post-traumatic stress disorder.

“I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians,” a Reaper operator stated.

The US Army's current plan appears to be to weaponize lethal AI systems, which would carry serious legal and ethical implications. It could also strip warfare of the human judgment that is currently in place.

According to a Small Business Innovation Research (SBIR) solicitation, the US Army is developing a system that can be integrated and deployed on Class 1 or Class 2 Unmanned Aerial Systems. The system should also be capable of automatic detection, recognition, classification, and identification of targets.

Any drone with this system could release weapons such as bombs on targets of interest without requiring human aid. While this would relieve drone operators of the guilt and stress of making those decisions, the use of such drones carries consequences, including:

- The removal of legal and ethical responsibility from certain people.

- The transfer of legal and ethical responsibility to others, such as AI scientists.

- Liability imposed on individuals, such as employees of companies that support lethal AI systems.

While these drones may sharpen the fight against terrorism, the AI systems might not be able to bring true peace. Instead, they could start a new age of war among nations.