On AI in jets

Alan Cai

May 3, 2024

Back when the wars in Iraq and Afghanistan began, drones were the center of public attention. Drones permitted American and adversary agents alike to strike targets from afar, allowing pilots to avoid the dangers of the battlefield while still participating in strategic combat missions and asserting dominance. As mentioned in previous Brutus Journal articles, drone usage is itself a double-edged sword. On the surface, drone strikes enable militaries to participate in conflicts from afar without having to risk their personnel. On the flip side, the ability to project military power without risking lives permits militaries to enter more conflicts, potentially harming even more lives in the absence of conservative constraints. Generally speaking, however, if drones were used responsibly and fairly, they should be fair game in war.

Recent developments have thrown another wrench into the equation. Instead of humans controlling aerial combat vehicles from afar, artificial intelligence can replace human involvement altogether. The Defense Advanced Research Projects Agency (DARPA), the Department of Defense agency responsible for researching and developing new military technologies, confirmed earlier this month that a successful AI-versus-human aerial dogfight (a close-quarters fighter jet battle for air superiority) occurred last September. The artificial intelligence was trained on historical data and simulations of aerial combat and predicted possible future scenarios to outmaneuver its opponent. Today, Secretary of the Air Force Frank Kendall III flew at Edwards Air Force Base in an F-16 piloted entirely by AI. These two developments represent tremendous strides toward the incorporation of artificial intelligence and machine learning into the military.

Introducing AI into the armed forces sets a dangerous precedent for global superpowers. It permits entities whose ethical frameworks and logical paths are ostensibly different from, and almost completely unknown to, humans to make decisions that pertain to life and death. In other words, bots will be given the power to decide whether to kill or spare human beings. This is incredibly problematic because artificial intelligence can only be programmed to pursue strategic objectives without regard for the broader moral consequences of its actions. For example, during a drone strike, artificial intelligence would lack the empathy to spare bystander civilians, and even if programmed to do so, it would be forced to make the difficult choice between executing the mission and aborting it to save innocent human beings. Were the robots programmed to develop a moral compass to assist in their decision-making, the outcome could be even worse if that compass morphed into a sadistic consciousness willing to destroy everything it encounters.

By bridging the gap between AI and weapons, we have opened a can of worms that we will likely never be able to close. Arming unknown entities with destructive capabilities is a mistake that should never have been made. While giving AI control of jets and bombers can help us defeat our enemies, it may soon spell the end for us as well.