Share
Commentary

Air Force Colonel Denies AI Drone Killed Pilot So It Could Accomplish Its Mission

Share

The Air Force is shooting down reports that a simulation of a drone operated using artificial intelligence turned on its human operator and “killed” him instead of the enemy targets in terrifying shades of the “Terminator” movies.

Col. Tucker Hamilton, head of the Air Force’s AI Test and Operations, now claims the events described were merely a hypothetical “thought experiment.”

Last week, a story broke that Air Force researchers had trained a weaponized drone using AI to identify and attack enemy air defenses, but instead of attacking the enemy, the drone “killed” its human operator because the operator had the duty to cancel the kill shot decision, depriving the drone’s AI system of having full control.

The news was reported from comments delivered in May at a summit hosted by the United Kingdom-based Royal Aeronautical Society, where Hamilton said there were a few glitches in the simulation, according to a report on the RAS website.

The event was convened to discuss efforts to integrate AI and other emerging technologies into military operations around the world, despite warnings from so many people.

Trending:
Not Just Nickelodeon: 'Big Bang Theory' Star Mayim Bialik's Disturbing Claim

During the summit, Hamilton said there was a spot of trouble with the drone simulation when a human operator told the AI to abort a bombing mission. The AI, he said, deemed the operator’s decision to be a danger to its mission to destroy the enemy, so the AI decided the operator had to be taken out right along with the enemy.

“We were training it in simulation to identify and target a [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” Hamilton told the audience.

Even as many fear that AI could be used to control people and perpetrate evil, the Air Force surged ahead and programmed the AI to prioritize carrying out suppression of enemy air defenses operations, awarding “points” for successfully completing SEAD missions as an incentive, Hamilton said.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat,” the officer said.

Should the military employ AI in warfighting capacities?

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Programmers attempted a fix by telling the AI system it was not allowed to kill the person giving the go/no-go order, he said.

After the first shocking incident, Hamilton told the crowd, the programmers introduced negative points if the AI drone killed its ally operator. But the AI wasn’t done trying to get around the ethics of the issue.

“So what does it start doing?” the colonel said. “It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Related:
Chilling: They Can't Even Train AI to Be More Conservative Than It Is Leftist

This is some very frightening news that seems to show that no one should be rushing headlong into applying AI to life-and-death military decisions without many years of simulations and trials long before it is introduced in the real world.

However, after the story went viral, Hamilton did some furious backpedaling.

The colonel now says he “misspoke” about the situation in his comments at the conference, according to Insider.

He said there never was any such simulation and it was all a “thought experiment.”

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton said in an RAS update, the outlet reported. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

Air Force spokeswoman Ann Stefanek also said there has been no AI drone experiment.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said, according to Insider. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Hamilton, though, maintains that the Air Force is committed to advancing AI solutions for military use. And the Pentagon has echoed that proclamation.

Per the U.K.’s Guardian, the colonel told Defense IQ in an interview last year, “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”

“We must face a world where AI is already here and transforming our society,” Hamilton said. “AI is also very brittle, ie it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”

The Air Force already has experimented with AI simulators, so it’s not like it’s done nothing with the technology.

In 2020, an AI-operated F-16 beat a human adversary in simulated dogfights.

The Department of Defense also incorporated AI in an unmanned F-16 to develop an autonomous warplane, Insider added.

Humanity really must approach artificial intelligence with a jaundiced eye. To date, there has not been any satisfactory way to program a computer to understand ethics, or right and wrong.

Computers have no capacity to reason. They can only spit out things based on the parameters they were programmed to guide them. And since humans have a hard enough time with ethics, how can we program computers to be better?

Let’s be very, very careful with all this, shall we, humanity?

Truth and Accuracy

Submit a Correction →



We are committed to truth and accuracy in all of our journalism. Read our editorial standards.

Tags:
, , , , , , , , ,
Share
Warner Todd Huston has been writing editorials and news since 2001 but started his writing career penning articles about U.S. history back in the early 1990s. Huston has appeared on Fox News, Fox Business Network, CNN and several local Chicago news programs to discuss the issues of the day. Additionally, he is a regular guest on radio programs from coast to coast. Huston has also been a Breitbart News contributor since 2009. Warner works out of the Chicago area, a place he calls a "target-rich environment" for political news. Follow him on Truth Social at @WarnerToddHuston.
Warner Todd Huston has been writing editorials and news since 2001 but started his writing career penning articles about U.S. history back in the early 1990s. Huston has appeared on Fox News, Fox Business Network, CNN and several local Chicago news programs to discuss the issues of the day. Additionally, he is a regular guest on radio programs from coast to coast. Huston has also been a Breitbart News contributor since 2009. Warner works out of the Chicago area, a place he calls a "target-rich environment" for political news.




Conversation