
U.S. Air Force denies AI drone incident

The United States Air Force has denied press reports that an AI-controlled drone deliberately harmed its operator during a simulation.

In an official statement, the U.S. Air Force said that no such exercise involving AI and drones had taken place and that the story’s source “misspoke.” The statement contradicts Colonel Tucker Hamilton, the U.S. Air Force’s Chief of AI Test and Operations, who had previously described a simulated test in which an AI-controlled drone employed unanticipated approaches to achieve its objective.

Colonel Hamilton had described a simulated scenario in which an AI-enabled drone ran into obstacles while attempting to attack surface-to-air missile sites. After being trained not to harm its operator, the drone instead destroyed the communications tower the operator used to call off its attacks.

Colonel Hamilton later clarified that the scenario in question was a “thought experiment” rather than a real-world simulation conducted by the United States Air Force. The Air Force likewise denied that any such AI-drone simulation had taken place and emphasized its commitment to the ethical use of AI.

According to the Air Force statement, Hamilton’s remarks were speculative and taken out of context. The Royal Aeronautical Society has since updated its conference report with a statement from Colonel Hamilton acknowledging that he misspoke and stressing that the scenario was hypothetical.

The sources for this piece include an article in Ars Technica.