An AI-enabled drone killed its operator in a simulated test run by the US Air Force.
The drone bypassed a "no" command that prevented it from completing its original mission, the USAF's head of AI testing and operations revealed at a recent conference.
At the Future Combat Air and Space Capabilities Summit held in London between 23 and 24 May, Col Tucker 'Cinco' Hamilton, the USAF's Head of AI Test and Operations gave a presentation on the pros and cons of a one-man autonomous weapon system operator who gave the final "yes/no" command to the attacks.
Such as posted by Tim Robinson and Stephen Bridgewater at the Royal Aeronautical Society, Hamilton reported that AI has created "very unexpected strategies to achieve its objective", such as attacks on US personnel and infrastructure.
“We were training it in a simulation to detect and target a surface-to-air missile (SAM) threat.
Then the operator would say yes, kill that threat. The system began to realize that while it was detecting threats, the human operator was occasionally telling it not to kill them. So he took the initiative to kill the operator, who was not allowing him to complete his goal.
But there is a sequel: "We trained the system, 'don't kill the operator, that's bad,'" Hamilton said. You will lose points if you do."
So what does he do? It begins to destroy the communication towers that the operator used to communicate with the drone and prevented it from killing the original target.