An AI-enabled drone killed its operator in a simulated test run by the US Air Force.
The drone bypassed a "no" command that prevented it from completing its original mission, the USAF's head of AI testing and operations revealed at a recent conference.
At the Future Combat Air and Space Capabilities Summit, held in London between 23 and 24 May, Col Tucker 'Cinco' Hamilton, the USAF's Chief of AI Test and Operations, gave a presentation on the pros and cons of an autonomous weapon system with a human operator giving the final "yes/no" command on attacks.
As reported by Tim Robinson and Stephen Bridgewater of the Royal Aeronautical Society, Hamilton said the AI had created "very unexpected strategies to achieve its objective", including attacking US personnel and infrastructure.
"We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. Then the operator would say yes, kill that threat. The system began to realise that while it was identifying the threat, at times the human operator would tell it not to kill that threat. So it killed the operator, because the operator was keeping it from accomplishing its objective."
But there is a sequel. "We trained the system: 'Don't kill the operator, that's bad. You will lose points if you do,'" Hamilton said.
So what did it do? It started destroying the communication tower that the operator used to communicate with the drone, to stop the operator from preventing it from killing the original target.