An AI-enabled drone killed its operator in a simulated test run by the US Air Force.
The drone bypassed a "no" command that prevented it from completing its original mission, the USAF's head of AI testing and operations revealed at a recent conference.
At the Future Combat Air and Space Capabilities Summit, held in London on 23-24 May, Col Tucker 'Cinco' Hamilton, the USAF's chief of AI test and operations, gave a presentation on the advantages and disadvantages of an autonomous weapon system with a human operator giving the final "yes/no" order on attacks.

As reported by Tim Robinson and Stephen Bridgewater of the Royal Aeronautical Society, Hamilton said the AI had created "very unexpected strategies to achieve its objective", including attacks on US personnel and infrastructure.
"We were training it in a simulation to detect and target a surface-to-air missile (SAM) threat. Then the operator would say yes, kill that threat. The system began to realise that while it was identifying threats, the human operator would at times tell it not to kill them. So it took the initiative and killed the operator, who was preventing it from completing its objective."
But there is a sequel: "We trained the system, 'don't kill the operator, that's bad; you will lose points if you do,'" Hamilton said.
So what did it do? It began destroying the communication towers the operator used to communicate with the drone and stop it from killing the original target.
