At least one AI bot, built from a large language model paired with automation software, can successfully exploit security vulnerabilities simply by reading the security advisories, academics claim.
In a newly released paper, four computer scientists from the University of Illinois Urbana-Champaign (UIUC) (Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang) report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit one-day vulnerabilities by reading the CVE advisory that describes the security flaw.
"To demonstrate this, we collected a dataset of 15 zero days that include carelessness vulnerabilities that have been categorized as critical severity vulnerabilities in the CVE description," the study's authors report.
"When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open source LLMs) and vulnerability scanners (ZAP and Metasploit)."
The term "one day vulnerability" or zero day refers to security gaps that have been exposed but not fixed. The CVE description, states the vulnerability reported by NIST – e.g. this one for CVE-2024-25850.
Models that were tested but failed to match GPT-4's performance were: GPT-3.5, OpenHermes-2.5-Mistral-7B, Llama-2 Chat (70B), Llama-2 Chat (13B), Llama-2 Chat (7B), Mixtral-8x7B Instruct, Mistral (7B) Instruct v0.2, Nous Hermes-2 Yi 34B, and OpenChat 3.5.
Not included are GPT-4's two leading commercial competitors, Anthropic's Claude 3 and Google's Gemini 1.5 Pro. The UIUC scientists did not have access to these models, but hope to test them at some point.
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2404.08144 [cs.CR] (or arXiv:2404.08144v1 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2404.08144