A few days ago we reported on ChatGPT, an AI system with many interesting use cases. But are there risks and challenges posed by new technologies based on artificial intelligence? Let's look at the risks and challenges that ChatGPT poses to everyone, even those who don't use it.
Although you can't fully rely on the accuracy of the text ChatGPT writes, its coding ability seems solid. Even though the code it writes isn't always correct, the tool appears to make it much easier to create malicious code, and it makes crafting phishing emails easier still.
A VentureBeat article examined this very phenomenon and calls it the democratization of cybercrime. The name refers to the fact that ChatGPT is an inclusive tool for cybercrime, empowering anyone, even inexperienced coders, to quickly and easily write the code for malware. One of the researchers, Matt Psencik, Director of the Endpoint Security Specialist team at Tanium, gave examples of the tool being used in this way:
“In a few examples I already have, some ask the bot to create convincing phishing emails or help reverse-engineer code to find zero-day exploits that could be used maliciously instead of reporting them for remediation.”
This could lead to a huge increase in cyber attacks, which would make life very difficult for the cybersecurity teams trying to keep our devices and our digital and online identities secure. The truth is that, right now, ChatGPT looks like it has far more to offer malicious users than it does the cybersecurity community.
This is just another example of how new and exciting technologies are often double-edged swords and should be treated with caution. Their promise and possibilities are often tempered by risks and challenges, and when we rush to adopt technologies like artificial intelligence, we risk unwelcome surprises from malicious actors.