A few days ago we reported on ChatGPT, an AI system with many interesting use cases. But are there risks and challenges from new technologies based on artificial intelligence? Let's look at the risks and challenges ChatGPT poses to everyone, even those who don't use it.
Although you can't fully rely on the accuracy of the text ChatGPT writes, its coding ability seems quite good. Even though the code it produces isn't always correct, the tool appears to make it far easier to create malicious code, and even easier to craft phishing emails.
A publication by VentureBeat examined exactly this phenomenon and calls it the democratization of cybercrime. The name refers to the fact that ChatGPT is an inclusive tool for cybercrime, enabling anyone, even those with no coding experience, to quickly and easily write the code for a piece of malware. One of the researchers, Matt Psencik, Director of the Endpoint Security Specialist team at Tanium, also gave examples of the tool being used in this way:
"In a few examples I already have, some are asking the bot to create convincing phishing emails or help reverse-engineer code to find zero-day exploits that could be used maliciously instead of reporting them for remediation."
This could lead to a huge increase in cyberattacks, making life very difficult for the cybersecurity teams trying to keep our devices and our digital and online identities secure. The truth is that, right now, ChatGPT appears to have far more to offer malicious users than it does the cybersecurity community.
This is just another example of how new and exciting technologies are often double-edged swords and should be treated with caution. Promise and potential are often tempered by risks and challenges, and when we rush to adopt technologies like artificial intelligence, there is always the risk of nasty surprises.