I: "Can Artificial Intelligence Be Abused for Cyber Attacks?"
ChatGPT: “As an AI language model, I have no agency or intentions of my own. However, like any technology, it can potentially be abused by people with malicious intent.
Ultimately, the responsibility for the use and possible misuse of IT systems rests with the users. It is important for individuals and organizations to use AI technology ethically and responsibly, following legal and ethical guidelines and taking precautions to prevent any harm or misuse.”
On this point, ChatGPT is not wrong…
Artificial intelligence (AI) tools like ChatGPT and Google Bard are gaining popularity around the world, helping their users work more efficiently and productively in their daily tasks and responsibilities.
However, these powerful capabilities can also be misused. Shortly after these tools appeared, generative AI was used to create malicious programs, many of which turned up in official app stores.
Generative AI has also been used to craft convincing emails and messages for phishing campaigns, as well as an AI-generated YouTube video that was found to be malicious.
Unfortunately, AI vendors are not yet handling these issues adequately. In a recent report, the Check Point Research team found that Bard (Google's artificial intelligence tool) imposes almost no restrictions on the creation of phishing emails, and that with minimal manipulation it can be used to develop keylogger malware, which rightly raises security concerns.
For these reasons, some companies have decided to ban the use of AI tools on corporate devices or networks, while others that do allow employees access to generative AI tools have ended up suffering leaks of sensitive data.
Therefore, until adequate safeguards are available to protect sensitive data from internal leakage and theft, companies should pay close attention to their security policies.
Mobile devices are no exception. In fact, mobile users are more likely to download a malicious app or fall victim to phishing attempts. Smaller screens and a constant stream of apps and notifications distract users, making it harder for them to avoid clicking on malicious links or downloading malicious files. For many of us, social networks are the most used apps on our mobile devices, which leaves mobile users even more exposed to social engineering and phishing.
Additionally, the blurred line between personal and business use makes mobile devices a primary entry point into an organization, so protecting them from AI-driven threats must be a top priority.
What should you do to protect your fleet of mobile devices from AI threats?
- The most important thing is to recognize that you cannot rely on mobile users to spot attacks themselves. Attacks continue to grow in scale, volume, and sophistication, and many are now too advanced and complex for a human to detect. A classic example is phishing sites that have evolved to be nearly indistinguishable from the originals (see the first sketch after this list). Dealing with this level of complexity requires advanced technology that protects the device and prevents threats from entering your organization.
- Mobile devices are an entry point into your organization. With AI evolving so rapidly, detection and remediation are not enough. Your mobile security solution must include proactive capabilities that stop a threat at the device level before it can reach your corporate assets.
- The learning capabilities of generative AI are impressively fast. To keep up, you have to fight AI with AI: make sure your security solution itself uses AI and machine-learning technology (the second sketch below illustrates the idea).
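To make the lookalike-phishing problem concrete, here is a minimal sketch of how a security tool might flag domains that imitate known brands. Everything specific in it is an assumption for illustration: the brand watch list, the homoglyph map, and the 0.85 similarity threshold are hypothetical, not values from any real product. It uses only Python's standard-library difflib.

```python
# Minimal sketch (illustrative only): flag lookalike domains that imitate
# known brands. The watch list, homoglyph map, and threshold are assumptions.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["paypal.com", "microsoft.com", "google.com"]  # hypothetical watch list
SIMILARITY_THRESHOLD = 0.85  # assumed cutoff; tune against real traffic


def normalize(domain: str) -> str:
    """Lower-case the domain and undo common digit-for-letter substitutions."""
    homoglyphs = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})
    return domain.lower().translate(homoglyphs)


def is_lookalike(domain: str) -> bool:
    """Return True if the domain closely resembles, but is not, a known brand."""
    candidate = normalize(domain)
    for brand in KNOWN_BRANDS:
        if domain.lower() == brand:
            continue  # exact match is the real site, not a lookalike
        if SequenceMatcher(None, candidate, brand).ratio() >= SIMILARITY_THRESHOLD:
            return True
    return False


print(is_lookalike("paypa1.com"))     # True:  '1' imitating 'l'
print(is_lookalike("micros0ft.com"))  # True:  '0' imitating 'o'
print(is_lookalike("example.com"))    # False: no resemblance to the watch list
```

A string-similarity check like this is deliberately simple; it shows why static blocklists fail against machine-generated variants, since attackers can mint endless near-miss domains faster than humans can review them.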
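And to illustrate what "fighting AI with AI" can mean in practice, here is a minimal sketch of a machine-learning phishing-URL classifier. The handcrafted features, the four-URL toy training set, and the choice of logistic regression are all assumptions made for this example; a real solution would train on large labeled corpora and far richer signals than URL text alone.

```python
# Minimal sketch of ML-assisted phishing-URL detection. Features, toy data,
# and model choice are illustrative assumptions, not a production design.
from sklearn.linear_model import LogisticRegression


def url_features(url: str) -> list[float]:
    """Handcrafted signals that often separate phishing URLs from benign ones."""
    return [
        len(url),                       # phishing URLs tend to be long
        url.count("-"),                 # hyphen-heavy lookalike domains
        url.count("."),                 # deep subdomain chains
        sum(c.isdigit() for c in url),  # digit substitutions (paypa1, micros0ft)
        float("@" in url),              # '@' tricks in the authority part
        float(url.startswith("https")), # missing TLS is a weak negative signal
    ]


# Toy training set with assumed labels: 1 = phishing, 0 = benign.
urls = [
    "http://paypa1-secure-login.example-verify.com/update",
    "http://micros0ft.com.account-check.net/signin",
    "https://www.wikipedia.org/",
    "https://github.com/",
]
labels = [1, 1, 0, 0]

model = LogisticRegression().fit([url_features(u) for u in urls], labels)

# Score a previously unseen, hypothetical URL.
suspect = "http://g00gle-account-verify.net/login"
score = model.predict_proba([url_features(suspect)])[0][1]
print(f"phishing probability: {score:.2f}")
```

The point of the sketch is the approach, not the numbers: a model retrained on fresh attack data can keep adapting as generated phishing content evolves, which is exactly what a static rule set cannot do.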