AI disaster stories from the scientists themselves

When we talk about the dangers of artificial intelligence (AI), we tend to focus on unintended side effects.

We worry that we might accidentally create a superintelligent AI and forget to program it with a conscience, or that we might deploy algorithms that have absorbed the racist biases of their developers.

But it's not just that…

What about people who want to use AI for unethical, criminal or malicious purposes?

Could they cause serious harm much sooner? The answer is yes, according to experts at the Future of Humanity Institute, the Centre for the Study of Existential Risk, and Elon Musk's non-profit OpenAI.

In a report published today under the title “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” academics and researchers analyze some of the ways AI could be used to harm us over the next five years, and what we can do to stop it.

Because while AI enables some very nasty new attacks, Miles Brundage of the Future of Humanity Institute told The Verge that we should not panic or abandon hope.

The report is extensive, but it focuses on a few key ways in which AI could exacerbate threats to both digital and physical systems, as well as create entirely new risks.

It also sets out five recommendations for tackling these problems, essentially calling for new dialogue between policymakers and the academics working on the issue.

But let's start with the possible threats:

One of the most important is that AI will drastically lower the cost of certain attacks by allowing malicious actors to automate tasks that currently require human labor.

Take spear phishing, for example, in which messages are crafted specifically to trick their recipients. AI could automate much of the work, mapping people's social and professional networks and helping to generate highly targeted messages.

Attackers could even build very realistic chatbots that, through conversation, gather enough personal data to guess your email password.

This type of attack sounds complicated, but once the software that can do all of this has been built, it can be used again and again at virtually no extra cost.
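
To make that economic point concrete, here is a toy cost model in Python. All of the figures are made-up assumptions, not numbers from the report; the only point it illustrates is that a one-off build cost plus near-zero marginal cost quickly undercuts manual, per-target effort.

```python
# Toy model: manual attacks cost human effort per target, while an automated
# tool has a one-off build cost and a near-zero marginal cost per target.
# All figures below are hypothetical, for illustration only.
MANUAL_COST_PER_TARGET = 50.0    # assumed analyst cost per hand-crafted message
TOOL_BUILD_COST = 10_000.0       # assumed one-off cost of writing the software
MARGINAL_COST_PER_TARGET = 0.01  # assumed compute cost per automated message

for targets in (100, 10_000, 1_000_000):
    manual = MANUAL_COST_PER_TARGET * targets
    automated = TOOL_BUILD_COST + MARGINAL_COST_PER_TARGET * targets
    print(f"{targets:>9,} targets: manual ${manual:>13,.2f} "
          f"vs automated ${automated:>12,.2f}")
```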

A second point mentioned in the report is that AI can add new dimensions to existing threats.

Sticking with the spear-phishing example, AI could be used to generate not only emails and text messages, but also fake audio and video.

We have already seen how AI can mimic a person's voice after studying just a few minutes of recorded speech, and how footage of people speaking can be manipulated. Think what a sophisticated AI could do to a politician with some fake video and audio.

AI could also turn CCTV cameras from passive recorders into active observers, categorizing human behavior automatically. That would hand such systems millions of samples of human behavior, along with footage that could be used to produce fake videos.
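
As a rough sketch of what an “active observer” means in code, the snippet below runs each frame of a video feed through an off-the-shelf image classifier. Everything here is an illustrative assumption rather than anything from the report: it uses OpenCV for capture and a pretrained torchvision ResNet-18, the file name is a placeholder, and a real behavior-analysis system would use an action-recognition model rather than a generic object classifier.

```python
import cv2
import torch
from torchvision.models import resnet18, ResNet18_Weights
from torchvision.transforms.functional import to_pil_image

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

# "camera_feed.mp4" is a placeholder; a real CCTV feed would be an RTSP URL.
cap = cv2.VideoCapture("camera_feed.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR frames; the classifier expects RGB input.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(to_pil_image(rgb)).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    score, idx = probs.max(dim=1)
    # In a real "active observer" these labels would be logged or trigger alerts.
    print(f"{weights.meta['categories'][idx.item()]}: {score.item():.2f}")
cap.release()
```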

Finally, the report highlights the completely new risks AI will bring. The authors outline a number of possible scenarios, including one in which terrorists hide a bomb in a cleaning robot and smuggle it into a ministry.

The robot uses its built-in machine vision to locate a specific politician and, when it is close enough, the bomb detonates.

Scenarios like these may sound like science fiction, but we have already begun to see the first novel attacks enabled by AI. Face-swapping technology has been used to create so-called “deepfakes,” inserting celebrities into pornographic clips without their consent.

These examples are just one part of the report. So what should be done? The solutions are easy to describe, and the report makes five key recommendations:

  • AI researchers should acknowledge how their work can be misused
  • Policymakers should learn about these threats from technical experts
  • The AI world should learn from cybersecurity experts how to best protect its systems
  • Ethical frameworks for AI must be developed and followed
  • More people should be involved in these discussions: not just scientists and policymakers, but also businesses and the general public

In other words: a little more discussion and more action.
