AI disaster stories from the scientists themselves

When we talk about the dangers of artificial intelligence (AI), we usually emphasize unintended side effects.

We worry that we might accidentally create a very smart AI and forget to program it with a conscience, or deploy criminal-justice algorithms that have absorbed the racist biases of their developers.

But it's not just that…

What about those who want to use AI for unethical, criminal, or malicious purposes?

Will they be able to cause big problems much faster? The answer is yes, according to many experts from the Future of Humanity Institute, the Centre for the Study of Existential Risk, and Elon Musk's non-profit institute OpenAI.

In a report published today entitled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," academics and researchers analyze some of the ways AI could be used to cause us harm over the next five years, and what we can do to stop it.

Because while AI enables some very unpleasant new attacks, Miles Brundage of the Future of Humanity Institute told The Verge that we should not panic or abandon hope.

The report is extensive, but it focuses on a few key ways AI can exacerbate threats to both digital and physical security, as well as create entirely new risks.

It also sets out five recommendations for tackling these problems, essentially calling for new dialogue between policymakers and the academics working on the issue.

But let's start with possible threats:

One of the most important is that AI will drastically reduce the cost of certain attacks by allowing malicious users to automate tasks that would otherwise require human labor.

Take, for example, spear phishing, in which messages are specially crafted to deceive particular recipients. AI could automate much of the work, mapping individuals' social and professional networks and helping to generate highly targeted messages.

Very realistic chatbots could be created that, through conversation, gather the information needed to guess your email password.

This type of attack sounds complicated, but once software that can do all of this has been built, it can be used again and again at no extra cost.

A second point mentioned in the report is that AI can add new dimensions to existing threats.

Using the same spear-phishing example, AI could be used to produce not only emails and text messages, but also audio and video messages.

We have already seen how AI can mimic a person's voice after studying just a few minutes of recorded speech, and how it can fabricate convincing footage of people speaking. Think what a sophisticated AI could do to politicians with fake video and audio.

AI could also turn CCTV cameras from passive to active observers, allowing them to categorize human behavior automatically. This would give AI systems millions of samples of human behavior, and footage that could be used to produce fake videos.
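To make that capability concrete, here is a minimal sketch of off-the-shelf video action recognition, the kind of building block that can turn passive footage into automatically categorized behavior. It is an illustration only, not anything from the report: it assumes Python with PyTorch and torchvision (0.13 or later) installed, uses torchvision's pretrained R3D-18 model with its Kinetics-400 labels, and feeds in a random tensor where a real pipeline would read camera frames.

```python
# Minimal sketch: classify the action in a short video clip with a
# pretrained model. Assumes torch and torchvision >= 0.13 are installed.
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.DEFAULT        # pretrained on the Kinetics-400 action dataset
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()       # resize, rescale, normalize, reorder dims

# Dummy clip of 16 frames, shape (T, C, H, W); a real pipeline would decode camera frames.
clip = torch.randint(0, 256, (16, 3, 240, 320), dtype=torch.uint8)
batch = preprocess(clip).unsqueeze(0)   # -> (1, C, T, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

label = weights.meta["categories"][probs.argmax().item()]
print(f"predicted action: {label} ({probs.max().item():.1%} confidence)")
```

Run over a live camera feed frame by frame, the same few lines would label behavior continuously, which is exactly the passive-to-active shift the report describes.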

Finally, the report highlights the completely new risks that AI will bring. The authors outline a series of possible scenarios, including one in which terrorists plant a bomb inside a cleaning robot and smuggle it into a government ministry.

The robot uses its built-in camera to locate a particular politician, and when it gets close enough, the bomb explodes.

Scenarios like this may sound like science fiction, but we have already begun to see the first new attacks enabled by AI. Face-swapping technology has been used to create so-called "deepfakes", which put celebrities into pornographic clips without their consent.

These examples cover only one part of the report. So what should be done? The solutions are easy to describe, and the report makes five key recommendations:

  • AI researchers should be aware of how their work can be misused
  • Policymakers should learn from technical experts about these threats
  • The AI world should learn from cybersecurity experts how to protect its systems
  • Ethical frameworks for AI must be developed and followed
  • More people should be involved in these discussions: not only scientists and policymakers, but also businesses and the general public

In other words: a little more discussion, and a little more action.

 


Written by giorgos

George still wonders what he's doing here ...
