Deep Learning: who do I kill in the event of an accident?

Deep Learning Dilemmas: Slowly but surely, self-driving cars are entering our daily lives. Accidents still happen in real-world driving, and autonomous vehicles continue to do foolish things that even the most novice drivers would avoid.

But scientists and researchers are trying to teach cars to see the world as we do and to drive at a level that equals or exceeds the skills of most human drivers.

When that happens, the roads will become safer, as accidents will become very rare.

But if a fatal accident does occur, how should an autonomous vehicle decide when loss of life is inevitable? So far, there is no answer that satisfies everyone.

The dilemma

This is shown by a four-year survey by the MIT Media Lab. Called the Moral Machine, it presented participants with 13 different driving scenarios in which the driver had to make a decision that would inevitably lead to the loss of life of either passengers or pedestrians.

For example, in one of the scenarios, the driver must choose between hitting a pedestrian or swerving into an obstacle, killing the passengers.
In other, more complex scenarios, the driver has to choose between two groups of pedestrians that differ in number, age, gender and social status.

The research results, published by MIT in the journal Nature, show that preferences and decisions vary with culture, economic and social conditions, and geographical location.

Diversity

For example, participants from China, Japan and South Korea were more likely to spare the lives of older people over younger ones (the researchers attribute this to the greater respect for the elderly in those countries).

In contrast, in countries with individualistic cultures, such as the United States, Canada and France, participants would rather protect the lives of the young.

These are the dilemmas faced by the engineers who write the driving software. How should a self-driving car decide in situations where human judgments themselves diverge?

Deep Learning

Driverless cars carry some of the most advanced hardware and software available. They use sensors, cameras, lidar, radar and computer vision to evaluate and understand their environment and make decisions.

As the technology develops, cars will be able to make decisions in fractions of a second, perhaps much faster than even the most experienced drivers. This means that, in the future, a self-driving vehicle may be able to come to a sudden stop 100 times faster when a pedestrian steps into the road on a dark and foggy night.
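To put that reaction-time difference into distance, here is a rough back-of-the-envelope calculation; the speed and reaction times below are assumed example figures for illustration, not numbers from the article.

```python
# Rough illustration: distance travelled during reaction time at city speed.
# The speed and reaction times below are assumed example values, not measured data.

speed_kmh = 50                       # assumed urban speed
speed_ms = speed_kmh / 3.6           # ~13.9 metres per second

human_reaction_s = 1.5               # typical human reaction time (assumption)
machine_reaction_s = 0.015           # hypothetical machine reaction, ~100x faster

human_distance = speed_ms * human_reaction_s       # distance covered before braking starts, ~20.8 m
machine_distance = speed_ms * machine_reaction_s   # ~0.2 m

print(f"Human starts braking after {human_distance:.1f} m, "
      f"machine after {machine_distance:.2f} m")
```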

But that does not mean self-driving cars will be able to make decisions at the same level as humans. These vehicles are powered by artificial intelligence, technologies that mimic human behavior. Their decisions may look human, but only superficially.

More specifically, self-driving cars use deep learning, a subset of AI that is particularly good at comparing and classifying data.

By training a deep learning algorithm on enough tagged data, it becomes able to classify new information and decide what to do with it based on what it has seen before. In the case of cars, given enough samples of road conditions and driving scenarios, they can learn what to do when, for example, a young child chasing a ball runs into the street.
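As a minimal sketch of the idea described above, training on tagged examples and then classifying a new situation, here is a small, hypothetical example in Python with PyTorch. The feature vector, the three actions and the synthetic data are placeholders for illustration only, not anything from a real driving system.

```python
# Minimal sketch of supervised learning on "tagged" data, as described above.
# Everything here is hypothetical: the features, the three actions and the
# synthetic dataset stand in for labeled driving scenarios.
import torch
import torch.nn as nn

NUM_FEATURES, NUM_ACTIONS = 8, 3   # e.g. 0 = continue, 1 = brake, 2 = swerve

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, NUM_ACTIONS),
)

# Synthetic "tagged" dataset: scene features paired with an action label.
features = torch.randn(1024, NUM_FEATURES)
labels = torch.randint(0, NUM_ACTIONS, (1024,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop: the model learns to map scene features to the labeled action.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# A new, unseen scene is then classified into one of the known actions.
new_scene = torch.randn(1, NUM_FEATURES)
predicted_action = model(new_scene).argmax(dim=1).item()
print("predicted action:", predicted_action)
```

A real self-driving stack would train on vast amounts of labeled sensor data rather than random numbers, but the principle is the same: the model can only choose among the actions and situations it has been shown.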

Deep learning has also been criticized as too rigid and shallow. Some scientists believe that certain problems simply cannot be solved by deep learning, no matter how much data the algorithm is given.

We want to believe that deep learning will become reliable enough to handle all road conditions, allowing cars to navigate safely in any kind of traffic.

But even if the algorithms allow driverless cars to avoid obstacles and pedestrians, they cannot help with the gravest decisions, such as whose life is worth more.

In such cases, no amount of pattern matching and statistics can make the decision. What is missing is responsibility.

Difference between people and AI

What makes human intelligence so different from AI?

People recognize their weaknesses. We forget, we confuse facts, we are not quick with numbers and information, and our mental and emotional state slows our reactions. In contrast, AI algorithms never age, never get confused, never forget facts, and can process information at lightning speed.

However, we can make decisions even with incomplete data. We can decide based on common sense, culture, moral values and beliefs. Most importantly, we can explain the rationale behind our decisions and defend them.

This explains the wide variation in the choices made by participants in the MIT Media Lab survey. We also have a conscience, and we can bear the consequences of our decisions.

For example, some years ago, a woman in the Canadian province of Quebec decided to stop her car in the middle of the motorway to save a family of ducks crossing the road.

A little later, a motorcycle crashed into her car and two people died. The driver of the car was found guilty of two counts of criminal negligence causing death. She was sentenced to nine months in prison, 240 hours of community service and a five-year driving ban.

Responsibility

AI algorithms cannot take responsibility for their decisions, and of course they cannot stand trial for the mistakes they make.

If a self-driving vehicle accidentally hits a pedestrian, we know who will be held responsible: its manufacturer. We also (almost) know what to do: retrain the AI models on the data that had not been taken into account.

But who determines whether the error came from the deep learning algorithms rather than from a fault in the car itself? The car does not feel and cannot take responsibility for its actions, even if it could explain them.

If the developers of the algorithms are held responsible, then they would have to appear in court for every death caused by their vehicles.

Such a measure would obviously stifle innovation in the machine learning and AI industry in general, because no manufacturer can guarantee that driverless cars will work perfectly 100 percent of the time.

But to return to the issue, the MIT Media Lab scenarios are somewhat far-fetched (although they can happen). Most drivers will never find themselves in such a situation in their lifetime.

One solution would be to create safe zones for pedestrians, completely separating them from the areas where self-driving vehicles travel. This would eliminate the problem altogether.

https://www.youtube.com/watch?v=TJvhVCnD_y8

Recall that the transition from horses to cars disrupted many aspects of people's lives at the time. In the same way, we will have to learn how driverless cars will reshape regulations, urban infrastructure and patterns of behavior.

The article was published in TNW.


