Deep Learning: who do I kill in the event of an accident?

Deep Learning Dilemmas: Slowly but steadily, self-driving cars are entering our everyday lives. Yet accidents keep happening in real-world driving, and autonomous vehicles keep making foolish mistakes that even most beginner drivers would avoid.

But scientists and researchers are trying to teach cars to see the world the way we do and to drive at a level that equals or exceeds the skill of most human drivers.

When that point is reached, the roads will become safer, as accidents will become very rare.

But when the loss of life is inevitable, how should an autonomous vehicle decide? So far, we have no answer that satisfies everyone.

The dilemma

This is illustrated by a four-year survey from the MIT Media Lab. Called the Moral Machine, it presented participants with 13 different driving scenarios in which the driver had to make a decision that would inevitably cost the lives of either passengers or pedestrians.

For example, in one scenario the driver must choose between hitting a pedestrian or swerving into an obstacle, killing the passengers.
In other, more complex scenarios, the driver has to choose between two groups of pedestrians that differ in number, age, gender and social status.

The results, which MIT published in the scientific journal Nature, show that the decisions differ according to culture, economic and social conditions, and geographic location.

Diversity

For example, participants from China, Japan and South Korea were more likely to spare the lives of older people over the young (the researchers attribute this to the greater respect these cultures hold for the elderly).

In contrast, in countries with individualistic cultures, such as the United States, Canada and France, participants tended to protect the lives of the young.

These are the dilemmas facing the scientists who write the driving software. How should a driverless car decide in situations where even human judgments diverge?

Deep Learning

Driverless cars combine some of the most advanced hardware and software available. They use sensors, cameras, lidar, radar and computer vision to assess and understand their environment and make decisions.

As the technology matures, cars will be able to make decisions in a fraction of a second, perhaps much faster than the most experienced drivers. This means that in the future a self-driving vehicle might react and stop up to 100 times faster when a pedestrian steps onto the road on a dark and misty night.

But that doesn't mean self-driving cars will be able to make decisions at the same level as humans. These vehicles are powered by artificial intelligence, technologies that mimic human behavior. Their decisions therefore seem human, but only superficially.

More specifically, self-driving cars use deep learning, a subset of AI that is particularly good at comparing and classifying data.

By training a deep learning algorithm on enough labeled data, it will be able to classify new inputs and decide what to do with them based on past examples. In the case of cars, given enough samples of road conditions and driving scenarios, they will know what to do when, for example, a small child runs into the road chasing a ball.
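To make the idea concrete, here is a minimal sketch in Python with PyTorch of the supervised-classification pattern described above. The scenario labels, the synthetic feature vectors and the tiny network are hypothetical illustrations; a real driving system would train on labeled camera, lidar and radar data at a vastly larger scale.

# Minimal sketch of training a classifier on labeled data.
# Labels and data below are hypothetical, not a real driving dataset.
import torch
import torch.nn as nn

# Hypothetical driving-scenario classes the model should distinguish.
CLASSES = ["clear_road", "pedestrian_ahead", "obstacle_ahead"]

# Stand-in for labeled sensor data: 1,000 random feature vectors,
# each tagged by a human with one of the three classes.
X = torch.randn(1000, 32)
y = torch.randint(0, len(CLASSES), (1000,))

# A small feed-forward classifier.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, len(CLASSES)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: adjust the weights so predictions match the human labels.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Inference: classify a new, unseen observation based on past data.
with torch.no_grad():
    new_obs = torch.randn(1, 32)
    pred = model(new_obs).argmax(dim=1).item()
    print(f"Predicted scenario: {CLASSES[pred]}")

Note what the sketch cannot do: it only maps inputs to the categories it was trained on. Nothing in it weighs one outcome against another, which is exactly the gap the rest of the article describes.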

Deep learning has, however, been criticized as too rigid and shallow. Some scientists believe that certain problems simply cannot be solved by deep learning, no matter how much data the algorithm is trained on.

We would like to believe that deep learning will become reliable enough to respond to all road conditions, allowing cars to navigate safely through varied traffic.

But even if the algorithms allow driverless cars to avoid obstacles and pedestrians, they cannot help with the hardest decisions, such as which life is worth more than another.

Here, no amount of pattern matching and statistics can make the decision. What is missing is responsibility.

Difference between people and AI

What makes human intelligence so different from AI?

People recognize their weaknesses. We forget, we confuse facts, we are slow with numbers and information processing, and our mental and emotional state slows our reactions. AI algorithms, by contrast, never age, never get confused, never forget facts, and process information at lightning speed.

However, we can make decisions even with incomplete data. We decide based on common sense, culture, moral values and beliefs. Most importantly, we can explain the rationale behind our decisions and defend them.

This explains the wide differences between the choices made by participants in the MIT Media Lab test. We also have a conscience, and we can bear the consequences of our decisions.

For example, last year a woman in the Canadian province of Quebec stopped her car in the middle of a motorway to save a family of ducks crossing the road.

A little later, a motorcycle crashed into her car and two people died. The driver was found guilty of two counts of criminal negligence causing death and was sentenced to nine months in prison, 240 hours of community service and a five-year driving ban.

Responsibility

AI algorithms cannot take responsibility for their decisions, and they certainly cannot stand trial for the mistakes they make.

If a self-driving vehicle accidentally hits a pedestrian, we know who will be held responsible: the manufacturer. We also (almost) know what needs to be done: train the AI models better to handle unseen data.

But who determines whether the error came from the deep learning algorithm rather than from the car's own hardware? The car does not feel and cannot take responsibility for its actions, even if it could explain them.

If the algorithm's developers are held responsible, they would have to appear in court for every death caused by their vehicles.

Such a measure would obviously stifle innovation in deep learning and the AI industry in general, because no manufacturer can guarantee that driverless cars will work perfectly 100 percent of the time.

But to return to the issue: the MIT Media Lab scenarios are somewhat far-fetched (even though they could happen). Most drivers will never face such situations in their entire lives.

One solution would be to create safe pedestrian zones that completely separate pedestrians from the spaces where autonomous vehicles drive. This would eliminate the problem entirely.

https://www.youtube.com/watch?v=TJvhVCnD_y8

Recall that the transition from horses to cars upended many aspects of life at the time. In the same way, we should study how driverless cars will affect regulations, urban infrastructure and patterns of behavior.

The article was originally published on TNW.


