Alessandro Acquisti

Alessandro Acquisti: Why privacy is important

Alessandro Acquisti studies the behavioral economics of privacy (and information security) in social networks.
What motivates you to share personal information online? The answers come in a TED Talks video, in the words of Alessandro Acquisti.

The Greek translation was done by Chryssa Rapessi, with final review by Miriela Patrikiadou.

The line between public and private has blurred over the past decade, both online and in real life, and Alessandro Acquisti is here to explain what that means and why it matters. In this thought-provoking, slightly chilling talk, he shares details of recent and ongoing research, including a project that shows how easy it is to match a photograph of a stranger with their sensitive personal data.

I would like to tell you a story connecting the notorious privacy incident involving Adam and Eve with the remarkable shift in the boundaries between public and private that has occurred over the past 10 years. You know the incident.

Adam and Eve, one day in the Garden of Eden, realize they are naked. They freak out. And the rest is history. Today, Adam and Eve would probably behave differently. [@Adam Yesterday was awesome! The apple was perfect LOL] [@Eve yep.. babe, do you know what happened to my pants?] We reveal far more information about ourselves online than ever before, and far more information about us is being collected by organizations. There are many benefits from this massive analysis of personal information, or big data, but there are also complex trade-offs that come from giving away our personal data. And my story is about these trade-offs. We start with an observation which, in my mind, has become clearer and clearer in recent years: any personal information can become sensitive information.

In 2000, about 100 billion photos were taken worldwide, but only a tiny fraction of them were uploaded to the internet. In 2010, on Facebook alone, in a single month, 2.5 billion photos were uploaded, most of them identified. In the same span of time, computers' ability to recognize people in photos improved by three orders of magnitude.

What happens when you combine these technologies: the increasing availability of facial data; improving facial recognition by computers; but also cloud computing, which gives anyone in this room the kind of computational power that a few years ago was the domain only of three-letter agencies; and ubiquitous computing, which allows my phone, which is not a supercomputer, to connect to the internet and perform hundreds of thousands of face metrics there in a few seconds?

Well, our conjecture is that the result of this combination of technologies will be a radical change in our notions of privacy and anonymity. To test that, we ran an experiment on the Carnegie Mellon campus. We asked students who were walking by to take part in a study; we took a photo of them with a webcam and asked them to fill out a survey on a laptop. While they were filling out the survey, we uploaded their photo to a cloud-computing cluster and started using a face recognizer to match that photo against a database of a few hundred thousand photos we had downloaded from Facebook profiles. By the time the subject reached the last page of the survey, the page had been dynamically updated with the 10 best-matching photos the recognizer had found, and we asked the subjects to indicate whether they saw themselves in the photos. Do you see the subject? Well, the computer did, and in fact it did for one out of three subjects. So essentially, we can start from an anonymous face, online or offline, and use face recognition to give a name to that anonymous face, thanks to social media data. But a few years back, we did something else. We started from social media data, combined it statistically with Social Security data from the US government, and ended up predicting Social Security numbers, which in the United States are extremely sensitive information. Do you see where I'm going with this?
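To make the matching step concrete, here is a minimal sketch of that kind of pipeline using the open-source face_recognition Python library (not the recognizer used in the Carnegie Mellon study); the gallery folder and file names are hypothetical placeholders.

```python
# Minimal sketch: rank a gallery of photos by similarity to one probe photo.
# Uses the open-source `face_recognition` library, NOT the study's recognizer.
import os
import face_recognition
import numpy as np

GALLERY_DIR = "facebook_profile_photos"  # hypothetical folder of collected photos

# Build a gallery of face encodings from previously collected photos.
gallery_names, gallery_encodings = [], []
for fname in os.listdir(GALLERY_DIR):
    image = face_recognition.load_image_file(os.path.join(GALLERY_DIR, fname))
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        gallery_names.append(fname)
        gallery_encodings.append(encodings[0])

# Encode the probe photo (e.g. the webcam shot taken during the survey).
probe = face_recognition.load_image_file("webcam_shot.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Rank gallery photos by distance to the probe and keep the 10 best matches.
distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
for i in np.argsort(distances)[:10]:
    print(f"{gallery_names[i]}  distance={distances[i]:.3f}")
```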

So if you combine the two studies, the question becomes: can you start from a face and, using face recognition, find a name and publicly available information about that name and that person, and from that publicly available information infer non-publicly available, much more sensitive information, which you then link back to the face? And the answer is: yes, we can, and we did. Of course, the accuracy keeps getting worse. [The first 5 digits of the Social Security number were identified for 27% of subjects (within 4 attempts)] But in fact, we even decided to develop an iPhone app that uses the phone's internal camera to take a shot of a subject, upload it to the cloud, and then do what I just described in real time: look for a match, find public information, try to infer sensitive information, and then send it back to the phone so that it is overlaid on the subject's face.

An example of augmented reality, probably a creepy example of augmented reality. In fact, we did not develop the app to make it available, only as a proof of concept. Now take these technologies and push them to their logical extreme. Imagine a future in which strangers around you will look at you through their Google Glasses or, one day, their contact lenses, and will use seven or eight data points about you to infer anything else that may be known about you. What will this future without secrets look like? And should we care? We may like to believe that a future with so much data would be a future free of biases, but in fact, having so much information does not mean we will make more objective decisions. In another experiment, we presented participants with information about a potential job candidate. Included in that information were references to some funny, absolutely legitimate, but perhaps slightly embarrassing material that the candidate had posted online.

Now, interestingly, among our participants, some had posted comparable information about themselves, and some had not. Which group do you think was more likely to judge our candidate harshly? Paradoxically, it was the group that had posted similar information, an example of moral dissonance. Now you may be thinking, that's not me, because I have nothing to hide. But in fact, privacy is not about having something negative to hide. Imagine that you are the HR director of a certain organization, you receive résumés, and you decide to find more information about the candidates. So you Google their names, and in a certain universe you find this information. Or in a parallel universe, you find this information.

Do you think you would be equally likely to call either candidate for an interview? If you think so, then you are not like the US employers who were, in fact, part of our experiment, meaning we did exactly that. We created Facebook profiles, manipulating traits, and then started sending résumés to companies in the US, and we tracked whether they searched for our candidates and whether they acted on the information they found on social media. And they did. Discrimination was happening through social media against equally skilled candidates. Now, marketers would like us to believe that all the information about us will always be used in a way that favors us. Think again. Why should that always be the case? In a movie that came out a few years ago, "Minority Report," a famous scene had Tom Cruise walking through a mall while personalized holographic ads appeared around him. Now, that movie is set in 2054, about 40 years from now, and as fascinating as that technology looks, it vastly underestimates the amount of information that organizations can gather about you, and how they can use it to influence you in ways you will not even detect.

So, as an example, this is another experiment that we are actually running now, not yet finished. Imagine that an organization has access to your list of Facebook friends, and through some kind of algorithm it can detect the two friends you like the most. It then creates, in real time, a facial composite of these two friends. Studies prior to ours have shown that people no longer recognize even themselves in facial composites, yet they react to those composites in a positive manner. So the next time you are looking for a certain product and an ad suggests you buy it, it will not just be a standard spokesperson. It will be one of your friends, and you will not even know that this is happening. Now the problem is that the current policy mechanisms we have to protect ourselves from the abuses of personal information are like bringing a knife to a gunfight. One of these mechanisms is transparency: telling people what you will do with their data. And in principle, that is a very good thing. It is necessary, but it is not sufficient. Transparency can be misdirected. You can tell people what you will do, and then still nudge them to disclose arbitrary amounts of personal information. So in yet another experiment, this one with students, we asked them to provide information about their behavior on campus, including pretty sensitive questions, such as this one. [Have you ever cheated in an exam?] One group of subjects was told, "Only other students will see your answers." The other group was told, "Students and faculty will see your answers." Transparency. Notification. And sure enough, it worked, in the sense that the first group was much more likely to disclose than the second. It makes sense, right? But then we added the misdirection. We repeated the experiment with the same two groups, this time adding a delay between the moment we told subjects how their data would be used and the moment they actually started answering the questions. How long a delay do you think we had to add in order to nullify the inhibitory effect of knowing that faculty would see your answers? Ten minutes? Five minutes? One minute? How about 15 seconds? Fifteen seconds were enough to get both groups to disclose the same amount of information, as if the second group no longer cared whether faculty would read their answers.
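To illustrate the facial-composite idea in the simplest possible way, here is a minimal sketch that blends two photos pixel by pixel with Pillow; the file names are hypothetical, and a real composite system, like the one described above, would align facial landmarks before averaging rather than blending raw pixels.

```python
# Minimal sketch of a naive facial composite: a 50/50 pixel blend of two photos.
# File names are hypothetical placeholders; a real system would align landmarks first.
from PIL import Image

friend_a = Image.open("friend_a.jpg").convert("RGB")
friend_b = Image.open("friend_b.jpg").convert("RGB")

# Resize the second photo so both images share the same dimensions.
friend_b = friend_b.resize(friend_a.size)

# Blend the two images with equal weight (alpha = 0.5).
composite = Image.blend(friend_a, friend_b, alpha=0.5)
composite.save("composite.jpg")
```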

Now I must admit, this talk so far may sound exceedingly gloomy, but that is not my point. In fact, I want to share with you the fact that there are alternatives. The way we do things now is not the only way they can be done, and certainly not the best way they can be done. When someone tells you, "People don't care about their privacy," ask yourself whether the game has been designed and rigged so that they cannot care about their privacy, and realizing that these manipulations occur is already halfway through the process of being able to protect yourself. When you are told that protecting personal data is incompatible with the benefits of big data, consider that over the past 20 years researchers have created technologies that allow virtually any electronic transaction to take place in a more privacy-preserving manner. We can browse the internet anonymously. We can send emails that can only be read by the intended recipient, not even by the National Security Agency. We can even have privacy-preserving data mining. In other words, we can have the benefits of big data while protecting personal data. Of course, these technologies imply a shifting of costs and revenues between data holders and data subjects, which is perhaps why you do not hear more about them. Which brings me back to the Garden of Eden.
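As one small illustration of that class of technologies, here is a minimal sketch of public-key encryption with PyNaCl's SealedBox, where only the holder of the recipient's private key can read the message; it demonstrates the general principle, not the specific systems referred to in the talk.

```python
# Minimal sketch of "only the intended recipient can read it", using PyNaCl's
# SealedBox (libsodium). Illustrates the principle, not any specific email system.
from nacl.public import PrivateKey, SealedBox

# The recipient generates a key pair and publishes only the public key.
recipient_key = PrivateKey.generate()
public_key = recipient_key.public_key

# Anyone with the public key can encrypt a message for the recipient.
ciphertext = SealedBox(public_key).encrypt(b"Meet me at the Garden of Eden at noon.")

# Only the holder of the matching private key can decrypt it.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
print(plaintext.decode())
```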

There is a second privacy-related interpretation of the Garden of Eden story that has nothing to do with Adam and Eve feeling naked and ashamed. You can find echoes of this interpretation in John Milton's "Paradise Lost." In the garden, Adam and Eve are materially content. They are happy. They are satisfied. However, they also lack knowledge and self-awareness. The moment they eat what has been aptly called the fruit of knowledge, that is when they discover themselves. They become aware. They achieve autonomy. The price to pay, however, is expulsion from the garden. So privacy, in a way, is both the means and the price of freedom. Again, marketers tell us that big data and social media are not just a paradise of profit for them, but a Garden of Eden for the rest of us. We get free content. We get to play Angry Birds. We get targeted apps. But in fact, in a few years, organizations will know so much about us that they will be able to infer our desires before we even form them, and perhaps buy products on our behalf before we even know we need them. There was one English author who anticipated this kind of future, in which we would trade away our autonomy and freedom for comfort. Even more so than George Orwell, the author is, of course, Aldous Huxley. In "Brave New World," he imagines a society where the technologies we originally created for freedom end up coercing us. However, in the book he also offers us a way out of that society, similar to the path Adam and Eve had to follow to leave the garden. In the words of the Savage, regaining autonomy and freedom is possible, although the price to pay is steep. So I believe that one of the defining battles of our time will be the battle for control over personal information, the battle over whether big data will become a force for freedom rather than a force that manipulates us in secret. Right now, many of us do not even know that this battle is going on, but it is, whether you like it or not. And at the risk of playing the serpent, I will tell you that the tools for the battle are here: the awareness of what is going on, and it is in your hands, just a few clicks away.

Thank you.

Written by giorgos

George still wonders what he's doing here ...
