Stop Scanning Me: scanning of private online chats must be rejected

Private conversation is a basic human right. Like the rest of our rights, we should not lose it when we go online. But a new European Union proposal could significantly undermine our privacy rights.

The European Union's executive body is promoting a proposal that could lead to mandatory scanning of every personal message, photo and video. The EU Commission wants to open up the private data of our digital lives to inspection by government-approved scanning software, and then have it checked against databases that hold images of child abuse.

The technology does not work properly. And launching a system that puts a "bug in our pockets" is simply wrong, even when it is done in the name of protecting children.

We don't need government monitors looking into our private conversations, whether they are artificial intelligence, bots or police officers. Adults don't need it, and neither do children.

If you are in one of the 27 EU member countries, now is a good time to contact your Member of the European Parliament and let them know that you oppose this dangerous proposal. A few days ago, digital rights organizations in Europe launched a website called "Stop Scanning Me" with more information about the proposal and its problems. It includes a detailed legal analysis of the regulation and a letter co-signed by 118 NGOs opposing this proposal, including the EFF.

Even if you are not an EU resident, this regulation should still concern you. Major messaging platforms will drop their commitments to privacy and security for their users. This will affect users around the world, even those who do not regularly communicate with people in the EU.

"Detection orders" to listen in on private conversations

The proposed EU Child Sexual Abuse Regulation (CSAR) is a disappointing step backwards. In the past, the EU has pioneered privacy legislation that, while not perfect, has moved in the direction of increasing rather than decreasing people's privacy, such as the General Data Protection Regulation (GDPR) and the e-Privacy Directive. But the CSA regulation moves in the opposite direction. It does not respect the EU Charter of Fundamental Rights and undermines the EU's recently adopted Digital Services Act, which already gives authorities powers to remove illegal content.

The proposal requires online platforms and messaging service providers to mitigate abusive content, and it encourages general monitoring of user communications. However, if "significant" risks of online child sexual abuse remain after these mitigations (and it is entirely unclear what this means in practice), law enforcement can send "detection orders" to technology platforms. Once a detection order is issued, the company running the platform may be required to scan messages, photos, videos and other data using software approved by law enforcement.
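What such scanning means mechanically can be sketched in a few lines of Python. The sketch below is a simplified illustration only: real deployments typically use perceptual hashes (such as Microsoft's PhotoDNA) that also match visually similar images, whereas this toy uses an exact SHA-256 match, and the database entry and function names are hypothetical.

```python
# Toy illustration of database matching; real scanners use perceptual
# hashes (e.g. PhotoDNA) rather than the exact SHA-256 match shown here.
import hashlib

# Hypothetical stand-in for a database of fingerprints of known abuse images.
KNOWN_FINGERPRINTS = {
    "<fingerprint of a known image>",   # placeholder entry
}

def fingerprint(data: bytes) -> str:
    """Reduce an attachment to a comparable fingerprint."""
    return hashlib.sha256(data).hexdigest()

def matches_database(attachment: bytes) -> bool:
    """True means the message is flagged and a report is generated."""
    return fingerprint(attachment) in KNOWN_FINGERPRINTS

# Under a detection order, every user upload would pass through a check
# like this before (or instead of) being delivered:
print(matches_database(b"holiday-photo-bytes"))   # False for innocent data
```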

With detection orders in place, platforms will not be able to host truly private conversations. Whether they scan people's messages on a central server or on users' own devices, the CSA regulation simply won't be compatible with end-to-end encryption.
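To see why, consider what an end-to-end encrypted transport actually gives the server. Below is a minimal sketch, assuming the Python `cryptography` package; the function names and reporting hook are hypothetical illustrations, not any platform's real code.

```python
# Minimal sketch (requires `pip install cryptography`) of why server-side
# scanning fails under end-to-end encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # known only to the two endpoints
endpoint = Fernet(key)

plaintext = b"a private message between two people"
ciphertext = endpoint.encrypt(plaintext)

# The platform's server only ever stores and relays this:
print(ciphertext)   # opaque bytes; hashing or classifying it reveals nothing

def looks_like_csam(data: bytes) -> bool:
    """Stand-in for the matcher sketched earlier."""
    return False

def scan_then_send(message: bytes) -> bytes:
    # The only place a scanner can see content is on the user's device,
    # *before* encryption: the "bug in our pockets" described above.
    if looks_like_csam(message):
        print("report generated")   # hypothetical reporting hook
    return endpoint.encrypt(message)

scan_then_send(plaintext)
```

Because the relay only ever sees opaque ciphertext, any mandated scanning has to be wedged into the client before encryption, which removes the guarantee that only the endpoints can read the message.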

Not content with scanning our data and checking it against existing government databases of child abuse images, the authors of the proposal go much further. The CSAR suggests using algorithms to guess which other images might depict abuse. It even plans to tackle "grooming" by using artificial intelligence to review people's text messages and try to guess which communications could indicate future child abuse.
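What a text-side "grooming" guesser might look like can only be sketched speculatively, since the regulation does not specify a model. The toy scorer below is purely illustrative (invented keywords, invented weights, arbitrary threshold), but it shows the structural problem: innocent messages share vocabulary with abusive ones, so any threshold trades missed cases against false accusations.

```python
# Purely illustrative toy, not any vendor's real model: a keyword-weighted
# scorer of the general shape the regulation contemplates for "grooming"
# detection. Keywords, weights and threshold are all invented.
SUSPICION_WEIGHTS = {"meet": 0.3, "secret": 0.4, "photo": 0.3, "alone": 0.4}
THRESHOLD = 0.6   # arbitrary; lowering it flags more innocent users

def grooming_score(text: str) -> float:
    t = text.lower()
    return sum(w for kw, w in SUSPICION_WEIGHTS.items() if kw in t)

messages = [
    "keep the surprise party a secret, let's meet after school",
    "send the doctor a photo while you two are alone at home",
]
for msg in messages:
    flagged = grooming_score(msg) >= THRESHOLD
    print(flagged, msg)   # both print True: innocuous messages
                          # cross the threshold
```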

Large social media companies often fail to live up to the stated promises of their own content moderation policies. It is unbelievable that EU lawmakers would now force these same companies to use their flawed scanning algorithms to accuse their own users of the worst kinds of crimes.

The EU Commission is promoting crime-detecting artificial intelligence that doesn't work

It is difficult to test the accuracy of the software most commonly used to detect child sexual abuse material (CSAM). But the data that has come out should send up red flags, not encourage lawmakers to move forward.

  • A Facebook study found that 75% of the messages flagged by its scanning system to detect child abuse material were not "malicious", and included messages such as bad jokes and memes.
  • LinkedIn reported 75 cases of suspected CSAM to EU authorities in 2021. After manual review, only 31 of these cases (about 41%) involved confirmed CSAM.
  • Recently published data from Ireland, in a report from EDRi (see page 34), show more inaccuracies; the short calculation after this list reproduces the percentages. In 2020, Irish police received 4,192 reports from the US National Center for Missing and Exploited Children (NCMEC). Of these, 852 referrals (20.3%) were confirmed as actual CSAM. Only 409 referrals (9.7%) were deemed "actionable" and 265 referrals (6.3%) were "completed" by Irish police.
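The percentages above follow directly from the raw counts in the EDRi report; a quick check in Python:

```python
# Recomputing the percentages in the Irish figures cited above; the raw
# counts come from the EDRi report (page 34), nothing else is assumed.
total_reports = 4_192            # NCMEC referrals to Irish police, 2020
outcomes = {
    "confirmed CSAM":      852,
    "deemed actionable":   409,
    "completed by police": 265,
}
for label, count in outcomes.items():
    print(f"{label}: {count} of {total_reports} ({count / total_reports:.1%})")
# confirmed CSAM: 852 of 4192 (20.3%)
# deemed actionable: 409 of 4192 (9.8%)   <- the report truncates to 9.7%
# completed by police: 265 of 4192 (6.3%)
```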

Despite the insistence of boosters and law enforcement officials that scanning software has magically high levels of accuracy, independent sources make it clear: widespread scanning produces a significant number of false accusations. Once the EU votes to unleash this software on billions more messages, it will lead to millions more false accusations, all of them forwarded to law enforcement agencies. At best, they are wasteful; they also have the potential to cause real-world harm.
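The scale argument can be made concrete with a back-of-the-envelope calculation. The daily message volume and the raw flag rate below are assumptions picked purely for illustration; only the 75% false-flag share is the figure from the Facebook study cited above.

```python
# Back-of-the-envelope only. Message volume and flag rate are assumptions
# for illustration; the 75% false share is the Facebook figure above.
daily_messages = 10_000_000_000   # assumed: ~10 billion messages per day
flag_rate      = 0.001            # assumed: 1 in 1,000 messages flagged
false_share    = 0.75             # Facebook study: 75% of flags not malicious

flags = daily_messages * flag_rate
false_flags = flags * false_share
print(f"{flags:,.0f} flags per day, {false_flags:,.0f} of them false")
# 10,000,000 flags per day, 7,500,000 of them false
```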

False positives cause real harm. A recent New York Times story highlighted a flawed Google CSAM scanner that falsely identified two US fathers as child abusers. In fact, both men had sent medical photos of infections on their children at the request of their pediatricians. Their data was reviewed by local police, and the men were cleared of any wrongdoing. Despite their innocence, Google permanently deleted their accounts, stood by its failed AI system and defended its opaque human review process.

As for the newly released Irish data, the Irish national police have confirmed that they currently retain all personal data transmitted to them by NCMEC, including the usernames, email addresses and other data of verified innocent users.

Child abuse is horrible. When digital technology is used to share images of child sexual abuse, it is a serious crime that requires investigation and prosecution. 

That is why we should not waste effort on actions that are ineffective and even harmful. The vast majority of online interactions are not criminal acts. Police investigating online crimes are already searching for the proverbial 'needle in a haystack'. Introducing mandatory scanning of our photos and messages will not help them narrow the target; instead, it will massively expand the 'haystack'.

The proposed EU regulation also includes mandatory age verification as a way to reduce the spread of CSAM. There is no form of online age verification that does not adversely affect the human rights of adult speakers. Age verification companies tend to collect (and share) biometric data. The process also interferes with adults' right to speak anonymously, a right that is especially vital for dissidents and minorities who may be oppressed or unsafe.

EU states or other Western nations may well be the first to ban encryption in order to scan every message. They won't be the last. Governments around the world have made it clear: they want to read people's encrypted messages. They will happily point to terrorism, crimes against children, or other atrocities if that gets their citizens to accept more surveillance. If this regulation passes, authoritarian countries, which often already have surveillance regimes, will demand to apply EU-style message scanning to find their own "crimes". The list will likely include governments that attack dissidents and openly criminalize LGBT+ communities.

Article Source: https://www.eff.org/

Reproduced by ellak.gr

