Automated decisions: how is the GDPR implemented?
Legal Watch No. 47 – May 2022
On May 17, the Future of Privacy Forum published an extensive report on the issue of automated decisions.
This report analyses more than 70 documents shedding light on how Article 22 of the GDPR is applied by courts and tribunals and by European and UK data protection authorities, as well as in several guidelines and recommendations issued by regulators on the subject.
While automated decisions were already regulated in European Directive 95/46/EC, the principle has taken on a new dimension since the entry into force of the GDPR.
The rule therefore applies in more varied and more frequent contexts, such as artificial intelligence, and data protection authorities (DPAs) now have the means to enforce it.
This type of decision is found mainly in the following areas:
- Access and attendance control in schools using facial recognition technologies
- Online monitoring in universities and automated grading of students
- Automated filtering of job applications
- Algorithmic management of platform workers
- Distribution of social benefits and detection of tax fraud
- Automated credit assessment
- Content moderation decisions in social networks
Let us briefly recall the principle: Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
In this regard, it should be noted that the European Data Protection Board has clarified that data controllers cannot avoid the application of Article 22 by having a human simply rubber-stamp the decisions taken by the machine, without real authority or competence to modify the result.
On the other hand, if the automated process merely provides input for a decision that will ultimately be taken by a human, the underlying processing does not fall within the scope of Article 22.
Under Article 22(2), automated decisions remain possible when they are necessary for the performance of a contract, provided for by law, or based on the explicit consent of the individual.
In such cases, the controller must nevertheless implement appropriate measures to safeguard the data subject's rights, freedoms and legitimate interests, including at least the right to obtain human intervention on the part of the controller, to express their point of view and to contest the decision.
This strict limitation on automated decisions must be read in the broader context of the GDPR, and in particular its requirements concerning the lawfulness of any processing, the existence of a legal basis, and the rules on so-called sensitive data, including biometric data.
The documents analysed show that the supervisory authorities and the courts strictly apply these principles.
They insist in particular on the following elements:
- The obligation of transparency regarding the parameters leading to an automated decision;
- The application of the principle of fairness, to avoid discrimination;
- The strict conditions for obtaining the consent of the person concerned.
Furthermore, to determine whether a decision is exclusively automated, the entire context of the decision-making process is taken into account: the controller's organizational structure, reporting lines, and employee awareness.
To assess the impact of the decision on the individual, the authorities examine in particular whether the input data of an automated decision include inferences about the behavior of individuals and whether the decision affects the behavior and choices of the individuals concerned.
In France, one of the reference decisions is that of the CNIL in the Clearview AI case, regarding facial recognition: the Commission ordered Clearview AI to stop collecting facial images of people in France from the internet to feed the database used to train its facial recognition software, and to delete the previously collected images within two months.
Italy and the United Kingdom have adopted similar decisions regarding the same company.
A judgment handed down in February 2020 by the Marseille administrative court annulled a decision by the Provence-Alpes-Côte d'Azur region to conduct two facial recognition pilots at the entrances of high schools in Nice and Marseille.
While the case was pending, the CNIL expressed concerns about the implementation of such a system, given the target audience (children) and the sensitivity of the biometric data involved.
The court annulled the pilots on the grounds that the consent obtained from the high school students was not freely given, specific, informed and unambiguous, and that less intrusive means were available to the schools to control access to their premises (for example, badge or ID card checks, combined with video surveillance).
Let us add that in most cases, the data controller will not be able to avoid an impact assessment, as provided for by Article 35 of the GDPR, when the decision involves profiling with significant effects on the individual. A recent example is the Italian DPA's decision concerning Deliveroo: the company should have carried out an impact assessment on its algorithm, since the processing used innovative technologies, was carried out on a large scale (both in terms of the number of riders – 8,000 – and the types of data used), concerned vulnerable individuals (gig-economy workers paid by the task) and involved evaluating or rating them.
And also
France:
The CNIL has published its activity report for the year 2021. Among its notable activities, we note the renewal of its support policy, increased mobilization on cybersecurity and the strengthening of its enforcement action.
It has also published a series of criteria for assessing the legality of cookie walls (tracker walls).
In particular, it gives guidance on the legality of "paywalls", which require Internet users who refuse cookies to pay a sum of money in order to access the site.
A publisher wishing to implement a paywall must be able to justify the reasonableness of the monetary compensation required, and demonstrate that its cookie wall is limited to purposes which allow fair remuneration for the service offered.
The French government will deploy a system allowing citizens to authenticate themselves online by scanning their identity card with their smartphone.
The Official Journal has published decree no. 2022-676 of April 26, 2022, which authorizes the creation of an electronic identification application called "Digital Identity Guarantee Service".
This application will be linked to the new ID card equipped with a chip and will allow its data to be stored on a mobile phone.
Europe:
On May 12, the European Data Protection Board adopted two sets of guidelines, one on the methods for calculating fines under the GDPR, and the other on facial recognition.
The Board advocates that facial recognition tools be used only in strict compliance with the European "Police-Justice" Directive (the Law Enforcement Directive).
The European Commission published on May 11 its proposal for a regulation aimed at preventing and combating child sexual abuse material (CSAM).
The implications for companies like WhatsApp and Instagram, which would be required to monitor and delete private communications or transfer them to law enforcement, are worrying civil society.
The European Commission has also published a Q&A on the new standard contractual clauses for international data transfers.
The European Commission's "Data Act" has also prompted a reaction from the European Data Protection Board and the European Data Protection Supervisor (EDPS): in a joint opinion, they express concern about the scope of the regulation.
Although it mainly targets data transmitted by connected objects, sensitive personal data is also concerned.
Authorities are calling for clear limits on the use of data for direct marketing or advertising, employee monitoring, insurance premiums, and credit reporting.
The Spanish data protection authority has fined Google LLC €10 million for having unlawfully transferred personal data to a third party and for having hindered the exercise of the right to erasure.
Google is also being sued in the UK for unlawful use of the medical data of 1.6 million people: DeepMind, the company's artificial intelligence subsidiary, reportedly received this data in 2015 from the Royal Free NHS Trust in London to test a mobile application.
Google, once again: after being sanctioned by the CNIL and its European counterparts, the company has finally decided to make it easier to refuse all cookies on its search engine and on YouTube. The "Customize" option will thus be replaced by two buttons of the same format, "Accept all" and "Reject all", accompanied by a third, "More options".
According to the Belgian Data Protection Authority, sending an email with the list of recipients in CC, rather than BCC, does not constitute a security breach as long as only a small group of people (16 in this case) is affected.
The Danish authority is considering imposing a fine of DKK 100,000 on an agency of the Ministry of Justice for the loss of an unencrypted USB drive and for failure to report the security breach to the data protection authority.
The Irish Court of Appeal has held that data collected by a video surveillance system for the purpose of preventing offences cannot be used to monitor employees and initiate disciplinary proceedings against them, the latter purpose being incompatible with the former.
The Norwegian Data Protection Authority intends to impose a fine of €486,700 on the labor administration for disseminating online the CVs of 1,800,000 people without a legal basis.
International:
The European Data Protection Supervisor expressed concern in an opinion dated 18 May about the European Union's participation in the United Nations Convention on Cybercrime.
It highlights the risk of weakening fundamental rights, given the large number of countries with different legal systems involved.
The Hong Kong Data Protection Authority published its guidelines on the use of contractual clauses for international data transfers on May 12.
The Singapore Data Protection Authority has published a guide to data anonymization.
Twitter has reached a $150 million settlement with the U.S. Department of Justice and the Federal Trade Commission, and has committed to implementing a compliance program following breaches of the confidentiality of its users' non-public data.
A report by the Irish Council for Civil Liberties states that Real-Time Bidding (RTB), the technology used to deliver targeted advertising, is behind the largest data breach ever recorded in the world.
According to the report's figures, the industry, worth more than €110 billion, tracks and shares individuals' online activity and real-world locations 178 trillion times a year in the United States and Europe.
In Europe, RTB exposes people's data 376 times a day and Google sends 19.6 million broadcasts about the online behavior of German Internet users every minute they are online.
Anne Christine Lacoste
Partner at Olivier Weber Avocat, Anne Christine Lacoste is a lawyer specializing in data law; she was Head of International Relations at the European Data Protection Supervisor and worked on the implementation of the GDPR in the European Union.