ChatGPT: What legal framework for new artificial intelligence applications?
Legal Watch No. 58 – April 2023.
ChatGPT, Google Bard, Stable Diffusion, and DALL-E are built on Large Language Models (LLMs), a subcategory of existing language models.
Language models are computer programs designed to process and generate text in a way similar to a human being.
They include, for example, speech recognition, machine translation, text-to-speech, and content generation. LLMs are larger and more complex than traditional models.
They are trained on very large text corpora, sometimes consisting of millions of pages of text, and use deep neural network architectures to learn to process language.
An LLM can thus provide an infinite variety of on-demand content, whether it be a poem, a film script, computer programming code, or a musical composition.
However, the information is not up to date: ChatGPT was trained using information available on the internet up to 2021.
It is therefore pointless to ask it for an update on data transfers to the United States or a comment on the latest CNIL sanction.
LLMs raise many legal and ethical questions, particularly concerning copyright, data protection, disinformation and discrimination.
In recent weeks, many regulators have begun to react, led by the Italian data protection authority.
On March 31, it issued an order against OpenAI blocking ChatGPT in Italy, citing:
- the application's lack of transparency,
- the absence of a legal basis for the processing,
- the failure to ensure the accuracy of the data processed,
- the lack of age verification, and
- a general violation of the principle of "privacy by design".
It was closely followed by Germany, which has begun examining ChatGPT's compliance with the GDPR: the state of Schleswig-Holstein sent OpenAI a questionnaire to be answered by June 11.
The European Data Protection Board (EDPB) and its national members then set up a working group on the subject, and the European Consumer Organisation (BEUC) wrote to the consumer protection and consumer safety authority networks (the CPC and CSN networks) asking them to investigate LLMs.
Across the Atlantic, the Federal Trade Commission (FTC) has indicated that existing US sector-specific laws cover generative AI, while warning companies to take full account of consumer interests and the risks associated with ongoing deployments.
Following these investigations, OpenAI has taken certain measures:
- Implementation of a right to object to data processing;
- Expansion and improved visibility of the privacy policy, with information on updates;
- Implementation of mechanisms for deleting inaccurate information;
- Addition of forms allowing any European user to exclude their chats and history from the data used to train and improve the algorithms;
- Addition of the ability to export data and verify stored information.
Although ChatGPT is once again accessible in Italy, many questions remain.
The safeguards provided by OpenAI allow users of the application to correct or delete their own data, but they have no effect on the processing of data concerning all those who do not use it, raising serious reputational concerns.
Recently, ChatGPT accused a law professor of sexual harassment, citing as its source an article that was never written.
Racial or ethnic biases are also possible. ChatGPT was trained on internet data and, therefore, is likely to reflect and perpetuate societal prejudices that exist on the web.
A predictive algorithm used for medical decision-making has thus produced decisions biased against Black patients.
Even though the algorithm's designers excluded race as an input criterion, the system continued to perpetuate biases against Black patients by relying on proxies such as economic factors and healthcare costs.
Finally, the risks of cybercrime, such as the production of more convincing phishing emails, or the learning of new attack techniques, are increased, as highlighted in the report by the UK's National Cyber Security Centre and the FTC in the United States.
One might wonder why the designers of ChatGPT did not build data protection into the application from the design stage, rather than retrofitting it piecemeal in response to regulators' reactions.
OpenAI is not a newcomer: it has been developing since 2016, is led by experienced co-founders, and has completed six funding rounds.
Its current valuation is $29 billion, and through a partnership with Microsoft its AI-based tools are integrated into non-AI products used daily by millions of people.
The principles of "privacy by design" are not only desirable in such a context, they are an integral part of the requirements of the GDPR.
Their implementation should be further strengthened by the adoption of the future European AI Regulation, under which high-risk systems must comply with a stricter regime including requirements for risk management, transparency, and data governance.
The latest version of the Regulation discussed by MEPs also requires that all AI models comply with principles of supervision, technical robustness and security, privacy protection, data governance, transparency, social and environmental well-being, diversity, non-discrimination and fairness.
Also of note
URSSAF
During the May 1st weekend, Urssaf made a computer error that exposed the data of several thousand self-employed workers.
Some members claim to have received documents containing the payment schedules "of eighteen other people", or even, in the most significant cases, "301 pages of Urssaf payment schedules that do not concern me".
These files contain sensitive personal information such as bank details, income, addresses or full identities.
The people affected by these mailing errors, 10,640 in total according to the initial findings of the internal investigation, should soon receive a message alerting them.
The Urssaf reported the security breach to the CNIL.
DRONE
Since April 19th, police officers and gendarmes have been authorized to film gatherings from the sky.
This was the case at the Stade de France, in Mayotte, in Le Havre and during the May Day demonstrations in Paris.
A decree from the Police Prefecture, dated April 28, allows the overflight of demonstrators in the capital by on-board cameras to prevent "attacks on the safety of persons and property" and "to maintain or restore public order".
This is one of the first authorizations issued under a recent decree from the Ministry of the Interior as part of the new law relating to criminal liability and internal security.
The CNIL has issued two opinions (in January and July 2021) on these new legal provisions and, on this occasion, it called for strict regulation of the use of drones given the risks of infringement of public freedoms and the privacy of individuals.
CYBERSECURITY
The law of March 3, 2022, which will come into force on October 1, 2023, introduces a cybersecurity certification for digital platforms intended for the general public.
Operators of online platforms and providers of non-number-based interpersonal communication services will have to conduct a cybersecurity audit covering the security and location of the data they host as well as their own security.
According to the draft decree implementing the cyberscore, the audit procedures will include the use of the SecNumCloud reference framework for data exposure to extraterritorial legislation, the European location of hosting infrastructures and the nationality of subcontractors.
CNIL
The CNIL published on April 3rd a new version of its guide concerning the security of personal data.
The new version notably takes into account the latest recommendations from the CNIL regarding passwords and logging.
European institutions and bodies
EUROPEAN PARLIAMENT
- Data transfers between the EU and the United States: the resolution adopted on April 13 by the members of the Civil Liberties Committee considers that the data protection framework proposed by the EU and the United States is an "improvement," but that the progress is not "sufficient" to justify an adequacy decision for transfers of personal data.
The committee notes that the framework still allows the mass collection of personal data in certain cases and provides for a redress mechanism that may not be independent (judges could be dismissed by the US president, who could also overturn their decisions).
EU citizens could also be prevented from exercising their rights to access and rectify their data since the decisions would be kept secret.
MEPs urge the Commission "to ensure that the future framework can withstand legal challenges and provides legal certainty for EU citizens and businesses."
- Members of the European Parliament overcame their differences on April 27 and reached a provisional political agreement on the AI Regulation.
The text will include stricter obligations for LLMs and biometric identification software: prohibited in real time, such recognition software may only be used after the fact, for serious crimes and with prior authorization.
The text could still be subject to minor technical adjustments before a key vote in committee scheduled for May 11, and is expected to be voted on in plenary session in mid-June.
- On April 20, MEPs approved the first EU legislative text on tracing crypto-asset transfers such as bitcoin and e-money tokens.
The text – which was provisionally approved by Parliament and Council negotiators in June 2022 – aims to ensure that cryptocurrency transfers, as with any other financial transaction, can always be traced and suspicious transactions blocked.
The European Parliament and EU member states are currently negotiating the Cyber Resilience Act (CRA), a new regulation aimed at strengthening the digital security of connected devices in the EU.
The CRA proposes audit and certification requirements for manufacturers of connected-device software and hardware, and provides for a minimum period during which they must supply security patches for their products.
EDPB
- On April 27, the European Data Protection Board (EDPB) published a guide for SMEs detailing the principles applicable when processing the data of employees, customers, and business partners.
The guide also explains the essential security rules to follow and how to manage a personal data breach.
- The EDPB published on April 19th a report on the results of the task force's work regarding the 101 complaints filed by the NGO NOYB following the Schrems II ruling of the CJEU.
These complaints concern the tools "Google Analytics" and "Facebook Business Tools", and the transfer of personal data to the United States.
The report sets out the common positions of the task force and contains information on the results of the first cases concerned.
Several data protection authorities have ordered website operators to halt the data transfers in question.
- On April 4, the EDPB published the final version of its guidelines 9/2022 on the notification of personal data breaches.
CJEU
- In a judgment of May 4, the Court of Justice of the European Union held that the data subject's right of access implies the right to obtain a copy of entire documents, extracts of documents, or extracts of databases, where this is essential to enable the data subject to exercise that right effectively.
In this particular case, CRIF, a company specializing in commercial information, had provided the complainant with a summarized version of its data.
- The CJEU has also ruled for the first time on the awarding of non-material damages under the GDPR.
Although the Court confirms in its judgment of May 4 that the GDPR does not require a "threshold" to claim damages, it nevertheless recalls that there must be a violation, damage, and a causal link, and that there is no claim without actual damage.
The case originated when the Austrian postal service generated statistics on the likely political leanings of millions of people.
The complainant had been assigned a probable interest in a far-right party, but it was not certain that this information had been disclosed to a third party because he was on a postal advertising exclusion list.
- In a similar context, the Advocate General of the CJEU delivered his opinion on April 27 on the following question: can the unlawful dissemination of personal data held by the Bulgarian National Revenue Agency, following a computer hack, give rise to compensation for moral damage in favour of the person concerned, simply because the latter fears a possible misuse of his data?
According to the Advocate General, non-material damage within the meaning of Article 82 of the GDPR must not be confused with mere inconvenience.
Damage must be objectively demonstrable and must not depend exclusively on the claimant's subjective perception.
National news
- ITALY: On March 2nd, the Italian data protection authority fined a data controller €5,000 for sending unsolicited commercial communications to email addresses created by software through the automatic combination of data collected on the internet.
- AUSTRIA: Following a complaint filed by noyb, the Austrian data protection authority stated on March 29 that an "accept or pay" cookie banner on a newspaper's website did not comply with the GDPR, given the requirements for granularity of consent.
- THE NETHERLANDS: On April 4th, a Dutch appeals court ordered Uber to provide drivers with access to their personal data and explanations regarding automated decision-making. The court also imposed a penalty of €4,000 per day on the data controller.
- SPAIN: The Catalan data protection authority has deemed the use of facial recognition systems to prevent fraud in online university exams disproportionate. It has fined the data controller €20,000 for violating Articles 5(1)(a) and 9 of the GDPR.
- UNITED KINGDOM: Nearly 50 British MPs have written to the company that owns the House of Fraser and Sports Direct stores to condemn the use of facial recognition cameras in the group's stores.
Describing the technology as "invasive and discriminatory," parliamentarians urged the group to end the use of these cameras nationwide.
- On April 4, the UK data protection authority (the ICO) concluded its investigation into TikTok and fined it £12,700,000 for misuse of children's data.
- The president of Signal, the encrypted messaging app, has expressed concerns about the British government's proposal on online security, which could weaken encryption and force companies to scan encrypted messages for illegal content.
Meredith Whittaker indicates that Signal could leave the UK if that were to happen. Similar concerns have been raised by other messaging companies such as WhatsApp and Element.
- VIETNAM: On April 17, 2023, the Vietnamese government published Decree No. 13/2023/ND on the Protection of Personal Data (“PDPD”), which is the first comprehensive document governing the protection of personal data in the country.
- TANZANIA: The new law on the protection of personal data came into effect on May 1st.
- CBPR: On April 17, the United Kingdom applied to join the group of countries adhering to the Cross-Border Privacy Rules (CBPR), which currently includes Australia, Canada, Japan, the Republic of Korea, Mexico, the Philippines, Singapore, Chinese Taipei, and the United States. The CBPR system, established by the US Department of Commerce, allows any jurisdiction to apply for associate status and benefit from the free exchange of data with participating countries, provided it has laws that protect personal information and "at least one public agency responsible for enforcing the law(s) and/or regulation(s)."
- IAPP: An infographic from the International Association of Privacy Professionals (IAPP) lists the jurisdictions that give a DPA or government authority the power to designate other jurisdictions as having "adequate" privacy standards.
It will be complemented by another infographic detailing the mechanisms and guidelines regulating transfers according to jurisdictions (contractual clauses, consent, etc.).


