Legal Watch

DMA, DSA: the new obligations of tech giants

Legal Watch No. 62 – August 2023.

In late August and early September, the protection of online users' rights expanded, with the application of the Digital Services Act to the largest platforms and the publication of the list of companies subject to the Digital Markets Act.

  • Since August 25, the European Digital Services Act (DSA) has applied to very large online platforms (VLOPs) and very large online search engines (VLOSEs).

The regulation will also apply from February 17, 2024 to all intermediaries that offer their services to users based in the EU, including online platforms such as app stores, collaborative economy platforms and social media platforms, with more limited obligations.

Additional exceptions are provided for SMEs and micro-enterprises.

Nineteen companies fall into the category of VLOPs and VLOSEs according to the European Commission's decision of April 25, including social platforms such as TikTok, Facebook, X, Snapchat and YouTube, influential online retailers such as Amazon and Zalando, and the two major online search engines, Bing and Google Search.

These companies will have to comply with a set of obligations regarding transparency, protection of minors, content moderation, and respect for privacy.

In particular, they will need to identify and assess the systemic risks arising from their services, including algorithmic systems, such as:

  • The dissemination of illegal content;
  • Negative effects on the exercise of fundamental rights;
  • Negative effects on civic discourse and electoral processes;
  • Negative effects relating to gender-based violence, the protection of public health and of minors;
  • Serious negative consequences for individuals' physical and mental well-being.

Several DSA obligations overlap with those of the GDPR. These are listed in a recent article from the "Future of Privacy Forum".

For example, there are similar or complementary obligations regarding "dark patterns", targeted advertising based on sensitive data or concerning minors, transparency, profiling, risk analysis and removal of illegal content.

The enforcement procedures are complex and may interfere with those of the GDPR: unlike the latter, which is enforced mainly at the national level, with coordination by the European Data Protection Board for cross-border cases, the DSA centralizes enforcement at the EU level for VLOPs and VLOSEs, while giving Member States responsibility for the other intermediary service providers.

Let us hope that coordination will be put in place between these different bodies, in order to guide both the companies concerned and the individuals wishing to take legal action.

  • While the Digital Markets Act has been in force since May, it was on September 6th that the Commission published the list of six tech giants, the "gatekeepers," who will have to comply with its principles. These are Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft.

The Commission indicates that a total of 22 core platform services operated by these six gatekeepers are affected.

The primary objective is to prevent these companies from taking advantage of their dominant position.

Thus, the text prohibits self-preferencing, as well as obliging business users to use only the services or products of the company in question.

Gatekeepers also cannot prohibit business users from offering and promoting competing services, and they have an obligation to share with them the information generated by the use of their platform.

Specific interoperability requirements are also planned for online messaging services, along with choice options for operating systems, browsers, search engines and virtual assistants.

Furthermore, "gatekeepers" are prohibited from tracking and profiling users for advertising targeting purposes, unless they obtain their consent, and from preventing them from uninstalling their pre-loaded applications.

Some of these obligations therefore reinforce those provided for by the DSA in terms of user protection, particularly with regard to profiling.

Several companies such as TikTok, Meta and Google have already changed their terms of service.

The fines provided for by the DSA and the DMA can reach 6% and 10% of the turnover of the companies concerned, respectively.

In the event of repeated violations of the DMA, the penalty can reach 20% of turnover…

These amounts exceed the 4% ceiling provided for by the GDPR, which the legislator already presented as dissuasive when that regulation was adopted.

 

And also

  • The CNIL is preparing a draft recommendation on processing operations that would present major risks in the event of a security breach, and is launching a public consultation.

Its aim is to consolidate all advanced security practices into a single document, which specifically targets so-called "critical" processing, defined by the following two cumulative criteria:

  • The processing is large-scale within the meaning of the GDPR;
  • A personal data breach could have very significant consequences for the individuals concerned, for national security, or for society as a whole.

It is possible to participate in the consultation until October 8, 2023.

  • On August 8, the CNIL published an information note on connected tracking tags, in order to help anyone who is a victim of their misuse or unlawful use to protect themselves.

These tags, which make it possible to locate and find objects (for example, keys or a wallet), are sometimes used to track people without their knowledge.

  • Pôle emploi announced on August 23 that the personal data of approximately ten million people registered in its files had been stolen in a malicious cyber attack.

This data was outsourced to the company Majorel, responsible for digitizing the documents sent by job seekers.

First and last names, current or former jobseeker status, and social security numbers may have been affected.

However, "email addresses, telephone numbers, passwords and bank details" were not compromised.

 

European institutions and bodies

  • On October 11, the European Union Agency for Cybersecurity (ENISA) is organizing the Trust Services and eID Forum in collaboration with the European Commission, in order to monitor developments in the legal environment, the European digital wallet and the protection of citizens' online activities across the EU.

ENISA has also published secure development guidelines for smartphones: “SMASHING – Smartphone Secure Development Guidelines”.

The document maps security measures for smartphone application developers, aimed at ensuring the development of secure mobile applications.

  • The European Telecommunications Standards Institute (ETSI) has published a report on "Securing Artificial Intelligence (SAI); Automated Manipulation of Multimedia Identity Representations".

The document covers AI-based techniques to automatically manipulate existing identity data or create fake identity data represented in various media formats, such as audio, video, and text (deepfakes).

It describes the different technical approaches and analyzes the threats posed by deepfakes in different attack scenarios.

It then proposes technical and organizational measures to mitigate these threats and examines their effectiveness and limitations.

  • In the context of its 2023 audit program, the EDPB is focusing on the role of DPOs.

An article published by the IAPP on July 31 lists the reference decisions of European data protection authorities concerning the designation and skills of DPOs.

  • Google-owned Fitbit is facing privacy complaints in the European Union, alleging that the company is illegally exporting user data in violation of EU data protection rules.

The complaints target Fitbit's claim that users have consented to international transfers of their information – to the United States and elsewhere – while the NGO NOYB alleges that the company forces users to give their consent.

 

News from EU member states

  • In the Netherlands, an initial report from the Data Protection Authority (DPA) dated September 1st calls for additional measures to control the risks associated with algorithms and AI in anticipation of upcoming European legislation.

To better control these technologies, public authorities and businesses face two challenges.

The first is the set of risks associated with the rapid integration of AI innovations into society, such as intelligent chatbots.

The second, the report emphasizes, is the need for all major public and private institutions in the Netherlands to understand their use of high-risk algorithms – those that have a substantial impact on individuals' lives. The report lists the actions to be implemented.

  • The Spanish Data Protection Authority (AEPD) has fined a media company €20,000 for taking a photo from an individual's private Instagram profile and publishing it on a blog with their name and age, in violation of Article 6(1) of the GDPR.

It also imposed a fine of €120,000 (reduced to €72,000) on Fourth Party Logistics SL for unlawful subcontracting, due to the failure to formalize contracts and the absence of prior authorization.

  • In Croatia, a photo identifying a police officer was posted as a comment on a video of a police operation shared in a public Facebook group.

The DPA found a violation of Article 5(1)(b) and Article 6(1) of the GDPR and ordered the removal of the photo.

  • In a similar context, the Cypriot Data Protection Authority fined a local newspaper 7,000 euros for violating Article 5(1)(c) and Article 6 of the GDPR: the newspaper had published the names and photos of police officers on duty.
  • As part of a joint investigation, data protection authorities in the Baltic countries audited and sanctioned a car rental company.

In calculating the fine, the Latvian Data Protection Authority highlighted the total lack of cooperation on the part of the data controller as an aggravating factor.

It initially considered a fine of €15,000 appropriate. However, given the financial difficulties faced by the data controller and the high risk of insolvency, it ultimately reduced the fine to €1,000.

  • The new Swiss federal law on data protection came into force on September 1st.

Among the new provisions inspired by the GDPR are impact assessments for the processing of sensitive data, records of processing activities, the appointment of a Data Protection Officer (DPO), and the reporting of data breaches. The concept of "Privacy by Design" is now explicitly mentioned.

 

  • On August 24, twelve international data protection and privacy regulators from the Americas, Europe, Africa and APAC announced that they expect social media platforms and other sites to protect personal data against unlawful data scraping (“web scraping”).

This announcement reiterates advice previously provided by regulators such as the Office of the Australian Information Commissioner, the CNIL, and the UK Information Commissioner's Office following investigations into Clearview AI, Inc.'s personal information handling practices and data breach notification obligations.

  • In the United States, the Cybersecurity and Infrastructure Security Agency (“CISA”), the National Security Agency (“NSA”) and the National Institute of Standards and Technology (“NIST”) published a joint fact sheet on quantum computing preparedness on August 21, to alert organizations – particularly those supporting critical infrastructure sectors – to the threats posed by quantum computing, and to encourage these organizations to start planning for future migration to post-quantum cryptographic (“PQC”) standards.

  • The US government is launching the Cyber Trust Mark, its program for labeling the security of the Internet of Things.
  • In the United States, a data breach is also affecting Tesla: 75,000 people are impacted.

Two former Tesla employees provided the Handelsblatt newspaper with personal information and contact details concerning other employees.

The company notified the Maine Attorney General of the security breach and offered identity theft protection services to those affected.

In April 2023, employees had viewed and shared private videos recorded by customers' Teslas, taken from the vehicles' Sentry Mode security systems.

Tesla is not the only company raising privacy concerns.

A study published on September 5 by the Mozilla Foundation describes cars from 25 automakers as "nightmares on wheels when it comes to data privacy."

The foundation assessed the policies and practices of 25 car manufacturers and warned that they can collect and commercially exploit far more than location history, driving habits, in-car navigation history and users' music preferences.

Some manufacturers may process deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information.

In addition, more than half of the manufacturers sell the data to third parties.

  • New guidelines were published in China on August 25, 2023, regarding the labeling of AI-generated content: the Chinese National Technical Committee for Information Security Standardization (“TC260”) published the final version of the “Practical Guidelines for Cybersecurity Standards – Method for Labeling Content in Generative Artificial Intelligence Services”.
  • Canada has also published a code of practice for generative AI and is inviting comments on the document.
  • In India, the Digital Personal Data Protection Act 2023 was published in the official gazette on August 12.

While this law is welcomed because it provides protection for the data of 760 million internet users, it has also drawn criticism regarding the level of protection offered, particularly in light of the landmark Puttaswamy ruling, which established the right to privacy in India in 2017.

  • On August 31, Apple announced that it had abandoned development of its iCloud scanning feature designed to identify child sexual abuse material (CSAM).

The company is now focusing on a set of tools and resources on users' devices, known as "Communication Safety features".

After collaborating with a range of security and privacy researchers, digital rights groups, and child safety advocates, the company concluded that it could not pursue development of a cloud scanning mechanism, even if it was designed specifically to preserve privacy. 

"Scanning each user's private iCloud data would create new threat vectors that data thieves could find and exploit. It would also create a risk of unintended consequences. Scanning for one type of content, for example, opens the door to bulk surveillance and could create a desire to search other encrypted messaging systems for other types of content."

This public stance is important in the current context, as the UK, the EU and the US are preparing legislation aimed at imposing widespread screening of web actors in the context of the fight against cybercrime in general and the protection of children online in particular.
