French regulator fines Amazon France Logistique for excessive employee monitoring

In a recent decision, the French data protection regulator (the 'CNIL') levied a substantial €32 million fine against Amazon France Logistique ('AFL') for multiple breaches of the GDPR. The CNIL's investigation was prompted by media reports on the company's warehouse practices and by complaints from employees.

AFL, responsible for managing Amazon's extensive warehouses in France, was found to be in breach of GDPR principles in several key areas. Firstly, the CNIL highlighted a failure to comply with the data minimisation principle (Art. 5.1(c) GDPR) in the warehouse stock and order management processes. The company was using three unlawful indicators, including 'Stow Machine Gun' (which flags an error when an employee scans an item too quickly) and 'idle time' (which signals a gap of more than ten minutes between scans), leading to excessive monitoring of employees beyond legitimate business objectives.
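By way of illustration, the indicators at issue were essentially threshold rules applied to the stream of timestamps produced by employees' handheld scanners. The sketch below is a hypothetical reconstruction in Python: the indicator names come from the CNIL's decision, but the exact thresholds (the 'too fast' cut-off was reportedly 1.25 seconds) and the implementation itself are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only. The ten-minute 'idle time' gap is described
# in the decision; the 1.25-second 'too fast' cut-off is as reported, but
# this implementation is a hypothetical reconstruction, not Amazon's code.
TOO_FAST = timedelta(seconds=1.25)
IDLE_GAP = timedelta(minutes=10)

def flag_scans(scan_times: list[datetime]) -> list[tuple[datetime, str]]:
    """Flag consecutive scan pairs that would trip either indicator."""
    flags = []
    for prev, curr in zip(scan_times, scan_times[1:]):
        gap = curr - prev
        if gap < TOO_FAST:
            flags.append((curr, "stow_machine_gun"))  # item scanned too quickly
        elif gap > IDLE_GAP:
            flags.append((curr, "idle_time"))         # >10 minutes between scans
    return flags
```

The CNIL's objection was not that such rules are technically complex, but that recording and retaining this level of per-employee detail went beyond what the stated warehouse management purposes required.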

Furthermore, the CNIL identified shortcomings in the handling of work schedules and employee appraisals. AFL was again found to be in breach of the data minimisation principle, as the detailed data and statistical indicators generated by employees' scanners were deemed unnecessary for work schedule management and employee assessments. Additionally, the company failed to meet its information and transparency obligations to employees under Arts. 12 and 13 GDPR.

Lastly, the CNIL highlighted violations in the video surveillance processing, citing failures both to provide information to individuals and to ensure the security of personal data as required by Art. 32 GDPR.

The €32 million fine imposed by the CNIL underscores the importance for businesses of adhering to GDPR principles, and in particular of maintaining transparency in data processing practices.

Largest ever data breach revealed

In what is being termed the Mother of all Breaches ('MOAB'), cybersecurity researchers, led by Bob Dyachenko from Cybernews, have uncovered a colossal 12 terabytes of data comprising a staggering 26 billion records. The leak encompasses information from numerous data breaches, including LinkedIn, Twitter, Weibo, Tencent, and other major platforms, making it the largest data leak ever discovered.

The MOAB appears to be largely a compilation of reindexed records drawn from thousands of earlier leaks, breaches, and privately sold databases, but it is believed to contain not only historical data but potentially new data not previously disclosed. The leaked data spans over 3,800 folders, each corresponding to a separate data breach.

While duplicates are likely among the 26 billion records, the sheer volume of sensitive information (which includes far more than mere credentials) poses significant risks for affected individuals. Should the MOAB prove to be a malicious actor's repository, the exposed data could be leveraged for identity theft, phishing schemes, targeted cyberattacks, and unauthorised access to personal accounts.

It is thought that the MOAB may have unprecedented consumer impacts, as many individuals reuse usernames and passwords across multiple accounts; this widespread practice could lead to a surge in credential-stuffing attacks, jeopardising users across various platforms.

The legal implications of the MOAB are substantial, raising concerns about privacy, data protection, and potential legal actions against the as-yet-unidentified owner of the leak. The sheer scale of the breach dwarfs previous incidents, emphasising the need for individuals to prioritise cybersecurity measures such as strong, unique passwords, multi-factor authentication, and maintaining vigilance against phishing attempts.
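On the mitigation side, one concrete and widely deployed measure is to screen passwords against known breach corpora before accepting them at registration or login. The sketch below is an illustrative example rather than advice specific to the MOAB dataset; it uses the public Have I Been Pwned 'Pwned Passwords' range API, which implements a k-anonymity scheme so that only the first five characters of the password's SHA-1 hash ever leave the caller's machine.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how often a password appears in known breaches (0 if unseen).

    Uses the Have I Been Pwned range API: only the first five hex
    characters of the SHA-1 hash are sent (k-anonymity), so the
    password itself is never transmitted to the service.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Each response line has the form "HASH_SUFFIX:COUNT".
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # A frequently reused password will return a very large count.
    print(breach_count("password123"))
```

Credential-stuffing attacks succeed precisely because the same password recurs across many of the compiled databases; rejecting reused passwords in this way blunts the attack without the service ever learning the password itself.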

UK Information Commissioner's Office launches consultation on generative AI

On 15 January 2024, the UK's Information Commissioner's Office ('ICO') initiated a consultation series addressing the application of data protection law to generative AI ('GenAI') development and usage. GenAI, in this context, refers to AI models capable of generating new content, such as text, code, audio, music, images, and videos.

The ICO's consultation involves the gradual release of chapters outlining its perspective on how the UK GDPR and Part 2 of the Data Protection Act 2018 relate to GenAI. The initial chapter, unveiled with the consultation's launch, focuses on the lawful basis, under UK data protection law, for web scraping of personal data to train GenAI models. Stakeholders are encouraged to provide feedback to the ICO by 1 March 2024.

The ICO acknowledges in its first chapter that legitimate interests, as per Art. 6(1)(f) of the UK GDPR, can serve as a lawful basis for using web-scraped personal data to train GenAI models. Developers are reminded to ensure that their processing adheres to the lawfulness principle, avoiding breaches of other laws, such as intellectual property or contract law.

For GenAI model developers to rely on the legitimate interests lawful basis, they must meet a three-part test:

  1. Purpose Test: Demonstrating a valid interest for processing web-scraped personal data, such as developing a model for commercial gain;
  2. Necessity Test: Establishing that processing web-scraped data is necessary to achieve the identified purpose, considering the current dependence on large-scale scraping for generative AI training; and
  3. Balancing Test: Weighing the interests, rights, and freedoms of individuals against those pursued by the GenAI developer or a third party. The ICO identifies upstream risks (loss of control over personal data) and downstream risks (reputational harm) to be balanced.

Suggested risk mitigation approaches include implementing controls, monitoring model use, and specifying contractual controls with third parties.

The ICO plans to release additional chapters addressing topics including purpose limitation, compliance with the accuracy principle, and adherence to data subject rights in the context of GenAI development and deployment.

CJEU finds that national law can define implicit controllers

In a significant decision, the Court of Justice of the European Union ('CJEU') clarified the definition of controllership under the GDPR and its applicability to joint controllers determined by national law. The case involved the Moniteur Belge, the official journal responsible under Belgian national law for publishing official acts and documents.

The case originated from changes made to a company's articles of association by a majority shareholder, which led to the erroneous inclusion of personal data in official publications. The Moniteur Belge, following national law requirements, published the amended articles. Subsequently, the majority shareholder sought the deletion of the sensitive data under Art. 17 GDPR, triggering a legal dispute.

The CJEU confirmed that the Moniteur Belge qualifies as a data controller under Art. 4(7) GDPR, as determined by implicit powers vested in it by Belgian national law. The court stated that a broad interpretation of controllership was required in order to ensure robust protection of data subjects. Additionally, the CJEU clarified that, absent any implied joint controllership, the Moniteur Belge is solely responsible for GDPR compliance under Art. 5(2) GDPR.

The ruling sets a precedent for cases involving implicit controllership and joint controllership and provides practical guidance on controllership and joint responsibilities under the GDPR. The CJEU's approach, emphasising a broad interpretation of controllership under national law, establishes a flexible framework for determining the responsibilities of entities involved in data processing.

Helsinki Administrative Court upholds Finnish regulator's decision on life insurance company's data processing

The Administrative Court of Helsinki has affirmed a decision by the Finnish Data Protection Authority ('FDPA') directing a life insurance company (the 'controller') to alter its data processing practices. The controller was found to be in violation of Art. 9 GDPR in relation to its processing of health data of life insurance applicants.

OP-Henkivakuutus Oy, a life insurance company, sought the Administrative Court of Helsinki's intervention to overturn the FDPA's ruling, contending that it had the legal right to process health data for life insurance applicants. The company argued that assessing the health status of the insured party and associated risks was crucial for determining insurance eligibility.

The controller asserted that insurance applicants should be regarded as insured parties under s. 6(1)(1) of the Finnish Data Protection Act. This section allows insurance institutions, notwithstanding Art. 9(1) GDPR, to process health data necessary for assessing risks associated with insurance.

The Court observed that the Finnish Data Protection Act and its preparatory materials lacked a specific definition of 'insured party.' Contrary to the controller's interpretation, the Court found no indication in the legislative intent that the concept of 'insured party' extended to include insurance applicants before the conclusion of an insurance contract.

Additionally, the Court referred to s. 2(1)(5) of the Finnish Insurance Contracts Act, which defines the 'insured party' as the party currently covered by a personal or non-life insurance policy. It found this definition applicable when interpreting s. 6(1)(1) of the Finnish Data Protection Act, concluding that, under this section, insurance applicants could not be considered insured parties.

Consequently, the Court concurred with the FDPA's stance that the controller's processing of special categories of personal data from voluntary insurance applicants breached Art. 9 GDPR. This decision will cause considerable difficulties for businesses offering life insurance products in Finland, as they will no longer be able to process health data to make decisions on pricing their products.

ICO journalism code of practice issued

The UK Information Commissioner's Office has issued a comprehensive Code of Practice designed to navigate the intricacies of data protection law in journalism. The code was submitted to the Department for Science, Innovation and Technology ('DSIT') on 6 July 2023, laid before Parliament on 23 November 2023, and issued on 1 February 2024 under s. 125 of the Data Protection Act 2018. It will come into force on 22 February 2024.

The code aims to provide clear guidelines on data protection law and compliance specifically tailored for the journalism sector. It emphasises the importance of upholding the fundamental right to freedom of expression and information while ensuring lawful use of personal information.

The code addresses the historical context, referencing the Leveson Inquiry of 2011-2012, which shed light on unethical practices within parts of the press. The media's role as a public watchdog and the need for accountability are highlighted as central themes in justifying the application of data protection law to journalism.

In his press release, the Commissioner underscored the delicate balance achieved by the code. It is described as a clear and practical document that strikes the right equilibrium between supporting the vital work of journalists and safeguarding individuals' personal information.

Commissioner Edwards expressed a commitment to ongoing collaboration with industry stakeholders. The code is envisioned to complement existing industry codes and contribute significantly to building and maintaining public trust in journalism.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.