By Jade Kowalski, Charlotte Halford, Peter Given & Hans Allnutt


Published 03 September 2024

Overview

Our 'In Case You Missed It' section of the Data, Privacy and Cyber Bulletin provides readers with a high-level digest of important regulatory and legal developments in the previous month.

 

Contents

  1. Case Law Updates
  2. Regulatory Developments
  3. Data & Privacy Developments
  4. Cyber Developments

 

Case Law Updates

One of our detailed analysis pieces this month focuses on two recent and important data breach decisions: Adams v Ministry of Defence and Farley v Paymaster (1836) Limited (t/a Equiniti).

 

Regulatory Developments

ICO consultation on generative AI and the allocation of accountability for compliance

The ICO has issued a fifth and final call for evidence as part of the consultation series on data protection and generative AI. This final consultation, closing on 18 September 2024, relates to the allocation of accountability for data protection compliance across the generative AI supply chain, and addresses the recommendation made for ICO guidance on this issue.

This issue is complex due to the different methods through which generative AI is developed, used and disseminated, whether as a model, application or service. The varying levels of accountability and control across the participating organisations also raise questions as to how this works in practice. Concluding the call for evidence, the ICO notes that the allocation of roles (controller, joint controller, processor) must reflect the control and influence over each processing activity, and that in some instances joint controllership is more likely to accurately reflect the processing relationship than a controller/processor relationship.

We have previously written on the other calls for evidence forming part of this series relating to generative AI.

 

ICO issues statement on Meta's ad-free subscription service

As we have highlighted in previous commentary, there have been a number of questions and concerns across Europe about the development and increasing use of 'consent or pay' mechanisms by large online platforms such as Meta. These concerns are primarily framed within the context of processing personal data for behavioural advertising, and prompted a detailed opinion from the European Data Protection Board (EDPB) earlier this year, with additional EDPB guidelines on 'consent or pay' models expected in future.

Meta has not yet introduced an equivalent 'consent or pay' model in the UK. The ICO released a statement in mid-August confirming that its response to this year's ICO consultation on 'consent or pay' models will be provided later this year, including an examination of how UK data protection law would apply to those models. The ICO expects that Meta will "consider any data protection concerns we raise prior to any introduction of a subscription service for its UK users."

 

ICO issues reprimand to Labour Party

The ICO has issued a reprimand to the Labour Party for failing to respond to subject access requests and requests for erasure. Reiterating that organisations must respond to requests within one month of receipt (extendable by up to a further two months if the request is complex), the ICO recommended that the Labour Party follow the steps set out in its own action plan. Confirming the reprimand, the ICO noted that its engagement with the Labour Party, which had committed additional funding and staff to the issue, had produced a reduction in the backlog.

The reprimand can be found here.

 

Fit for purpose? The Online Safety Act and recent events

Following the recent civil unrest throughout various locations in the UK, questions have been raised about the role of social media and messaging services in amplifying disinformation and allowing for coordinated direction of demonstrators.

The Mayor of London, Sadiq Khan, challenged the government to review the Online Safety Act, labelling it not fit for purpose if it is unable to regulate social media effectively and prevent the posting of content that provokes violence and generates disinformation. The independent regulator for online safety, Ofcom, responded that it was working to implement the Online Safety Act, highlighting that from late 2024, "tech firms will have three months to assess the risk of illegal content on their platforms, and will then be required to take appropriate steps to stop it appearing, and act quickly to remove it when they become aware of it."

 

Uber fined EUR290 million by Dutch DPA following data transfers

The Dutch data protection authority (Autoriteit Persoonsgegevens) has imposed a fine of EUR290 million on Uber. Following a complaint by more than 170 French drivers, the Dutch DPA, as lead supervisory authority for the company, worked with the French DPA (CNIL) to assess whether Uber's conduct was in breach of the GDPR. Uber's conduct, now ended, was found to be in breach.

The investigation found that Uber had collected sensitive data from drivers, and then stored the data on US-based servers for a period of two years without using appropriate transfer mechanisms. Per the press release from the Dutch DPA, "the protection of personal data was not sufficient. The Court of Justice of the EU invalidated the EU-US Privacy Shield in 2020… Standard Contractual Clauses still provide a valid basis for transferring data to countries outside the EU, but only if an equivalent level of protection can be guaranteed in practice… Uber no longer used Standard Contractual Clauses from August 2021, the data of drivers from the EU were insufficiently protected, according to the Dutch DPA. Since the end of last year, Uber uses the successor to the Privacy Shield."

 

Dutch DPA warns of data breaches through use of AI chatbots

The Dutch DPA has issued a warning to organisations of the risks involved in employees accessing AI chatbots on their own initiative and without the agreement of their company. A number of data breach notifications were cited, those breaches having occurred as a result of employees entering personal data of customers and patients into a chatbot. This was an issue raised at the advent of ChatGPT and other AI chatbots; the warning from the Dutch DPA serves as a timely reminder of the risks involved in using these models.

 

Data & Privacy Developments

Meta challenges EDPB opinion on 'consent or pay' models

As referred to above, in April 2024 the EDPB issued an opinion on 'consent or pay' models being offered by 'large online platforms', a term defined in the opinion and clearly aimed at organisations such as Meta and their social media platforms. The opinion created a high threshold for large online platforms looking to use a 'consent or pay' model, stating that "in most cases [a binary consent or pay model] will not be possible…" as a means of obtaining valid consent and that the platforms should instead offer a free alternative without tracking users.

In response, Meta has issued proceedings against the EDPB seeking the annulment of the opinion, financial damages associated with the opinion, and costs. The pleadings allege that the EDPB failed to act as an impartial body, issued an opinion imposing obligations not found in the GDPR and failed to follow (or misinterpreted) a binding decision of the Court of Justice. Meta also alleges that the right of the EDPB to issue binding opinions prevents companies from challenging them in the Court of Justice.

The Official Journal entry confirming Meta's challenge to the EDPB can be found here, and we will follow these proceedings with interest.

 

Noyb issues proceedings against Hamburg DPA

Once again with reference to the EDPB opinion on 'consent or pay' models, we identified that the opinion was limited to 'large online platforms' and did not resolve questions around other platforms, such as media publishers, some of which have operated 'consent or pay' online editions for several years.

The consumer activist group, noyb, challenged the 'consent or pay' model of the German news website, Der Spiegel, in 2021. In 2024, the Hamburg Data Protection Authority determined that the model was permissible in principle. In response, noyb has filed proceedings in the Hamburg Administrative Court seeking to have the DPA's decision overturned, arguing that the authority effectively provided legal advice to Der Spiegel during the investigation. The details of the complaint can be found here.

 

Noyb files complaints against X for using personal data to train AI technologies

Last month, our detailed analysis of the privacy challenges associated with the training phase of artificial intelligence highlighted the regulatory pressure placed on X in respect of its plans to train an AI model, Grok, on user data including posts. We noted that the Irish Data Protection Commission and X reached an agreement that X would suspend its processing of personal data contained within the public posts of X's EU/EEA users processed between 7 May and 1 August 2024, for the purposes of training Grok.

However, noyb has filed GDPR complaints with data protection authorities in nine countries (including Ireland) raising further concerns about the legal issues associated with the steps that X had previously taken. The full press release accompanying the action by noyb can be found here.

 

Cyber Developments

Persons unknown injunction awarded in respect of recent Synnovis cyber attack

In June of this year, Synnovis, an organisation providing services to the NHS, service users and clinical users, was subjected to a ransomware attack by the Russian hacking group, Qilin. It is believed that records covering over 300 million patient interactions were affected by the attack, which also resulted in a number of planned operations and treatments being cancelled.

Synnovis recently made a successful application against the unknown defendant or defendants, being granted an interim injunction preventing the release or publication of data stolen during the ransomware cyber incident and prohibiting further attempted cyber-attacks.

The full judgment from the application hearing can be found here.
