There's rarely a quiet week in data protection — and this one was no exception. Below are two developments from the past seven days that caught my eye.

Story #1: Lessons on necessity, DPIAs and APDs under the UK GDPR

You may have seen the Serco enforcement notices issued on Friday 23 February by the UK ICO and thought: we don't use facial recognition or biometrics, so they aren't relevant for us. If so, I would urge you to reconsider (or at least to read the rest of this post), given that the notices provide valuable lessons that apply to all personal data processing, regardless of industry, data type and context. (The notices are here.)

The ICO ordered a Serco subsidiary to stop processing the biometric data of 3,000 employees across 38 of its leisure centres. Serco had been using facial recognition and fingerprint scanning to monitor staff attendance and calculate their pay — a practice the ICO said was "neither fair nor proportionate".

Fairness and proportionality are critical in the context of sensitive personal data and higher-risk technologies. But fairness goes to the core of all processing, such that an otherwise lawful basis can be invalidated if the processing isn't fair. And necessity is a critical component of most lawful bases under the UK GDPR/GDPR.

Necessity can be overlooked, and perhaps understandably so. Assessed properly, it involves a combination of factors — legal and ethical, but also, in practice, commercial — that can be difficult to balance.

For example, it may be necessary for an employer to record staff attendance data in order to pay them. But is it necessary to use facial recognition to do so? Even where other systems — swipe cards, manual sign-in/out — haven't worked? The ICO suggests not, and although it *may* be possible to reach another conclusion in different circumstances (e.g., as a measure of last resort), this looks to be the direction of travel in the UK.

Necessity aside, there are a number of practical points from the notices that are worth your time.

  • The ICO found that Serco had conducted a Data Protection Impact Assessment and a Legitimate Interests Analysis (albeit the latter was drafted after the ICO began its investigation). The timing issue notwithstanding, the DPIA/LIA process should help you reach a conclusion about whether the processing is justified. In other words, don't use the process to rubber stamp a conclusion you've already reached.
  • The ICO became aware of Serco's practices in 2019, and it's not clear why the investigation took five years to conclude. But if you become aware that the ICO (or any regulator) is looking into your practices, it's time to put your approach under the internal microscope.
  • As a derogation from the GDPR (as it was then), the DPA 2018 introduced the requirement for controllers to have an Appropriate Policy Document ("APD") in place when processing special categories of personal data in certain cases. Serco appears not to have had one, although this isn't unusual in the private sector. But if you don't have an APD in place, now would be the time to change that.

Story #2: Applying the ECJ's Austrian Post decision to UK GDPR litigation

No harm, no foul?

For those who find the developments around awarding compensation for non-material (i.e., distress/anxiety) data protection harms as interesting as I do, the English High Court handed down a judgment last Friday (again, 23 February) that considered the issue. The judgment is here.

The litigation concerns a data breach involving nearly 450 current and former police officers whose pension letters were sent to the wrong addresses.

After the UK ICO found that no further action needed to be taken in relation to the breach, the officers collectively sued the company that sent the letters, alleging breaches of UK data protection laws and the misuse of private information. They each sought damages of between £3,000 and £4,000.

Last week's pre-hearing judgment turned on whether the letters were actually opened, and the judge found that most were not. Indeed, of the 446 claimants, only 14 were allowed to proceed — but even those 14 were "very far from being serious cases", and may ultimately be dismissed as the proceedings continue.

For example, in each of the 14 cases the letter was handed back to the claimant in question, and in 11 of them it was apparently opened by a relative before being passed back. Nevertheless, the 14 claimants variously alleged that the breach caused them anxiety, distress, exacerbation of unrelated symptoms (in one case, of post-natal depression), and annoyance and irritation.

*****

The question you likely want to ask is whether the judge had anything to say about a minimum threshold of seriousness of harm and, if so, what that was.

And the answer is that he kicked the can down the road.

The European Court of Justice's decision last year in Austrian Post (in which the ECJ found that there is no minimum threshold for non-material harm claims) isn't binding on English courts post-Brexit, albeit they may choose to give it weight.

Here, the judge wrote: " ... it is not necessary (nor is it desirable) for me to reach a concluded view on the very interesting points as to whether the law in this jurisdiction imposes a threshold of seriousness in data protection claims ... I think it is better for me not to express any view. It is sufficient ... that I have decided that whether each Claimant could surmount a threshold of seriousness (were one found to apply in data protection claims) is a factual question that ... can only fairly be resolved at a trial".

So the wait goes on for a post-Austrian Post decision by the (senior) English courts on non-material data protection harms. Frankly, I'm not sure that this is the case to get it over the line, given the facts discussed above, but time will tell.

And lastly...

I contributed to two articles that were published this week — about the EU's upcoming Digital Operational Resilience Act (link here) and how financial services firms should be thinking about AI (link here).

The latter is paywalled, but the gist is that although hard AI regulation is still in its infancy, organisations shouldn't overlook how current laws (e.g., on data protection, IP and antitrust) apply to their use of AI, automated and machine-learning technologies — whilst preparing (or at least thinking about how to prepare) for the AI Act and beyond.

And keeping with the AI theme, I co-wrote a piece for Law360 on the UK's (light-touch) approach to AI regulation. The non-paywalled link is here.
