New ICO guidance on the lawful use of personal data and AI

The ICO has published new guidance, including a series of FAQs, to assist organisations in handling personal data when using AI.

18 November 2022

Publication

On 8 November 2022, the Information Commissioner’s Office (ICO) published guidance and a series of FAQs to improve the handling of personal data when using AI. Some of the key themes covered are highlighted below.

Risk-based approach

The first of the eight tips provided by the ICO is to take "a risk-based approach when developing and deploying AI". This emulates the approach taken by the EU in its draft regulation on AI1 (expected to come into force late 2023 / early 2024), which designates certain AI uses as High Risk AI Systems (HRAIS) and subjects them to more stringent regulatory requirements. The ICO guidance notes that "AI is generally considered a high-risk technology and there may be a more privacy-preserving and effective alternative", thereby seeming to discourage the application of AI to personal data more generally. Both this tip and the subsequent FAQ section emphasise the importance of carrying out a data protection impact assessment which, even where not legally required, is considered best practice for any major project involving the use of personal data.

Recording decision making to explain decisions made by AI systems

The UK GDPR restricts solely automated decisions that have a legal or similarly significant effect on individuals2. A decision is solely automated if there is no meaningful human input into the final decision. The ICO guidance emphasises that this restriction cannot be circumvented by a human merely rubber-stamping a decision: it is the degree and quality of human review and intervention before the final decision that determines whether a system is used for automated decision-making (which is restricted) or merely to support decision-making.

Ensuring you only collect the minimum data required to develop an AI system

Under the data minimisation principle, you must identify the minimum amount of personal data required for a purpose (here, AI development) and process only that amount. The ICO recommends mapping out all areas of the development process where personal data may be used and reviewing that mapping at each significant development milestone.

Addressing potential bias and discrimination at an early stage of development

The risk of bias can be reduced by ensuring the AI training data is balanced. This can be done by adding data for under-represented population subsets or removing data for over-represented ones. Equally, the possibility that the data collection method itself incorporates past discrimination should be accounted for – the ICO gives the example of historical recruitment data that may reflect a past view that certain populations were more appropriate for a role.

This guidance should come as no surprise and is clearly aimed at reinforcing the seven principles of the UK GDPR3 – in particular data minimisation, transparency and accountability – building on previous guidance issued by the ICO. As the ICO suggests, AI has the potential to make a significant difference to society, but it must be deployed appropriately if those benefits are to be realised.

1 See the Simmons & Simmons explanatory article here.
2 See Article 22, UK GDPR.
3 See Article 5, UK GDPR.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.