On 23 September 2025, the European Data Protection Supervisor (“EDPS”) published a comprehensive TechDispatch report addressing the human oversight of automated decision-making (“ADM”) systems (the “Report”). The Report comes at a time when ADM is increasingly relied upon across a variety of sectors to make decisions that significantly affect individuals. It seeks to clarify how “meaningful” and “effective” human oversight can be implemented in practice, concepts that are central to the definition of ADM under Article 22 of both the UK and EU GDPR.
While the Report is not legally binding, it forms part of the EDPS’s broader efforts to monitor emerging technologies and analyse them through the lens of existing data protection law, and is intended to help organisations understand those technologies.
For organisations, the key takeaway is that human oversight must be more than a procedural formality or tick-box exercise – it should be an effective, substantive and meaningful safeguard that is carefully designed and implemented. This article summarises the EDPS’s main findings and, crucially, what organisations should do in response.
Background
The Report highlights that while human oversight is widely regarded as a necessary safeguard against the risks of ADM, such as opacity, bias, and discrimination, simply adding a human into the process does not guarantee better outcomes or accountability. The Report also stresses that human oversight must be carefully structured, with attention paid to both the limitations of ADM systems and the complex dynamics between human operators and machine-generated outputs.
Common Flawed Assumptions about Human Oversight
The Report identifies a series of common flawed assumptions that can undermine the effectiveness of human oversight in ADM systems. Set out below are key assumptions highlighted by the EDPS:
Operating Conditions of ADM systems
Assumption: ADM systems will operate within specific and predetermined conditions.
EDPS Clarification: In reality, ADM systems often encounter novel or unpredictable environments. Overreliance on the assumption that systems will only face the predictable scenarios for which they were designed can be dangerous.
EDPS Example: A 2016 incident involving a Tesla vehicle in autopilot mode illustrates this risk: the system failed to detect a white truck crossing a highway against a bright sky, resulting in a fatal crash. This occurred because the system operated beyond its capacity to interpret unusual lighting conditions and atypical obstacles that were not well represented in its training data, highlighting the dangers of assuming ADM systems will always function as intended in real-world environments.
Human Handover in Atypical Situations
Assumption: ADM systems will transfer control to humans in atypical situations.
EDPS Clarification: Many ADM systems lack explicit mechanisms to defer to human operators when faced with a novel or uncertain scenario. Without built-in guardrails, the expectation that systems will “know” when to hand over control is unfounded.
EDPS Example: The Babylon Health AI-powered symptom checker, used in the UK’s NHS, failed to detect serious conditions such as heart attacks and sometimes provided inappropriate advice, like misattributing a breast lump to “hysteria.” These failures highlight the risks when ADM systems do not have mechanisms to recognise their own limitations and appropriately transfer control to human operators in complex or uncertain situations.
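One way to make the handover the EDPS describes explicit, rather than assumed, is to build deferral into the decision logic itself. The sketch below is a hypothetical illustration (the threshold, names and outcomes are invented for this example, not taken from the Report): an ADM system routes any case where its confidence falls below a predetermined cut-off to a human review queue instead of deciding automatically.

```python
from dataclasses import dataclass

# Assumed cut-off; in practice this would be set through a risk assessment.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    outcome: str
    confidence: float
    decided_by: str

def decide(outcome: str, confidence: float) -> Decision:
    """Return an automated decision, or route the case to human review."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Explicit handover: the system does not "know" it is out of its
        # depth, so a built-in guardrail flags low-confidence cases for
        # a human operator instead of deciding them automatically.
        return Decision(outcome, confidence, decided_by="human_review_queue")
    return Decision(outcome, confidence, decided_by="adm_system")

print(decide("refer to GP", 0.97).decided_by)   # adm_system
print(decide("no action", 0.40).decided_by)     # human_review_queue
```

The design choice here is that deferral is the default for uncertain cases: the system must positively clear a threshold before acting alone, rather than the human having to notice that something has gone wrong.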
Influence of Automation on Human Judgment (e.g. Automation Bias)
Assumption: Automation does not influence human judgment.
EDPS Clarification: Automation bias can occur when human operators become overly reliant on system recommendations, sometimes accepting them without sufficient critical assessment or use of their own expertise. This can result in operators effectively “going on autopilot”, allowing the ADM system’s outputs to shape or even override their own judgment, especially in complex or specialised contexts where the human may feel less confident intervening.
EDPS Example: A 2023 study found that radiologists reviewing mammograms with AI-generated suggestions were less accurate when the AI’s recommendations were incorrect, regardless of their experience. This illustrates how automation bias can lead professionals to follow ADM outputs without sufficient independent assessment, underscoring the need for safeguards such as confidence scores and targeted training.
Effectiveness of collaboration between humans and machines
Assumption: Systems that combine human judgment with machine outputs are inherently superior and will seamlessly complement each other.
EDPS Clarification: Without deliberate design, clear role allocation, and appropriate training, hybrid systems can compound the weaknesses of both humans and machines, leading to confusion and reduced accountability. In practice, human operators may struggle to interpret or trust machine outputs, eroding the speed and convenience automation is meant to deliver, while machines often lack the awareness needed to appreciate important context or nuance, increasing the risk of error or bias.
EDPS Example: The Boeing 737 MAX crashes in 2018 and 2019 demonstrate the risks of assuming that humans and machines will inevitably work well together. The Manoeuvring Characteristics Augmentation System fitted to the aircraft was intended to automate stabiliser adjustments while pilots handled complex situations, but limited transparency and inadequate training meant pilots hesitated or struggled to override the system during emergencies, contributing to fatal outcomes.
Authority and Ability to Override
Assumption: Human operators always have the authority and ability to override.
EDPS Clarification: Operators may lack the independence, expertise, or confidence to challenge ADM system recommendations. Factors such as fear of consequences or deference to the system’s perceived expertise can discourage intervention and reinforce reliance on automated outputs.
EDPS Example: Between 2014 and 2019, Poland’s Public Employment Services used a profiling algorithm to categorise job seekers, with client advisors formally given the authority to override the system’s classifications. In practice, however, excessive workloads, insufficient training, unclear guidelines, and organisational pressures meant advisors were often unable to intervene effectively, demonstrating how design and operational constraints can prevent meaningful human oversight even when formal mechanisms exist.
Transparency and Explainable AI
Assumption: Transparency and explainable AI are sufficient for effective oversight.
EDPS Clarification: While explainability can support understanding, it does not necessarily equip operators to identify when a system is ill-suited to a particular scenario. Additionally, explainability does not necessarily equate to good decision-making and may even cause overreliance.
EDPS Example: A 2021 study found that when clinicians were given incorrect machine learning recommendations for antidepressant treatments, along with simple and easily interpretable explanations, their accuracy in selecting the correct treatment significantly decreased. These findings suggest that simplistic explanations can encourage overreliance on ADM outputs, even when those outputs are incorrect, highlighting the limitations of explainable AI for effective oversight.
Promoting Effective Human Oversight
The EDPS outlines both organisational and technical measures to ensure oversight is meaningful:
- Organisational measures include providing stable and fair working conditions, sufficient time for review, and adequate training. Crucially, the EDPS stresses that a culture valuing human oversight is essential, and must foster an environment where operators feel empowered and supported to challenge ADM outputs. The EDPS also emphasises that culture and empowerment are as important as technical measures in ensuring that oversight acts as an effective safeguard. For example, if human operators are unfairly blamed for system failures, they are unlikely to feel capable or empowered to exercise effective oversight on future occasions.
- Technical measures focus on system explainability, intuitive interfaces, and mechanisms for operator intervention.
- Practical approaches such as regular auditing, sampling, the “four-eyes” principle (where a second individual reviews or validates critical decisions to reduce mistakes and cognitive biases), and integrating feedback from affected individuals are recommended to reinforce the practical effectiveness of human oversight.
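The “four-eyes” principle lends itself to a simple procedural sketch. The example below is hypothetical (the reviewer names, approvals structure and decision text are invented for illustration, not drawn from the Report): a critical ADM output only takes effect once two distinct, independent reviewers have signed it off.

```python
def apply_four_eyes(decision: str, first_reviewer: str, second_reviewer: str,
                    approvals: dict) -> str:
    """Require sign-off from two distinct reviewers before a decision is final."""
    if first_reviewer == second_reviewer:
        # Independence check: one person approving twice defeats the purpose.
        raise ValueError("Reviewers must be independent of each other")
    if approvals.get(first_reviewer) and approvals.get(second_reviewer):
        return f"{decision}: confirmed"
    # Without the second approval, the decision stays in a pending state.
    return f"{decision}: pending second review"

approvals = {"alice": True, "bob": False}
print(apply_four_eyes("reject loan application", "alice", "bob", approvals))
# reject loan application: pending second review

approvals["bob"] = True
print(apply_four_eyes("reject loan application", "alice", "bob", approvals))
# reject loan application: confirmed
```

The point of encoding the rule rather than relying on convention is that a single reviewer, however diligent, cannot finalise a critical decision alone, which is precisely the failure mode the four-eyes principle is designed to guard against.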
Black Box AI: Challenges for Human Oversight
- The EDPS highlighted that many modern ADM systems are “black box” models, whose internal workings are difficult to interpret, even for experts. The EDPS recommends that, wherever possible, organisations should prioritise interpretable-by-design models such as rule-based systems or decision trees for high-stakes decisions, as these naturally support effective human oversight through transparency and understandability.
- Where the use of complex black box models is unavoidable, the EDPS advises implementing additional safeguards, such as Explainable AI (XAI) techniques, to provide user-friendly explanations.
- However, the EDPS cautions that XAI is not always sufficient. While it can help users understand a system’s general functioning, it may not reveal when the system is ill-equipped for exceptional or unforeseen situations, which is when human oversight is most critical. This can lead to a false sense of confidence and, in some cases, increased overreliance on system outputs, as users may trust explanations without fully questioning underlying assumptions or limitations.
- As such, XAI should be seen as one component of a broader oversight strategy, which must include clear communication about the limitations of AI, ongoing risk assessments, and mechanisms to ensure meaningful human involvement and critical engagement with ADM decisions.
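To illustrate what “interpretable by design” can mean in practice, the hypothetical sketch below (the rules, thresholds and outcomes are invented for this example, not taken from the Report) shows a rule-based system in which every decision can be traced back to an explicit, human-readable rule, supporting oversight in a way a black-box model cannot.

```python
# Each rule pairs a human-readable description with the predicate that
# implements it, so the basis of every decision is auditable.
RULES = [
    ("income below threshold", lambda a: a["income"] < 20_000,
     "refer to human caseworker"),
    ("existing arrears", lambda a: a["arrears"] > 0,
     "decline with explanation"),
]

def assess(applicant: dict) -> tuple[str, str]:
    """Return (outcome, rule fired): the decision and its stated reason."""
    for description, predicate, outcome in RULES:
        if predicate(applicant):
            return outcome, description
    return "approve", "no adverse rule fired"

outcome, reason = assess({"income": 15_000, "arrears": 0})
print(f"{outcome} ({reason})")  # refer to human caseworker (income below threshold)
```

Because the reason accompanies every outcome, an operator reviewing the decision sees exactly which rule fired, rather than having to reverse-engineer an explanation after the fact.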
Conclusion
The EDPS highlights the urgent need to develop standardised frameworks and metrics to assess and ensure the quality of human oversight in ADM systems, signalling potential future regulatory developments in this area. In the meantime, organisations should review their ADM systems for inherent flaws or risks, ensure systems are designed to uphold fundamental rights, implement robust organisational and technical measures such as adequate training and empowered oversight, and participate in institutional review processes where appropriate. By taking these steps now, organisations can help ensure that human oversight is effective and that ADM technologies are deployed responsibly and in line with evolving data protection standards.
