Need for action when using AI systems in HR

31 March 2025


Negotiations between the EU Parliament and the Council of the EU on a regulatory framework for dealing with artificial intelligence lasted for over three years. There was much discussion about the challenges of regulation and its impact on the European market. The gradual entry into application of the Artificial Intelligence Act (AI Act) that emerged from these controversial negotiations began in February 2025.

The AI Act is based on a product-safety and risk-based approach. One of its main objectives is to protect fundamental rights and safety while promoting innovation and providing legal certainty. To this end, AI systems are classified into four risk categories: prohibited AI practices, high-risk AI systems, general-purpose AI and limited-risk AI systems.

Employers must now take action, as the general provisions of the AI Act and its regulations concerning prohibited AI practices came into effect on 2 February 2025, imposing requirements on AI systems within the HR sector.

What already applies

Regulations on Prohibited AI Practices

Article 5 of the AI Act covers AI systems which, due to their functionality, pose an unacceptable risk to the safety, rights or livelihoods of individuals and whose placing on the EU market, putting into service, or use is therefore prohibited. The AI Act divides these prohibited AI practices into the following eight basic categories:

  1. Use of subliminal, manipulative or deceptive techniques that could distort a person's behaviour;
  2. Harmful exploitation of vulnerabilities;
  3. Social scoring: evaluation based on social behaviour or personality characteristics;
  4. Predicting criminal offences: individual risk assessment and prediction of criminal offences;
  5. Creation of facial recognition databases through the untargeted scraping of facial images;
  6. Emotion recognition in the workplace and in educational institutions;
  7. Biometric categorisation to infer sensitive characteristics;
  8. Biometric remote identification in real time.

In order to facilitate legally compliant handling when classifying an AI practice as prohibited, the European Commission published its Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). These Guidelines are intended to increase legal clarity and to provide insight into the Commission's interpretation of the prohibitions in Article 5 of the AI Act, with a view to ensuring their consistent, effective and uniform application. The Guidelines also provide a number of helpful examples that vividly illustrate the abstract definitions of prohibited AI practices.

The regulations on prohibited AI practices are also important for employers. Since 2 February 2025, it has been expressly prohibited to use AI systems, for example in an application process, that use cameras, voice analysis or sensors to capture physiological signals and emotions and to determine and evaluate information on stress levels, satisfaction or health status.

The prohibition can also become relevant in exceptional cases when general-purpose AI systems, such as ChatGPT, Gemini, or Microsoft Copilot, are used in the workplace for prohibited practices, for example, to analyse emotions of employees.

Non-compliance with the prohibition is subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

Regulations on AI literacy

According to Article 4 of the AI Act, providers and deployers of AI systems must ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. This obligation applies to all companies that develop an AI system, and are thus providers, or that use one on their own responsibility as deployers. Employers are therefore deployers if they use AI systems on their own responsibility. Consequently, employers should train their employees in the operational use of AI systems so that they develop the skills, knowledge and understanding needed to use AI systems in an informed way. In addition, the training is intended to convey awareness of the opportunities and risks of AI and of the harm it can cause. The training requirements depend on the technical knowledge, experience and education of the persons concerned and on the context in which the AI systems are to be used. The higher the risk classification of the AI system, the more extensive the training in AI expertise must be.

Abstract-technical definition as a practical difficulty

Practical difficulties in implementing and complying with the legal requirements of the AI Act arise partly due to the elusive definition of AI systems.

Article 3 no. 1 of the AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This abstract definition offers little practical guidance.

The EU has responded to this by publishing the Guidelines on the definition of an artificial intelligence system established by the AI Act in order to facilitate the effective application of its rules. A detailed explanation of the core elements of AI systems and the explicit exclusion of systems that are based exclusively on functional rules defined by natural persons are intended to support deployers in classifying the systems they use as AI systems.

The insight “What is AI?” covers the definition of AI systems in detail and provides a more precise overview of this topic.

Outlook

While the classification as a high-risk AI system under the conditions of Article 6 paragraph 1 of the AI Act will not apply until 2 August 2027, the remaining provisions of the Act will apply from 2 August 2026, including the regulations concerning high-risk AI in employment and employee management under Article 6 paragraph 2 of the AI Act in conjunction with Annex III. Special obligations will then apply to deployers of high-risk AI systems, as these systems are generally capable of causing considerable damage in the event of errors or misuse. For these systems, deployers, i.e. employers who use such systems on their own responsibility, must ensure that:

  • the system is used in accordance with the provider's instructions for use,
  • the system is supervised by sufficiently competent and trained natural persons,
  • the input data used is sufficiently relevant and representative in relation to the intended purpose,
  • the system is continuously monitored in accordance with the instructions for use,
  • automatically generated logs are stored for at least six months,
  • employees and employee representatives affected by a high-risk AI system in the workplace are informed in advance.

In this context, too, the obligations for high-risk AI systems apply not only when employers use systems whose primary purpose is one of those mentioned in Annex III, such as the analysis and filtering of applications. The use of general-purpose AI systems like ChatGPT, Gemini, or Microsoft Copilot can also lead to classification as high-risk AI if they are used for such purposes.

Summary

The AI Act makes employers responsible for the use of AI systems in the workplace. Management and HR staff therefore need to take action. The first step is to provide employees who are involved in the operation and use of AI systems on behalf of the employer with a sufficient level of expertise in dealing with the systems.

In future, AI systems that are used in the application process or in the performance of the employment relationship must also be operated in compliance with the obligations for high-risk systems.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.