AI View - February 2026

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

17 February 2026


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. EU sets out implementation priorities for AI Act in 2026

  2. US introduces bill to protect children from harmful companion AI chatbots

  3. UK Ofcom launches consultation on impact of AI in the telecommunications sector

  4. EU misses Article 6 AI Act guidance deadline, deepening uncertainty on high-risk AI rules

  5. UK reports on progress under AI Opportunities Action Plan

  6. EU sets up signatory taskforce to implement General Purpose AI Code of Practice

  7. ESMA releases Data Strategy 2025 Roadmap for scaling data hubs, AI tools and streamlined EU reporting

  8. Dutch data protection authority publishes report on generative AI

  9. Indonesia publishes draft rules for mandatory labelling of AI-generated content

1. EU sets out implementation priorities for AI Act in 2026

On 11 February 2026, press reported on a European Commission document, seen by journalists, setting out the Commission's implementation priorities for the EU AI Act. The report indicates which secondary measures the Commission intends to advance in 2026 and which are likely to be deferred, a distinction that is significant for providers of general-purpose AI (GPAI) models and high-risk systems.

Measures prioritised for 2026

  • Procedural rules for enforcing GPAI model obligations: An implementing act is targeted for Q2 2026 to define enforcement procedures, safeguards, and fines applicable to GPAI model providers. This must be in place before GPAI model obligations become enforceable on 2 August 2026. The approach is expected to draw on enforcement models under the Digital Services Act and Digital Markets Act.
  • Regulatory sandbox framework: An implementing act expected in Q1 2026 will specify how national competent authorities must establish and operate AI regulatory sandboxes. These sandboxes are intended to enable supervised testing while maintaining regulatory oversight.
  • Real-world testing conditions: The Commission plans to adopt rules by late 2026 or early 2027 defining the conditions under which AI systems may be tested in real-world environments. Proposed amendments under the AI Omnibus package may broaden the scope of such testing.
  • AI Office as market surveillance authority: By Q3 2026, an implementing act is expected to define how the AI Office will exercise market surveillance functions, including coordination with national authorities and oversight of compliance.
  • Revision of the compute threshold for systemic-risk GPAI models: A delegated act tentatively scheduled for 2026 may revise the compute threshold used to designate GPAI models with “systemic risk”.

Measures deprioritised

The Commission has signalled that several technically complex or politically sensitive measures are unlikely to be adopted in 2026:

  • Qualitative criteria for systemic-risk designation: Further specification of qualitative criteria supplementing the compute threshold.
  • Energy-consumption measurement methodologies: Standardised methods for estimating and verifying energy consumption in GPAI model training.
  • Updates to technical documentation requirements: No near-term revision is planned to the public technical documentation obligations applicable to GPAI model providers.
  • Implementing acts approving GPAI codes of practice: These may be removed in favour of a simplified procedure under the AI Omnibus proposals.
  • Harmonised post-market monitoring template for high-risk AI: A standardised template for post-market monitoring.
  • Remuneration framework for the scientific panel: Rules governing payment of experts advising on GPAI enforcement.

Read press coverage here (MLex subscription required).

2. US introduces bill to protect children from harmful companion AI chatbots

On 22 January 2026, the bipartisan Children Harmed by AI Technology Act (the CHAT Act or the Bill), S 2714, was introduced in the US Senate. The Bill proposes a targeted regulatory framework for "companion AI chatbots", a specific category of AI systems, with the express aim of mitigating risks to minors arising from emotionally responsive and interactive AI technologies.

The Bill applies to any person or organisation that owns, operates, or makes available a companion AI chatbot to users in the US. Companion AI chatbots are defined as “any software-based artificial intelligence system or program that exists for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication with a user”. The scope explicitly excludes customer service bots, business and productivity tools, most video game chatbots, and conventional voice-activated virtual assistants.

From an AI regulatory perspective, the Bill is notable for imposing system-level design and governance obligations rather than relying solely on downstream content controls. Covered entities would be required to implement robust age-verification mechanisms for all users, including retroactive verification of existing accounts. Where a user is identified as a minor, additional safeguards apply, including mandatory parental account linkage, verifiable parental consent, and the automatic blocking of access to any chatbot capable of sexually explicit communication.

The Bill also introduces an obligation to monitor chatbot interactions for suicidal ideation. These requirements effectively mandate ongoing behavioural monitoring and response mechanisms within AI systems deployed at scale.

Enforcement is centred on the Federal Trade Commission Act (the FTC Act), with violations treated as unfair or deceptive practices under that Act. The FTC must issue compliance guidance within 180 days of enactment, and a statutory safe harbour is available where entities act in good faith, follow FTC guidance, and conform to recognised industry standards for age verification.

The Bill reflects a growing regulatory trend towards sector- and use-case-specific AI obligations, particularly where AI systems interact directly with children and vulnerable users. If enacted, it would take effect one year after enactment, giving covered entities a defined implementation period.

Read the Bill here.

3. UK Ofcom launches consultation on impact of AI in the telecommunications sector

On 27 January 2026, the UK communications regulator, Ofcom, launched a consultation on the impact of AI in the telecommunications sector. The consultation aims to gather evidence on how AI tools are being used throughout the telecoms value chain, the risks and opportunities they present for residential and business customers, and whether existing regulatory rules may require adjustment to support responsible innovation and consumer protection.

The consultation document explains that while AI is not new to communications, the recent rapid adoption of tools such as AI-driven customer support and personalised service applications is reshaping customer interactions and service delivery. Ofcom seeks input from telecoms providers, technology developers, consumer organisations, researchers, and others on three core questions:

  • Deployment and adoption: how AI tools are currently used across the telecoms value chain and their effects on customer experience.
  • Opportunities and risks: the benefits and potential harms AI might pose to residential and business customers.
  • Regulatory fit: whether existing regulatory frameworks require modification to enable responsible innovation or to safeguard consumers.

The consultation is explicitly connected to Ofcom’s Strategic Approach to AI 2025/26 report, which outlines the regulator’s broader stance on AI across sectors it oversees, emphasising technology-neutral regulation that focuses on outcomes rather than prescribing specific technologies.

Responses to the consultation will inform Ofcom’s findings, to be published in the second half of 2026, and may lead to proposals for regulatory changes affecting how AI is deployed and governed in telecoms markets. This work reflects the UK’s ongoing sectoral approach to AI regulation, where established regulators address AI’s impact within their respective remits.

The consultation is open for stakeholder submissions until 10 March 2026.

Read the report and engage with the consultation here.

4. EU misses Article 6 AI Act guidance deadline, deepening uncertainty on high-risk AI rules

In early February 2026, the European Commission missed a statutory deadline to publish guidance on Article 6 of the EU AI Act, a central provision governing the classification of high-risk AI systems. Article 6 determines whether an AI system falls within the Act’s high-risk regime, triggering extensive obligations (especially for providers) relating to risk management, technical documentation, post-market monitoring and conformity assessment.

Under Article 6(5), the Commission was required to issue practical guidance and illustrative use cases clarifying how providers may assess whether a system should be treated as high-risk. That guidance was also expected to address post-market monitoring plans. Its absence leaves providers without authoritative direction just months before key compliance milestones.

The Commission has indicated that it is consolidating stakeholder feedback and intends to publish draft high-risk guidance for further consultation, potentially by the end of February, with final adoption now expected in March or April 2026. This delay has fuelled wider uncertainty around the AI Act’s implementation timetable, particularly as high-risk obligations are currently due to apply from August 2026.

The missed deadline is closely linked to the Commission's proposed "Digital Omnibus" package, which would postpone the high-risk regime. The delay to the guidance is expected to increase the likelihood of that postponement being approved.

Read Article 6 of the AI Act here.

5. UK reports on progress under AI Opportunities Action Plan

On 13 January 2026, the UK Government published its one-year update to the AI Opportunities Action Plan, marking a shift from high-level ambition to delivery across infrastructure, skills, public services and governance, assurance and procurement. The update states that around 75% of the Action Plan commitments have now been delivered, supported by a public delivery dashboard and reinforced ministerial direction to regulators to enable safe AI adoption at pace.

Alongside the formal policy update, the Government signalled a sharper operational focus on regulation and procurement as enablers of AI adoption, rather than constraints. Regulators have been asked to set out concrete plans for driving responsible AI deployment within their remits and to report annually on progress. This reinforces the UK’s regulator-led model, where existing sector regulators remain responsible for applying AI governance in context, rather than through a single horizontal AI statute.

The update also places renewed emphasis on workforce transition and labour-market governance, including the launch of a new Future of Work Unit to provide evidence on AI’s employment impacts and guide policy responses. This is positioned as a regulatory complement to large-scale AI upskilling initiatives, aimed at maintaining public confidence as AI adoption accelerates.

From an assurance perspective, the Government reiterates investment in the AI Security Institute, expansion of frontier model testing, and the development of a domestic AI assurance ecosystem. This includes support for third-party assurance services, regulatory sandboxes, and cross-economy experimentation environments designed to allow real-world testing while risks are assessed and managed.

Read the AI Opportunities Action Plan here.

6. EU sets up signatory taskforce to implement General Purpose AI Code of Practice

On 2 February 2026, the European Commission’s AI Office announced the establishment of the Signatory Taskforce (the Taskforce) under the General-Purpose AI (GPAI) Code of Practice (the Code). The Taskforce is intended to facilitate coherent application of the Code as a voluntary mechanism supporting compliance with the EU AI Act in respect of GPAI models.

The AI Act’s transparency, safety and accountability obligations for providers of GPAI models have applied since 2 August 2025, with enforcement scheduled to begin in August 2026. The Code, which was developed through a multi-stakeholder process and endorsed by the European Commission and the AI Board, enables providers to demonstrate adherence to these obligations on a structured, voluntary basis.

Chaired by the AI Office, the Taskforce comprises Code signatories that elect to become Taskforce members. Its core function is to provide a forum for exchanges relevant to implementation of the Code. In particular, it may:

  • facilitate discussion on practical application of the Code’s commitments.
  • provide input on draft guidance documents, without prejudice to formal public consultations.
  • exchange views on technological developments relevant to GPAI compliance.
  • gather and disseminate research, independent expert input and emerging evidence.

The Taskforce will meet at least annually, with sessions convened by the AI Office either on its own initiative or at the request of members. To promote transparency, the AI Office has published a Taskforce Vademecum and committed to registering meetings and issuing high-level summaries, subject to commercial confidentiality constraints.

Read the official announcement here.

7. ESMA releases Data Strategy 2025 Roadmap for scaling data hubs, AI tools and streamlined EU reporting

On 13 January 2026, the European Securities and Markets Authority (ESMA) published the 2025 update to its Data Strategy 2023–2028 (the 2025 Update), setting out an enhanced roadmap for data-driven supervision and regulatory efficiency across EU financial markets.

The 2025 Update reflects a marked shift towards greater use of AI, SupTech and RegTech tools within supervisory processes. It also aligns with the European Commission’s objective of reducing reporting burdens by 25% and supporting the development of the Savings and Investments Union.

The 2025 Update positions ESMA as a central EU data hub, with six strategic objectives including data-driven supervision, efficient data policy, and thought leadership in data standards and reporting innovation. Key regulatory and AI-related measures include:

  • deployment of AI-based proofs-of-concept for market abuse detection and anomaly detection in supervisory datasets.
  • expansion of machine learning and advanced analytics capabilities on the ESMA Data Platform.
  • exploration of machine-readable and executable reporting to streamline regulatory data flows.
  • development of a common securities markets data dictionary to enhance harmonisation and reduce duplicative reporting.
  • publication of machine-readable public data, including via the European Single Access Point.
  • use of synthetic and anonymised datasets to support innovation and research while safeguarding confidentiality.

The 2025 Update places particular emphasis on generative AI deployment within ESMA’s internal supervisory toolkit and coordinated SupTech development with National Competent Authorities. It also highlights joint supervisory tools under MiCA and integrated reporting initiatives across AIFMD, UCITS, MiFIR, EMIR and SFTR frameworks.

Overall, the 2025 Update signals a move beyond incremental reporting reform towards a structurally digital supervisory architecture. For regulated firms, the practical implications include increased data standardisation, deeper analytics-led supervision, and growing expectations around data governance, interoperability and machine-readability.

Read the strategy here.

8. Dutch data protection authority publishes report on generative AI

On 4 February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), published its vision document ‘Moving forward responsibly: AP’s Vision on Generative AI’ (the Report), setting out how generative AI should be developed and deployed in line with fundamental rights and EU law.

The Report frames generative AI as a transformative technology that can strengthen rights such as education and healthcare, but which also presents acute risks to privacy, non-discrimination, democratic governance and digital autonomy. The AP emphasises that generative AI does not operate in a legal vacuum: its development and deployment must comply with the GDPR, AI Act and a broader body of EU digital legislation, including the Digital Services Act, Digital Markets Act and Data Act.

A key regulatory concern is the rapid centralisation of sensitive data and growing societal dependence on a small number of providers. The Report highlights risks linked to training data scraping, opacity in model design, misuse of chatbots (including in mental health contexts), and the erosion of information reliability. It warns that unlawful training practices and inadequate risk mitigation may infringe fundamental rights across the AI lifecycle.

The Report outlines four future scenarios for 2030, ranging from a deregulated “Wild West” to a high-regulation, low-adoption “Bunker” model. The AP identifies “Values at work” as the preferred outcome: high adoption combined with effective, enforceable EU regulation. In this scenario, generative AI applications are transparent, purpose-specific, risk-assessed, and lawfully deployed, supported by strong supervisory cooperation and AI literacy.

The Report also signals the AP's role in shaping enforcement of the AI Act: contributing to GPAI model oversight, issuing further GDPR guidance in 2026, and supporting regulatory sandboxes to enable compliant innovation.

Read the Report here.

9. Indonesia publishes draft rules for mandatory labelling of AI-generated content

On 27 January 2026, Indonesia’s Ministry of Communication and Digital (Komdigi) confirmed it is drafting a ministerial regulation requiring generative AI content to carry a watermark or clear label when published on digital platforms (the Proposed Regulation). The proposal was presented during a hearing with House Commission I and is intended to strengthen transparency and accountability in response to the rapid proliferation of deepfakes.

Under the Proposed Regulation, platforms hosting AI-generated content, including social media and other electronic system providers, would be required to ensure that outputs produced using generative AI are labelled. Content that appears without an AI label would be subject to takedown measures. The obligation would apply to AI service providers whose systems generate text, images, audio or video content.

The Proposed Regulation is designed to complement two forthcoming Presidential Regulations: a National AI Roadmap and a regulation on AI ethics. The Roadmap will identify priority sectors for AI deployment, including food security, transport, logistics and finance, and establish a cross-ministerial task force to coordinate implementation. The AI ethics regulation will impose obligations across three constituencies: users, industry actors and regulators, including safeguards around data protection, cybersecurity and responsible deployment.

Although Indonesia’s 2008 Electronic Information and Transactions Law already provides sanctions for unlawful online content, it does not contain AI-specific provisions. The new labelling requirement would therefore function as a transparency measure aimed at mitigating risks associated with manipulated content, including non-consensual sexual deepfakes and political disinformation.

Read press coverage here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.