AI View: March 2026

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world

18 March 2026

Publication

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. European Commission publishes second draft of Transparency Code of Practice under EU AI Act

  2. US States progress AI bills:
    a. New York: AB A3411B on generative AI notices
    b. Oregon: SB 1546 on AI companions
    c. Utah: HB 276 on explicit deepfakes

  3. UK House of Lords publishes report on “AI, copyright and the creative industries”

  4. Ireland opens consultation on draft law implementing the EU AI Act

  5. UN hosts First Meeting of the Independent International Scientific Panel on Artificial Intelligence

  6. UK CMA issues guidance on complying with consumer law when using AI agents

  7. European Parliament adopts report on AI, copyright and the EU’s creative sector

  8. Vietnam issues implementation plan for AI Law

1. European Commission publishes second draft of Transparency Code of Practice under EU AI Act

On 5 March 2026, the European Commission published the second draft of its Code of Practice on the marking and labelling of AI-generated content (the Code). The voluntary Code is intended to help providers and deployers meet the transparency requirements for AI-generated content under Article 50 of the EU AI Act.

The new draft has been streamlined and simplified to provide greater flexibility for signatories, reduce compliance burdens and improve legal clarity and practicality. It promotes the use of open standards for AI content marking and envisages an EU icon for labelling, to support more consistent implementation and lower costs.

The Code is structured in two sections:

  • Section 1 – Providers of generative AI systems
    Section 1 addresses marking and detecting AI-generated content for providers within the scope of Article 50(2) AI Act. It has been revised, with some measures removed or consolidated and new optional elements introduced, while aiming to keep all commitments technically feasible and proportionate. Key commitments include a revised two-layered marking approach using secured metadata and watermarking, optional fingerprinting and logging, and protocols for detection and verification.
  • Section 2 – Deployers and labelling of deepfakes and public interest content
    Section 2 targets deployers and focuses on labelling deepfakes and text publications on matters of public interest under Article 50(4) AI Act. It adopts a more practice-oriented approach, removes the previous taxonomy distinguishing AI-generated from AI-assisted content, and sets design and placement requirements for icons, labels or disclaimers to ensure a minimum level of uniformity. It also proposes a taskforce to develop a uniform, interactive EU icon, and further clarifies regimes for artistic, creative, satirical and fictional works, as well as text under human review or editorial control. The annex now includes illustrative examples of a potential EU icon to be made freely available to signatories.

The Commission will collect feedback on the second draft from participants and observers to the Code until 30 March 2026, with finalisation expected by the beginning of June 2026.

Read the Code here.

2. US States progress AI bills

New York: AB A3411B on generative AI notices

On 9 March 2026, New York State passed Assembly Bill A3411B, which will require the owner, licensee or operator of a generative AI system to display a clear notice on the system’s user interface to inform users that the system’s outputs may be inaccurate.

The law applies to “generative AI systems”, defined broadly as encompassing self-supervised AI models that emulate the structure and characteristics of input data to generate synthetic content, including images, video, audio, text and other digital content.

The measure reflects a growing trend towards simple, standardised warnings for AI tools used by the general public, particularly where there is a risk that users may over-rely on outputs as accurate or authoritative. Providers offering generative AI services into New York will need to assess whether their interfaces meet the new “conspicuous notice” standard and may wish to align wording and placement with other emerging AI transparency requirements.

Read A3411B here.

Oregon: SB 1546 on AI companions

On 5 March 2026, Oregon enacted Senate Bill 1546, creating a dedicated regime for “AI companions” – systems that use AI to simulate sustained, human-like platonic, intimate or romantic relationships with users, including by retaining user information, asking unsolicited emotionally themed questions and maintaining ongoing personal dialogue. Customer service tools, most enterprise and productivity software, certain video game features and conventional virtual assistants are excluded.

Where a reasonable person could believe they are interacting with a natural person, operators must provide a clear and conspicuous notice that the user is engaging with artificially generated output. Stricter rules apply where an operator knows or has reason to believe a user is a minor.

Individuals suffering monetary loss, property loss or other injury as a result of violations may bring a civil action for the greater of actual damages or USD 1,000 per violation, plus injunctive relief and, for prevailing plaintiffs, legal fees and costs.

Read Senate Bill 1546 here.

Utah: HB 276 on explicit deepfakes

On 13 March 2026, Utah H.B. 276 came into force, combining a new Digital Voyeurism Prevention Act with a Digital Content Provenance Standards Act. The law targets non-consensual AI-generated intimate imagery and introduces authenticity and labelling requirements for digital content.

Under the Digital Voyeurism Prevention Act, “counterfeit intimate images” are intimate images depicting an identifiable individual that are created or manipulated using AI or other digital tools. Generation services (those which allow users to generate intimate images) must not distribute such images without first obtaining and verifying the depicted individual’s express, specific and revocable consent, supported by an identity verification system that retains records for at least seven years. Platforms (i.e. services that host and distribute user-generated content and allow users to interact with it) may not knowingly allow distribution of non-consensual counterfeit intimate images and must operate notice and takedown procedures, including temporary disabling within 48 hours of notice, investigation within seven days and permanent removal (and prevention of reposting substantially similar images) where lack of consent is confirmed.

Individuals (or their heirs) can bring civil actions against generation services and platforms for violations, with courts able to order removal and destruction of offending content. Plaintiffs may recover actual damages (including emotional distress), statutory damages of USD 10,000–100,000 per violation for generation services and USD 5,000–50,000 per violation for platforms, punitive damages for wilful, reckless or malicious conduct, and reasonable legal fees and costs.

Read H.B. 276 here.

3. UK House of Lords publishes report on “AI, copyright and the creative industries”

On 6 March 2026, the House of Lords Communications and Digital Committee published its report “AI, copyright and the creative industries” (the Report), warning that the UK’s creative industry is under threat from generative AI systems that imitate creative works using training data drawn from vast quantities of human‑created content, often without consent from or remuneration for creators.

The Report concludes that the problem lies not in an outdated copyright framework, but in widespread unlicensed use of protected works and limited transparency from AI developers, which leaves rightsholders unsure whether their content has been used and unable to enforce their rights when it has. The Report also highlights the absence in UK law of a robust personality right or specific protection for digital likeness, limiting creators’ ability to challenge harmful outputs that mimic their style, voice or persona.

The Report urges the Government to position the UK as a world‑leading hub for responsible, licensing‑based AI development, rather than weakening copyright in pursuit of speculative AI gains.

Key recommendations to the Government include:

  • Ruling out a new commercial text and data mining exception with an opt‑out model and providing a clear decision on AI and copyright in the next 12 months;
  • Closing gaps in protection for identity, style and likeness by introducing rights against unauthorised digital replicas and “in the style of” outputs;
  • Making transparency about AI training data a statutory obligation for UK AI developers, supported by procurement and regulatory levers to influence international providers; and
  • Creating conditions for a fair, inclusive licensing market, backed by open, globally aligned standards for rights reservation, data provenance and labelling of AI‑generated content.

Read the Report here.

4. Ireland opens consultation on draft law implementing the EU AI Act

On 6 March 2026, Ireland’s Joint Committee on Enterprise, Tourism and Employment opened a public consultation on the pre‑legislative scrutiny of the General Scheme of the Regulation of the Artificial Intelligence Bill 2026 (the Bill). The Bill is described as necessary for the full implementation and enforcement in Ireland of the EU AI Act.

The Committee is seeking views, evidence and recommendations on any aspect of the General Scheme, including:

  • How AI should be regulated in Ireland;
  • The role of the new AI Office of Ireland;
  • Impacts on businesses, workers, consumers and public services;
  • Safeguards for individuals’ rights and protections; and
  • Practical considerations or suggestions for improvement.

Written submissions are invited from interested individuals and organisations by 13 April 2026.

Read more on the consultation here and read the Bill here.

5. UN hosts First Meeting of the Independent International Scientific Panel on Artificial Intelligence

On 3 March 2026, UN Secretary‑General António Guterres addressed the first meeting of the new Independent International Scientific Panel on Artificial Intelligence (the Panel), established by the UN General Assembly to provide scientific guidance on the global governance of AI. The Panel brings together experts from diverse regions and disciplines, serving in their personal capacity, to deliver independent scientific assessments on the societal, economic and security impacts of AI.

The Panel’s creation follows recommendations from the UN High‑Level Advisory Body on AI and is intended to underpin the forthcoming Global Dialogue on AI Governance, a new UN process aimed at coordinating international approaches to AI regulation and cooperation. Guterres framed the Panel as a “first‑of‑its‑kind” global, independent scientific body tasked with helping to shape the trajectory of AI “for the benefit of humanity – while there is still time”, stressing that AI is advancing at “lightning speed” and that no single country, company or research field can see the full picture alone.

The Secretary‑General emphasised that the world “urgently needs a shared, global understanding of artificial intelligence” grounded in science rather than ideology or misinformation. The Panel’s mandate is intentionally broad, spanning frontier systems and the impacts already unfolding across peace and security, human rights and sustainable development.

Read the Secretary-General’s remarks here.

6. UK CMA issues guidance on complying with consumer law when using AI agents

On 9 March 2026, the UK Competition and Markets Authority (CMA) published guidance for businesses on complying with consumer protection law when using “agentic” AI systems to interact with customers (the Guidance). The Guidance is aimed at firms deploying AI agents for activities such as handling customer queries, processing refunds, recommending products and running marketing campaigns, and emphasises that AI can only deliver its full benefits where consumers trust AI‑driven services and their providers.

The Guidance underlines that, when dealing with customers, “the same rules apply whether using AI or human agents”. Businesses remain legally responsible for what AI agents do in the same way as for employees, including where the AI is designed or provided by a third party. Breaches of consumer protection law can lead to enforcement action, including fines of up to 10% of worldwide turnover and potential compensation orders.

Key expectations include:

  • Transparency about AI use – Businesses should be clear and open where AI agents are used, particularly where this might surprise consumers or affect their decisions.
  • Training AI agents to comply with consumer law – Firms should design and train AI agents with consumer law in mind, including ensuring respect for statutory rights and contractual terms (for example, cancellation and refund rights), avoiding misleading statements or omissions, and obtaining any necessary consents. Testing (such as A/B or unit testing) is described as crucial before deployment.
  • Ongoing monitoring and human oversight – Businesses should regularly check that AI agents are delivering the right results, behaving as intended and complying with consumer law. The Guidance highlights risks of “hallucinations” and stresses the need for a “human in the loop” actively reviewing decisions and outputs.
  • Rapid remediation – Where AI agents are found to be producing non‑compliant or potentially non‑compliant outcomes, firms should act quickly to refine prompts, workflows or system design, particularly where AI interacts with large numbers of users or vulnerable consumers.

Read the Guidance here.

7. European Parliament adopts report on AI, copyright and the EU’s creative sector

On 10 March 2026, the European Parliament adopted a set of recommendations on protecting copyrighted works and the EU’s creative sector in the age of AI (the Report). The Report underlines that EU copyright law should apply to all generative AI (genAI) systems placed on the EU market, regardless of where they are trained, and that the use of protected content for AI training must be subject to transparency and fair remuneration.

The Report calls for genAI providers and deployers to ensure that use of copyrighted material is fairly remunerated, including exploring mechanisms to compensate for past uses, while rejecting the idea of a global licence allowing training in exchange for a flat‑rate payment. It stresses the need for full transparency over training and use of protected content, including itemised lists of all copyrighted works used for training and detailed records of crawling activities for inference and retrieval‑augmented generation. A lack of such transparency could be treated as copyright infringement, with AI providers and deployers required to bear all legal costs and related expenses where courts find in favour of rightsholders.

Read the Report here.

8. Vietnam issues implementation plan for AI Law

On 1 March 2026, Vietnam’s new law regulating AI (the Law) entered into force, making it the first country in Southeast Asia to adopt a comprehensive AI framework. Passed by the National Assembly in December 2025, the Law focuses in particular on risks associated with generative AI and is described by the government as aligning with international standards while maintaining “digital sovereignty”, drawing parallels with the EU AI Act’s emphasis on human oversight and control.

The Law applies broadly to developers, providers and deployers of AI systems, including foreign entities operating in Vietnam. It introduces transparency obligations requiring companies to clearly label AI‑generated content – including deepfakes and other outputs that cannot readily be distinguished from reality – and to inform users when they are interacting with an artificial, rather than human, agent. These measures are aimed at mitigating harms such as misinformation, online abuse and copyright violations linked to chatbots and image generators.

Commentary from local practitioners characterises the Law as a regulatory milestone and a “decisive starting point” that embeds responsibility, human control and risk management as core themes of AI regulation. However, they also note that the practical impact for businesses will depend heavily on forthcoming implementing decrees, sector‑specific rules and enforcement practice, with some uncertainty expected to persist until detailed guidance is issued.

Read the Law here (Vietnamese only).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.