AI View - March 2026

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

02 March 2026



Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

We are hosting our AI x Dispute resolution webinar series throughout 2026, exploring how AI-related disputes are already emerging in practice and what organisations should be doing as regulatory enforcement and litigation risk accelerate.

The recording of our first session, “Overview of AI disputes + what’s coming down the line?”, can be viewed here. To sign up for future sessions, covering topics such as IP, collective actions, and professional negligence, please follow the link to register.

This edition brings you:

  1. New Delhi Declaration on AI Impact signed by 91 countries and international organisations

  2. Data protection and privacy authorities issue joint statement on AI-generated imagery

  3. UK regulatory authorities asked to publish AI guidance

  4. US Center for AI Standards and Innovation announces AI Agent Standards Initiative

  5. Irish Government publishes Digital and AI Strategy

  6. UAE Central Bank issues guidelines on AI in the financial sector

  7. UK research body launches £1.6 billion AI strategy

1. New Delhi Declaration on AI Impact signed by 91 countries and international organisations

On 21 February 2026, participants from 91 countries and international organisations convened in New Delhi for the AI Impact Summit, issuing a declaration (the Declaration) that sets out a shared vision for the responsible and inclusive development of AI. The Declaration states that the benefits of AI must be realised for all of humanity.

The Summit identified seven foundational “Chakras” (pillars) for international cooperation and multistakeholder engagement:

1. Democratising AI resources: The Declaration recognises that robust digital infrastructure and affordable connectivity are prerequisites for unlocking AI’s potential.

2. Economic growth and social good: The wide-scale adoption of AI is seen as a catalyst for economic and social development. The Declaration highlights the role of open-source and accessible AI applications in enabling scalability and adaptability across sectors.

3. Secure and trusted AI: The Declaration emphasises the importance of secure, trustworthy, and robust AI systems. It recognises the value of industry-led voluntary measures, technical solutions, and policy frameworks that promote innovation and the public interest.

4. Science: The Declaration calls for the removal of structural barriers to AI innovation and the expansion of research infrastructure.

5. Access for social empowerment: AI is recognised as a tool for social empowerment, enabling access to knowledge, services, and opportunities.

6. Human Capital: The Declaration stresses the need for investment in AI skills, workforce development, and public awareness.

7. Resilience, Innovation, and Efficiency: Recognising the resource demands of AI, the Declaration underscores the importance of energy-efficient and resilient AI systems.

Read the Declaration here.

2. Data protection and privacy authorities issue joint statement on AI-generated imagery

On 23 February 2026, a coalition of global data protection and privacy authorities issued a joint statement addressing mounting concerns over AI systems capable of generating highly realistic images and videos depicting identifiable individuals without their knowledge or consent (the Statement). The Statement, coordinated by the Global Privacy Assembly’s International Enforcement Cooperation Working Group (IEWG), reflects growing regulatory unease over non-consensual intimate imagery, defamatory content, and other harmful materials, particularly as such capabilities become integrated into widely accessible social media platforms.

The Statement highlights the risks to children and other vulnerable groups, including the potential for cyber-bullying and exploitation. It reminds all organisations developing or deploying AI content generation systems that they must comply with applicable legal frameworks, including data protection and privacy laws. The creation of non-consensual intimate imagery is emphasised as a criminal offence in many jurisdictions.

The Statement sets out clear expectations for organisations, including:

  • Implementing robust safeguards to prevent misuse of personal information and the generation of non-consensual intimate imagery or other harmful content, with particular attention to depictions of children.
  • Ensuring meaningful transparency regarding AI system capabilities, safeguards, acceptable uses, and the consequences of misuse.
  • Providing effective and accessible mechanisms for individuals to request the removal of harmful content involving their personal information.
  • Addressing specific risks to children by implementing enhanced safeguards and providing clear, age-appropriate information to children, parents, guardians, and educators.

The signatories call for a coordinated regulatory response, recognising the significant harms arising from the non-consensual generation of intimate, defamatory, or otherwise harmful content. They commit to sharing information on enforcement, policy, and educational approaches, consistent with applicable laws, to address these risks collectively.

Read the Statement here.

3. UK regulatory authorities asked to publish AI guidance

On 28 January 2026, the Secretaries of State for Science, Innovation and Technology and for Business and Trade issued a joint letter to UK regulators (the Letter), setting out expectations for supporting safe and effective AI-powered innovation across regulated sectors.

The Letter underscores the government’s commitment to driving economic growth by fostering a regulatory environment that actively supports investment, innovation, and productivity, with AI identified as a key opportunity.

The Letter calls on regulators to maintain a clear organisational focus on enabling safe AI innovation, acting proportionately and transparently, and removing unnecessary barriers to adoption. Key actions requested of regulators include:

  • Publishing a Plan by May 2026: Regulators are asked to work with their sponsor department and the Department for Science, Innovation and Technology to publish a plan detailing how they will enable safe AI-powered innovation. This plan should include:
    • Guidance on how existing rules apply to AI use cases within their remit, covering both regulated entities’ use of AI and the application of rules to new AI services or products.
    • Steps to ensure regulatory processes, including approvals, are compatible with AI-enabled products and services, particularly those that are dynamic and receive regular updates post-approval.
    • Initiatives to consider whether anonymised or synthetic data sets could be made available to support AI development, deployment, and adoption.
    • The creation of regulatory sandboxes where beneficial, particularly in areas where regulatory uncertainty may hinder AI adoption and innovation.
  • Annual Reporting: Following publication of the plan, regulators are expected to report annually on how their regulatory approach has enabled AI-driven innovation and growth in their sector. This reporting should summarise key actions, outcomes, transparent metrics, and lessons learned, and identify any necessary adjustments to remove regulatory barriers to innovation and AI adoption.

Read the Letter here.

4. US Center for AI Standards and Innovation announces AI Agent Standards Initiative

On 17 February 2026, the Center for AI Standards and Innovation (CAISI) at the U.S. National Institute of Standards and Technology (NIST) announced the launch of the AI Agent Standards Initiative (the Initiative). The Initiative is designed to ensure that the next generation of AI agents - capable of autonomous actions - can be adopted with confidence, act securely on behalf of users, and interoperate seamlessly across the digital ecosystem.

AI agents are increasingly able to perform complex tasks autonomously, such as writing and debugging code, managing communications, and conducting online transactions. However, the widespread adoption of these agents is currently limited by challenges around interoperability, reliability, and security. Without robust standards, the risk of a fragmented ecosystem could impede innovation and harm user trust.

To address these challenges, CAISI, in coordination with NIST’s Information Technology Laboratory (ITL) and other federal partners, is advancing the Initiative along three core pillars:

  • Facilitating industry-led development of agent standards and supporting existing U.S. leadership in international standards bodies.
  • Fostering community-led open-source protocol development and maintenance for AI agents.
  • Advancing research in AI agent security and identity to enable new use cases and promote trusted adoption across economic sectors.

NIST will shortly announce further research, guidelines, and deliverables under the Initiative. To ensure broad stakeholder engagement, NIST is soliciting public input through Requests for Information (RFIs) on AI agent security (responses due 9 March) and on AI agent identity and authorisation (responses due 2 April). Beginning in April, CAISI will also convene listening sessions focused on sector-specific barriers to AI agent adoption, with the aim of informing concrete projects to support secure and interoperable deployment.

Read NIST’s announcement of the Initiative here.

5. Irish Government publishes Digital and AI Strategy

On 19 February 2026, the Irish Government published its National Digital & AI Strategy 2030, “Digital Ireland – Connecting our People, Securing our Future” (the Strategy), setting out a comprehensive roadmap to position Ireland as a global leader in digital transformation and AI innovation.

Key pillars of the Strategy include:

  • Apply: A Digital Public Service: By 2030, 100% of key public services will be digitalised, with 90% consumed online. Responsible and transparent use of AI will underpin public service delivery, supported by initiatives such as a new AI Advisory Unit, a National AI Fellowship Programme, and the “AI for Care” strategy for healthcare. The digital transformation of health services will be accelerated through the digitisation of health records and the introduction of a national electronic prescribing service.
  • Grow: A Digital, Innovative & Competitive Enterprise Sector: Ireland aims to remain a location of choice for investment and a global hub for applied AI innovation. The Strategy provides for the creation of an AI Research Centre of Scale, an AI Regulatory Sandbox, and a Quantum Centre of Excellence. The new AI Office of Ireland will act as the central coordinating authority for the EU AI Act, supporting responsible AI innovation and regulatory clarity.
  • Invest: Digital and AI Infrastructure: The Government is committed to investing in secure, resilient, and future-proofed digital and AI infrastructure. This includes the completion of the National Broadband Plan, the rollout of gigabit broadband, strengthening international connectivity, and the development of advanced computing infrastructure such as the CASPIr supercomputer and the AI Factory Antenna.
  • Invest: Cyber Security: Recognising the increasing cyber-related risks, the Strategy sets out a roadmap to enhance Ireland’s cyber security capacity, including a new Cyber Security Strategy, the establishment of a Cyber Security Research Centre of Excellence, and targeted support for compliance with the EU NIS2 Directive and the forthcoming Cyber Resilience Act.
  • Lead: Digital Regulatory Hub and Centre of Expertise: Ireland will reinforce its position as an EU Centre of Excellence and digital regulatory hub, advocating for a balanced, proportionate, and coherent approach to digital regulation at EU level.
  • Empower: Online Safety, Skills & Talent: Online safety, particularly for children and vulnerable groups, is a Government priority. The Strategy supports the implementation of the Online Safety Framework, robust age verification tools, and enhanced media literacy and digital citizenship education.
  • Implementation and Stakeholder Engagement: The Strategy sets out 20 high-level objectives and 90 supporting deliverables, with implementation driven from the centre of Government and regular progress reporting to ensure coherence and impact. Stakeholder engagement is central to delivery, with ongoing collaboration across industry, regulators, and wider society.

The Strategy positions Ireland to maximise the benefits of digital and AI technologies, strengthen its role as a digital leader, and ensure that all of society is empowered to succeed in the digital era.

Read the Strategy here.

6. UAE Central Bank issues guidelines on AI in the financial sector

On 11 February 2026, the Central Bank of the United Arab Emirates (CBUAE) issued a comprehensive guidance note setting out principles and guidelines for the responsible adoption and use of AI and machine learning by licensed financial institutions (LFIs) in the UAE (the Guidance Note). The Guidance Note is non-binding but is intended to promote consumer protection and good market conduct in the deployment of AI/machine learning technologies, with a particular focus on the interests of end users.

Key features of the Guidance Note include:

Governance and accountability

  • LFIs are expected to implement a documented governance framework for AI/machine learning, proportionate to the size and complexity of their operations.
  • Senior management and boards are accountable for AI/machine learning systems, including model selection, deployment, oversight, and ongoing risk management.
  • Regular reporting to senior management and boards is required, and governance structures must facilitate informed decision-making and risk mitigation.
  • An inventory of all AI models and systems must be maintained, with compliance measures and training embedded across relevant functions.

Fairness, non-discrimination, and ethics

  • AI/machine learning systems must not cause discriminatory or manipulative outcomes. Data used for training must be accurate, relevant, and representative.

Transparency and explainability

  • LFIs must be transparent with customers about the use of AI, especially in high-impact decisions, and provide clear disclosures in both Arabic and English.
  • Documentation on model design, data, and assumptions must be maintained for audit purposes.
  • Customers should be provided with meaningful information about AI decision logic and mechanisms for clarification or redress, including opt-out rights where appropriate.

Data quality, privacy, and security

  • Data used in AI/machine learning models must be of high quality and comply with all relevant laws, including the UAE Personal Data Protection Law.

Continuous monitoring and review

  • AI/machine learning systems must be continuously monitored for reliability, relevance, and alignment with consumer protection objectives.

Human oversight and consumer protection

  • Meaningful human oversight is required, particularly for decisions with significant consumer impact. Models of oversight include ‘human-in-the-loop’, ‘human-on-the-loop’, and, for low-risk processes, ‘human-out-of-the-loop’.
  • Consumers must be able to request human review of AI decisions and have access to clear complaints and redress mechanisms.
  • AI must not be used to target consumers with unsuitable products or for pressure-selling or misleading marketing.

Ethical collaboration and innovation

  • LFIs are encouraged to collaborate with industry, academia, and regulators, participate in AI sandboxes, and share best practices and case studies on responsible AI use.

The Guidance Note supplements, but does not replace, existing laws and regulations, and LFIs remain responsible for full compliance with all applicable requirements. The CBUAE encourages LFIs to remain informed of technological developments and to seek clarification where necessary.

Read the Guidance Note here.

7. UK research body launches £1.6 billion AI strategy

On 19 February 2026, UK Research and Innovation (UKRI) published its AI Research and Innovation Strategic Framework (the Framework), setting out a comprehensive vision for the UK’s leadership in AI research, innovation, and responsible adoption. The Framework articulates UKRI’s role as the central public funder across the AI value chain, supporting the government’s ambition to harness AI for economic growth, improved public services, and societal benefit.

The Framework is underpinned by an investment of over £1.6 billion directly targeted at the AI sector, with further funding anticipated as AI becomes increasingly embedded across UKRI’s programmes and councils.

Key elements of the Framework include:

  • Vision: To position the UK as a global leader in developing and deploying AI to drive economic growth, improve lives, and address major societal challenges.
  • Strategic role: UKRI will fund groundbreaking research, build national assets, and support commercialisation and scale-up of AI solutions.
  • Priority action areas: Six priority areas are identified, each with ambitious outcomes for 2031:
    • Technology development and future foundations: Investment in foundational research and mission-driven programmes to accelerate the UK’s leadership in explainable, sustainable, and ‘human-in-the-loop’ AI systems.
    • AI transforming research: Delivery of the National AI for Science Strategy, supporting cross-disciplinary testbeds, equitable access to AI tools, and collaborative platforms to address global challenges.
    • Developing AI skills and talent: Expansion of doctoral and fellowship routes, workforce development, and inclusive career frameworks to grow and retain world-leading AI talent.
    • Accelerating innovation and adoption: Strengthening commercialisation pathways, supporting regional clusters, and incentivising public-private collaboration.
    • Championing responsible and trustworthy AI: Supporting research and assurance toolchains for validated, safe, and accountable AI systems, and shaping global standards through international partnerships.
    • AI-enabling data and infrastructure: Investment in sustainable compute and data foundations, privacy-respecting datasets, and skilled technical teams to underpin AI research and innovation.
  • Implementation approach: UKRI will act as an agile, long-term funder, aligning programmes, removing barriers to adoption, and balancing high-risk research with safeguarding long-term capabilities.
  • Future outlook: UKRI will publish a delivery plan to operationalise the Framework, with regular updates to ensure responsiveness to technological, economic, and security developments. The ambition is for AI to become as ubiquitous as statistics or computers, embedded across all sectors for the benefit of UK residents.

Read the Framework here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.