Key trends
- UK and Singapore: Regulatory frameworks are evolving to address risks from agentic AI.
- UK and UAE: Protection of children’s online privacy is a growing priority, with new laws and monitoring programmes.
- UK, France, China and Saudi Arabia: Authorities are increasing penalties and demanding stronger cybersecurity measures to prevent data breaches and protect personal data.
- EU and Middle East: International alignment is advancing, with mutual adequacy recognitions and clearer responsibilities for cross-border data transfers.
Must reads / listens
- DUAA: new ICO guidance on data protection complaints handling duties by Robert Allen, Lawrence Brown, Keisuke Noma, Alexander Brown
- Digital opportunities, disruption and risk: Cybersecurity podcast, by Sarah Bailey, Jingyuan Shi, Robert Allen, William Dunning
- Digital Omnibus Update, by Christopher Götz, Minesh Tanna, Matteo Susta, Jaap Tempelman, Camille Saettel, Jérémie Doornaert, Tina Gausling
Regional updates
UK
ICO publishes key research paper on agentic AI
On 8 January, the ICO published its latest Tech Futures paper on the development of, and risks and opportunities presented by, agentic AI systems. Agentic AI pairs general‑purpose models with tools, memory and interfaces so systems can plan, take actions and adapt with less human direction—moving beyond simple chat or fixed automations into assistants that book travel, triage service requests or orchestrate multi‑step workflows.
The ICO’s paper highlights how the capability of these systems is rising quickly across e-commerce, workplace operations, cybersecurity, public services and personal assistants. For organisations deploying agentic AI, though, the data protection compliance bar is unchanged: agency in software does not avoid the need for human accountability. The report addresses a range of specific areas of risk, including clarifying controller/processor roles across agentic systems; limiting automated decision‑making or providing for notice, contestation and meaningful human intervention where significant decisions are made; setting narrow purposes and avoiding “open‑ended” access to ensure data minimisation; and hardening new attack surfaces (such as goal manipulation and memory poisoning) created by agentic systems. The ICO also speculates about how the role of the DPO could evolve and increasingly be supported and enabled by specific “DPO agents”.
Practically, the ICO recommends building privacy by design into agentic architectures and strikes an optimistic note by pointing to a range of potential privacy-enhancing controls for agentic systems and scenarios where privacy management agents could help mitigate privacy and confidentiality risks in daily life. The ICO invites organisations to engage with it via its innovation support services and Regulatory Sandbox. The report’s themes will also help to inform the ICO’s new statutory code on AI and automated decision-making (ADM) and updates to its guidance on ADM and profiling.
For more information, see and download the report here.
ICO publishes guidance on international transfers
The ICO has published its guide to international transfers. This guidance is designed to help organisations to identify both when an international transfer is taking place in respect of which they must comply with the UK GDPR and what they need to do to ensure compliance.
Various aspects of the guidance helpfully restate earlier guidance, such as the distinction between “transit” (broadly, where data merely passes through a country without undergoing any substantive processing there, and to which the international transfer rules under the UK GDPR do not apply) and “transfer” (which can include situations where data is accessed remotely rather than transferred in the everyday sense). Similarly, the three compliance options – relying on adequacy regulations (i.e. scenarios where the ICO has designated countries or organisations as offering adequate protection), putting in place adequate safeguards or relying on exceptions – are well-established and directly equivalent to the position under the EU GDPR.
The guidance also touches on the territorial scope of the UK GDPR. As well as applying to organisations in the UK, it can apply to others where (for example) the processing is inextricably linked to an organisation’s UK establishment, relates to the offering of goods or services to individuals in the UK (where this offering specifically targets UK individuals), or involves monitoring their online behaviour. This means that ensuring compliance with the international transfer requirements first requires organisations to understand how the UK GDPR applies to them. In this respect, the guidance is similar to the equivalent guidance from the European Data Protection Board.
The guidance also looks beyond the letter of the law in the UK GDPR, considering which party initiated the transfer as the factor that determines where compliance responsibility lies. It considers example scenarios such as:
- where a UK controller contracts with a UK processor for cloud services that uses a network of processors around the world, the UK processor is deemed to have initiated the transfer;
- where a UK controller instructs one processor to transfer personal data to another of the UK controller’s processors, the UK controller is deemed to have initiated the transfer; and
- where a UK processor transfers personal data to a controller that is outside the UK, provided that the UK processor is only complying with its controller’s instructions, it is not deemed to have initiated the transfer.
The ICO notes that in some respects the position under its guidance differs from that under the European Data Protection Board guidance on the equivalent provisions of the EU GDPR. Last but not least, the ICO provides guidance on the requirements for carrying out transfer risk assessments (TRAs), the purpose of which, following the introduction of the Data (Use and Access) Act 2025 (DUAA), is to determine whether the standard of protection following the transfer to the destination is “not materially lower”. The change introduced by the DUAA is not material: the guidance indicates that organisations must still consider risks to people’s rights in the destination country, both from third parties that are not bound by the adequate safeguard (such as governments and public bodies) accessing the personal data and from difficulties enforcing the safeguard.
If it would be helpful to discuss how we can support you in relation to international data transfer compliance (including by making our CtrlTransfer subscription tool available to you), please let us know.
For more information, see the guidance here.
ICO fines LastPass £1.2m for security failings in 2022 breach
On 20 November 2025, the ICO fined password manager provider LastPass UK Ltd £1,228,283 over a 2022 data breach affecting the personal data of approximately 1.6 million UK users. The breach stemmed from two linked incidents: first, a hacker compromised a corporate laptop and accessed LastPass’ development environment, exfiltrating encrypted company credentials which, if decrypted, could allow access to its backup database; second, the attacker exploited a known vulnerability in a third‑party streaming service on a senior employee’s personal device, installed a keylogger, captured the employee’s master password and bypassed multi‑factor authentication using a trusted device cookie. Because the employee’s personal and business LastPass vaults were linked and could both be unlocked with the same master password, once the attacker had captured that password they were able to get into the employee’s business vault and obtain the AWS access key and decryption key needed to download the backup database. Using these, the attacker extracted a backup containing customer names, email addresses, telephone numbers, billing addresses, IP addresses and stored website URLs. Because LastPass uses a “zero knowledge” design in which the master password is stored only on the user’s own device and never held by LastPass, the ICO found no evidence that the attacker was able to decrypt any of the passwords or other sensitive information stored in customers’ vaults.
The ICO found infringements of the integrity and confidentiality principle in Article 5(1)(f) and of Article 32(1)(b) UK GDPR, concluding that LastPass had failed to implement appropriate technical and organisational measures. In particular, senior staff with access to highly sensitive corporate credentials were allowed to access their Employee Business vaults from unmanaged personal devices, and employees were allowed to link their Employee Business and Personal accounts so they could be accessed using a single master password—practices the ICO considered inconsistent with its own and the NCSC’s guidance on device security, working from home and separation of home and work passwords.
The case underlines the need to restrict privileged access to firm-managed, security‑hardened devices, avoid single credentials spanning personal and business environments, and benchmark security controls against ICO and NCSC guidance, especially where services are marketed as security‑enhancing.
For more information, see the ICO’s penalty notice here.
ICO launches review of children’s online privacy in mobile games
The ICO has announced a new monitoring programme examining how ten popular mobile games protect children’s personal data. As part of this initiative, the ICO will review each game’s practices against the standards of its Children’s Code, with particular focus on:
1. Default privacy settings – assessing whether games provide high-privacy settings for child users, without relying on children (or parents) to make changes;
2. Geolocation controls – evaluating how location data is collected, used and restricted, especially where features may risk revealing a child’s real-time or historic whereabouts; and
3. Targeted advertising practices – scrutinising the in-game ad tech ecosystems, including any profiling, behavioural targeting or data-driven advertising involving under-18s.
Broadening of ICO focus
This programme builds on earlier Children’s Code action, which prompted major platforms to strengthen default privacy settings, introduce just-in-time notices, and limit the visibility of child accounts. Having previously concentrated on social media and video-sharing platforms, the ICO’s move into mobile gaming signals its intention to enforce the Children’s Code across a broader range of online services likely to be accessed by children.
This aligns with wider global regulatory trends, including action by the US Federal Trade Commission under the Children’s Online Privacy Protection Rule (COPPA), and reflects the growing international focus on children’s digital privacy and safety.
How organisations can prepare
Gaming companies – and any organisation offering child-accessible digital services – should consider:
- Running a Children’s Code gap analysis;
- Revisiting lawful bases and refreshing Data Protection Impact Assessments (DPIAs), particularly where profiling, personalisation or behavioural advertising is involved;
- Updating privacy notices to ensure they are age-appropriate;
- Validating age assurance mechanisms; and
- Stress-testing geolocation features and cross-platform data-sharing controls.
For more information, see the announcement here.
EU
Digital Omnibus: expected changes to GDPR and ePrivacy Directive
On 19 November 2025, the European Commission unveiled its Digital Omnibus package, which is set to introduce significant amendments to the GDPR and ePrivacy framework. Notably, the definition of personal data is refined: information relating to an identifiable person will not be considered personal data for an entity that cannot reasonably identify that person, even if identification is possible by another party. This codifies recent CJEU case law but, in practice, may not achieve its goal of enhancing legal certainty.
The package also proposes adjustments to the rules on special categories of data, aiming to support AI development under strict safeguards. New legal bases are introduced for limited processing in the context of AI system and model development, and for biometric identity verification where the data subject retains control.
Further, the proposal refines data subject rights and transparency obligations. Requests remain free of charge, but controllers may refuse or charge for manifestly unfounded or abusive requests. The principle of purpose limitation is clarified, confirming that research and statistical re-use is compatible by default.
Finally, the proposal integrates the ePrivacy cookie rules into the GDPR through new Articles 88a and 88b. Consent requirements are maintained, but refusal must be as easy as acceptance and supported through automated, machine-readable signals. Browser providers will be required to technically support these mechanisms within a phased transition period.
For more information, see our Digital Omnibus Update here and the Commission’s Proposal here.
Cross-border enforcement: new timelines for GDPR complaints
New EU procedural rules have been adopted to streamline the handling of GDPR complaints and investigations involving cross-border processing. Published in the Official Journal on 19 December 2025, these rules will apply to complaints lodged after 2 April 2027. They are intended to address inefficiencies in the one-stop-shop mechanism, where divergent national procedures have led to lengthy investigations and inconsistent outcomes.
The new framework introduces binding deadlines for supervisory authorities. Lead authorities must generally submit a draft decision within 15 months of confirming competence, with a single extension of up to 12 months permitted in exceptional cases. For less complex matters, a simplified cooperation procedure applies, with a shorter 12-month deadline. An early resolution mechanism is also introduced, allowing authorities to close complaints swiftly where the alleged infringement has ceased and the complainant does not object.
For more information, see the Regulation here.
Data transfers: UK and Brazil recognised as adequate, U.S. guidance updated
On 19 December 2025, the European Commission renewed the United Kingdom’s adequacy status under the GDPR, confirming that the UK continues to provide a level of personal data protection essentially equivalent to that of the EU. The renewed adequacy decisions are valid for six years, until 27 December 2031, with a mid-term review planned after four years. Similarly, the Commission adopted its adequacy decision for Brazil on 26 January 2026. These developments ensure continued data flows from the EU to both the UK and Brazil without additional transfer safeguards.
On 15 January 2026, the European Data Protection Board adopted an updated version of its FAQ on the EU-U.S. Data Privacy Framework (DPF). The updated document offers practical guidance for European organisations transferring personal data to U.S. entities certified under the DPF, clarifying operational requirements, eligibility, and compliance steps. It also emphasises that, while the DPF facilitates lawful transfers, all other GDPR obligations remain fully applicable.
For more information, see the UK renewal decision here, the Brazil adequacy decision here, and the updated DPF FAQ here.
‘Russmedia’: CJEU clarifies platform liability for user ads under the GDPR
On 2 December, the CJEU issued a judgment clarifying when operators of online marketplaces may be considered (joint) controllers under the GDPR for personal data included in user-posted advertisements. The Court held that a platform can be deemed to determine the purposes and means of processing where it designs the marketplace, sets parameters for dissemination, and retains broad rights over uploaded content. As a result, GDPR obligations may apply even if the platform did not create the advertisement and removes it promptly after being notified.
The judgment also confirms that intermediary liability privileges under EU law (as reflected in the Digital Services Act) do not override GDPR responsibilities when the operator qualifies as a controller.
In practice, this means platforms face heightened duties where advertisements contain special category data (such as information about sex life). Controllers must implement appropriate technical and organisational measures, which may include identifying ads with sensitive data, verifying advertiser identities, refusing to publish unlawful ads, and adopting security measures to limit copying and further dissemination.
For more information, see the judgment here.
France
Data breach - CNIL fines Free and Free Mobile €42 million
On 13 January 2026, the CNIL (the French data protection authority) fined French telecom operators Free and Free Mobile a total of €42 million following a large-scale data breach that compromised the personal data (including IBANs) of 24 million subscriber contracts.
The investigation, triggered by over 2,500 complaints, revealed multiple violations of the General Data Protection Regulation (GDPR) by both companies in their capacity as data controllers. The CNIL found that Free and Free Mobile had failed to implement basic security measures, such as robust VPN authentication and effective monitoring for unusual activity. In addition, the notification emails sent to affected individuals lacked key information required under Article 34 GDPR. In the specific case of Free Mobile, the CNIL also found a breach of the obligation to limit the retention period of personal data.
The amounts of the fines were influenced by the financial capacities of the data controllers, their lack of knowledge of essential security principles, the number of people affected, and the sensitivity of the compromised data.
Accordingly, the CNIL imposed a fine of €27 million on Free Mobile and a fine of €15 million on Free, which has announced that it will appeal the decision before the French Council of State.
For more information, see the CNIL's decisions here and here (in French only).
Italy
Italian data protection authority: guidelines on data protection and security in whistleblowing systems
On 27 November 2025, the Italian Data Protection Authority (the “Garante”) issued an Opinion on ANAC’s draft guidelines for whistleblowing channels, focusing on the data protection and cybersecurity implications of systems established under Legislative Decree No. 24/2023, which implements Directive (EU) 2019/1937. The Opinion aims to ensure that whistleblowing mechanisms comply with data protection principles and are supported by robust technical and organisational security measures, while remaining effective for reporting misconduct.
Key recommendations include:
- guaranteeing the confidentiality and, where applicable, anonymity of whistleblowers, preventing indirect identification through technical traces, and strictly limiting access to whistleblower information;
- observing data minimisation and purpose limitation, and not collecting excessive or irrelevant data;
- equipping whistleblowing platforms with strong security measures such as encryption, authentication, access controls and role segregation; and
- providing clear privacy notices detailing data processing activities, legal bases, recipients, retention periods and data subject rights.
The Opinion calls on public and private entities to review and update their whistleblowing procedures, processing records, IT security and staff training to ensure compliance.
For more information, see the guidelines here (in Italian only).
Belgium
New guidance and model agreement on secondary use of health data
In October 2025, pharma.be, the association of the medicines industry in Belgium, published non‑binding guidelines and a template agreement to assist (bio)pharmaceutical and medtech companies in re‑using routinely collected patient data for scientific research in line with the GDPR and the Belgian Data Protection Act. The guidelines provide a practical, step‑by‑step methodology to assess secondary‑use projects involving real‑world data from healthcare providers, addressing key issues such as anonymisation, allocation of roles (controller, joint controller, processor), purpose limitation and research compatibility, legal bases (including scientific research under Article 9(2)(j) GDPR and the Belgian Data Protection Act), transparency towards patients, and the need for appropriate contractual, technical and organisational safeguards.
To operationalise this framework, a model agreement has been developed as a standalone contract between healthcare institutions and companies for projects where an institution processes patient data to deliver an anonymised, aggregated report. The template covers data protection obligations, security, cooperation, remuneration, confidentiality and intellectual property, and is designed to accommodate both controller–processor and joint‑controller scenarios, depending on the factual set‑up of the project.
For more information, see the Guidelines here and the model agreement here.
Belgian DPA sets priorities for the coming years
The Belgian Data Protection Authority (APD/GBA) has adopted a new Strategic Plan for 2026–2028 built around stricter prioritisation and cooperation. The Authority plans to focus its enforcement resources on high‑impact cases, in particular large‑scale high‑risk processing and the processing of children’s data, rely more on mediation for straightforward complaints, and move away from answering individual ad hoc information requests in favour of general guidance and thematic actions.
For more information, see the Strategic Plan here (in French and Dutch only).
Middle East
New Federal Decree-Law No. 26/2025 on Child Digital Safety in the United Arab Emirates
Effective 1 January 2026, Federal Decree‑Law No. 26/2025 introduces a comprehensive overhaul of how the United Arab Emirates regulates children’s online safety. The legislation is designed to protect children from harmful digital content and online practices that may affect their physical, psychological, or moral well‑being.
At a high level, the law requires:
- Restrictions on children’s data: Digital platforms are prohibited from collecting, processing, publishing, or sharing the personal data of children under 13, except in narrowly defined circumstances.
- Mandatory privacy‑by‑default and age verification: Platforms must implement default child‑safe privacy settings and deploy effective age‑verification mechanisms.
- Prohibition on gambling‑related digital activities: Platforms must prevent children from participating in, creating accounts for, or accessing online commercial games involving gambling or any digital activity that includes betting.
- Content‑safety and filtering obligations: Platforms must activate content‑filtering systems and take necessary measures to prevent children’s exposure to harmful content, ensuring safe and supervised use of internet services and electronic devices.
- Defined duties for caregivers: Caregivers must monitor children’s digital activities, use parental‑control tools, and refrain from creating accounts for children on platforms that are not age‑appropriate or that fail to meet enhanced child‑protection standards.
The new Child Digital Safety law applies to internet service providers and digital platforms operating in the United Arab Emirates or targeting users in the United Arab Emirates, giving it extraterritorial reach. Covered platforms include websites, search engines, smart applications, messaging services, forums, online gaming platforms, social media platforms, live‑streaming services, podcast platforms, video‑on‑demand services, and e‑commerce platforms. The law also applies to individuals responsible for the care of children and expressly defines their obligations regarding digital safety.
The new Child Digital Safety law further establishes the Child Digital Safety Council, chaired by the Minister of Family, to coordinate federal and local efforts and strengthen collaboration with the private sector to ensure a unified national approach to children’s digital safety.
For more information, see the official statement here and the source here.
Significant increase in enforcement from Saudi Data & AI Authority (SDAIA)
Saudi Arabia’s data protection authority, SDAIA, has significantly ramped up enforcement of the Personal Data Protection Law 2021 (PDPL), issuing 48 decisions against organisations for noncompliance since the law became fully enforceable in September 2023. This marks a new phase of regulatory maturity, with the authority’s specialised committees now actively investigating infringements and imposing sanctions such as fines and orders to remedy unlawful practices. Common violations include unlawful data processing, inadequate transparency, insufficient security measures, and unauthorised marketing communications. The heightened enforcement signals that compliance is now a practical necessity, not merely a formality, and reflects the Kingdom’s commitment to embedding data privacy as a core element of its digital transformation agenda.
For entities operating in Saudi Arabia or processing the data of Saudi residents, this development means a comprehensive reassessment of privacy governance is essential. Organisations must ensure robust consent management, transparent privacy notices, and strong technical safeguards, while also conducting regular risk assessments and compliance audits.
For more information, see IAPP’s commentary here.
Oman’s Personal Data Protection Law Executive Regulations enter into force
On 5 February 2026, the extended one‑year grace period for the Executive Regulations (Ministerial Decision 34/2024) issued under the Oman Personal Data Protection Law (Royal Decree No. 6/2022) (together, the “Oman Data Protection Laws”) came to an end. The Oman Data Protection Laws are now fully in force, and Oman's Ministry of Transport, Communications and Information Technology (“MTCIT”) is now exercising its full supervisory and enforcement mandate as Oman’s data protection regulator.
The Oman Data Protection Laws establish a framework governing the collection, use, disclosure, storage and transfer of personal data in or from Oman, including obligations on controllers and processors, specific restrictions on the processing of sensitive personal data, rules on cross‑border transfers and enhanced data subject rights. With the transition period now concluded, entities operating in Oman should, if they have not already done so, promptly familiarise themselves with these requirements and make any necessary updates to their policies, procedures, contracts and technical measures to ensure compliance.
Regional adequacy: mutual recognition between the ADGM, DIFC and QFC
In January 2026, the Abu Dhabi Global Market (“ADGM”), Dubai International Financial Centre (“DIFC”), and Qatar Financial Centre (“QFC”) each adopted mutual, reciprocal data protection adequacy decisions, adding one another to their respective adequacy lists. Each regulator has therefore recognised the others’ data protection regimes as providing an essentially equivalent level of protection for personal data.
Practically, this enables entities established in any of the DIFC, ADGM or QFC to transfer personal data to entities in the other two jurisdictions without implementing additional transfer safeguards such as binding corporate rules or standard contractual clauses. This should streamline cross-border data flows within the three financial centres and reduce the contractual and operational burden of regional compliance. The triple recognition does not, however, displace underlying obligations: entities must still comply with local data protection rules and regulations.
This development represents a significant step towards regional alignment of data protection regimes in the Gulf and underlines the increasing sophistication of the DIFC, ADGM and QFC data protection frameworks, potentially paving the way for broader regional and international adequacy arrangements.
For more information, see the source here.
China
Amendment to cybersecurity law takes effect
On 28 October 2025, the Amendment to China’s Cybersecurity Law (the Amendment) was adopted by the National People’s Congress; it took effect on 1 January 2026.
The Cybersecurity Law was originally enacted in June 2017, providing extensive security obligations in relation to network operation, network information, risk monitoring and emergency disposal. The changes brought by the Amendment are primarily about penalties, including increasing the fine limit from CNY 1 million (approx. USD 140,000) to CNY 10 million (approx. USD 1.4 million), clarifying the penalties for specific violations and providing for the conditions in which penalties may be mitigated, reduced or waived.
The Amendment also introduces explicit support for AI research and development, aiming to promote foundational research, algorithmic innovations, and the development of AI-related infrastructure.
For more information, see the official text of the Amendment here (in Chinese only).
China enacts certification rules for CBDT
Obtaining the Personal Information Protection Certification (PIP Certification) from a designated professional agency (a Certification Agency) is one compliance route for transferring personal information out of China. As with the Chinese SCCs, the PIP Certification applies to data exporters that: (i) are not identified as operators of “critical information infrastructures”; and (ii) have transferred non-sensitive personal information of 100,000 to one million individuals, or sensitive personal information of fewer than 10,000 individuals, out of China since 1 January of the current year, unless an exemption applies.
The Measures on Certification for Personal Information Outbound Transfer (Measures), which took effect on 1 January 2026, provide for the content of impact assessment (to be conducted by data exporters), procedures and validity period of certification, and the filing of Certification Agencies, etc.
As of 30 December 2025, three Certification Agencies had completed their filings with the Cyberspace Administration of China (CAC) and may provide PIP Certification services in accordance with the Measures: the China Cybersecurity Review, Certification and Market Regulation Big Data Centre; the CAC Data and Technical Support Centre; and Beijing CESI Certification Co., Ltd.
For more information, see the official text of the Measures here (in Chinese only).
Singapore
Singapore launches governance framework for autonomous AI agents
Singapore has introduced a new governance framework addressing the unique risks of agentic AI. Unlike traditional and generative AI, AI agents are capable of independent reasoning, decision-making and executing multi-step tasks with limited human oversight.
The use of AI agents with access to sensitive information, such as customer databases and financial records, might introduce new risks, including unauthorised and erroneous actions. The framework addresses these risks through four domains:
- Assessing and bounding the risks upfront - e.g., by placing limits on access to data
- Human accountability - e.g., defining significant checkpoints for human approval
- Technical controls throughout the agent lifecycle - e.g., baseline testing, controlling access to whitelisted services
- End-user responsibility - e.g., through education
Unveiled at the World Economic Forum on 22 January, the Model AI Governance Framework for Agentic AI provides guidance for responsible deployment while emphasising that humans remain ultimately accountable. Developed by the Infocomm Media Development Authority (IMDA), the framework builds on Singapore's 2020 AI governance guidelines.
The framework supports Singapore's digital economy strategy while maintaining trust as organisations increasingly adopt agentic AI to automate tasks and drive transformation.
For more information, see the Model AI Governance Framework for Agentic AI here.
New national certification for data protection
On 7 July 2025, Singapore elevated the Data Protection Trustmark (DPTM) to a new Singapore Standard (SS 714:2025), putting it on a par with global data protection benchmarks and international best practice. Companies that demonstrate accountable data protection practices can now apply to be certified under the new DPTM Singapore Standard, which sets the benchmark for data protection excellence.
The DPTM Singapore Standard provides organisations with clearer data protection requirements around critical areas like third-party management and overseas transfers, helping certified organisations to demonstrate their commitment to effective data protection. The Singapore Accreditation Council will provide oversight to the certification bodies to ensure assessment for the DPTM Standard is conducted professionally and meets globally recognised standards. The streamlined process means that organisations will now have a single point of contact throughout their journey from application to certification, working directly with their assessment bodies. Consumers can look at the Trustmark to be assured that these DPTM-certified organisations are following world-class practices in protecting their personal data.
For more information, see the Singapore Standard (SS 714:2025) Implementation Guide here.
Hong Kong
Guidance on the use of CCTV Surveillance
The Office of the Privacy Commissioner for Personal Data has issued updated guidance on the responsible use of CCTV surveillance under the Personal Data (Privacy) Ordinance (PDPO). The guidance clarifies that while the PDPO does not prohibit CCTV use, any system that captures images or information relating to identifiable individuals constitutes the collection of personal data and must comply with the PDPO.
Data users must take practicable measures when using CCTV, including conducting a pre-installation assessment to ensure necessity and proportionality, limiting the scope and duration of monitoring to what is strictly required, and avoiding installation in sensitive areas where individuals have a reasonable expectation of privacy. Data collection should be minimised to what is adequate and relevant for the purpose, with audio recording avoided unless strictly necessary. Where practicable, users should consult with affected individuals to address privacy concerns, and regularly review the necessity and effectiveness of CCTV use, discontinuing it if less privacy-intrusive alternatives become available.
For more information, see the Guidance on the use of CCTV surveillance here.