Download the full visual bulletin here; the text is set out below.
Canadian federal government prepares to enact a draft AI Act
16 June 2022
Who is affected?
Firms using or developing AI technologies in Canada.
What is it?
The federal government in Canada introduced a bill which, if passed, would enact the Artificial Intelligence and Data Act (AIDA). AIDA is principles-based and would require organisations to demonstrate that they deploy AI responsibly.
AIDA has two key purposes:
to ensure that "high-impact" AI systems (still to be defined) are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and biases; and
to prohibit conduct which may result in serious harm to individuals or their interests.
Amongst other developments, AIDA would establish a new AI and Data Commissioner to assist with enforcement. The Commissioner would have the power to audit organisations (producing a report at the organisation's expense) and to make any necessary orders. Organisations guilty of an offence under AIDA may be liable for fines of up to CAD $25m or 5% of global revenue. However, an organisation that can establish it exercised due diligence in an attempt to prevent the offence may have a successful defence.
AIDA also imposes a mandatory reporting obligation where the use of an AI system results in or is likely to result in "material harm" (also still to be defined). The second reading of the bill is currently in progress and was most recently debated in the House of Commons on 4 November. These debates may lead to further proposed amendments in the future.
What should I do?
Read the draft bill here and consider how your business activities may be affected by the obligations under AIDA.
Meta reaches a settlement with the US Department of Justice in relation to its algorithmic advertising systems
21 June 2022
Who is affected?
US firms using or considering the use of AI.
What is it?
The US Department of Justice (DOJ) has reached a settlement with Meta Platforms (Meta) to resolve allegations that Meta has engaged in algorithmic discriminatory advertising in violation of the Fair Housing Act (FHA).
The complaint alleges that Meta uses algorithms to determine which Facebook users receive housing ads and that those algorithms partly rely on characteristics protected under the FHA. Specifically, the complaint alleges that Meta's system discriminates against Facebook users based on race, colour, religion, sex, disability, familial status and national origin.
Under the settlement, Meta is required to stop using its algorithmic advertising tool and to develop a new system that mitigates bias by the end of the year. The settlement marks the first time that Meta has agreed to change its ad targeting and delivery system to guard against algorithmic bias. Failure to sufficiently change the delivery system will result in the DOJ proceeding with the litigation.
What should I do?
Consider whether your AI systems could involve the processing of information regarding protected characteristics under the FHA in a discriminatory manner.
Read the DOJ press release here.
The UK DCMS publishes the outcome of its consultation on data
23 June 2022
Who is affected?
All UK firms implementing or looking to implement AI systems.
What is it?
The UK Department for Digital, Culture, Media and Sport (DCMS) set out the UK Government's plans to reform the UK's approach to data in its consultation outcome. Whilst the Government has considered the role of fairness in AI governance as part of a white paper (available here), it does not currently plan to legislate on this point.
The consultation outcome addressed AI-related areas including:
Building trustworthy AI systems - The Government had asked whether organisations should be able to use personal data more freely for developing AI systems. The majority of respondents felt that the current data protection regime already provided enough scope to experiment and develop AI.
Bias mitigation - Most respondents agreed there should be legal clarity on how sensitive data can lawfully be processed for monitoring and correcting bias in AI systems. Concerns were also highlighted regarding the potential for loopholes if sufficient safeguards were not implemented. The Government plans to introduce a new condition to the Data Protection Act 2018 (DPA) to enable the processing of sensitive personal data for monitoring and correcting bias in AI systems, subject to appropriate safeguards and limitations (an illustrative bias-monitoring sketch follows this list).
Automated decision-making and profiling - Respondents were largely opposed to the removal of Article 22 UK GDPR (which governs automated decision-making and profiling), but noted that its efficacy is uncertain. The Government confirmed it will not pursue the proposed removal of Article 22, but that it would consider how it can be amended to clarify its application.
Public trust in AI systems - The Government will further consider the approach to explainability of AI-powered decision-making through a white paper on AI governance.
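For illustration only (this example is ours and does not form part of the consultation outcome), the minimal Python sketch below shows one common way an organisation might use protected-group labels to monitor an AI system for bias: comparing selection rates across groups, sometimes called a demographic parity check. All names and data are hypothetical.

```python
# Illustrative sketch only: a simple demographic parity check on model outcomes.
# All data and group labels below are hypothetical.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Return the favourable-outcome rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += 1 if outcome == 1 else 0
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = favourable) and protected-group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group_labels = ["A", "A", "A", "B", "B", "B", "B", "A"]

print(selection_rates(decisions, group_labels))
print(f"Demographic parity gap: {demographic_parity_gap(decisions, group_labels):.2f}")
```

A large gap does not by itself establish unlawful discrimination, but it is the kind of signal that the proposed DPA condition would allow firms to compute on sensitive data, subject to safeguards.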
What should I do?
This consultation outcome will likely inform further legislation. We recommend reading the consultation outcome and considering whether your firm's AI systems are likely to be affected by the proposals.
Read the full consultation outcome here.
UK IPO consultation outcome on AI and IP
28 June 2022
Who is affected?
Firms dealing with AI-related intellectual property in the UK.
What is it?
The UK Intellectual Property Office (IPO) has published the outcome of its consultation on AI and Intellectual Property.
The key takeaways from the outcome are:
Continued copyright protection of Computer Generated Works (CGW) - The use of AI to generate creative content is still in its early stages and the effect of any changes to the copyright regime would be uncertain. However, future changes to CGW have not been ruled out.
Introduction of a new copyright exception for text and data mining (TDM), allowing it to be used for any purpose - Rights holders will not be able to charge for UK licences for TDM and cannot opt-out of this exception.
No patent protection for AI-generated works (i.e., no change to existing UK patent legislation) - The Government recognises the need to keep this under review and to advance discussions internationally so that any future changes can be harmonised at the international level. The Government will also seek to address perceptions that the current legislation prevents the patenting of AI-assisted inventions.
What should I do?
Review current practices to identify how the Government's proposals may affect your business activities.
Read the full outcome here and more about our AI IP offering here.
Business at OECD publishes findings on implementing the OECD’s AI Principles
5 July 2022
Who is affected?
All organisations using and developing AI in-house.
What is it?
Business at the Organisation for Economic Co-operation and Development (OECD) has published its findings on implementing the OECD's AI Principles, following research on seven different organisations.
Key insights from the publication include the following.
Successful AI adoption requires AI governance to be prioritised across all levels of the organisation, with clear channels of communication and escalation regarding potential AI risks.
Teams should be upskilled with appropriate AI training, both technical and non-technical.
Transparency does not automatically equate to explainability.
Organisations should ensure that structures exist to allow wider organisational buy-in to develop and deploy AI.
What should I do?
Consider the publication's key findings and suggestions for best practice with regards to implementing the OECD AI Principles. The publication sets out helpful examples of how different organisations implemented the AI Principles internally.
Read the full publication here.
The UK ICO’s newest strategic plan places emphasis on AI
14 July 2022
Who is affected?
Developers of AI, organisations using AI algorithms in their recruitment process and the health and life sciences sector.
What is it?
The UK Information Commissioner's Office (ICO) has published a new strategic plan, the ICO25, and AI is firmly on its radar.
ICO25 emphasises safeguarding vulnerable groups and empowering the public. In this regard, the ICO plans to investigate the use of algorithms in recruitment which could potentially discriminate against those with protected characteristics.
The ICO will set out its expectations for AI developers on ensuring that algorithms treat people fairly. In addition, whilst the increasing capability of biometric technologies is promising, the ICO notes that it is also inherently risky, especially around emotion recognition technologies which could discriminate against vulnerable groups.
ICO25 also sets out to publish a "guidance pipeline" which will include guidance on emerging technologies such as AI and biometrics.
The ICO also aims to identify key issues that will influence the way personal data is used. It will focus its efforts on areas such as the regulation of biometrics and health data.
What should I do?
Look out for the ICO's upcoming guidance publications, where the focus will be on AI and biometrics. In the meantime, firms should remain alert to the issues surrounding the use of algorithms in recruitment and facial recognition, especially if such technology is already utilised within the organisation.
Read the ICO25 plan here.
The UK Government’s proposal for a new AI rulebook
18 July 2022
Who is affected?
UK firms interested in or involved in AI.
What is it?
The key takeaway is that, in keeping with its pro-innovation approach post-Brexit, the UK Government is proposing a light-touch approach that will be much less onerous than the EU AI Act and, at this stage, unlike the EU, is not proposing specific AI legislation.
The Government also departs from the EU's strategy to establish a centralised body to oversee the use of AI technology. Rather than a single regulator, oversight will be left to a multitude of regulators (e.g., Ofcom, the CMA, the ICO, the FCA and the MHRA) which will tailor rules for their specific sectors, whilst taking a risk-based, proportionate approach to regulation. This will complement existing legal enforcement powers.
The approach is based on the following six core principles that regulators must work to apply:
Ensure that AI is used safely
Ensure that AI is technically secure and functions as designed
Make sure that AI is appropriately transparent and explainable
Consider fairness
Identify a legal person to be responsible for AI
Clarify routes to redress or contestability
What should I do?
Whilst this policy paper does not propose any regulations at this stage, it indicates the Government's emphasis on responsible AI implementation. It will therefore be important to ensure that these six core principles are considered when designing and implementing AI systems within the UK.
Read the full text of the policy paper here and our update with insights for organisations here.
InfoComm Media Development Authority (IMDA) launches Singapore’s first Privacy Enhancing Technology (PET) Sandbox
20 July 2022
Who is affected?
Companies in Singapore interested in or involved in AI.
What is it?
IMDA launched the PET Sandbox on 20 July 2022 for companies that wish to experiment with PETs, allowing them to work with trusted PET solution providers to develop use cases and pilot PET solutions.
PET allows businesses to extract value from data without exposing the data itself, thereby protecting personal data and commercially sensitive information. PET increases the options for B2B data collaboration, enables cross-border data flows, and increases the availability of data for developing AI systems.
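By way of a minimal illustration (this sketch is ours and is not part of the IMDA programme), one widely used family of PETs is differential privacy, in which calibrated noise is added to aggregate statistics so that insights can be shared without exposing individual records. All names and figures below are hypothetical.

```python
# Illustrative sketch only: a differentially private count using the Laplace mechanism.
# The released figure reveals an aggregate trend, not any individual record.
import random

def dp_count(records, predicate, epsilon=1.0):
    """Return a noisy count of records matching `predicate`.

    The true count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two exponential draws with rate epsilon is Laplace noise.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical customer records shared between two collaborating businesses.
customers = [{"spend": s} for s in (120, 80, 300, 45, 210, 95)]
print(dp_count(customers, lambda r: r["spend"] > 100, epsilon=0.5))
```

The sandbox itself covers a broader range of PETs (for example, secure multi-party computation and federated analytics); the above is simply one way of extracting value from data without exposing the underlying records.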
The PET Sandbox will:
match use case owners with a panel of PET solution providers;
provide grant support to user companies to scope and implement the pilot projects; and
provide regulatory support to give assurance and minimise concerns related to regulatory compliance when deploying PETs.
What should I do?
Consider whether the PET Sandbox could be useful to your company's AI workflow.
Read more here.
European Parliament study on auditing the quality of datasets used in algorithmic decision-making systems
25 July 2022
Who is affected?
Companies interested in or involved in the use of AI within the EU.
What is it?
The European Parliament's study puts forward several policy options in response to the challenges of mitigating biases in AI.
The study provides an overview of biases in the context of AI/ML, including pre-existing bias, technical bias, emerging bias, cognitive bias, statistical bias, cultural bias, implicit/explicit bias, desirable/undesirable bias, and expected/unexpected bias.
Some key takeaways from the analysis are as follows.
There are shortcomings in AI/ML that call for additional regulatory tools. Legislation should define what is understood as fair or unfair in the context of bias.
EU directives on discrimination include "loopholes that hinder the prevention of bias". Specific data protection regulation could play a key role in closing these loopholes, for example by insisting on fairness and providing for new uses of data protection impact assessments.
Strengthening data subject transparency could be extremely helpful in finding the source of biased results.
Misalignment between future regulations and the GDPR needs to be corrected.
What should I do?
Ensure that your firm's use of AI systems sufficiently mitigates any potential bias whilst aligning with the requirements of GDPR. Read more here.
NIST releases new draft Playbook for AI risk management best practices
18 August 2022
Who is affected?
All stakeholders involved in AI development or governance.
What is it?
As a companion to the AI Risk Management Framework (AI RMF) developed by the US National Institute of Standards and Technology (NIST), NIST have published a draft of the AI RMF Playbook (the Playbook), which sets out actionable suggestions to help:
produce or evaluate trustworthy AI systems;
cultivate a responsible AI environment where risk and impact are taken into account; and
increase organisational capacity for comprehensive socio-technical approaches to the design, development, deployment, and evaluation of AI technology.
The Playbook is aimed at implementing AI technologies whilst mitigating algorithmic biases and other risks to AI systems. It includes suggested actions, references, and documentation guidance for "Map" and "Govern", two of the four overarching aspects ("functions") of the AI RMF.
Governance processes are the backbone of risk management and focus on the potential impacts of AI technologies. Govern function outcomes foster a culture of risk management within organisations, which is then reflected in how organisations design, develop, deploy, or acquire AI systems.
The Map function establishes the context and frames risks related to an AI system. Information gathered in this function informs decisions about model management, including an initial decision about appropriateness or the need for an AI solution.
The Playbook is meant to act as a companion guide to the AI RMF (for further details, see below), the final version of which will be submitted to Congress in early 2023. The current draft is being released to allow interested parties the opportunity to comment and contribute to the first complete version (to be released in January 2023 with the first version of the AI RMF). Draft material for the other two functions, Measure and Manage, will be released at a later date.
What should I do?
Read further details of the Playbook here. Stakeholders should review whether their current processes fit within the suggestions of the Govern and Map functions of the Playbook and watch out for the upcoming draft covering the Measure and Manage functions. Interested parties should provide comments to NIST to help shape the final version.
IEC & ISO release technical report on ethical and societal AI concerns
19 August 2022
Who is affected?
All companies interested in or involved in the use of AI.
What is it?
The International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO) have jointly published a Technical Report focusing on the ethical and societal adoption of AI which sets out actionable guidance focusing on:
governance and rule of law;
trustworthiness, safety and privacy; and
accountability.
The report is designed to apply across a number of work streams and provides practical examples of ethical and societal implementation, while educating on the underlying ethical theories and concepts. For example, the report recommends creating a practical and accessible AI framework, which should be used to routinely monitor internal systems and decision-making.
Through a range of use cases, the authors highlight issues such as over-reliance, perpetuated biases and lack of transparency, and provide a number of considerations, processes, principles and methods to help build and implement more ethically sustainable AI that addresses these concerns.
What should I do?
Read the report here and consider how the suggestions can be implemented in your company.
NIST issues a second draft of its AI Risk Management Framework
18 August 2022
Who is affected?
Individuals and organisations involved in the deployment and operation of AI technologies, and firms using or considering the use of AI.
What is it?
NIST’s AI RMF was released on 17 March 2022 as a voluntary initiative to help individuals and organisations address risks in the design, development, use, and evaluation of AI products, services, and systems. The framework intends to evolve over time and reflect new knowledge, awareness, and practice as AI technology continues its rapid development.
Following on from its release, a second draft of the AI RMF was published on 18 August 2022. The second draft has made a notable change to the subject matter in relation to trustworthy AI by moving away from a three-class taxonomy of “technical characteristics”, “socio-technical characteristics”, and “guiding principles”.
The AI RMF now adopts the following seven elements that characterise trustworthy AI:
Valid and Reliable - Accuracy and robustness contribute to the validity and reliability of AI by allowing AI systems to maintain a certain level of performance under a variety of circumstances.
Safe - AI systems should be designed safely to avoid causing physical or psychological harm as well as endangering human life, health, property, or the environment.
Fair - Equality and equity must be considered through addressing AI bias and discrimination.
Secure and Resilient - AI systems must be able to withstand adversarial attacks and unexpected changes in their environment, maintaining a degree of functionality even when parts of the system are rendered inoperative.
Transparent and Accountable - Transparency helps demonstrate that an AI system is fair and mitigates bias, while accountability relates to the recourse available to an injured party in the event that a risky outcome occurs.
Explainable and Interpretable - Explainability and interpretability are about representing the mechanisms that underpin an AI system's operation and facilitating an understanding about its output.
Privacy-Enhanced - Values such as anonymity, confidentiality, and control should generally guide choices for AI system design, development, and deployment.
The second draft has also altered the structure of the four high-level elements of AI risk management by elevating "Governing" ahead of "Mapping", "Measuring", and "Managing". This element is stated as being a cross-cutting function that is both infused throughout AI risk management and informs the other functions.
An AI RMF Playbook was released alongside the second draft with the intention of helping organisations navigate the framework and achieve the outcomes through specified actions that can be applied within their own contexts, see above for further detail.
What should I do?
As regulatory initiatives in the AI space continue to multiply, it is important that organisations monitor the regulatory landscape and evaluate the AI systems they currently use.
Read the second draft of the AI RMF here.
UK CDEI policy paper on responsible innovation in self-driving vehicles
19 August 2022
Who is affected?
UK companies active in the automated vehicle industry.
What is it?
Self-driving or automated vehicles (AVs) are fast approaching, and as a result, the Centre for Data Ethics and Innovation (CDEI) has published a Policy Paper assessing the factors relevant to ensuring public trust in such vehicles. As expected, the Paper looks into a number of factors, including:
fairness and AI explainability;
producing effective regulatory governance frameworks; and
ensuring data privacy and data sharing.
The Policy Paper highlights the key economic and efficiency opportunities, while noting that the current framework regulating conventional vehicles and their drivers needs to be updated to ensure that accountability is properly apportioned, and safety and ethical concerns are addressed. The Paper is intended to echo and build on the approach to regulation established in the UK's National AI Strategy, and Roadmap to an effective AI assurance ecosystem.
Under such an approach, the CDEI hopes to ensure a fair, trustworthy and proportionate approach to the regulatory framework governing AVs, utilising expert and key stakeholder contributions.
What should I do?
Keep a look out for further discussion on secondary legislation on self-driving vehicles in 2023.
Read the Policy Paper here.
The UK IPO publishes a guidance note on examining patent applications relating to AI inventions
22 September 2022
Who is affected?
Firms dealing with AI inventions and AI-related intellectual property issues.
What is it?
The UK IPO's new guidance note confirms that AI inventions can be patented in all fields of technology where the task performed by the AI makes a technical contribution to the known art. The guidance note explains the concepts of “Applied AI” and “Core AI”, clarifying that an AI invention is more likely to reveal a technical contribution if its instructions:
embody or perform a technical process which exists outside a computer; or
contribute to the solution of a technical problem lying within (Core AI) or outside (Applied AI) a computer; or
define a new way of operating a computer in a technical sense.
The guidance note also states that the AI invention is unlikely to make a technical contribution if its task/process: (i) relates solely to excluded items under the Patents Act 1977; (ii) relates solely to processing or manipulating information or data; or (iii) has the sole effect of being a better or well-written program for a conventional computer.
To accompany this guidance, the UK IPO has set out scenarios which demonstrate how case law and guidance is to be applied.
What should I do?
Consider how the UK IPO's guidance can inform your understanding of and approach to dealing with AI patents. Read the guidance here and scenarios here.
The White House publishes its Blueprint for an AI Bill of Rights
5 October 2022
Who is affected?
Companies developing, deploying or affected by the use of AI systems.
What is it?
The White House published its “Blueprint for an AI Bill of Rights” which sets out The White House Office of Science and Technology Policy's five principles in relation to the proposed “bill of rights for an AI-powered world”.
These five principles are intended to help direct the “design, use, and deployment” of AI-enabled systems and are based on the advice provided by experts from various sectors, governments and international consortia. It is hoped that the five principles will protect the democratic rights of the American public and beyond, focusing in particular on the protection of civil rights, liberties and privacy.
The five principles are as follows:
Safe and Effective Systems - Diverse communities, stakeholders and domain experts should be involved in the development of automated systems in order to identify concerns, risks and potential impacts. Before a system is deployed, it should undergo sufficient pre-deployment testing, risk identification and mitigation. Ongoing monitoring should also be undertaken to demonstrate that the system is safe and effective.
Algorithmic Discrimination Protections - Designers, developers and deployers of AI-enabled systems need to take proactive and continuous measures to protect individuals and communities from algorithmic discrimination to ensure that the system can be deployed in an equitable way.
Data Privacy - Design choices which protect against abusive data practices should be included by default. The deployer of the system should only collect data which is strictly necessary.
Notice and Explanation - Plain language documentation (which includes accessible descriptions of the overall system functioning and the role automation plays) must be provided by the designers, developers and deployers of the AI-enabled system. This documentation should include a notice clearly highlighting that an automated system is in use, naming the individual or organisation responsible for it, and explaining any outcomes; these notices should be updated periodically.
Human Alternatives, Consideration and Fallback - Users affected by the AI system should have the choice to opt-out of the automated system following consideration of all information provided in favour of a human alternative, where appropriate.
What should I do?
Consider how the principles in the Blueprint will affect your firm's activities and identify where your current AI practices are not aligned with the principles. Find the Blueprint here and accompanying press release here.