The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.

The US ‘CHIPS for America Act’ has passed Congress. The European equivalent can be found here.

The European Parliament’s JURI Committee has published a report on the European Commission’s ‘better regulation’ agenda. The report gives a good overview of the past two decades of the EU’s effort in this space and provides a number of concrete policy suggestions going forwards.

The EU, UK and Switzerland have pooled together funds (10m EUR) to develop a virtual "European Lighthouse on Secure and Safe AI", building on the industry and academic network established by the European Laboratory for Learning and Intelligent Systems (ELLIS) and coordinated by the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany.

As a complement to the deep dive on the UK's new governance framework for AI regulation, readers may also want to look at the Turing Institute's new report on Common Regulatory Capacity for AI.

We put a lot of hard work and time into this freely available newsletter. Support us by sharing the subscription link with 3 people who would enjoy this newsletter.

Policy, Strategy and Regulation

Deep dive: UK joins the ranks of countries thinking about AI regulation

Earlier this month, the UK government announced its initial plans for regulating AI, as well as its AI Action Plan for implementing the National AI Strategy. A policy paper details the governance framework that the UK contemplates adopting. 

According to this paper, the UK does not intend to introduce a single regulatory regime for AI, preferring a domain-specific approach spearheaded by the relevant regulators in each domain, underpinned by an overarching set of non-binding cross-sectoral principles. 

These are: safety, technical security and robustness, transparency and explainability, fairness, legal liability for an identified or identifiable corporate or natural person, and contestability of outcomes recommended or delivered by an AI system. Although the document states that these principles are based on the OECD Principles on AI, the Ethics Guidelines for Trustworthy AI put forward by the European Commission's independent High Level Expert Group on AI had previously introduced many of the same principles as "key requirements". It is unsurprising, in that sense, that many of the principles chosen by the UK government are also principles that underpin the EU's AI Act and inform the conformity assessment outlined therein. Others (e.g. fairness) appear in the latest tabled amendments to the AI Act.

The six core principles above will initially be introduced on a non-statutory basis, i.e. as guidance to UK regulators and not as mandatory concretised obligations, unlike the position under the EU’s AI Act. This structure means that regulators will need to decide what specific principles such as “transparency” actually mean in their sector. 

Beyond interpretation and implementation of the six core principles, UK regulators will also need to decide if, when and what measures regulated entities need to take in order to demonstrate compliance with the core principles. This approach seems intended to build more flexibility into AI governance compared to the EU's AI Act and, to a certain extent, Canada's AIDA, both of which introduce specific requirements for the types of compliance measures to be taken by AI developers and users. 

The UK government does admit that its approach may result in some regulatory divergence and confusion. To this end, options are being considered in an attempt to ensure coherent implementation. They include providing "a strong steer to regulators" (e.g. through "government-issued guidance") and ensuring coordination among regulators with regards to interpretation and implementation of the core principles.

Moreover, the paper does not introduce a fixed, universally applicable definition of AI, unlike the EU and Canada in the AI Act and the AIDA, respectively. (Though, it should be clarified that, for example, in the AI Act, it will be possible to update the definition in accordance with relevant future developments.) Instead, the proposed scope of the UK's regulatory framework, regulating "the use of AI rather than the technology itself", and the set of AI systems which may need "bespoke regulatory responses", is built on two core characteristics, which regulators will further refine in their corresponding sectors. They are: (i) adaptiveness, referring to the difficulty of explaining the intent or logic behind an AI system's outcomes, and (ii) autonomy, meaning the degree to which an AI system makes decisions with or without human control.

The proposed institutional set-up also differs substantially from those under Canada's AIDA and the EU's AI Act. The AIDA vests the primary responsibility for its interpretation and enforcement in a single national authority (essentially, the Artificial Intelligence and Data Commissioner). The EU's AI Act envisions a somewhat decentralized implementation and enforcement through a web of bespoke, AI-focused EU and national authorities, such as the national supervisory authorities, notified bodies, or the European AI Board. The UK, on the other hand, contemplates a sector-specific approach. Relevant regulators - such as Ofcom, the Competition and Markets Authority, the Information Commissioner's Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency - will be tasked with interpretation and implementation.

A call for evidence on the UK’s proposal is open to academics, industry experts and civil society (until 26/09). Subsequently, the UK government will set out further details on the framework and implementation plans in a White Paper later in 2022.

The corresponding AI Action Plan envisages a number of measures to be taken by the UK government in the next few years in order to support AI development and deployment in the UK. Most notably: increased investment in research hubs, government investment in the AI sector, planned changes to UK copyright law to enable text and data mining (on which we have previously reported), adoption of AI in various branches of the public sector, investment in Trusted Research Environments to support health research and diagnostics led by the National Health Service, and heightened international outreach and engagement with international organizations and third countries in order to promote a "pro-innovation international governance and regulatory environment" for AI.

Notes in the Margins: One clear similarity between the UK’s proposed governance framework, the EU’s AI Act, and the AIDA is that the UK’s current policy paper proposes a risk-based approach to governing AI. UK regulators are expected to establish risk-based criteria and thresholds at which additional requirements will come into force. Such risk classification underpins the entire governance matrix of the EU’s AI Act, which layers various compliance requirements depending on the gravity of risks that AI systems pose.

Another interesting piece of information is that while the document states that the goal is to target only real and identifiable risks (concretely, avoiding stifling innovation by focussing on "hypothetical risks"), it does conclude by highlighting the importance of 'holistic horizon scanning' to ensure both immediate and long-term risks are sufficiently tackled. Presumably, long-term risks err on the side of hypothetical risks, and these do need to be accounted for more concretely.

Finally, it should be noted that while this policy paper espouses the view that bespoke AI legislation is not needed for the time being, it does not rule out the possibility that such legislation may be required at a later stage for the proper implementation of the framework going forward.
 

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...

Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix


Dessislava Fessenko provided research and editorial support. 
Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.






EuropeanAI · 16 Mill Lane · Cambridge, UK CB2 1SB · United Kingdom