The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.

You can share the subscription link here.

The last EuropeanAI newsletter explored possible agreements on AI governance between the US and the EU. Following this train of thought, it is welcome to hear that Lynne Parker (Director of the National AI Initiative at the White House OSTP) indicated that the US should orient itself towards the EU’s approach to AI regulation in order to avoid a “patchwork of regulatory approaches” (at the same time, opinions within the EU may differ; have a look at the deep dive on the relevant AIDA report in the policy section below!). This also resonates with some of the findings of the Brookings Institution, which published a report earlier this month in which EU and US experts evaluate how to strengthen international cooperation on AI. On AI regulation, it states: “When it comes to regulation, divergent approaches can create barriers to innovation and diffusion.”

With regard to EU policy, the EU member states have reached an agreement on the role of the European Commission as “sole enforcer” of regulatory interventions under the Digital Markets Act (DMA); this will effectively limit the power of tech giants such as Alphabet, Facebook, Amazon, and Apple. The development is a response to a joint letter from European national competition authorities (NCAs) earlier this year demanding more involvement in the enforcement of the DMA.

Policy, Strategy and Regulation

AIDA and the AI Act: a deep dive

The European Parliament’s Special Committee on Artificial Intelligence in a Digital Age (AIDA Committee) published an own-initiative report on Artificial Intelligence in a Digital Age. While it won’t have direct, concrete consequences for the AI Act’s development within Parliament, the report is likely to influence Parliament-internal discussions; it is therefore worth exploring in greater depth. Distinguished from most EU documents by its notably severe tone, the report paints a sober picture of the AI ecosystem in the EU and of the urgent promise and peril of its approach to governance. This is set against a backdrop of concerns that AI development in the EU has fallen behind.

Indeed, the report asserts that the EU does not have a “single AI ecosystem that can compare with Silicon Valley, Boston, Toronto, Tel Aviv, or Seoul”. Presumably this is in reference to the relative number of AI companies domiciled within the EU versus other hubs. If true, this would fit with the report’s reference to Brexit’s impact on AI’s standing in the EU. The report suggests these factors may lead to the replacement of European values with those of international actors more willing to transgress certain boundaries. A noteworthy observation in the AIDA report concerns the increasingly strong presence of Chinese nationals in leadership roles at standards organisations, and the potential geopolitical implications this may have for the EU’s ambition to drive a principles/standards/regulation-based approach to AI governance.

Particularly rousing in this context is the statement that, “the global tech race has become a fight for survival for the EU; stresses that if the EU does not act swiftly and courageously, it will end up becoming a digital colony of China, the US and other states and risk losing its political stability, social security and individual liberties.”


Along these lines, the report’s section on External Policy and the Security Dimension of AI expresses concern that “the global community does not seem likely to reach an agreement on minimum standards for the responsible use of AI, as the stakes, in particular for the most powerful nations, are too high”. It is difficult to say what these “stakes” that are “too high” actually are; readers can guess along the common theme, but no concrete explanation is given.

Interestingly, the report states that “the vast majority of AI systems currently in use, are almost or even completely risk-free” (a claim many would strongly contest), and that only a small number of cases would necessitate regulatory action. It also claims that a lot of fear is linked to concepts of AGI, superintelligence, or the singularity, despite their unlikelihood. Whether or not this assertion is valid, readers of this newsletter should note that the AI Act itself makes no noteworthy reference to these concepts in its proposal. The report also suggests that high-risk AI systems should have “embedded mechanisms—or ‘kill switches’—for human intervention to immediately halt automated activities at any moment”.
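To make the idea a little more concrete, here is a minimal sketch of what such an embedded halting mechanism could look like in software: an automated loop checks a stop flag that a human operator can set at any moment. The flag, loop, and task names are invented for illustration and are not drawn from the AI Act or the AIDA report.

```python
# Minimal sketch of a "kill switch" pattern: an automated loop that a human
# operator can halt at any moment. All names here are illustrative.
import threading
import time

halt_requested = threading.Event()  # set by a human operator to stop the system


def automated_task(step: int) -> None:
    """Stand-in for one iteration of an automated, AI-driven activity."""
    print(f"processing step {step}")
    time.sleep(0.1)


def run_system(max_steps: int = 1000) -> None:
    for step in range(max_steps):
        if halt_requested.is_set():  # human intervention takes immediate effect
            print("kill switch activated: halting automated activity")
            break
        automated_task(step)


# Elsewhere, a human-facing control (a button, CLI, or API endpoint)
# would simply call: halt_requested.set()
```

In any real deployment, of course, such an override would need to be tamper-proof and reach the system’s actual effectors, not just a software loop; the sketch only shows the basic shape of the mechanism.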

Under AI and the Future of Democracy, the AIDA report mentions the pacing gap and suggests that “many policymakers tend to argue for categorical bans on certain AI technologies or use cases without sufficient prior analysis of the proportionality and necessity of outright bans”. (Given the mention of positive applications for biometric identification, the report is presumably referencing the ban on certain “real-time” biometric identification systems for use by law enforcement in public spaces.) It argues that moves like these can hamper innovation and competitiveness, and may even be “counter-productive in terms of safeguarding security and fundamental rights”.

Much of the report provides critical feedback on the current proposal of the AI Act, highlighting issues it perceives as precursors to an inflexible, fragmented, and overburdening regulatory environment. However, it does highlight the unique opportunity of a strategic first-mover advantage for the EU in establishing the first regulatory framework for AI, referencing the “Brussels effect”.

Notes in the Margin: Looking to strengthen the EU’s AI ecosystem, Ursula von der Leyen introduced the European Chips Act in mid-September, with the goal of reasserting European semiconductor production and avoiding a rush on national public subsidies. The Industrial Alliance for Processors and Semiconductors will be tasked with safeguarding and enhancing European manufacturing capacity and reducing international dependencies on the technology.

Concerns about the AI Act's impact on the Social Safety Net 

Coming at the AI Act from a different perspective than the AIDA report, Human Rights Watch recently published their recommendations for improving the AI Act’s ability to protect the social safety net. They argue that the ban on unacceptable levels of risk, as defined by the EU, should be strengthened by explicitly prohibiting the use of AI algorithms, in the public or private sector, that delay or deny (without recourse) an individual’s access to benefits.

To protect human rights, the authors state that impact assessments involving a broad range of stakeholders need to be undertaken prior to the use of, or significant changes to, AI systems designated as ‘high-risk’. This is to be done alongside strengthened transparency measures requiring the public disclosure of bias findings and corrective actions, a mandate for regulatory inspections for human rights compliance, and the establishment of a publicly accessible flagging mechanism. They go on to recommend that Annex VIII’s Electronic Instructions for Use of AI systems, requiring the publication of source code, be clarified to cover all previous and current versions of the source code, ensuring historical transparency. Human Rights Watch suggests that entities in possession of high-risk AI systems should share this information publicly, including use cases. Finally, a whistleblowing procedure needs to be put in place, alongside an individual’s right to appeal decisions made by an AI system.
 

Ecosystem

Destination Earth

Destination Earth (DestinE), the European Space Agency’s Digital Twin for planet Earth (as introduced in this newsletter), is continuing to progress, with Member States approving a “Contribution Agreement”. Digital Twins are often used to simulate and predict future events for their physical counterpart, thereby aiding policy making, maintenance, and safety. In the case of DestinE, the Digital Twins are designed around Earth observation and climate change. The programme aims for a “full” replica of the Earth system by 2030 (though note that “full” in this context does not imply all systems on Earth).
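For readers new to the concept, the sketch below shows the digital-twin pattern in miniature: a software model assimilates observations from its physical counterpart and simulates forward to predict future states. The toy temperature state and naive trend model are invented purely for illustration and bear no relation to DestinE’s actual Earth-system models.

```python
# Toy sketch of the digital-twin idea: keep a software model in sync with
# observations of a physical system, then simulate forward to predict.
from dataclasses import dataclass


@dataclass
class EarthTwin:
    temperature: float          # toy state variable for the physical counterpart
    trend_per_step: float = 0.0  # naive estimate of how the state is changing

    def assimilate(self, observed_temperature: float) -> None:
        """Update the twin's state from a new observation."""
        self.trend_per_step = observed_temperature - self.temperature
        self.temperature = observed_temperature

    def predict(self, steps: int) -> float:
        """Simulate forward to estimate a future state (simple extrapolation)."""
        return self.temperature + self.trend_per_step * steps


twin = EarthTwin(temperature=14.0)
twin.assimilate(observed_temperature=14.2)
print(twin.predict(steps=10))  # crude forecast, for illustration only
```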

Notes in the margin: The UK government has also been putting a strong focus on Digital Twins, with the National Digital Twin Programme.
 

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...

Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix


Ben Gilburt co-wrote this edition.
Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.





