The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.

We put a lot of hard work and time into this freely available newsletter. Support us by sharing the subscription link with 3 people who would enjoy this newsletter.

Today, Parliament is discussing key areas of the AI Act, such as the definition of AI, high-risk exemptions, and banned practices. Aligning itself with the future institution building necessitated by the AI Act, Spain announced that it will set up the first national oversight agency for AI in Europe. Read AlgorithmWatch’s take on this here.

EU start-up Aleph Alpha is expanding its collaboration with Graphcore into semantic embedding using Aleph Alpha’s models. It is also working with German industry and government on OpenGPT-X, a project to develop European LLMs built on the open Gaia-X infrastructure. In similar news, AI Sweden (in collaboration with RISE and WASP WARA Media & Language) is developing GPT-SW3, a large-scale generative language model for Swedish and other Nordic languages.

Today, Mozilla researchers published a report on how the EU’s new Data Governance Act can be leveraged for the public interest: “Is that even legal? A guide for builders experimenting with data governance in Germany”. The report includes an investigation of alternative data governance models in four countries: Germany, India, Kenya, and the United States.

CEIMIA published a report in collaboration with Oxford researchers on “A Comparative Framework for AI Regulatory Policy”. I (Charlotte) had a great time on the steering committee for this report and hope that it is helpful for decision makers in government thinking about overlapping areas and approaches. Along those lines, I also recently published a paper tracing the regulatory and policy impacts following the introduction of “trustworthy AI” in the EU.

In non-AI-related chaos, the New York Times is taking the European Commission to court over its perceived legal obligation to release Commission President von der Leyen’s text messages with Pfizer CEO Bourla, which allegedly contain information on the purchases of Covid-19 vaccines.

Policy, Strategy and Regulation

Compare and contrast: The NIST AI Risk Management Framework versus the European Commission's standardization approach
 

Two weeks ago, the United States National Institute of Standards and Technology (NIST) released the final version of its voluntary AI Risk Management Framework and a playbook for its application. The NIST framework is generally considered to embody the U.S.’s policy perspective on, and preferred approach to, forthcoming standardization of risk management across AI design, development, deployment and use.

It is also openly touted as a step by the U.S. to lead and shape the global debate on standardization, and to level-set across international organizations and sectors around the world. The final NIST framework comes shortly after two major developments of early December 2022 in the standardization domain in the EU and globally.

  • The first is the joint commitment by the EU and the U.S., within the framework of the EU-US Trade and Technology Council, to cooperate on AI pre-standardization research and international technical standards development (our report here).
  • The second is the European Commission’s new draft standardization request of December 2022 to the European standardization organizations, the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), instructing them to develop technical standards reflecting key requirements under the AI Act (our report here).

In light of those developments, the points of convergence and contrast between the NIST framework and the European Commission’s approach to standardization, as reflected in its draft request, are noteworthy.

Both NIST and the European Commission seek to address the same main concerns pertaining to AI design, development and use: the validity and reliability of outputs, the protection of data privacy, and the safe, secure, resilient, transparent, explainable and interpretable operation of AI systems. The NIST framework, however, also puts a strong emphasis on fairness: AI developers and users are expected to pay attention to the broader social context (beyond race and gender) in which they create and operate their AI systems. The European Commission, understandably, sticks to the basis provided in the AI Act and focuses in its standardization request predominantly on biases resulting from unrepresentative data.


The NIST framework promotes an outcome-based approach to tackling the main AI-related concerns: it sets the ultimate goals (e.g. reliability, transparency) and leaves developers and users leeway to choose the technical standards, risk management tools and governance processes for attaining those outcomes. For its part, the European Commission pursues the development of specific technical standards rather than an overall risk management framework. These standards would need to ensure concrete compliance with the requirements under the AI Act for risk and quality management, reporting, data governance, transparency, human oversight, conformity assessment, accuracy, robustness and cybersecurity. For the latter three categories, the European standardization organizations will have to develop standards detailing specific technical metrics (so-called “specifications”). One possible explanation for both NIST’s outcome-based approach and the European Commission’s focus on technical standards is that both organizations appear to accept that further technical and governance standardization work could be meaningfully completed within the framework of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).


Another point of contrast, at least for the time being, between the NIST framework and the European Commission’s standardization approach is the importance assigned to context sensitivity in AI risk management. The NIST framework openly acknowledges that the trustworthiness of AI systems heavily depends on a system’s sensitivity to the underlying social, cultural and historical context. Therefore, any meaningful risk management measures throughout the AI supply chain (design, development, use) would also need to be sensitive to, and geared towards, the underlying context in each specific case. The European Commission’s draft request does not explicitly recognize the significance of context sensitivity and context awareness in AI development, use and risk management. However, in a footnote, the Commission notes that upcoming technical standards would need to lay down technical metrics “in consideration of” the intended use cases, sectors and contexts of use of AI systems.


Notes in the Margins: Also notable are the approaches taken in the NIST framework and the European Commission’s draft request to implementing risk management throughout the AI supply chain. NIST plans to work with industry to develop so-called “profiles” for the application of its framework. Those profiles would illustrate how the framework’s requirements apply in specific settings, use cases, sectors and temporal contexts (current and future). For its part, the European Commission would rely on the upcoming European standards to ensure compliance with the AI Act throughout the AI supply chain by providing “vertical specifications”, i.e. technical specifications tailored to the intended use cases, sectors and contexts of use of AI systems.
 

Numbers, Numbers, Numbers

New "Fund of Funds" for European scale-ups

On Monday, the EIB Group (European Investment Bank and European Investment Fund), alongside Germany, France, Spain, Italy and Belgium, launched the European Tech Champions Initiative (ETCI). ETCI is intended as a “Fund of Funds” with €3.75 billion of capital to support European scale-ups. The fund expects to make 10-15 investments in large VC funds of approximately €1 billion, with the goal of mobilizing an overall €10 billion of investments. The aim is both to support European innovation and to reduce the risk of local companies being bought out by non-European parties.

Notes in the Margins: This looks to be a much-needed addition to VentureEU, which planned to distribute funds to startups and companies looking to scale up in a range of areas, including AI. It is likely that AI will become a key area of focus for ETCI, given general progress in the field and the need for the EU to position itself.
 

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...

Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix


Dessislava Fessenko provided in-depth research and review. 
Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.





