The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.

The European Parliament’s IMCO Committee (Committee on the Internal Market and Consumer Protection) published its opinion on the proposal for a regulation for the Data Act. The European Commission approved €292.5 m of investment in the construction of a semiconductor plant in Italy, in line with goals outlined in the European Chips Act Communication.


Germany has a new tech think tank, the “Agora Digitale Transformation”, which emerged out of the Stiftung Mercator. It will receive €8.5 m over the next five years, and its goal is to develop national and EU-wide strategy on digitalisation, with a particular focus on AI.


The UK’s Centre for Data Ethics and Innovation published an updated version of the Algorithmic Transparency Standard.

The US published its Blueprint for an AI Bill of Rights, outlining several principles to guide future policies and practices. While the principles at times closely match the legal obligations outlined in the EU’s AI Act, their non-binding nature and distributed governance implementation align more closely with the UK’s proposed regulatory approach to AI.

We put a lot of hard work and time into this freely available newsletter. Support us by sharing the subscription link with 3 people who would enjoy this newsletter.

Policy, Strategy and Regulation

Liability rules for Artificial Intelligence?


The European Commission proposed new product liability rules, including rules for AI, in two draft directives:

(1) A draft directive for liability for defective products: Product Liability Directive. This revises the existing general product liability rules in the EU.

(2) A draft directive for product liability of AI: AI Liability Directive. This introduces rules specific to damages caused by AI systems.

The proposed Product Liability Directive updates the existing EU general product liability regime in several ways: for example, it covers any product, including software, AI systems, and digital services that enable the functioning of a product. Damages resulting from the use, update, upgrade or cybersecurity vulnerabilities of software and AI systems qualify for redress. Software providers, providers of digital services, and companies that substantially modify products or integrate them as components in their own production can also be held liable for damages. A product is presumed to be defective when it does not comply with the mandatory safety requirements intended to preempt the risk of damages (e.g. under the AI Act, the proposed Cyber Resilience Act or the draft Machinery Regulation 2021).

The proposed AI Liability Directive outlines compensation for damages resulting from a faulty output of an AI system and expands the categories of persons who can seek compensation. Among other things, it covers breaches of fundamental rights or other rights under national laws (e.g. discrimination, breach of privacy, hindered access to employment). The proposal also broadens the circle of entities that can be held liable for damages: it includes anyone along the AI supply chain who contributed to the faulty output of the AI system, such as users, developers, or providers.

However, the proposed AI Liability Directive departs from the strict liability concept under the general regime and introduces a fault-based one. Claimants need to prove (i) a fault or omission of the entity from which compensation is sought (users, providers, developers) and (ii) that this fault or omission caused the damage. The directive also provides for disclosure of evidence, including technical and risk assessment documentation drawn up under the AI Act, subject to certain conditions.

The AI Liability Directive interplays with the AI Act in several ways, for example:

 

  • the Product Liability and AI Liability Directives will together establish the product liability framework for all types of AI systems, both high-risk and non-high-risk;
     
  • the AI Act details the safety requirements that high-risk AI systems need to meet in order to be marketed in the EU. The Product Liability Directive bases the presumption of defectiveness of an AI system on a failure to meet these requirements (subject to certain additional conditions), and the AI Liability Directive draws on them to establish fault with regard to high-risk systems;
     
  • the AI Liability Directive also envisages redress for damages in instances when some of the broader risks targeted by the AI Act (e.g. discrimination, intrusive surveillance) materialize.

 

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...

Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix


Dessislava Fessenko provided research and editorial support. 
Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.





