AI Liability Directive

In September 2022, the European Commission launched another initiative to further restrict the use and development of AI systems by proposing a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (the proposed AI Liability Directive). Two key areas should be noted in particular. First, once the Directive is implemented, there would be an obligation to disclose evidence about high-risk AI systems, i.e. systems that pose a significant risk to the fundamental rights of natural persons. Second, the burden of proof would be eased considerably in favour of injured parties: the planned reversal of the burden of proof is intended to make it easier for them to enforce claims for damages arising from the use of AI. Until now, it has been almost impossible for injured parties to access the relevant data of the AI system concerned and thus to prove fault on the part of the injuring party.
However, it is important to bear in mind that the proposal for an AI Liability Directive is only one part of a larger package of measures in the field of AI. Further modernisation of the relevant liability rules is to be achieved, for example, through the revision of the Product Liability Directive, COM (2022).