EU AI Act

The EU Artificial Intelligence Act, proposed by the European Commission in 2021, aims to establish a single regulatory framework for AI systems in the European Union. The framework classifies AI systems according to the risk they pose to users and regulates them accordingly. Set out below is a summary of the most important aspects of the legislative proposal:

  • Scope of application: The legislative proposal defines AI systems as machine-based systems that are capable of operating autonomously and of imitating the way humans think and act. The rules will apply to providers (including importers and distributors) and users of AI systems in the EU, as well as to providers and users in third countries where the output produced by the system is used in the EU.
  • Certain prohibited AI practices: In particular, the legislative proposal prohibits the use of AI systems that exploit the vulnerabilities of persons, materially distort a person's behaviour, or cause a person physical or psychological harm. Likewise, the use of 'real-time' remote biometric identification systems, in particular for law enforcement purposes, will be permitted only under very strict conditions.
  • High-risk AI systems: A risk management system must be established for high-risk AI systems (these include AI systems in critical areas such as education and training, security, and the administration of justice). Providers must issue a declaration of conformity and ensure that their systems comply with the legal requirements. Importers and distributors will be required to check whether the obligations to which the provider of the respective AI system is subject (in particular, conformity marking and preparation of the necessary documentation) have been met; otherwise, they may not place the systems in question on the market.
  • Specific transparency obligations: When users interact with an AI system, the provider must inform them of this. This concerns in particular the generation or use of what are commonly known as deep fakes, that is, image, audio or video content generated or manipulated by an AI system that appreciably resembles existing persons, objects, places, facilities or events and that would falsely appear to a person to be authentic.
  • Measures in support of innovation: AI regulatory sandboxes established by the Member States should make it possible for companies to develop their AI systems in a secure "sandbox", with the participation of and in consultation with the competent authorities, before placing a finished product on the market. The processing of personal data for the purpose of developing and testing innovative AI systems in such a sandbox will be permissible only under very limited circumstances.
  • Sanctions and fines: Infringements can result in fines of up to EUR 30,000,000 or, in the case of companies, up to 6% of total worldwide annual turnover, whichever is higher.