Landmark AI Act finally approved by Member States

Thursday, 15 February 2024

On Friday 2 February 2024, the Member States of the European Union reached a unanimous agreement on the text of the landmark and somewhat controversial Artificial Intelligence Act (the AI Act). This agreement comes after concerns had been raised by certain Member States, namely Germany, France and Italy, as to its impact on innovation. These concerns reflected the views of companies in those Member States which feared that over-regulation would stifle the indigenous development of powerful new AI models and hand the advantage to third countries. Those arguments had earlier been rejected by the European Parliament in its proposed text.

Nonetheless, while we have only had sight of unapproved copies of the official text to date, it will be interesting to see the “final, final” text in due course. The AI Act has now also been approved by the European Parliament’s Internal Market and Consumer Protection Committee and its Civil Liberties, Justice and Home Affairs Committee, ahead of final sign-off by the European Parliament in the next couple of months. In addition, the process of implementing the AI Act will provide some scope for the Member States to compromise on, and influence, the impact the AI Act may have once fully enacted.

Once fully adopted, the AI Act will be published in the Official Journal and will enter into force 20 days after its publication. There will then be a transitional period of two years after the AI Act has entered into force before it applies across the EU. However, some specific provisions are subject to different timelines. For example, the AI Act will apply:

  • After 6 months to AI systems that are prohibited under the AI Act, including AI that uses emotion recognition, social scoring or emotional manipulation, amongst other practices;
  • After 12 months to General Purpose AI (GPAI), meaning AI models that perform generally applicable functions such as responding to questions or recognising and generating text or images; and
  • After 24 or 36 months, depending on the category, to high-risk AI systems. A number of these are defined in the AI Act and include those that pose a significant risk to health, safety, fundamental rights, elections or other matters.

These timeframes are designed to give organisations using or developing AI time to prepare and to bring their AI use and models into line with the requirements of the AI Act.

The AI Act also establishes the European AI Office. Whilst Member States will play a large role in the enforcement of the AI Act and will be required to set up enforcement bodies within their own jurisdictions, the remit of the AI Office will be to monitor the implementation of the AI Act by, and the compliance of, GPAI model providers. Concerned parties will be able to lodge complaints with the AI Office for investigation. The AI Office will also conduct evaluations of GPAI models to assess their compliance with the AI Act and to investigate any systemic risks posed by their use.

With the AI Act now one step closer to becoming law, organisations should start to prepare for its implementation. This includes reviewing their current or intended use of AI, examining any agreements they have in place with AI providers, and formulating policies governing both the organisation’s use of Generative AI products and, in particular, individual employees’ use of them. Risk assessments, similar to those carried out under the GDPR, should be actively considered before certain deployments (for example, in the field of recruitment).

If you have any concerns about the implementation of the AI Act, or would like to talk about putting an AI Policy in place for your organisation, please contact Victor Timon, Emily Harrington or any member of our Technology Group.