Lee Gang

The EU AI Act: What It Means For Companies

The passing of the European Union Artificial Intelligence Act (EU AI Act) on 13 March 2024 marks a significant milestone in the global regulation of artificial intelligence. This landmark legislation sets standards for the development and use of AI systems within the EU, with the intention of safeguarding fundamental rights, ensuring user safety, and fostering trust in AI technologies. The earliest provisions, covering prohibited uses, will take effect as soon as six months after publication, while it will take two years for all rules and obligations to fully apply. As companies navigate this new regulatory landscape, the impact of the EU AI Act on their operations, strategies, and compliance efforts will vary. In this article, we will attempt to translate the legalese into plain language and look at the impact the Act will have on companies based on the different roles they play.


Different Roles of Companies


A company's obligations under the EU AI Act depend on the role it plays in the AI ecosystem. As in the automotive industry, where car manufacturers and drivers have very different obligations and responsibilities, the EU AI Act defines the following roles:


Provider: Individuals and companies that develop an AI system and place it on the market, or put the system into service under their own name or trademark, whether for payment or free of charge. This includes model developers such as OpenAI, Anthropic and Mistral AI. As long as a person or company develops a model and makes it available for commercial purposes (either as a model or as part of a larger system), they are considered a Provider. Even when a company uses an external AI model (e.g. OpenAI's GPTs) to build its software and then sells the software in the EU, it is considered a Provider. Using a food analogy, this is the equivalent of the chef who prepares the dishes (AI systems or models). The exception is when the model is developed and used for non-commercial research purposes.


Deployer: Individuals or companies using an AI system under their authority, except where the AI system is used in the course of a personal, non-professional activity. Any company using a system with AI features falls into this category. In the food analogy, this is the equivalent of the restaurant manager, who decides on the menu, how the restaurant operates and whom it serves. A company is considered a Deployer even when it uses an AI system only for its own staff or internal operations.


User: The end-user of the AI system. These can be internal staff, corporate customers, contractors or individuals, depending on how the AI system is deployed. In the food analogy, the User is the customer who orders and consumes the meal.


Importer: Any person or company in the EU that brings an AI system from outside the EU and places it on the EU market. These are largely software resellers and distributors that bring software from outside the EU into the EU market. In the food analogy, these are the food importers that bring in food and supply it to the local market.


Distributor: Any person or company in the supply chain, other than the Provider and Importer, that makes an AI system available in the EU market. In the food analogy, these are the food distributors and suppliers that provide food and ingredients to the restaurants.


A company can have multiple roles. For example, if a company develops its own AI HR chatbot for its own employees, it is considered both the Provider and the Deployer, and it will need to fulfil the obligations of both roles under the EU AI Act.


Different Risk Levels of Application


The EU AI Act adopts a risk-based approach to regulating AI systems. AI systems are regulated based on their application area rather than the underlying technology, and fall into four categories according to their corresponding risks.



1. Unacceptable Risk (Prohibited) AI Systems


Prohibited AI systems are not allowed in the EU. An AI system is prohibited if it involves any of the following:


  • Subliminal techniques beyond a person’s consciousness to distort a person’s behavior in a manner that causes harm.

  • Exploitative or manipulative practices (e.g. exploit vulnerabilities in age, physical or mental disability etc.).

  • Social scoring systems

  • Indiscriminate surveillance

  • "Real-time" remote biometric identification in public spaces for law enforcement (subject to narrow exceptions)


This provision will take effect first, six months after the EU AI Act is published (around Q3 2024).


2. High Risk AI Systems


Any AI system that has significant potential to harm individuals' rights or safety is considered a High-Risk AI system. These AI systems typically operate in critical sectors or are used in ways that could significantly impact individuals' lives. Some examples include AI systems in:


  • Critical infrastructure: traffic, transportation, utilities etc.

  • Education and vocational training: access to educational institutions, assess student performance etc.

  • Employment and workers management: recruitment processes, evaluating worker performance, making decisions on promotions, terminations etc.

  • Essential services and law enforcement: social benefits and services allocation, evidence reliability assessments, predictive policing on certain groups or areas etc.

  • Migration, asylum and border control management

  • Administration of justice and democratic processes

  • Biometric identification and categorization


2.1 Obligations of Provider


Providers have the most obligations to fulfil under the EU AI Act. The biggest impact is that Providers will need to go through a conformity assessment (via internal control or, for certain systems such as biometric identification, a third-party notified body) and register their AI system in an EU database before it is placed on the market or put into service. The obligations of Providers of High-Risk AI Systems are as follows:


  • Risk Assessment and Mitigation: Providers must perform thorough risk assessments to identify and mitigate risks associated with the use of their AI systems.

  • High-Quality Datasets: Ensure that datasets used to train and test the AI systems are relevant, representative, free from errors, and complete to avoid risks and discriminatory outcomes.

  • Technical Documentation: Maintain detailed documentation that describes the AI system's development, characteristics, and intended use to enable conformity assessments and ensure traceability.

  • Transparency and Information Provision: Provide users with all necessary information about the AI system's capabilities, intended purpose, and instructions for use, ensuring transparent operations.

  • Human Oversight: Ensure that there are appropriate human oversight mechanisms to minimize risks and enable effective human intervention.

  • Accuracy, Robustness, and Security: Implement measures to ensure the ongoing accuracy, robustness, and cybersecurity of the AI system.

  • Conformity Assessment: Conduct a conformity assessment to verify compliance with the EU AI Act's requirements for high-risk AI systems and register the AI system in an EU database.

  • Post-market Monitoring: Establish a post-market monitoring system to continuously assess the AI system's performance and compliance and take corrective actions if necessary.


Tips:

  • Prepare an internal process and the technical documentation to be used for the conformity assessment, covering the obligations on risk assessment, data lineage and sources, technical specifications, and accuracy, robustness and security details.

  • Publish the AI system's capabilities, intended purpose, instructions for use, performance and accuracy metrics and other relevant information as part of the model card.

  • Design the AI system with a human-in-the-loop process or as a decision support system instead of a fully automated system (see the sketch after this list).

  • Maintain an observability and monitoring platform along with key metrics and alert notifications.
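
To make the human-in-the-loop tip concrete, here is a minimal Python sketch, assuming a hypothetical model interface and review queue (none of these names come from the Act): predictions below a confidence threshold are routed to a human reviewer instead of being applied automatically.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # hypothetical value; tune it based on your risk assessment

@dataclass
class Decision:
    outcome: str            # e.g. "shortlist" / "reject" in a recruitment screen
    confidence: float       # model-reported confidence in [0, 1]
    reviewed_by_human: bool

def decide(features: dict, model, review_queue) -> Decision:
    """Confidence-gated decision: auto-apply only high-confidence outputs."""
    outcome, confidence = model.predict(features)  # hypothetical model interface
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: defer to a human reviewer (human-in-the-loop).
        outcome = review_queue.request_human_review(features, outcome)
        return Decision(outcome, confidence, reviewed_by_human=True)
    # High confidence: apply automatically, but record it for the audit trail.
    return Decision(outcome, confidence, reviewed_by_human=False)
```

The same gate doubles as a monitoring hook: logging every Decision record yields the accuracy and human-override metrics mentioned in the observability tip.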



2.2 Obligations of Deployer


  • Use Compliance: Follow the AI system provider's operational and maintenance instructions strictly.

  • Risk Management: Continuously assess and mitigate operational risks throughout the AI system's lifecycle, and perform an assessment of the fundamental-rights impact that use of the system may produce.

  • Data Governance: Maintain high standards of data quality and security for system inputs.

  • Human Oversight: Ensure human oversight of the AI system to enable intervention and manage risks effectively.

  • Transparency: Inform affected individuals about the AI system’s use, capabilities, and their related rights.

  • Accuracy Monitoring: Regularly monitor the AI system for continued accuracy and to identify any emerging biases or errors.

  • Reporting and Documentation: Document operational incidents and report serious issues or malfunctions to the relevant authorities.

  • Sector-Specific Compliance: Adhere to additional regulatory requirements specific to the sector where the AI system is deployed.


Tips:

  • Request proof of conformity (e.g. CE marking) and the relevant documentation from the Provider before commencing use of the AI system.

  • Include fallback mechanisms and safeguards for when an AI system incident occurs (see the sketch after this list).

  • Extend the existing GDPR data-breach notification process to cover AI system monitoring, reporting and documentation, and expand communication plans and simulation exercises to include AI system incident scenarios.
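
As a sketch of the fallback tip above (assuming a hypothetical AI client interface; all names are illustrative), the pattern below wraps calls to an AI system so that failures degrade to a deterministic fallback and every incident is logged for the reporting process:

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_incidents")

def rule_based_fallback(request: dict) -> str:
    """Deterministic fallback used when the AI system fails or misbehaves."""
    return "pending_manual_review"  # safe default: route to a human workflow

def call_with_fallback(ai_client, request: dict) -> str:
    """Call the AI system; on failure, log an incident and degrade gracefully."""
    try:
        return ai_client.predict(request)  # hypothetical client interface
    except Exception as exc:
        # Timestamped incident record that can feed the documentation and
        # authority-reporting processes described in the obligations above.
        logger.error("AI incident at %s: %s",
                     datetime.now(timezone.utc).isoformat(), exc)
        return rule_based_fallback(request)
```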


2.3 Obligations of Importer & Distributor


Distributors and Importers have fewer obligations than Providers and Deployers.


  • Verification of Compliance: Before making a high-risk AI system available on the market, verify that it complies with the EU AI Act, including checking for the EU declaration of conformity and that the system is correctly labelled.

  • Cooperation with Regulatory Authorities: Cooperate with national authorities, providing any documentation or information necessary to demonstrate the AI system's conformity.


2.4 Obligations of User


Users will primarily need to adhere to the intended use of the AI system and report any issues they encounter or observe.


  • Compliance with Operating Instructions: Use the AI system in accordance with the provider's instructions, including adhering to any stipulated conditions for operation and maintenance.

  • Monitoring and Reporting: Monitor the operation of the AI system for any issues or malfunctions and report serious incidents or malfunctioning to the Provider and/or Deployer and, if necessary, to the relevant authorities.


3. Limited Risk AI Systems


Limited-risk AI systems typically include applications that may interact with humans or make decisions that can influence users' choices or behaviors without causing significant harm. Examples might include AI-enabled chatbots that provide customer service or AI systems that generate personalized content recommendations. Other examples include personalized advertising, educational tools, financial advisory services, and real estate and housing platforms. The regulatory requirements are less stringent and generally focus on transparency and information disclosure to ensure users are adequately informed of the AI system’s operation and potential impacts.


3.1 Obligations of Provider


Providers are not required to go through a third-party conformity assessment for limited risk AI systems. However, there are still a few obligations that Providers will need to fulfil:


  • Transparency: Must clearly disclose that an AI system is being used. This involves informing users when they are interacting with an AI rather than a human, especially in cases where the distinction is not obvious. This includes Gen AI systems, where users need to be informed that they are conversing with an AI agent, or when particular outputs, such as videos, are generated by AI (see the sketch after this list).

  • Information Provision: Provide essential information about the AI system, including its capabilities, the nature of its interaction with users, and any limitations it may have. This helps users understand how the system works and what to expect from its outputs.

  • Data Usage Disclosure: If personal data is used by the AI system, providers must inform users about what data is being processed and for what purpose, ensuring compliance with data protection regulations like the GDPR.
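
As a minimal sketch of how the transparency obligation might be implemented (all names are hypothetical, not prescribed by the Act), a chatbot can prefix its first reply with an AI disclosure, and generated media can be tagged as AI-generated before delivery:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chat_response(text: str, first_turn: bool) -> str:
    """Prefix the first reply of a session with the AI disclosure notice."""
    return f"{AI_DISCLOSURE}\n\n{text}" if first_turn else text

def tag_generated_media(metadata: dict) -> dict:
    """Mark generated outputs (e.g. videos) as AI-generated in their metadata."""
    return {**metadata, "ai_generated": True}
```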


3.2 Obligations of Deployer


Deployers are still required to ensure that the AI system complies with the EU AI Act before deploying it (i.e. that adequate transparency and information disclosure are in place). Deployers will also need to maintain a feedback and reporting mechanism back to the Provider.


  • Transparency: Implement and maintain the transparency measures provided by the AI system's provider. This includes ensuring that notices and information about the AI's use are visible and understandable to end-users.

  • Data Governance: If involved in the operation of the AI system, ensure that data governance practices are in place to protect user data and privacy according to the provider's guidelines and relevant data protection laws.

  • Feedback and Reporting: Establish mechanisms to gather user feedback on the AI system's performance and the adequacy of transparency measures, and report any significant issues back to the provider.


3.3 Obligations of Importer and Distributor


Importers and Distributors will need to ensure the AI system complies with the transparency and information disclosure requirements before placing the AI system on the EU market.


3.4 Obligations of User


The obligations of Users remain largely the same.


  • Compliance with Operating Instructions: Use the AI system in accordance with the provider's instructions, including adhering to any stipulated conditions for operation and maintenance.

  • Monitoring and Reporting: Monitor the operation of the AI system for any issues or malfunctions and report serious incidents or malfunctioning to the Provider and/or Deployer and, if necessary, to the relevant authorities.


4. Low Risk AI Systems


Low Risk AI Systems are considered to pose minimal or no significant risk to individuals' rights or public safety. This category covers the most common types of AI applications under the EU AI Act. Examples range from productivity software, entertainment and games, spam filters and general website chatbots to non-personalized content curation engines. Under the EU AI Act, the obligations for the various roles associated with Low Risk AI Systems primarily revolve around general legal compliance and existing data protection and privacy laws. No additional obligations are imposed, although best practices such as transparency and information disclosure are recommended.




5. Provisions for General Purpose AI Models


The EU AI Act also introduces additional provisions for General Purpose AI Models (which cover the majority of foundation models, such as GPT-4). These obligations are in addition to those mentioned above. General Purpose AI Models are further categorized into models with and without systemic risk. A General Purpose AI Model is considered to have systemic risk if:


  • The model has high impact capabilities (capabilities that match or exceed the capabilities recorded in the most advanced General Purpose AI Models) evaluated on the basis of appropriate technical tools and methodologies, including benchmarks and indicators.

  • The Commission so decides, either ex officio or following a qualified alert from the scientific panel.


While the above criteria are stated in the EU AI Act, they are broad and subjective, with little concrete guidance. The practical rule is this: as long as a General Purpose AI Model has cumulative training compute above 10^25 FLOPs (GPT-3.5 used about 10^23 FLOPs during training), it is presumed to be a General Purpose AI Model with Systemic Risk. A rough way to estimate this is shown below.
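
To gauge where a model falls relative to the 10^25 FLOPs presumption threshold, a common back-of-the-envelope estimate from the scaling-law literature (an approximation, not part of the Act) is training FLOPs ≈ 6 × parameters × training tokens:

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the EU AI Act

# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs, below the threshold
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
```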


Obligations of Providers


The additional provisions on General Purpose AI Models mainly apply to Providers of the model. Other roles will have to ensure that any General Purpose AI Model they distribute, deploy or use has been brought into compliance by its Provider.


5.1 General Purpose AI Models


  • Technical Documentation: Provide up-to-date technical documentation of the model, including its training and testing process, evaluation results, and other technical details upon request by the relevant authorities (not required for free and open license models).

  • Information Disclosure: Make available the information and documentation on the model including model details, accuracy and performance, and intended use to downstream roles (not required for free and open license models).

  • Copyright Policies: Put in place a policy to respect EU copyright law.

  • Training Data Disclosure: Make publicly available a detailed summary of the content used for training of the model.


5.2 General Purpose AI Models with Systemic Risks


In addition to the obligations for General Purpose AI Models, Providers will have to comply with the obligations below as well.


  • Evaluation and Testing: Perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.

  • Risk Management: Assess and mitigate possible systemic risks at EU level, including their sources, that may stem from the development, placing on the market, or use of General Purpose AI models with systemic risk.

  • Feedback and Reporting: Report any incidents and possible corrective measures to the AI Office or relevant authorities.

  • Security: Ensure an adequate level of cybersecurity protection for the General Purpose AI model with systemic risk and the physical infrastructure of the model.


6. Penalties


Non-compliance with the prohibited AI systems provisions is subject to an administrative fine of up to EUR 35 million or 7% of the company's worldwide annual turnover, whichever is higher.

Non-compliance with other provisions is subject to an administrative fine of up to EUR 15 million or 3% of the company's worldwide annual turnover, whichever is higher.
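
As a quick worked example of how the cap is computed (figures are illustrative only, not legal advice), the ceiling is the greater of the fixed amount and the percentage of worldwide annual turnover:

```python
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum administrative fine: the higher of the fixed amount and the
    given percentage of worldwide annual turnover."""
    return max(fixed_eur, pct * turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover:
prohibited_cap = fine_cap(2e9, 35e6, 0.07)  # EUR 140 million (7% > EUR 35M)
other_cap = fine_cap(2e9, 15e6, 0.03)       # EUR 60 million (3% > EUR 15M)
print(f"Prohibited AI cap: EUR {prohibited_cap:,.0f}; other provisions: EUR {other_cap:,.0f}")
```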


There are other fines such as fines for submitting incorrect information to relevant authorities, which are detailed in the EU AI Act.


7. Timeline


The general provisions and the prohibited AI practices will kick in 6 months after the official publication of the EU AI Act (around Q3 2024), effectively banning any AI use on the prohibited list. Provisions on General Purpose AI Models, feedback and incident notifications, and penalties will be in force 12 months after the official publication (around Q2 2025). The remainder of the EU AI Act, such as the conformity assessment obligations for High Risk AI Systems, will take effect 24 months after publication (around Q2 2026). Only the provisions for High Risk AI Systems used as safety components of products covered by EU harmonization legislation will kick in 36 months after publication (around Q2 2027).


8. What’s Next?


With this knowledge of the EU AI Act, companies should assess the potential impact the EU AI Act has on their existing operations. Providers and Deployers of High Risk AI Systems will be impacted the most, and it is recommended to start setting up or enhancing the AI governance framework within the company to address the required obligations. While most AI applications fall under the Low Risk AI Systems category and will not see any significant impact, it is still worthwhile for Providers and Deployers to set up a robust AI governance framework as a best practice. Companies should also actively monitor any implementation and enforcement details announced by the EU and its member states in the coming months.


Disclaimer: This article provides a summary of the key content of the EU AI Act and is intended for general informational purposes only. It should not be considered legal advice or used as a substitute for consultation with professional legal advisors. For more details, please refer to the EU AI Act or your professional legal advisors. Please notify us via our website if you identify any errors or missing information in this article.
