World’s first attempt to regulate artificial intelligence
On 13 March 2024, the European Parliament approved the Artificial Intelligence Act by a wide margin: 523 votes in favor, 46 against, and 49 abstentions. First proposed in 2021, this set of rules on the use of artificial intelligence aims to guarantee safety and protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also fostering innovation.
This legislation is the first comprehensive legal framework for artificial intelligence, filling a long-standing legislative gap, and will be subject to final checks by lawyers and linguists before being formally endorsed by the European Council. It is expected to come into force in May, provided it passes those final checks. Implementation of the new rules will then be phased in from 2025: most provisions take effect 24 months after the AI Act becomes law, while the bans on prohibited applications are expected to apply after six months.
The EU AI Act categorizes AI systems according to the potential harm they could cause if they fail to work as intended or as advertised, ranging from low-risk services such as spam filters up to high-risk tools used in critical infrastructure. Systems in the highest tier, labeled "unacceptable risk," are banned outright under the new law. More specifically, the EU AI Act bans or restricts applications including the scraping of facial images for databases, emotion recognition in work and educational settings, social scoring, and manipulative AI, while severely restricting law enforcement's use of biometric AI.
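To make the tiering concrete, here is a minimal sketch of the risk taxonomy rendered as a data structure. It assumes the Act's four-tier structure; the "limited" and "minimal" tiers are not named in the text above, and the example mappings are drawn from the article as illustrations, not legal determinations.

```python
# Sketch of the EU AI Act's risk-tier taxonomy as described above.
# The mapping of example applications to tiers is illustrative only.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict governance regime"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraping": RiskTier.UNACCEPTABLE,
    "hiring and education tools": RiskTier.HIGH,
    "spam filters": RiskTier.MINIMAL,
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.name} ({tier.value})")
```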
Under the new law, high-risk systems in education, hiring, and government services, including law enforcement and healthcare, will fall under a stricter governance regime with additional transparency and disclosure requirements. Developers will be required to log their systems' activities and processes, so that outputs can be traced if they later bear on contested matters. The most powerful general-purpose and generative AI models (those trained using a total computing power of more than 10^25 FLOPs) are deemed to carry systemic risks [1] under the Act. The threshold may be adjusted over time, but OpenAI's GPT-4 and Google DeepMind's Gemini are believed to fall into this category.
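As a minimal sketch of how this compute threshold works as a classification rule: the 10^25 FLOPs figure comes from the Act as cited above, while the per-model compute estimates in the example are hypothetical placeholders, since providers do not generally disclose official training-compute figures.

```python
# Classifying a general-purpose model against the Act's systemic-risk
# compute threshold (10^25 FLOPs, per the text above). The threshold may
# be adjusted over time; the example figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def has_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if total training compute exceeds the Act's threshold."""
    return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples (illustrative estimates only):
models = {
    "frontier_model_a": 2.1e25,  # above threshold -> systemic-risk obligations
    "mid_size_model_b": 3.0e24,  # below threshold
}
for name, flops in models.items():
    print(f"{name}: systemic risk = {has_systemic_risk(flops)}")
```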
Additionally, the legislation restricts AI-based interpretation of emotions in schools and workplaces, as well as automated profiling to predict future crimes, sexual orientation, or political opinions, affecting various companies that have hitherto operated unhindered. These provisions are designed to safeguard citizens' rights and promote safe, ethical AI development free of discrimination and bias propagation.
The Act imposes limits on generative AI and manipulated media under copyright law. Rightsholders can expressly reserve their rights to opt out of text and data mining of their works, except where the mining is done for scientific research. AI models created specifically for research, development, and prototyping are exempt from these restrictions.
To enforce the law, each member country will have to designate supervisory authorities in charge of implementing the legislative requirements, and the European Commission will set up an AI Office that will develop methods to evaluate models and monitor risks in general-purpose models. Providers of general-purpose models deemed to carry systemic risks will be asked to work with the office to draw up codes of practice. As with other EU regulations targeting tech, the penalties for violating the AI Act's provisions are designed to be effective, dissuasive, and proportionate to the type of offense, the offender's previous actions, and their profile, and they can be steep: companies that break the rules face fines [3] of up to seven percent of their global annual turnover.
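A minimal sketch of how the headline fine cap could be computed: the seven percent rate comes from the text, the €35 million fixed floor for prohibited-practice violations and the "whichever is higher" rule follow the Act's penalty structure as described in [3], and the example turnover figure is hypothetical.

```python
# Headline fine cap for prohibited-practice violations: the higher of a
# fixed amount and a share of worldwide annual turnover (7% per the text;
# the EUR 35 million floor is the Act's stated alternative for this tier).

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine: whichever of the two caps is higher."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion annual turnover:
print(f"Maximum fine: EUR {max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```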
The total aggregate cost of compliance is estimated [2] at between €100 million and €500 million by 2025, corresponding to up to 4-5% of investment in high-risk AI, which is itself estimated at between 5% and 15% of all AI applications. Verification costs could amount to another 2-5% of investment in high-risk AI. Businesses or public authorities that develop or use AI applications not classified as high risk would incur no mandatory costs; ensuring the trustworthiness of their AI applications and following voluntary codes of conduct could entail costs at most as high as those for high-risk applications, but most likely lower. Small and medium-sized enterprises (SMEs) stand to benefit more from a higher level of trust in AI than large companies, which can also rely on their brand image. Due to the high scalability of digital technologies, however, SMEs developing applications classified as high risk would have to bear similar costs to large companies. The framework is therefore expected to provide specific measures, including regulatory sandboxes and assistance through the Digital Innovation Hubs, to support SMEs in complying with the new rules, taking their particular needs into account.
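To make these percentages concrete, here is a small worked example under a hypothetical €10 million investment in a high-risk application; the 4-5% compliance and 2-5% verification ranges come from the estimate cited above, while the investment figure is illustrative only.

```python
# Worked example of the compliance-cost ranges cited above, applied to a
# hypothetical EUR 10 million investment in a high-risk AI application.

investment_eur = 10_000_000

compliance_low, compliance_high = 0.04, 0.05      # 4-5% of high-risk AI investment
verification_low, verification_high = 0.02, 0.05  # additional 2-5% for verification

total_low = investment_eur * (compliance_low + verification_low)
total_high = investment_eur * (compliance_high + verification_high)

print(f"Estimated overhead: EUR {total_low:,.0f} to EUR {total_high:,.0f}")
# -> EUR 600,000 to EUR 1,000,000 on a EUR 10M high-risk AI investment
```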
As for the criticism and intense lobbying by tech firms, who argue the Act could hinder innovation in the EU, the AI Act aims to significantly mitigate risks to citizens' fundamental rights and broader Union values, and will enhance the safety of products and services embedding AI technology as well as stand-alone AI applications. Overall, the Act attempts to balance citizens' rights and ethical considerations with technological innovation, ensuring that AI systems are safe and respect EU copyright law. Proponents praise this balance for protecting citizens while encouraging innovation, and EU officials celebrate setting a global precedent for ethically responsible AI regulation.
Sources:
[1]: https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683
[3]: https://www.holisticai.com/blog/penalties-of-the-eu-ai-act