- European Union lawmakers have agreed on the details of a new law known as the AI Act, intended to regulate artificial intelligence, after more than 36 hours of negotiations.
- The AI Act, aimed at protecting consumer rights while also promoting innovation, might change how tech giants like Google and Microsoft, as well as AI startups, operate in the EU.
- The bill may act as a blueprint for other countries that want to establish rules for AI, and has been described by some as a “great start” for regulating AI.
- The law categorizes AI systems by risk level: unacceptable risk, high-risk, and limited and minimal risk, giving organizations a clearer view of their obligations.
- Non-compliance with the law could lead to penalties of up to 35 million euros or 7% of global annual turnover.
After more than 36 hours of negotiations, European Union lawmakers finally agreed on December 8, 2023, on the details of a new law intended to regulate artificial intelligence (AI). This legislation, known as the AI Act, is seen as one of the first attempts globally to establish a comprehensive set of rules for AI. The law aims to protect consumer rights while also encouraging innovation. Carme Artigas, the Spanish secretary of state for digitalization and artificial intelligence, described it as “a historical achievement and a huge milestone towards the future.”
The Act carries significant cybersecurity implications and might change how tech giants like Google and Microsoft, as well as AI startups, operate in the EU. Its impact may not stop at European borders, however: the Act could serve as a blueprint for other countries that want to establish rules for AI, and the way EU policymakers think about the intersection of AI and cybersecurity may signal regulatory trends to come.
Entities failing to adhere to these rules could face penalties of up to 35 million euros or 7% of global annual turnover, depending on the nature of the infringement and the size of the company. Citizens will also have the right to file complaints about AI systems and receive explanations of decisions that affect them. The bill now needs to be adopted by the European Parliament and the Council to become law, and it will come into effect no earlier than 2025.
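To make the penalty structure concrete, here is a minimal sketch of how the fine ceiling for the top tier could be computed. It assumes the widely reported "whichever is higher" rule for the most serious infringements; the function name is hypothetical, and only the 35 million euro and 7% figures come from the Act as described above.

```python
# Illustrative sketch only. Assumes the widely reported "whichever is
# higher" rule for the AI Act's most serious infringements; the function
# name is hypothetical, while the 35M EUR / 7% figures come from the article.

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Return the theoretical fine ceiling for the top penalty tier."""
    FIXED_CAP_EUR = 35_000_000   # 35 million euros
    TURNOVER_RATE = 0.07         # 7% of global annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# A company with 2 billion euros in turnover: 7% (140M EUR) exceeds the 35M floor.
print(f"{max_penalty_eur(2_000_000_000):,.0f} EUR")  # -> 140,000,000 EUR
```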
Several technology experts interviewed by CSO described the document as “a great start” for regulating AI. They also noted that given the swift advancement of artificial intelligence, it’s positive that the document avoids too much technical detail. Dr. Kris Shrishak, a public interest technologist and an ICCL Enforce Senior Fellow, remarked, “It’s a legal text, and it should provide a certain level of guidance in terms of requirements, but shouldn’t go too much into the technical details, because we know problems start when lawyers start reading technical documents.”
The bill categorizes AI systems into unacceptable risk (which includes uses of AI that are prohibited), high-risk (which includes critical infrastructure), and limited and minimal risk. Joseph Thacker, a security researcher at AppOmni, commented, “AI is right on the cusp of the smartest human; we’re going to have to answer really tough questions about what we want to enable people to use AI for. So, I love the fact that there’s an unacceptable kind of labelling to start with.”
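As a rough illustration of how a compliance team might operationalize these tiers, the sketch below maps example use cases to the categories named in the bill. The tier descriptions and every example mapping here are assumptions for illustration, not the Act's official classifications.

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names come from the article; the descriptions are illustrative
    # assumptions, not legal definitions from the Act.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (e.g., critical infrastructure)"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical triage table a compliance team might maintain.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI safety component in a power grid": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```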
The Act further emphasizes the need for robust cybersecurity measures for high-risk systems, encouraging sophisticated security features that protect against attempts to alter the systems or compromise their security properties.
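The Act does not prescribe specific controls, but one simple example of the kind of tamper-resistance measure it gestures at is integrity verification of deployed model artifacts. The sketch below is a hypothetical illustration of that idea, not a requirement drawn from the law.

```python
import hashlib
from pathlib import Path

# Hypothetical illustration: detect unauthorized alteration of a deployed
# model artifact by comparing its hash against a known-good value recorded
# at release time. This is one possible control, not an AI Act mandate.

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True if the artifact matches its recorded release hash."""
    return sha256_of(path) == expected_sha256

# Example usage (the path and hash are placeholders):
# ok = verify_artifact(Path("model.onnx"), "ab12...ef")
```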
As the EU moves forward with regulating AI and setting a global example, the world will undoubtedly be watching closely.