Proposed EU Artificial Intelligence Act Sets Global Standards


1 July 2024

The European Union is poised to enact landmark legislation in the form of the Artificial Intelligence Act, set to establish one of the most rigorous regulatory frameworks for AI to date. This watershed regulation aims to govern AI applications across providers, deployers, importers, and distributors of AI systems, and carries wide-reaching implications beyond European borders. This regulatory environment could well forge the path for AI governance on a global scale, much as the General Data Protection Regulation (GDPR) has become a touchstone for privacy laws worldwide.

Under the proposed AI Act, an “AI system” is defined as a machine-based system designed to operate with varying levels of autonomy, which may continue to adapt after deployment and which infers from the inputs it receives how to generate outputs such as predictions, decisions, content, or recommendations that can influence physical or virtual environments. This definition, inspired by the Organisation for Economic Co-operation and Development’s (OECD) interpretation, encapsulates a comprehensive view of AI systems, including those operating within the metaverse.

At the AI Act’s core is a meticulous, risk-based categorization of AI systems. This classification stratifies AI applications by their potential risk levels, applying the most stringent regulations to those deemed ‘high-risk’, which might include, for instance, AI video or text generators deployed in sensitive fields. The European Commission estimates that 5-15% of AI systems would fall under the high-risk umbrella.

The prohibited tier within this structure lists AI practices that present unacceptable risk and bars them from deployment outright. These prohibitions encompass practices such as using deceptive or subliminal techniques to distort individual behavior, targeting vulnerable groups for manipulative purposes, and implementing certain types of social scoring and real-time biometric identification in public spaces, with a handful of narrowly delineated exceptions.

Furthermore, specific attributes elevate an AI system to the high-risk category. These include AI applications in critical domains like education, employment, healthcare, credit services, biometric identification, and AI systems integral to the administration of justice and democratic processes. Nevertheless, not all AI systems that exhibit an element of risk will be deemed high-risk; a calibrated assessment weighing the probability and severity of potential harm will determine their categorization.
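
To make this tiered logic concrete, here is a minimal sketch in Python of how the Act’s ordering of checks could be modelled: prohibitions first, then high-risk domains, then transparency-only cases. The RiskTier enum, the domain lists, and the classify_system helper are illustrative assumptions for exposition, not terminology or criteria taken from the regulation, and the sketch deliberately omits the calibrated probability-and-severity assessment described above.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """Illustrative mapping of the AI Act's risk-based tiers (simplified)."""
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH_RISK = "high risk - strict obligations apply"
    LIMITED_RISK = "limited risk - transparency obligations"
    MINIMAL_RISK = "minimal risk - no specific obligations"

# Hypothetical shortlists for exposition only; the Act's annexes are far more detailed.
PROHIBITED_PRACTICES = {"subliminal manipulation", "social scoring", "real-time public biometric ID"}
HIGH_RISK_DOMAINS = {"education", "employment", "healthcare", "credit", "biometric identification", "justice"}

@dataclass
class AISystem:
    name: str
    practice: str             # what the system does
    domain: str                # where it is deployed
    interacts_with_users: bool

def classify_system(system: AISystem) -> RiskTier:
    """Assign an illustrative risk tier: prohibitions first, then high-risk domains, then transparency-only cases."""
    if system.practice in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    if system.interacts_with_users:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(classify_system(AISystem("CV screener", "ranking applicants", "employment", True)))
# RiskTier.HIGH_RISK
```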

For high-risk AI systems, the obligations are substantial. They range from rigorous technical documentation prior to market introduction to registration in an EU database, and they entail compliance with EU safety legislation and, in some cases, third-party conformity assessments. Such measures ensure that risk evaluations are thorough and transparent, meeting stringent EU standards.

AI systems that are neither high-risk nor prohibited are not off the hook. Even those systems, which may include AI image generators and similar tools, must adhere to transparency obligations. They are required to make evident to users that they are engaging with an AI, building trust and ensuring user awareness.

This act is particularly relevant considering the swift evolution of AI technologies. Despite its protracted legislative passage, the AI Act must address the fast-paced innovation seen in the field, such as the widespread use of generative AI systems like ChatGPT. The rise of general-purpose AI (GPAI) systems has prompted fresh deliberations on how to regulate these versatile foundation models that fuel a wide range of downstream applications.

The penalties for non-compliance with the AI Act are weighty, exceeding even those under the GDPR and reaching up to 7% of annual global turnover for the most serious violations. Such hefty fines underline the EU’s commitment to enforce this regulation vigorously, which will undoubtedly influence the operational strategies of AI businesses globally.
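
As a rough illustration of what that 7% ceiling means in practice, the snippet below applies it to a hypothetical company’s turnover. The max_fine_eur helper and the figures are invented for exposition, and the Act also foresees fixed euro maximums and lower rates for lesser infringements.

```python
def max_fine_eur(annual_global_turnover_eur: float, ceiling_rate: float = 0.07) -> float:
    """Upper bound of a turnover-based fine: the cited 7% ceiling applied to annual global turnover."""
    return annual_global_turnover_eur * ceiling_rate

# Hypothetical company with EUR 10 billion in annual global turnover.
print(f"EUR {max_fine_eur(10_000_000_000):,.0f}")  # EUR 700,000,000 at the 7% ceiling
```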

In the dynamic terrain of artificial intelligence, the EU Artificial Intelligence Act is set to become a cornerstone of EU digital policy, offering a blueprint for international AI governance. It presents a clear message to the AI community: the future of AI lies in responsible, ethical development and implementation, with broad implications for consumer protection and societal trust in technology. As our dependency on AI deepens, the birth of the AI Act could herald a global shift towards more conscientious AI deployment, reverberating through the corridors of AI research and commercialization. For those who follow the pulse of the latest AI advancements, the AI Act marks a new chapter in how we manage the burgeoning capabilities of these transformative technologies.