Recent AI Regulations: US, China, Europe & Beyond


02 July, 2024

Understanding the unfolding landscape of artificial intelligence (AI) regulation is crucial as governments around the world adapt to the rapid pace of AI advancements. This article offers insight into recent regulatory activity concerning AI in the United States and Europe, reflecting the measures taken to harness the potential of AI while addressing its risks and challenges.

In the United States, a landmark directive was signed by President Joe Biden on October 30, 2023: the Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order champions a holistic approach to AI development and deployment, balancing the need for innovation against concerns for safety, security, and ethics. Federal agencies have been given a series of mandates, including the implementation of safety and security standards intended to ensure that AI tools are safe and that the public’s privacy and civil rights are vigorously protected.

Significantly, this includes a requirement that developers of AI systems posing a potential risk to national security or public welfare report and disclose safety test results to the government before launch. The Executive Order sets out eight core principles to guide AI development, fostering an environment for ethical use compliant with federal statutes. Critical infrastructure sectors, overseen by the Department of Homeland Security, will be subject to stringent safety measures benchmarked against standards from the National Institute of Standards and Technology.

Furthermore, coordination between the Department of Justice and federal civil rights offices has been mandated to tackle AI-related violations. Another aspect the Executive Order tackles is discrimination exacerbated by AI, providing guidance to a range of sectors from housing to federal contractors, to combat AI-induced biases. In healthcare, the Department of Health and Human Services has been tasked with monitoring AI safety, while the Department of Labor will focus on workplace-related AI implications.

One highlight of the Executive Order is its emphasis on combating bias in the sale of AI-driven financial products and on considering AI’s role in combating unwanted communication such as robocalls. Moreover, planned guidance for authenticating and labeling AI-generated content is intended to tackle deception and fraud.

Innovative strides in AI, including the emergence of tools such as AI text generators and AI image generators, have transformed the digital landscape. The Order aims to harness this innovation responsibly, boosting AI competitiveness by backing education, training, and research, and by easing visa processes for foreign AI talent.

Turning to Congressional and federal agency activity, more than 30 hearings on AI matters have taken place since January 2023, showing a clear prioritization of AI regulation. Although it remains uncertain what final legislation will ensue, the Securities and Exchange Commission (SEC) has proposed rules specifically scrutinizing AI usage by broker-dealers and investment advisers. These rules, which address bias, conflicts of interest, financial fraud, privacy, and intellectual property concerns, would require measures to counteract AI-driven bias and to ensure that AI capabilities are transparently described in financial disclosures.

At the state level, California has mirrored the federal stance with its own executive order fostering AI innovation and responsible development. Legislatures in roughly a dozen states are considering AI governance bills, while Illinois has held a unique position since 2019 with a law specifically governing AI use in recruitment. The Illinois Artificial Intelligence Video Interview Act sets out how employers using AI to analyze candidates’ video interviews must handle notice, consent, and data destruction.

On the East Coast, New York City began enforcing a law in 2023 that bars the use of automated decision tools in certain hiring processes unless conditions such as bias audits and candidate notice are met. Such local regulations underscore the trend toward increased scrutiny of AI tools and their applications across diverse sectors.

Europe is similarly proactive in AI regulation: the European Commission has proposed the Artificial Intelligence Act, which would create a legal framework for AI governance across EU member states. The proposed Act aims to strike a balanced approach, building an ecosystem of trust through transparency and accountability for AI systems while supporting the growth of the AI market.

As AI continues to evolve, so too will regulatory frameworks on both sides of the Atlantic. Keeping abreast of the latest AI news and tools is more than a matter of staying informed—it is about understanding the legal context within which these technologies operate and the implications for businesses, consumers, and society at large.