White House’s New AI Regulatory Order: Safe or Outdated?


4 July 2024

The White House’s latest executive order, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” is set to introduce a fresh wave of national AI regulation, with a focus on security and accountability across the industry. But is it up to the task? The order marks the U.S. government’s first attempt to establish a comprehensive regulatory framework for AI that applies to both federal agencies and private enterprises. Although it outlines a wide range of goals for the AI ecosystem and builds on previous AI-related directives, it faces several hurdles: a lack of clear accountability and specific timelines, and potentially excessive reporting requirements.

Rather than establishing a few guidelines for the AI industry to follow, the executive order clings to an antiquated regulatory approach, suggesting that the government alone can shape the future of AI. As we’ve learned from previous technological advancements, such an approach is unlikely to keep pace with the rapid developments driven by private industry. Here’s an in-depth look at its potential impact and efficacy.

The executive order proposes new safety and security standards for AI, most notably requiring developers of the most powerful foundation models to share their safety test results with the federal government. However, significant ambiguity remains about the reporting obligations of the many companies and developers who fine-tune these regulated large models for specific use cases.

There is no question that AI needs regulation: it is an immensely powerful technology, and that power demands safeguards. While the executive order correctly targets these standards at the largest model developers, the reporting requirements should follow the tiered structure seen in other regulated industries, in which the largest infrastructure providers, whose systems touch every American, bear the brunt of the regulatory responsibility.
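To make the tiered idea concrete, here is a minimal Python sketch. The 10^26-FLOP trigger mirrors the order’s actual reporting threshold for dual-use foundation models, but the middle tier, its cutoff, and the function itself are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of a tiered reporting rule keyed to training compute.
# The 1e26-FLOP trigger mirrors the executive order's reporting threshold;
# the lower tier and its cutoff are invented purely for illustration.

def reporting_tier(training_flops: float) -> str:
    """Map a model's training compute to a (hypothetical) reporting tier."""
    if training_flops >= 1e26:   # EO-style trigger: full safety-test reporting
        return "full safety-test reporting"
    if training_flops >= 1e24:   # invented middle tier: lighter-weight summaries
        return "summary disclosure"
    return "exempt"              # small models and startups: no federal reports


for flops in (5e22, 3e24, 2e26):
    print(f"{flops:.0e} FLOPs -> {reporting_tier(flops)}")
```

A tiered rule like this is how sectors such as banking already scale obligations to institution size; the open question in the order is where the lower cutoffs should sit.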

In contrast, U.S. regulators should take a more lenient approach with startups to preserve the country’s leading position in innovation. While it’s encouraging to see detailed provisions in certain areas – such as the Department of Commerce’s development of guidelines for content authentication and watermarking to clearly label AI-generated content – many security objectives remain vague.
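The Commerce guidelines themselves are policy, not code, but a toy example helps show what the simplest form of content labeling looks like in practice. The sketch below (Python, using the Pillow library) stamps a provenance note into a PNG’s text metadata; the ai_provenance field name is a made-up convention, not part of any standard.

```python
# Minimal sketch of metadata-based content labeling (hypothetical convention,
# not the Commerce guidelines). Requires Pillow: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a model's generated output.
img = Image.new("RGB", (256, 256), color="gray")

# Attach a provenance label as a PNG text chunk. The key name is invented.
meta = PngInfo()
meta.add_text("ai_provenance", "generated=true; model=example-model; date=2024-07-04")
img.save("labeled_output.png", pnginfo=meta)

# A downstream consumer can read the label back.
with Image.open("labeled_output.png") as f:
    print(f.text.get("ai_provenance"))
```

Metadata like this is trivially strippable, which is exactly why the order also points toward watermarks embedded in the content itself, a signal in the pixels that survives re-encoding. That harder technique is what the guidelines are meant to standardize.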

The executive order’s focus on AI safety and security is particularly relevant to generative tools such as AI image and video generators. These tools have immense potential but also pose significant risks if misused, so a clear regulatory framework is vital to ensure they are used responsibly.

Furthermore, the executive order’s emphasis on transparency and accountability is crucial for AI text generators. These increasingly sophisticated tools can spread misinformation at scale if left unregulated, so clear guidelines and stringent reporting requirements are necessary to ensure they are used ethically and responsibly.

In conclusion, while the White House’s new executive order on AI regulation is a step in the right direction, it also raises several questions about its effectiveness and feasibility. As the AI industry continues to evolve at a rapid pace, it is crucial for the regulatory framework to keep up. This will ensure that AI technology can continue to advance while also protecting the public from potential risks.