FTC Attorney Discusses AI and Consumer Protection Laws


3 July 2024

In a webinar held on November 7, 2023, Michael Atleson, a senior attorney with the Federal Trade Commission (FTC), shared his expertise on the intersection of artificial intelligence (AI) and consumer protection law. Atleson, who has spent nearly two decades at the FTC and currently works in its Division of Advertising Practices, joined Holland & Knight's Anthony DiResta and Benjamin Genn for a discussion centered on the legal frameworks guiding AI regulation, recent enforcement activity, and the responsibilities of AI product providers.

AI technology is rapidly integrating into diverse sectors of the U.S. economy, including healthcare, retail, and manufacturing. As AI tools grow more sophisticated, from image and video generators to text generators, the regulatory landscape must evolve to address emerging challenges. The webinar offered a wealth of insights for companies seeking to bolster their compliance efforts and navigate the potential risks associated with AI in consumer-facing applications.

The FTC does not adhere to a single definition of AI, recognizing that the technology's scope extends far beyond basic chatbots to advanced algorithms and systems that perform complex computations and predictive analyses. The FTC's primary concern is how AI impacts consumer protection, and the agency urges companies to critically assess AI's influence, its value, and any adverse effects it may have on consumers.

Under Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices,” the FTC has ample authority to regulate AI. This means that any marketing or application of AI must comply with well-established principles to prevent deception or harm.

The FTC has identified two prevalent issues related to AI deception. The first involves companies overstating their AI capabilities—a problem known as “the Fake AI Problem.” The second pertains to the use of AI to directly deceive consumers, such as through deepfakes or phishing schemes crafted with cloned voices and language models.

A series of enforcement actions by the FTC illustrates its commitment to combating deceptive AI practices. These actions include lawsuits against companies making unfounded claims about AI-driven online stores, “smart” health devices, automated investment services, and facial recognition technology.

The Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued on October 30, 2023, further underscores the government’s focus on AI risks. While not directing the FTC explicitly, the order implies that the agency should continue to apply its existing authority to regulate AI responsibly and protect individual rights against bias or discrimination.

The webinar underscored the importance of ongoing vigilance and proactive measures by companies utilizing AI. It is essential for businesses to monitor their AI products continuously and use disclaimers responsibly. They must also be aware of their potential liability and the remedies available to consumers and the government in the event of an enforcement action.

For our readers at ai-headlines.co who want to stay abreast of the latest AI news and tools, the takeaway is clear: regulatory compliance is not just a legal requirement but a strategic business necessity. As AI technologies evolve, so too must the strategies that ensure they are used ethically and in accordance with consumer protection standards.

In conclusion, as we navigate a future increasingly shaped by artificial intelligence, the FTC’s watchful eye serves as a reminder that innovation must go hand in hand with integrity and responsibility. Companies leveraging AI must not only pursue technological advancement but also commit to upholding consumer trust and safety. The insights from this webinar serve as a guide for businesses to align their AI applications with regulatory expectations and societal values.