Speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative AI (“AI”) policy consistent with Chair Khan’s focus on the perceived harms that large technology companies pose to consumers: a plan to regulate AI swiftly, aggressively, and proactively.
The agency began its remarks on AI by observing that its purported policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately resulted in the collection of too much power and too much data by a handful of large technology companies.
In the FTC’s opinion, self-regulation would be equally, if not more, ineffective in the AI space because AI has the potential to dramatically amplify “Big Tech’s” alleged harms to privacy and competition and to accelerate the spread of misinformation. The FTC will, accordingly, look to existing laws and legislation to confront the perceived ills created by AI. Importantly, its overall approach to AI will be guided by three principles:
First, the FTC believes that markets must be fair, open, and competitive so that “fledgling upstarts … have the chance to compete with the incumbents who became dominant in the era of Web 2.0.” On that front, the FTC claimed it was making significant progress, pointing to its recent success in blocking the largest semiconductor merger in US history.
Second, the FTC will continue to use its tools, especially enforcement, to challenge unfair and deceptive practices involving AI. As one example of such enforcement action, it pointed to the temporary restraining order it recently secured against AutomatorAI, which allegedly promised that its AI-boosted tools would ensure profitability for consumers who agreed to invest in online stores.
Third, the FTC will continue to push for market-wide rules to address harms from AI, including the “Impersonator Rule,” which would enable the agency to seek civil penalties and other monetary relief against entities that allegedly use voice-cloning and other technologies to defraud the public.
According to the FTC, “[w]hat should be clear from these principles is” that the agency is not going to sit back and “let consumers fend for themselves, or let those looking to monetize AI write their own rules.” The agency made explicit that it “will use every tool” available to protect the public and, to the extent those tools fall short, it “will be upfront with Congress” about what it needs to accomplish its consumer protection objectives.
In short, the FTC has announced that it stands ready to regulate in this space using not only its existing enforcement tools but also additional powers requested from Congress. Those powers could include proactive measures, such as defining permissible uses of AI up front rather than waiting to bring enforcement actions after a particular use of the technology has allegedly harmed consumers.
Notably, less than one week after the FTC’s speech, some members of Congress proposed the “Algorithmic Accountability Act of 2023,” which would give the FTC more resources and authority to regulate companies’ use of AI. It remains to be seen how this Act, or any other future proposed regulation focused on AI, will fare in a split Congress.