At the end of June 2023, the FTC proposed a new rule targeting deceptive reviews in the marketplace. The proposed rule would prohibit advertisers and marketers from engaging in illicit review and endorsement practices such as posting fake reviews, buying positive reviews, and suppressing negative ones.

In explaining its motivation for the proposal, the Commission noted the widespread emergence of generative artificial intelligence (AI), which it fears could lead to a drastic increase in the prevalence of fake reviews. The FTC hopes to “level the playing field for honest companies” with this new rule.

In the latest piece to come out of the FTC’s new focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of AI and algorithms. The guidance follows up on a 2018 hearing where the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision making, from what video to watch next, to what ad to click on, to what product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services.

But the promise of new technologies also comes with risk. Specifically, the FTC cites an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.