Artificial Intelligence (AI)

Speaking at the National Advertising Division’s annual conference on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative AI (“AI”) policy consistent with Chairwoman Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively.

The agency began its remarks on AI by observing that its decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately concentrated too much power and too much data in the hands of a few large technology companies.

At the end of June 2023, the FTC proposed a new rule targeted at deceptive reviews in the marketplace. The proposed rule would prohibit advertisers and marketers from employing illicit review and endorsement practices such as using fake reviews, buying positive reviews, and suppressing negative reviews.

In explaining its motivation for the proposal, the Commission noted the widespread emergence of generative artificial intelligence (AI), which it fears could lead to a drastic increase in the prevalence of fake reviews. The FTC hopes to “level the playing field for honest companies” with this new rule.

ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the artificial intelligence (“AI”) platform to conduct any legal business.

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York issued a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.

In an unsigned per curiam opinion yesterday in Gonzalez v. Google, the U.S. Supreme Court vacated the Ninth Circuit’s judgment, which had held that plaintiffs’ complaint was barred by Section 230 of the Communications Decency Act, and remanded the case. But the Court’s opinion entirely skirted a highly anticipated issue: whether Section 230 does, in fact, shelter as much activity as courts have held to date.

Last month, the FTC issued a report to Congress advising governments and companies to exercise “great caution” in using artificial intelligence (“AI”) to combat harmful online content. The report responds to Congress’s request to look into whether and how AI may be used to identify, remove, or otherwise address a wide variety of specified “online harms.” Among the “harms” covered by Congress’s request were impersonation scams, fake reviews and accounts, deepfakes, illegal drug sales, revenge pornography, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.

In the latest piece to come out of the FTC’s new focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of artificial intelligence (“AI”) and algorithms. The guidance follows up on a 2018 hearing at which the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision-making, from which video to watch next to which ad to click to which product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services. But the promise of new technologies also comes with risk. Specifically, the FTC cites an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.

With the help of artificial intelligence (“AI”), your smartphone can act as a GPS that adjusts its recommended route in real time based on emerging traffic patterns. By adapting to changes in traffic, the smartphone can redirect a driver to a faster route. Now imagine these adaptive capabilities in the legal field. With the potential of AI growing rapidly, the use of AI technology, though still in its infancy, is gaining traction with law firms, helping them deliver better outcomes for clients, faster. According to a recent survey by management consulting firm Altman Weil, law firms are beginning to explore AI’s potential: while only 7.5% of surveyed firms currently make use of AI, nearly a third have begun to explore opportunities to use AI as a legal tool. The capabilities of AI, whether currently available or on the horizon, suggest that both lawyers and clients can benefit from the legal field’s embrace of AI. This is particularly true with respect to the use of AI in the many phases of contract work: contract creation, contract analysis, and contract due diligence.