Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative artificial intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially to assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

In a recent public comment addressed to the United States Copyright Office, the Federal Trade Commission seemingly expanded on remarks it made at the National Advertising Division back in September, signaling that it will aggressively and proactively challenge alleged unfair practices involving artificial intelligence, even if that means stretching the meaning of “unfair” to expand its jurisdiction over such matters.

Generative AI (“GAI”) has taken the world by storm since OpenAI launched ChatGPT in November 2022. But the buzz and excitement surrounding GAI have come with difficult legal questions that threaten the new technology. Several lawsuits, some of which we have discussed in detail, have been filed against companies whose GAI products have been trained on copyrighted materials. Until now, we have only been able to speculate about how courts will handle GAI as the industry has held its collective breath.

While speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative artificial intelligence (“AI”) policy consistent with Chairwoman Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively.

The agency began its remarks on AI by observing that its purported policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately concentrated too much power, and too much data, in the hands of a few large technology companies.

ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the artificial intelligence (“AI”) platform to conduct any legal business.

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York issued a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.

In an unsigned per curiam opinion yesterday in Gonzalez v. Google, the U.S. Supreme Court vacated the Ninth Circuit’s judgment, which had held that plaintiffs’ complaint was barred by Section 230 of the Communications Decency Act, and remanded the case. But the Court’s opinion entirely skirted a highly anticipated issue: whether Section 230 does, in fact, shelter as much activity as courts have held to date.

In the latest piece to come out of the FTC’s new focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of artificial intelligence (“AI”) and algorithms. The guidance follows up on a 2018 hearing at which the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision-making, from what video to watch next to what ad to click on to what product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services. But the promise of new technologies also comes with risk. Specifically, the FTC cites an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.

With the help of artificial intelligence (“AI”), your smartphone can act as a GPS that adjusts its recommended route in real time based on emerging traffic patterns. By adapting to changes in traffic, the smartphone can redirect a driver to a faster route. Now imagine these adaptive capabilities in the legal field. With the potential of AI growing rapidly, the use of AI technology, though still in its infancy, is gaining traction with law firms, helping to provide better outcomes for clients, faster. According to a recent survey by management consulting firm Altman Weil, law firms are beginning to explore AI’s potential. While only 7.5% of surveyed law firms are currently making use of AI, nearly a third of the surveyed firms have begun to explore opportunities to use AI as a legal tool. The capabilities of AI, whether currently available or on the horizon, suggest that both lawyers and clients can benefit from the legal field’s embrace of AI. This is particularly true with respect to the use of AI in the many phases of contract work: contract creation, contract analysis, and contract due diligence.