Artificial Intelligence (AI)

Picture this: You’ve just been retained by a new client who has been named as a defendant in complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative artificial intelligence (“AI”). You plan to use AI to save costs on the initial research, and perhaps even to assist with brief writing. It seems you’ve found a practical solution to all your client’s problems. Not so fast.

Pricing algorithms are nothing new. They are, generally speaking, computer programs intended to help sellers optimize prices in real time, or close to it. These programs can use data on demand, costs, or even competitors’ prices to “learn” to set the prices of products. What is new is the proliferation of these programs across industries and the emergence of artificial intelligence-driven pricing algorithms. 

It has been eight months since the Supreme Court’s landmark copyright fair use decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith. Much has been written on the subject, including in this forum, but in many ways it was a narrow decision. The Court held that the commercial licensing of Orange Prince, a work in Andy Warhol’s Prince series based on a photograph by Lynn Goldsmith, was not protected under the first factor of the four-factor fair use test under 17 U.S.C. § 107. The Court’s discussion of the transformative use test emphasized the similarity of the uses to which the works were put (depicting Prince on magazine covers), rather than the characteristics of the works themselves. This approach, the Court said, prevents judges from acting as art critics who must determine the aesthetic differences between, or meanings behind, artistic works.

With great promise comes great scrutiny. As artificial intelligence (“AI”) has become part of industries’ and individuals’ daily repertoire, it has also drawn the attention of antitrust regulators. The DOJ, in its so-called “Project Gretzky,” is gearing up with data scientists and others to be a tech-savvy version

Making good on its promise to “use every tool” in its arsenal to regulate artificial intelligence (“AI”), the Federal Trade Commission (“FTC”) unanimously approved a resolution on November 21, 2023 authorizing the use of compulsory process in non-public investigations involving AI-related products and services.

While speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative artificial intelligence (“AI”) policy consistent with Chairwoman Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively.

The agency began its remarks on AI by observing that its purported policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately resulted in the collection of too much power and too much data by a handful of large technology companies. 

At the end of June 2023, the FTC proposed a new rule targeting deceptive reviews in the marketplace. The proposed rule would prohibit advertisers and marketers from employing illicit review and endorsement practices such as posting fake reviews, buying positive reviews, and suppressing negative reviews.

In explaining its motivation for the proposal, the Commission noted the widespread emergence of generative artificial intelligence (AI), which it fears could lead to a drastic increase in the prevalence of fake reviews. The FTC hopes to “level the playing field for honest companies” with this new rule.

ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the artificial intelligence (“AI”) platform to conduct any legal business.

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York issued a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.