Courts issued two seemingly conflicting rulings on whether AI-generated materials are protected. Heppner (S.D.N.Y.) found that documents created with a consumer version of Claude AI were not privileged or protected work product because the tool exposed data to a third-party provider. Warner (E.D. Mich.) reached the opposite result.

A recent decision in United States v. Heppner appears to be the first federal ruling to directly address attorney-client privilege and work-product issues arising from a non-lawyer’s use of an insecure, consumer-grade AI tool for legal research. The court held that materials generated through Anthropic’s consumer version of Claude were not protected by the attorney-client privilege or the work-product doctrine.

The Department of Justice and eight state attorneys general filed a civil antitrust lawsuit in August against RealPage Inc., a Texas-based company that makes property management software, and several landlords that use it. The case adds to a growing number of antitrust actions targeting algorithmic pricing tools, and is another example of federal regulators taking a tough stance on new AI technologies.

On September 10, minutes after the first presidential debate between Donald Trump and Vice President Kamala Harris, an Instagram post set the political world abuzz: Taylor Swift endorsed Harris in the race. The announcement from one of the world’s biggest stars was newsworthy in itself, but IP lawyers likely took note of why she chose to post. Swift explained that she felt compelled to share her views after an AI-generated image of her appearing to endorse Donald Trump was posted online. The image was shared by Trump himself on his social media platform Truth Social, and was circulated widely by his supporters. Swift wrote that the image “really conjured up my fears around AI, and the dangers of spreading misinformation.”

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Pricing algorithms are nothing new. They are, generally speaking, computer programs intended to help sellers optimize prices in real time, or close to it. These programs can use data on demand, costs, or even competitors’ prices to “learn” to set the prices of products. What is new is the proliferation of these programs across industries and the emergence of artificial intelligence-driven pricing algorithms. 

It has been eight months since the Supreme Court’s landmark copyright fair use decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith. Much has been written on the subject, including in this forum, but in many ways it was a narrow decision. The Court held that the commercial licensing of Orange Prince, a work in Andy Warhol’s Prince series based on a photograph by Lynn Goldsmith, was not protected under the first factor of the four-factor fair use test of 17 U.S.C. § 107. Its discussion of the transformative use test emphasized the similarity of the uses to which the works were put (depicting Prince on magazine covers), rather than the characteristics of the works themselves. This, the Court said, prevents judges from acting as art critics to determine the aesthetic differences between, or meanings behind, artistic works.

With great promise comes great scrutiny. As artificial intelligence (“AI”) has become part of industries’ and individuals’ daily repertoire, it has also drawn the attention of antitrust regulators. The DOJ, in its so-called “Project Gretzky,” is gearing up with data scientists and other experts to become a tech-savvy version of itself.