On September 10, 2024, minutes after the first presidential debate between Donald Trump and Vice President Kamala Harris, an Instagram post set the political world abuzz: Taylor Swift had endorsed Harris in the race. The announcement from one of the world’s biggest stars was newsworthy in itself, but IP lawyers likely took note of why she chose to post. Swift explained that she felt compelled to share her views after an AI-generated image of her appearing to endorse Donald Trump was posted online. The image was shared by Trump himself on his social media platform Truth Social and circulated widely by his supporters. Swift wrote that the image “really conjured up my fears around AI, and the dangers of spreading misinformation.”

Picture this: You’ve just been retained by a new client who has been named as a defendant in complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to all your client’s problems. Not so fast.

In July 2019, the UK Supreme Court (UKSC) handed down judgment in Cape v Dring, a case concerning the extent and operation of the principle of open justice. The question before the UKSC was how much of the written material placed before the court in a civil action should be accessible to those who are not parties to the proceedings, and how it should be made accessible to them.

Pricing algorithms are nothing new. They are, generally speaking, computer programs intended to help sellers optimize prices in real time, or close to it. These programs can use data on demand, costs, or even competitors’ prices to “learn” to set the prices of products. What is new is the proliferation of these programs across industries and the emergence of artificial intelligence-driven pricing algorithms. 
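To make the concept concrete, here is a minimal sketch of the kind of rule-based pricing logic the description above contemplates. It is purely hypothetical: the function name, inputs, and numbers are invented for illustration, and real AI-driven pricing tools typically learn pricing policies from data rather than follow fixed rules like these.

```python
# Hypothetical illustration only: a toy pricing rule, not any vendor's actual algorithm.

def suggest_price(unit_cost, recent_demand, competitor_prices, margin_floor=0.10):
    """Suggest a price using cost, demand, and competitor data."""
    # Start from the average of observed competitor prices.
    benchmark = sum(competitor_prices) / len(competitor_prices)

    # Nudge the price with demand: demand above a baseline of 1.0 raises it,
    # demand below the baseline lowers it.
    demand_adjustment = 1.0 + 0.05 * (recent_demand - 1.0)
    candidate = benchmark * demand_adjustment

    # Never price below cost plus a minimum margin.
    floor = unit_cost * (1.0 + margin_floor)
    return round(max(candidate, floor), 2)


if __name__ == "__main__":
    # Example with invented figures: cost of $8.00, demand 20% above baseline,
    # and three observed competitor prices.
    print(suggest_price(unit_cost=8.00,
                        recent_demand=1.2,
                        competitor_prices=[11.99, 12.49, 12.25]))
```

Even this simple example shows why regulators are interested: when many sellers feed one another’s prices into similar rules, prices can move in lockstep without any explicit agreement.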

In a recent public comment addressed to the United States Copyright Office, the Federal Trade Commission seemingly expanded upon remarks made at the National Advertising Division back in September: the agency will aggressively and proactively challenge alleged unfair practices involving artificial intelligence, even if that means stretching the meaning of “unfair” to broaden its jurisdiction over such matters.

Generative AI (“GAI”) has taken the world by storm since OpenAI launched ChatGPT in November 2022. But the buzz and excitement surrounding GAI have come with difficult legal questions that threaten the new technology. Several lawsuits, some of which we have discussed in detail, have been filed against companies whose GAI products were trained on copyrighted materials. Until now, we have only been able to speculate about how courts will handle GAI as the industry has held its collective breath.

ChatGPT may be smart enough to pass the bar exam, but lawyers should exercise caution before relying on the Artificial Intelligence (“AI”) platform to conduct any legal business.

On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York released a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes and citations.

Antitrust and tech are in the legal news almost daily, often multiple times a day. Here are a few recent developments with notable implications that may have flown under the radar: 1) renewed focus on gig economy issues; 2) potential enforcement efforts regarding director overlaps; and 3) challenges to most-favored-nation (“MFN”) pricing.