On September 10, minutes after the first presidential debate between Donald Trump and Vice President Kamala Harris, an Instagram post set the political world abuzz: Taylor Swift endorsed Harris in the race. The announcement from one of the world’s biggest stars was newsworthy in itself, but IP lawyers likely took note of why she chose to post. Swift explained that she felt compelled to share her views after an AI-generated image of her appearing to endorse Trump was posted online. The image was shared by Trump himself on his social media platform Truth Social and was circulated widely by his supporters. Swift wrote that the image “really conjured up my fears around AI, and the dangers of spreading misinformation.”

In a recent public comment addressed to the United States Copyright Office, the Federal Trade Commission seemingly expanded upon remarks it made at the National Advertising Division’s annual conference in September, signaling that it will aggressively and proactively challenge alleged unfair practices involving artificial intelligence, even if that means stretching the meaning of “unfair” to expand its jurisdiction over such matters.

On January 18, 2022, Microsoft announced its acquisition of Activision, one of the world’s most valuable gaming companies. In April 2023, the United Kingdom’s Competition and Markets Authority (CMA) blocked the deal on concerns that it could “alter the future of the fast-growing cloud gaming market, leading to reduced innovation and less choice for UK gamers over the years,” a decision Microsoft appealed to the Competition Appeal Tribunal. A few months later, in July 2023, as previously reported in Minding Your Business, the FTC’s challenge to the deal in the United States fell short, leaving the UK as the only competition authority standing in the way of closing.

While speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative artificial intelligence (“AI”) policy consistent with Chair Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively.

The agency began its remarks on AI by observing that its purported policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately resulted in a handful of large technology companies amassing too much power and too much data.

In the latest of a string of losses for antitrust enforcers, the Northern District of California resoundingly denied the FTC’s bid to enjoin the Microsoft-Activision merger, allowing the deal to proceed a week before its merger termination date. In a case that tested the bounds of antitrust law in vertical integration deals, Judge Jacqueline Scott Corley found that “the record evidence points to more consumer access” rather than signs of reduced competition. Federal Trade Commission v. Microsoft Corporation, et al.

Competition among Amazon’s third-party merchants is notoriously fierce. The online retail giant often finds itself playing referee, banning what it considers unfair business practices (such as offering free products in exchange for perfect reviews, or targeting competitors with so-called “review bombing”). Last month, in the latest round of this push and pull, Amazon blew the whistle on several merchants who it claims crossed a red line and may now face litigation in federal court.

Antitrust and tech are in the legal news almost daily, and often multiple times a day. Here are a few recent developments with notable implications that may have flown under the radar: 1) renewed focus on gig economy issues; 2) potential enforcement efforts regarding director overlaps; and 3) challenges to most-favored-nation (“MFN”) pricing.

Last month, the FTC issued a report to Congress advising governments and companies to exercise “great caution” in using artificial intelligence (“AI”) to combat harmful online content. The report responds to Congress’s request that the agency examine whether and how AI may be used to identify, remove, or otherwise address a wide variety of specified “online harms.” Among the harms covered by Congress’s request were impersonation scams, fake reviews and accounts, deepfakes, illegal drug sales, revenge pornography, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.