Privacy & Data Security

While speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative artificial intelligence (“AI”) policy consistent with Chair Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively. 

The agency began its remarks on AI by observing that its policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately resulted in a handful of large technology companies amassing too much power and too much data. 

Class action lawsuits accusing companies of violating the Illinois Biometric Information Privacy Act (“BIPA”) have more than doubled following a February 2023 ruling by the Illinois Supreme Court, which held, based on a plain reading of the statute, that a separate claim accrues each time a person’s biometric identifier is scanned in violation of the statute.  

A parent corporation is typically not held liable for the acts of its subsidiary. As such, disregarding the corporate form (i.e., piercing the corporate veil) and holding the parent liable is an extraordinary remedy. If a parent company exercises enough control over a subsidiary, however, courts may hold the parent liable. Because there is often some degree of overlap between a parent and its subsidiary, courts are frequently faced with the question of just how much control is enough to justify imposing liability on a parent for its subsidiary’s actions.

Increasing oversight of tech companies, particularly in the realm of consumer privacy, has been a rare example of bipartisan agreement. Yet despite data privacy being a growing concern for consumers, there has been relatively little federal policymaking. Some states have stepped in to fill this void and have enacted policies that could have large impacts on how businesses operate. The rapid rate at which these laws are being enacted (eleven to date) indicates states are taking an increasingly protective view of consumers’ data privacy. Businesses need to be prepared to comply with these new mandates or risk costly enforcement measures.

Last month, TikTok sued Montana’s attorney general—alleging the state’s recent TikTok ban is unconstitutional and is preempted by federal law.

On May 17, Montana Governor Greg Gianforte signed a first-of-its-kind law banning TikTok from operating in the state, in order “[t]o protect Montanans’ personal, private, and sensitive data and information from intelligence gathering by the Chinese Communist Party.”

Last month, the FTC issued a report to Congress advising governments and companies to exercise “great caution” in using artificial intelligence (“AI”) to combat harmful online content.  The report responds to Congress’s request to look into whether and how AI may be used to identify, remove, or otherwise address a wide variety of specified “online harms.”  Among the “harms” covered by Congress’s request were impersonation scams, fake reviews and accounts, deepfakes, illegal drug sales, revenge pornography, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.

Website owners who seek to bind visitors to the terms of an arbitration agreement must make those terms “reasonably conspicuous” under the law, and website visitors must “manifest unambiguous assent” to those terms.  That means the smallest of details – the font and color of the text, the color of the page, the location and appearance of the hyperlinks and the “I agree” button – carry tremendous legal significance.  Those seemingly small design details could make the difference between a dispute being resolved in arbitration or in litigation.