With great promise comes great scrutiny. As artificial intelligence (“AI”) has become part of industries’ and individuals’ daily repertoire, it has also drawn the attention of antitrust regulators. The DOJ, in its so-called “Project Gretzky,” is gearing up with data scientists and others to be a tech-savvy version
Privacy & Data Security
New FTC Resolution Confirms Commission’s Intent to Regulate AI and Paves the Way for Future Investigations and Enforcement Actions
Making good on its promise to “use every tool” in its arsenal to regulate artificial intelligence (“AI”), the Federal Trade Commission (“FTC”) unanimously approved a resolution on November 21, 2023 authorizing the use of compulsory process in non-public investigations involving AI-related products and services.
FTC Appears to Expand AI Regulatory Role into Copyright Matters
In a recent public comment to the United States Copyright Office, the Federal Trade Commission appeared to expand on remarks it made at the National Advertising Division conference in September, signaling that it will aggressively and proactively challenge allegedly unfair practices involving artificial intelligence, even if that means stretching the meaning of “unfair” to extend its jurisdiction over such matters.
The Return of the Video Privacy Protection Act (VPPA)
This year has seen a tremendous spike in the number of cases alleging violations of the Video Privacy Protection Act (“VPPA”), 18 U.S.C. § 2710, a statute enacted in 1988 in response to the Washington City Paper’s publication of a list of films that then-Supreme Court nominee Robert Bork had rented from a video store. The statute was originally intended to “allow[] consumers to maintain control over personal information divulged and generated in exchange for receiving services from video tape service providers.”
FTC Continues to Stake Out Role as Key AI Regulator
While speaking at the annual conference of the National Advertising Division on September 19, 2023, the Federal Trade Commission (“FTC”) announced a generative artificial intelligence (“AI”) policy consistent with Chairwoman Khan’s focus on the perceived harms to consumers from large technology companies, fully embracing a plan to regulate AI swiftly, aggressively, and proactively.
The agency began its remarks on AI by observing that its purported policy decision to allow technology companies to self-regulate during the “Web 2.0” era was a mistake. Self-regulation, according to the FTC, was a failure that ultimately allowed a handful of large technology companies to amass too much power and too much data.
Big Tech, Biometrics and BIPA: Meta’s Recent $68.5M Class Action Settlement
In July, Instagram’s parent company Meta Platforms, Inc. (“Meta”) agreed to a $68.5 million class-action biometric privacy settlement in connection with the company’s alleged violation of Illinois’ Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (“BIPA”).
Recent Partial Dismissal of Illinois Biometric Privacy Suit May Add Some Weight to the Scale on the Side of Defendants
Class action lawsuits accusing companies of violating the Illinois Biometric Information Privacy Act (“BIPA”) have more than doubled following a February 2023 ruling by the Illinois Supreme Court, which found, based on a plain reading of the statute, that a separate claim accrues each time a person’s biometric identifier is scanned in violation of the statute.
Two Sides of a Different Coin: Separating Businesses and Subsidiaries for Liability Protection
A parent corporation is typically not held liable for the acts of a subsidiary. As such, disregarding the corporate form (i.e., by piercing the corporate veil) and holding the parent liable is an extraordinary remedy. If a parent company exercises enough control over a subsidiary, however, courts may hold the parent liable. Because there is often some degree of overlap between a parent and its subsidiary, courts are often faced with the question of just how much control is enough to justify imposing liability on a parent for its subsidiary’s actions.