The Federal Trade Commission’s Bureau of Competition and the Department of Justice Antitrust Division released a joint statement reiterating document preservation obligations for companies and individuals that are the subject of government investigations and litigation, with an emphasis on messaging platforms, such as Slack and Google Chat, that automatically delete communications. Both agencies announced updated language addressing these platforms in their standard preservation letters, specifications for “second requests” used in pre-merger review under the Hart-Scott-Rodino Act, voluntary access letters, and grand jury subpoenas. The agencies emphasized that companies’ obligation to preserve information on such platforms is nothing new, explaining that the clarification is intended to prevent companies from feigning ignorance if communications are not preserved after preservation obligations are triggered.

This year has seen a tremendous spike in the number of cases alleging violations of the Video Privacy Protection Act (“VPPA”), 18 U.S.C. § 2710, a statute enacted in 1988 in response to the Washington City Paper’s publication of a list of films that then-Supreme Court nominee Robert Bork had rented from a video store. The statute was originally intended to “allow[] consumers to maintain control over personal information divulged and generated in exchange for receiving services from video tape service providers.”

Increasing oversight of tech companies, particularly in the realm of consumer privacy, has been a rare example of bipartisan agreement. Yet despite data privacy being a growing concern for consumers, there has been relatively little federal policymaking. Some states have stepped in to fill this void and have enacted policies that could have large impacts on how businesses operate. The rapid rate at which these laws are being enacted (eleven to date) indicates that states are taking an increasingly protective view of consumers’ data privacy. Businesses need to be prepared to comply with these new mandates or risk costly enforcement measures.

Last month, TikTok sued Montana’s attorney general, alleging that the state’s recent TikTok ban is unconstitutional and preempted by federal law.

On May 17, Montana Governor Greg Gianforte signed a first-of-its-kind law banning TikTok from operating in the state, in order “[t]o protect Montanans’ personal, private, and sensitive data and information from intelligence gathering by the Chinese Communist Party.”

In the latest piece to come out of the FTC’s new focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of artificial intelligence (“AI”) and algorithms. The guidance follows up on a 2018 hearing where the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision making, from what video to watch next to what ad to click on to what product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services. But the promise of new technologies also comes with risk. Specifically, the FTC cites an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.

What would companies need to do to comply with the law?

The Stop Hacks and Improve Electronic Data Security (SHIELD) Act imposes requirements in two areas: cybersecurity and data breach notification. The cybersecurity provisions of the proposed SHIELD Act would require companies to adopt “reasonable safeguards to protect the security, confidentiality and integrity” of private information. The Act provides examples of appropriate administrative, technical, and physical safeguards, such as designating an employee to oversee the company’s data security program; identifying “reasonably foreseeable” risks to data security; selecting vendors that can maintain appropriate safeguards; detecting, preventing, and responding to attacks and system failures; and preventing unauthorized access to private information.

In November 2017, New York Attorney General Eric Schneiderman introduced the Stop Hacks and Improve Electronic Data Security (SHIELD) Act (the “Act”) in the state Legislature. Companies big and small that collect information from New York residents should take note, as the Act could mean increased compliance costs, as well as potential enforcement actions for those that do not meet its requirements. This blog post provides a breakdown of the essential components of the SHIELD Act and information on how to comply with this potential new law.

On August 15, 2017, the Ninth Circuit delivered the latest episode in the Robins v. Spokeo saga, reaffirming on remand from the Supreme Court that plaintiff Robins had alleged an injury in fact sufficient for Article III standing to bring claims under the Fair Credit Reporting Act (FCRA).

Robins had brought a putative class action against Spokeo, which operates a “people search engine” that compiles consumer data into online reports of individuals’ personal information. Robins alleged that Spokeo had willfully violated the FCRA’s procedural requirements, including the requirement that consumer reporting agencies “follow reasonable procedures to assure maximum possible accuracy of the information” in consumer reports, because Spokeo’s report on Robins allegedly listed the wrong age, marital status, wealth, education level, and profession, and included a photo of a different person. According to Robins, the inaccuracies in the report about him harmed his employment prospects and caused him emotional distress.