In the latest piece to come out of the FTC's focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of artificial intelligence ("AI") and algorithms. The guidance follows up on a 2018 hearing in which the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision making, from what video to watch next, to what ad to click on, to what product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services. But the promise of new technologies also comes with risk. The FTC cites, for example, an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.

While the technologies may be new, the FTC’s guidance serves as a reminder of some of the golden rules of consumer protection: be transparent, be fair, and be secure.

Be Transparent and Explain Your (or the AI’s) Decisions

Transparency issues can arise when automated tools interact with consumers, when sensitive data is collected from consumers, or when automated decisions are made that affect consumers. The FTC recognizes that many factors feed into algorithmic decision-making, but states that companies using AI to make decisions about consumers must know what data is used and how it is used, and must be able to explain those decisions to consumers.

  • Consumer Interactions. While AI often operates in the background of consumer activity, the FTC cautions that companies must be vigilant that the tool does not mislead consumers, particularly when an AI platform interacts with consumers directly. If an AI chatbot misleads consumers, for example, the FTC may deem it a deceptive practice under the FTC Act. Transparency also extends to the terms of the deal: inform consumers, for example, if the terms of an offer can be altered based on AI tools.
  • Data Collection. According to the guidance, data should not be collected secretly. The FTC recommends that companies looking for data to feed their algorithms clearly disclose what data is collected, how it is collected, and how it will be used.
  • Automated Decisions. The Fair Credit Reporting Act ("FCRA") employs a relatively broad definition of "consumer reporting agency," and companies using AI to make automated decisions about eligibility for credit, employment, insurance, housing, or other similar benefits may be required to provide "adverse action" notices after certain automated decisions. More generally, the FTC advises companies calculating consumer risk scores with algorithms to disclose the key factors that affect the score (a rough sketch of one way to do this follows this list).
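
As a rough illustration of that last point, the sketch below shows one way a company might surface the key factors behind an algorithm-generated risk score so they can be disclosed to the consumer. The model, feature names, and approval cutoff are hypothetical stand-ins, and this is not a method prescribed by the FTC or the FCRA; it is simply a minimal example of making an automated decision explainable, using a linear model whose per-feature contributions are easy to read off.

```python
# Hypothetical sketch: ranking the factors that drove an automated credit decision
# so the key reasons can be disclosed to the consumer (e.g., in an adverse action notice).
# The model, feature names, and cutoff below are illustrative assumptions, not FTC requirements.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["payment_history", "credit_utilization", "account_age_years", "recent_inquiries"]

# Toy training data standing in for a real, validated credit model.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(FEATURES)))
y_train = (X_train[:, 0] - X_train[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def explain_decision(applicant, cutoff=0.5, top_n=2):
    """Return the approval decision and the features that most hurt the applicant's score."""
    x = np.asarray(applicant, dtype=float)
    score = model.predict_proba(x.reshape(1, -1))[0, 1]
    # For a linear model, coefficient * feature value approximates each feature's contribution.
    contributions = model.coef_[0] * x
    # The most negative contributions are the "key factors" pushing the score down.
    worst = np.argsort(contributions)[:top_n]
    reasons = [FEATURES[i] for i in worst]
    return {"approved": bool(score >= cutoff), "score": round(float(score), 3), "key_factors": reasons}

print(explain_decision([-1.2, 1.5, 0.3, 0.8]))
```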

Be Fair and Empirically Sound

Ensuring algorithms and AI tools are behaving fairly requires care and attention. An algorithm designed with the best intentions could, for example, “result in discrimination against a protected class.” Additionally, the FCRA imposes accuracy obligations on consumer reporting agencies as well as “furnishers” that provide data about their customers to others for use in automated decision-making. The FTC provides a few guidelines that should be worked into any protocols and procedures for maintaining AI tools:

  • Rigorously test algorithms, including inputs and outputs, both before and while in use.
  • Particularly in areas covered by the FCRA, provide consumers with the information used to make important decisions and allow consumers to dispute the accuracy of the information.
  • Establish written policies and procedures to ensure the accuracy and integrity of the data used in AI models.
  • To help avoid biased results, ask "four key questions": (1) how representative is the data set; (2) does the model account for biases; (3) how accurate are the predictions; and (4) does reliance on data raise ethical or fairness concerns? (A minimal sketch of two such checks follows this list.)
  • Derive data from an empirical comparison of representative sample groups developed using accepted statistical principles and methodology, and periodically revalidate and adjust the data as needed.
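
The sketch below illustrates the kind of pre-deployment and in-use testing these guidelines contemplate: checking how representative the data set is against a reference population, and comparing outcome rates across groups. The group labels, data, and the 80% rule-of-thumb threshold mentioned in the comments are illustrative assumptions, not standards drawn from the FTC guidance itself.

```python
# Hypothetical sketch of two basic fairness checks on an algorithm's data and outputs:
# (1) how representative is the data set relative to a reference population, and
# (2) do the model's approval rates differ materially across groups?
# Group labels, data, and thresholds are illustrative assumptions.
from collections import Counter

def representativeness(sample_groups, population_shares):
    """Compare each group's share of the sample to its share of the reference population."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {g: {"sample_share": counts.get(g, 0) / total, "population_share": p}
            for g, p in population_shares.items()}

def selection_rate_ratios(groups, outcomes):
    """Approval rate per group, and each group's ratio to the highest-rate group."""
    by_group = {}
    for g, approved in zip(groups, outcomes):
        by_group.setdefault(g, []).append(approved)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "ratio_to_best": round(r / best, 3)} for g, r in rates.items()}

# Toy example: group membership and model approval decisions for a batch of applicants.
groups   = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]
approved = [1,   1,   0,   1,   0,   0,   0,   1,   1,   1]

print(representativeness(groups, {"A": 0.6, "B": 0.4}))
for g, stats in selection_rate_ratios(groups, approved).items():
    # A ratio well below ~0.8 is a common (non-binding) flag that results warrant closer review.
    print(g, stats)
```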

Be Secure

AI tools run on data. Accordingly, companies developing AI tools to sell to others should build in data security testing and protocols to help avoid unauthorized access and use. In addition to technical security, companies should consider protocols to vet users and/or keep their technology on their own servers to maintain control over how the tools are used and secured.

David Munkittrick

David Munkittrick is a litigator and trial attorney. His practice focuses on complex and large-scale antitrust, copyright and entertainment matters in all forms of dispute resolution and litigation, from complaint through appeal.

David has been involved in some of the most significant antitrust matters over the past few years, obtaining favorable results for Fortune 500 companies and other clients in bench and jury trials involving price discrimination and group boycott claims. His practice includes the full range of antitrust matters and disputes: from class actions to competitor suits and merger review. David advises antitrust clients in a range of industries, including entertainment, automotive, pharmaceutical, healthcare, agriculture, hospitality, financial services, and sports.

David also advises music, publishing, medical device, sports, and technology clients in navigating complex copyright issues and compliance. He has represented some of the most recognized names in entertainment, including Sony Music Entertainment, Lady Gaga, U2, Madonna, Daft Punk, RCA Records, BMG Music Publishing, Live Nation, the National Academy of Recording Arts and Sciences, Universal Music Group and Warner/Chappell.

David maintains an active pro bono practice, supporting clients in the arts and in immigration proceedings. He has been repeatedly recognized as Empire State Counsel by the New York State Bar Association for his pro bono service, and is a recipient of Proskauer’s Golden Gavel Award for excellence in pro bono work.

When not practicing law, David spends time practicing piano. He recently made his Carnegie Hall debut at Weill Recital Hall, performing with a piano trio and accompanying Schubert lieder.

David frequently speaks on antitrust and copyright issues, and has authored or co-authored numerous articles and treatise chapters, including:

  • Causation and Remoteness, the U.S. Perspective, in GCR Private Litigation Guide.
  • Data Breach Litigation Involving Consumer Class Actions, in Proskauer on Privacy: A Guide to Privacy and Data Security Law in the Information Age.
  • Location Privacy: Technology and the Law, in Proskauer on Privacy: A Guide to Privacy and Data Security Law in the Information Age.
  • FTC Enforcement of Privacy, in Proskauer on Privacy: A Guide to Privacy and Data Security Law in the Information Age.
  • The Role of Experts in Music Copyright Cases, Intellectual Property Magazine.
  • Nonprofit Education: A Historical Basis for Tax Exemption in the Arts, 21 NYSBA Ent., Arts, & Sports L.J. 67.
  • A Founding Father of Modern Music Education: The Thought and Philosophy of Karl W. Gehrkens, Journal of Historical Research in Music Education.
Colin Kass

Colin Kass is a partner in the Litigation Department and Co-Chair of Proskauer’s Antitrust Group. As a seasoned trial lawyer, Colin has handled many of the nation’s most complex and innovative antitrust cases over the past 20 years.

His practice involves a wide range of industries, including financial services, healthcare, sports, media, pharmaceuticals, and automotive markets, and spans the full range of antitrust and unfair competition-related litigation, including class actions, competitor suits, dealer/distributor termination suits, price discrimination cases, criminal price-fixing probes, and merger injunctions.

Colin also has extensive experience interfacing with the Federal Trade Commission and Department of Justice, obtaining clearance for competitively-sensitive transactions and handling anticompetitive practices investigations.

As a trusted advisor, Colin also counsels clients on their sales, distribution, and marketing practices, strategic ventures, and general antitrust compliance.