
FTC Issues New Guidance, Warning That Bias In Artificial Intelligence Could Create Potential Liability For Enforcement Actions

Date: 23 April 2021
Litigation and Dispute Resolution, Policy and Regulatory Alert
By: Allen R. Bachman, Derek W. Kelley, Elisa J. D'Amico, Danielle Castellanos

On Monday, 19 April 2021, a Federal Trade Commission (FTC) blog post warned companies to ensure that their artificial intelligence (AI) does not reflect racial or gender bias, and it indicated that failure to do so may result in “deception, discrimination—and an FTC law enforcement action.”1 Although this is not the first time the FTC has addressed the issue of bias in AI, the agency has now made it clear that if businesses fail to hold themselves accountable for all actions—both human and AI—they should “be ready for the FTC to do it for [them].”2

In recent years, the number of companies investing in AI has skyrocketed. Automating certain business operations has proven to increase efficiencies, improve client service, and speed production. But when AI is based upon a set of biased data—undervaluing or omitting races, genders, or other protected classes—the resulting system is inevitably biased, despite its purely neutral programming. Recent social justice movements, from #MeToo to #BLM and beyond, have drawn attention to biases that may arise in life and business alike. Now, regulators are paying attention too.

Bias in algorithms can cause applications to underperform, leading to missed opportunities and incorrect predictions that drag down the business as a whole. This bias has the potential to damage a business’s reputation, harm the company’s bottom line, and alienate large swaths of its consumers. In recent years, some businesses have chosen to address existing practices that disadvantage certain groups of people and favor others. Earlier this week, the FTC’s guidance made it clear that recognizing and correcting bias in AI is not only a moral responsibility but also a legal requirement.

What to Expect in the Way of FTC AI-related Enforcement Actions

Section 5 of the FTC Act

Section 5(a) of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.”3 Deceptive practices are those likely to mislead a reasonable consumer; unfair practices are those likely to cause consumers more harm than good.4

Because the mandate is broad, so are the possible bases for enforcement actions. In its recent statement, the FTC explicitly categorized the sale or use of racially biased algorithms as an unfair or deceptive practice. The FTC also warns against “digital redlining”—the use of protected characteristics like race, religion, or gender in making determinations about which consumers a business will target with online advertisements. In 2019, the Department of Housing and Urban Development charged Facebook with violations of the Fair Housing Act for targeting its advertisements too narrowly along protected class lines.5 Now, the FTC has threatened to utilize the FTC Act to protect consumers from this type of prohibited targeting as well.

Fair Credit Reporting Act

The Fair Credit Reporting Act (FCRA) mandates how consumer information can be collected and used for credit reporting.6 The FCRA’s congressional statement of purpose emphasizes the need to ensure that consumer reporting agencies act with “fairness, impartiality, and a respect for the consumer’s right to privacy,” because unfair credit reporting methods “undermine the public confidence essential to the continued functioning of the banking system.”7 When a business uses an algorithm to determine eligibility for credit, housing, or other benefits, the algorithm must be based on impartial data if it is to produce an impartial and fair result.

Equal Credit Opportunity Act

The Equal Credit Opportunity Act (ECOA) prohibits credit discrimination on the basis of protected classes or on the basis of whether an individual receives public assistance.8 When consumers request credit—whether to buy a home, start a business, or follow their dreams—creditors are barred from considering factors like race and gender in deciding whether the consumer will be approved.

Further, the ECOA bars not only intentional discrimination but also unintentional discrimination that results in a disparate impact. For example, if an algorithm denies credit to consumers based on their zip code (a seemingly neutral factor), and the populations of those zip codes are predominantly made up of racial minorities, the FTC could conceivably challenge the practice as a violation of the ECOA.9
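To make the disparate-impact concept concrete, here is a minimal sketch (ours, not anything the FTC prescribes) of the “four-fifths rule,” a common screening heuristic: compare each group’s approval rate to the most-favored group’s rate and flag any ratio below 0.8. The pandas DataFrame, column names, and figures are all hypothetical.

```python
# A minimal, hypothetical sketch of a disparate-impact screen using the
# "four-fifths rule" heuristic; it is not an FTC-prescribed test.
# Assumes a pandas DataFrame with an `approved` flag (0/1) and a `group`
# column labeling each applicant's demographic group.
import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame) -> pd.Series:
    """Approval rate of each group divided by the most-favored group's rate."""
    rates = df.groupby("group")["approved"].mean()
    return rates / rates.max()

# Made-up data: approvals skew against group "B".
applicants = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 80 + [0] * 20 + [1] * 50 + [0] * 50,
})
ratios = disparate_impact_ratios(applicants)
flagged = ratios[ratios < 0.8]  # 0.8 is the four-fifths threshold
print(ratios)                   # A: 1.0, B: 0.625
print("Groups below threshold:", list(flagged.index))  # ['B']
```

The same check can be run on live decisions as well as on training data, which is one concrete way for a business to hold itself accountable before a regulator does.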

Avoid Becoming a Target of the Next FTC Enforcement Action

Remember that You Are Accountable

Continue to use and benefit from AI, but remember that you are accountable not only for the actions of the individuals you employ but also for the technology you use. If you create your own AI systems, make sure you regularly test and modify them to eliminate bias. If you use AI created by someone else, it is critical to understand the technology and the data that system uses. When in doubt, consider hiring counsel to more closely examine your existing AI systems and assess your risk of noncompliance with the FTC’s mandates.

Think of an algorithm as a brain that learns and gets smarter over time. Regularly test for bias by removing or altering certain factors (such as protected characteristics) in the decision-making process and examining how this affects outcomes, as sketched below. In their book AI for Lawyers, Noah Waisberg and Dr. Alexander Hudek, co-founders of Kira Systems, remind us that although biases may be embedded in AI systems, we remain in control of the data used by those systems and, therefore, of our own destiny.10 Proactively remove traits that you do not want the algorithm to consider, and add data to fill the gaps; if a program excludes one gender from its data set, add more data to ensure the results will be unbiased.
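As a concrete illustration of that kind of testing, the sketch below fits a simple model with and without a potentially proxying feature (a hypothetical zip_code column) and compares predicted approval rates across groups. The scikit-learn model, data, and column names are our own assumptions, not a prescribed methodology.

```python
# A minimal sketch of the ablation test described above: score a model with
# and without a sensitive (or proxying) feature and compare the outcomes.
# All data, column names, and the model choice are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "income":   rng.normal(50_000, 15_000, n),
    "zip_code": rng.integers(0, 10, n),   # seemingly neutral proxy feature
    "approved": rng.integers(0, 2, n),    # historical decisions
})

def approval_rates_by_zip(features: list[str]) -> pd.Series:
    """Fit on the given features; report predicted approval rate per zip code."""
    model = LogisticRegression(max_iter=1_000).fit(df[features], df["approved"])
    preds = pd.Series(model.predict(df[features]))
    return preds.groupby(df["zip_code"]).mean()

with_zip = approval_rates_by_zip(["income", "zip_code"])
without_zip = approval_rates_by_zip(["income"])

# Large per-zip-code shifts suggest the model was leaning on the proxy.
print((with_zip - without_zip).abs().sort_values(ascending=False))
```

The comparison works with any model; what matters is re-examining outcomes each time a factor is removed, because the effect of a proxy often shows up only in the results.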

Always Strive to Under-Promise and Over-Deliver

It is perhaps a cliché, but “under-promise and over-deliver” is a very useful mantra. The FTC explicitly cautions against promising consumers fair or unbiased results when the underlying data is actually biased, whether purposely or inadvertently.11 For example, do not promise that your AI will make “100 percent unbiased hiring decisions” if the algorithm relies only on data from a single race or gender. The FTC warns that “[t]he result may be deception, discrimination—and an FTC law enforcement action.”12 While the end goal is to correct the data, it is important to acknowledge its imperfect nature accurately at the outset.
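One simple precaution before making any such claim, sketched below, is to audit how each group is represented in the training data. The column name, threshold, and figures here are hypothetical.

```python
# A minimal, hypothetical sketch: check group representation in training
# data before claiming "unbiased" results. The `gender` column, the 20%
# threshold, and the skewed sample are assumptions for illustration only.
import pandas as pd

training_data = pd.DataFrame({
    "gender": ["F"] * 5 + ["M"] * 95,  # a heavily skewed sample
})
shares = training_data["gender"].value_counts(normalize=True)
print(shares)  # M: 0.95, F: 0.05

underrepresented = shares[shares < 0.20]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```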

Transparency Is Key

Finally, be transparent. Be authentic and accurate. Along with being transparent about your efforts to improve results, be transparent about the data you rely on. Without transparency, biases in AI are likely to go undetected. Moreover, the FTC praises transparency because it allows others to detect and correct biases that a business may not spot on its own.13

While a recent U.S. Supreme Court ruling casts doubt on the extent of the FTC’s authority to enforce this guidance moving forward, the guidance is still not to be taken lightly.14 At a minimum, the FTC can still seek injunctions, and businesses should be wary of the negative press and high costs that come with defending against an enforcement action.

New technology is never perfect, but we can collectively improve it over time. Rather than shying away from AI because of these imperfections, businesses should embrace the FTC’s guidance as an opportunity to reflect on their own technology and find ways to make it better: for the business’s efficient operation, for its reputation, and, above all, for its consumers.

1 Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI, FTC.GOV (Apr. 19, 2021, 9:43 AM), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

2 Id.

3 15 U.S.C. § 45(a)(1).

4 Fed. Trade Comm’n, A Brief Overview of the Federal Trade Commission’s Investigative, Law Enforcement, and Rulemaking Authority, FTC.GOV (Oct. 2019), https://www.ftc.gov/about-ftc/what-we-do/enforcement-authority.

5 Facebook, Inc., HUD ALJ No. 01-18-0323-8 (Mar. 28, 2019).

6 15 U.S.C. §§ 1681 et seq.

7 Id. § 1681(a) (emphasis added).

8 Id. §§ 1691 et seq.

9 Andrew Smith, Using Artificial Intelligence and Algorithms, FTC.GOV (Apr. 8, 2020, 9:58 AM), https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.

10 NOAH A. WAISBERG & DR. ALEXANDER HUDEK, AI FOR LAWYERS: HOW ARTIFICIAL INTELLIGENCE IS ADDING VALUE, AMPLIFYING EXPERTISE, AND TRANSFORMING CAREERS 88–91 (2021).

11 Jillson, supra note 1.

12 Id.

13 Id.

14 AMG Cap. Mgmt., LLC v. Fed. Trade Comm’n, No. 19-508, slip op. at 1 (Apr. 22, 2021) (holding that section 13(b) of the FTC Act does not authorize the FTC to seek monetary relief such as restitution or disgorgement).


This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.
