A Look at AI Legislation in the US

In my last blog post, I discussed AI legislation in Europe. In this post, I will discuss the state of play for AI legislation in the US, first at the city and state level and then at the federal level.

Generated by DALL-E: “Artificial Intelligence in the United States”

AI Legislation at the City and State Level

AI regulation is starting to catch on in a few states (e.g., California and Illinois) and in New York City, and a proposal has also emerged in the District of Columbia (DC). So far, these regulations have been narrowly focused, centering on automated employment decision-making (e.g., recruiting, hiring, promotion, and work scheduling) and on ensuring that AI does not present a barrier to equal employment opportunity or undermine diversity in the workforce. This is partly a reaction to the increased use of AI-based employee assessments and algorithmically analyzed video interviews during the Covid-19 pandemic.[1] For example, software providers such as Gecko, HireVue, and Mya offer AI-based systems that analyze video footage of interviews to “evaluate and make hiring recommendations based on applicants’ facial expressions, body language, word choice, and tone of voice.”[2]

Illinois passed the Artificial Intelligence Video Interview Act in 2019, and it went into effect in January 2020. The law regulates the use of AI in video interviews for employment decision-making purposes. Specifically, it requires businesses to tell applicants that AI will be used to analyze their video interviews, explain how the AI works and what characteristics it takes into consideration, obtain consent from applicants before the interview, and refrain from sharing the videos. Businesses must also destroy the interviews upon an applicant’s request, and employers who rely solely on AI analysis of video interviews must annually report to the state the race and ethnicity of applicants hired and not hired. There are no fines for non-compliance; the implicit assumption is that reputational damage will deter businesses from violating the law.[3]

New York City passed Local Law 144 (Int. 1894-A) at the end of 2021, requiring a “bias audit” to be conducted on automated employment decision-making tools starting in January 2023. It also requires that candidates or employees be notified when those tools are used to assess or evaluate them for hiring or promotion purposes.[4] The law applies to companies operating in New York City that use AI tools in employment decisions. Non-compliance can incur penalties of $500 to $1,500 per day, and failure to disclose to candidates or employees constitutes a separate violation subject to the same penalties.
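
To make the idea of a “bias audit” more concrete, here is a minimal Python sketch of the kind of selection-rate and impact-ratio calculation an auditor might perform. The group labels and sample data are illustrative assumptions, and the four-fifths (80%) threshold comes from longstanding EEOC practice rather than from the text of the NYC law itself.

```python
# Minimal sketch of a selection-rate / impact-ratio check of the kind a
# "bias audit" might include. Group labels, data, and the four-fifths
# threshold are illustrative; Local Law 144 does not prescribe this
# exact methodology.
from collections import Counter

# (demographic_category, was_selected) pairs -- hypothetical audit data
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = Counter(cat for cat, _ in applicants)
selected = Counter(cat for cat, ok in applicants if ok)

# Selection rate per category: selected / total applicants in that category
rates = {cat: selected[cat] / totals[cat] for cat in totals}
best = max(rates.values())

# Impact ratio: each category's rate relative to the highest rate.
# A ratio under 0.8 is the classic "four-fifths rule" red flag.
for cat, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```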

In California, Assembly Member Ash Kalra introduced Assembly Bill 1651 (AB 1651) — the California Workplace Technology Accountability Act — in January 2022. The bill would limit electronic monitoring and automated decision systems (ADS) to specific times of day, activities, and locations, and it would give workers the right to know, review, and correct the data their employer holds about them. It would also require that employees be notified when ADS are used in employment-related tasks such as hiring, assessment, and promotion. Employers using ADS would have to have these systems evaluated through an algorithmic impact assessment (AIA) conducted by a third party. Non-compliance could result in fines between $2,500 and $20,000 per violation. As of mid-2022, the bill remains stuck in committee.[5]

AI Legislation at the US Federal Level

Existing US law can, to some extent, already regulate AI systems that are proven to discriminate. For example, Section 5 of the Federal Trade Commission (FTC) Act prohibits unfair or deceptive practices, which includes the sale or use of racially biased algorithms. The FTC also enforces the Fair Credit Reporting Act (FCRA) — along with state attorneys general and the Consumer Financial Protection Bureau (CFPB) — and in certain scenarios these entities could bring enforcement actions if a business’s AI system was found to discriminate in employment, housing, credit, or insurance. In addition, the CFPB enforces the Equal Credit Opportunity Act (ECOA), which would make it illegal for a business to utilize an AI system that discriminated on the basis of “race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”[6] Notably, the US Department of Justice (DoJ) reached a settlement with Meta in June 2022 under which Meta agreed to stop using advertising tools for housing that relied on biased algorithms. This was the DoJ’s first case challenging AI bias under the Fair Housing Act.[7]

That being said, US lawmakers realize that more can be done to regulate AI. Based on the belief that Americans need protection from businesses’ use of AI systems that can “exponentially amplify safety risks, unintentional errors, harmful bias, and dangerous design choices,” Senator Ron Wyden proposed the Algorithmic Accountability Act (AAA) of 2022 in February 2022, an “upgrade” to a similar legislative proposal made in 2019. The AAA requires companies to assess their AI systems for bias and effectiveness when those systems make critical decisions, and to submit those algorithmic impact assessments to the FTC. It also directs the FTC to create regulations providing “structured guidelines” for assessment and reporting, including a public repository of those assessments at the FTC, and it would add 75 staff to the FTC to enforce the law.[8]

The AAA applies to the use of AI for critical decisions concerning “education and vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, legal services,” or any other service that the FTC deems critical. Furthermore, the AAA only applies to “covered providers”: companies that either have gross revenue above $50 million and use AI systems to augment a critical decision, or have gross revenue above $5 million and use AI to fully make such decisions. These revenue thresholds imply that the main targets of the legislation are “Big Tech” firms and companies deploying the more sensitive, fully automated decision-making capabilities. Non-compliance could leave covered providers liable to civil action brought by the FTC and to damage claims from affected individuals. In addition, the public repository would give consumers who wish to contest the unfair use of AI easier access to information.[9]
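
To illustrate how the two revenue tiers interact, the Python sketch below encodes the coverage test as described above. The function and parameter names are hypothetical, and the actual bill contains additional criteria that this simplification omits.

```python
# Hypothetical sketch of the AAA "covered provider" revenue test as
# described above; the real bill contains additional criteria.
def is_covered_provider(gross_revenue: float,
                        ai_augments_critical_decision: bool,
                        ai_fully_makes_critical_decision: bool) -> bool:
    # Tier 1: revenue above $50M and AI merely augments a critical decision
    if ai_augments_critical_decision and gross_revenue > 50_000_000:
        return True
    # Tier 2: revenue above $5M and AI fully makes the critical decision
    if ai_fully_makes_critical_decision and gross_revenue > 5_000_000:
        return True
    return False

# A mid-size firm whose AI fully makes hiring decisions would be covered:
print(is_covered_provider(8_000_000, False, True))   # True
# The same firm would fall outside the law if AI only assists a human:
print(is_covered_provider(8_000_000, True, False))   # False
```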

Unlike the local legislation focused on automated employment decision-making, the AAA is unique in that it represents the first broad “horizontal” regulation of AI in the US (i.e., spanning many use cases, including product safety). By providing accountability and transparency to consumers and regulators where AI systems are in use, the AAA would help consumers make informed choices regarding the automation of critical decisions that impact them. As of mid-2022, the AAA was stuck in committee.

Finally, a comprehensive federal privacy bill — the American Data Privacy and Protection Act (ADPPA) — was proposed in June 2022 and incorporates many elements of the Algorithmic Accountability Act. The ADPPA defines an “algorithm” as a computational process leveraging “machine learning or artificial intelligence techniques” that makes or facilitates decisions that impact humans. It prohibits businesses from collecting, processing, or transferring personal data in a manner that “discriminates on the basis of race, color, religion, national origin, gender, sexual orientation, or disability.”[10]

In addition, the ADPPA requires “large data holders” — businesses with more than $250 million in gross annual revenue that collect the personal data of more than five million consumers — to assess their algorithms annually and submit impact assessments to the FTC. These assessments must describe the steps the business is taking to mitigate any potential harms from its algorithms, including harms to children, and must specifically address algorithmic harm related to “advertising for housing, education, employment, healthcare, insurance, or credit opportunities.”[11] The ADPPA also requires algorithmic evaluations at the design phase, including analysis of the data used to train and develop the algorithm, and recommends using independent auditors to help conduct the assessments and evaluations. Finally, the bill authorizes the FTC to publish guidelines, grants it rulemaking authority, and gives it enforcement powers.
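
As a rough sketch of the moving parts just described, the snippet below encodes the “large data holder” thresholds and a skeleton of the annual impact assessment such a business would submit to the FTC. Every name and field here is an illustrative assumption, not language from the bill.

```python
# Illustrative sketch of the ADPPA "large data holder" test and a skeleton
# impact assessment record; all names and fields are assumptions.
from dataclasses import dataclass

def is_large_data_holder(annual_revenue: float, consumers_with_data: int) -> bool:
    # Thresholds described above: over $250M in gross annual revenue AND
    # personal data collected on more than 5 million consumers.
    return annual_revenue > 250_000_000 and consumers_with_data > 5_000_000

@dataclass
class ImpactAssessment:
    algorithm_name: str
    decision_domains: list        # e.g., housing, employment, or credit ads
    training_data_reviewed: bool  # design-phase analysis of training data
    harms_identified: list
    mitigation_steps: list
    independent_auditor: str      # recommended by the bill, not mandated

if is_large_data_holder(annual_revenue=400_000_000, consumers_with_data=12_000_000):
    report = ImpactAssessment(
        algorithm_name="ad_targeting_v3",
        decision_domains=["housing ads", "employment ads"],
        training_data_reviewed=True,
        harms_identified=["potential demographic skew in ad delivery"],
        mitigation_steps=["rebalanced training data", "added fairness checks"],
        independent_auditor="Example Audit LLC",
    )
    print(report)  # in practice, submitted annually to the FTC
```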

But the ADPPA contains a “poison pill” in the form of preemption of state privacy legislation, which would in effect set the ceiling for privacy protection in the US. The ADPPA did pass a House committee, but over the preemption issue it is unlikely to reach the full House for a vote this session. As with most privacy legislation, we should expect innovation in regulating AI to happen first at the state level.


[1] Airlie Hilliard, Emre Kazim, et al., “Regulating the Robots: NYC Mandates Bias Audits for AI-Driven Employment Decisions,” SSRN, April 27, 2022, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4083189.

[2] Frost Brown Todd, “Illinois’ Artificial Intelligence Video Interview Act: What You Need to Know,” January 13, 2020, https://frostbrowntodd.com/illinois-artificial-intelligence-video-interview-act-what-you-need-to-know/.

[3] Holistic AI, “What You Need to Know About the Illinois Artificial Intelligence Video Interview Act,” June 6, 2022, https://holisticai.com/blog/2022/06/what-you-need-to-know-about-the-illinois-artificial-intelligence-video-interview-act/.

[4] The New York City Council, “Int. 1894-A: Automated employment decision tools,” https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9.

[5] California Legislative Information, “AB-1651 Worker rights: Workplace Technology Accountability Act (2021-2022),” April 18, 2022, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220AB1651. See also Airlie Hilliard, Emre Kazim, and Tom Kemp, “Overview and Commentary of the California Workplace Technology Accountability Act,” SSRN, June 13, 2022, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4135237.

[6] Elisa Jillson, “Aiming for truth, fairness, and equity in your company’s use of AI,” Federal Trade Commission Business Blog, April 19, 2021, https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[7] US Department of Justice, “Justice Department Secures Groundbreaking Settlement Agreement with Meta Platforms, Formerly Known as Facebook, to Resolve Allegations of Discriminatory Advertising,” June 21, 2022.

[8] Senator Ron Wyden, “Wyden, Booker, and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency and Accountability for Automated Decision Systems,” February 3, 2022, https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-algorithmic-accountability-act-of-2022-to-require-new-transparency-and-accountability-for-automated-decision-systems.

[9] Ibid. See also https://www.congress.gov/bill/117th-congress/house-bill/6580/text.

[10] US Senate Commerce Committee, “Text of American Data Privacy and Protection Act (DRAFT),” https://www.commerce.senate.gov/services/files/6CB3B500-3DB4-4FCC-BB15-9E6A52738B6C.

[11] US Senate Commerce Committee, “Text of American Data Privacy and Protection Act (DRAFT).”
