
Federal Agencies Issue Joint Statement on Potential Compliance Risks in Automated Systems

Sophia Garris

On April 25, 2023, four federal agencies, namely the Federal Trade Commission (FTC), the Civil Rights Division of the US Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the US Equal Employment Opportunity Commission (EEOC), issued a joint statement warning about the potential risks associated with the use of advanced technologies, including artificial intelligence (AI). The Agencies noted that these risks can include unfair or deceptive practices and unfair methods of competition. Businesses that rely on AI, algorithms, and other data tools to automate decision-making and to conduct business should guard against unintended discriminatory outcomes; noncompliance with federal law will not be excused merely because the technology a business employs to evaluate applications is too complicated or opaque to understand. Such businesses will likely face scrutiny and investigation.


Specifically, the CFPB has taken steps to protect consumers from:

  • Black box algorithms. In May 2022, the CFPB issued a statement reining in the rapid rollout of automated underwriting engines whose decisioning algorithms are too complex to identify the reason a loan application was denied. Read more about the CFPB's view of black box algorithms in Firstline's related blog here.

  • Algorithmic marketing and advertising. In August 2022, the CFPB issued an interpretive rule clarifying that digital marketers, when acting as service providers for purposes of the law, can be held liable by the CFPB or other law enforcers for committing unfair, deceptive, or abusive acts or practices ("UDAAP"), as well as other consumer financial protection violations. Read our related blog here.

  • Abusive use of AI technology. Earlier this month, the CFPB issued a policy statement explaining abusive conduct. While the statement addresses unlawful conduct in consumer financial markets generally, the prohibition would cover abusive uses of AI technologies, for instance, to obscure important features of a product or service or to leverage gaps in consumer understanding.

  • Digital redlining. The CFPB has prioritized digital redlining, including bias in algorithms and technologies marketed as AI. As part of this effort, the CFPB is working with federal partners to protect homebuyers and homeowners from algorithmic bias within home valuations and appraisals through rulemaking.

  • Repeat offenders’ use of AI technology. The CFPB proposed a registry to detect repeat offenders. The registry would require covered nonbanks to report certain agency and court orders connected to consumer financial products and services. The registry would allow the CFPB to track companies whose repeat offenses involved the use of automated systems.




Although there has not yet been further guidance on what regulated companies should do, the Agencies are signaling that they are looking for examples of AI and machine learning-related harms to consumers to pursue in enforcement actions. As they pursue innovation and automation in this industry, company leaders and decision makers should therefore pay close attention to their compliance officers and advisors regarding the compliance fundamentals that may affect these initiatives.



