Happy Tuesday!
As the Czar promised, I am sharing some insight on AI — no, no, not Allen Iverson… but how about them Sixers?? 🏀 Sorry, I couldn’t help myself.
With Artificial Intelligence (AI) increasingly invading parts of our lives, it is no surprise that federal agencies are looking closely at AI. So, how is the Consumer Financial Protection Bureau (CFPB or “the Bureau”) navigating Artificial Intelligence?
Singing to one tune and quickly dancing to another, that’s how.
But I get it, everything new is exciting and then it’s not…
Congress tasked the CFPB with ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation. Now, the Bureau must navigate a new world and monitor the advantages and disadvantages related to artificial intelligence (AI) and machine learning (ML).
AI – Full of Possibilities with Minimal Concerns?
A couple of weeks ago, federal agencies issued a joint statement pledging to uphold America’s commitment to the core principles of fairness, equality and justice as emerging automated systems – AI – have become increasingly common in our daily lives impacting civil rights, fair competition, consumer protection and equal opportunity. But this isn’t new to the government.
In fact, for years now, federal agencies have supported the implementation of AI, focusing on its potential benefits. In 2015, the Bureau released a Data Point titled “Credit Invisibles.” It noted that 26 million consumers did not have a credit record at the nationwide credit bureaus, and 19 million consumers had too little information to be used by a credit scoring model. AI brings the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques. But despite its potential, industry uncertainty about how AI fits into the existing regulatory framework slowed its adoption, especially for credit underwriting.
To reduce regulatory uncertainty around the use of AI/ML, the Bureau implemented various tools to promote innovation and facilitate compliance. In September 2019, the Bureau’s Office of Innovation launched three new policies: the Policy to Encourage Trial Disclosure Programs, the Compliance Assistance Sandbox Policy, and the revised No Action Letter Policy. The first two provided a legal safe harbor for companies, stimulating growth in the use of AI.
Six years ago, the CFPB noted that financial institutions were starting to deploy AI across a range of functions – from virtual assistants to compliance monitoring. The Bureau hoped that financial institutions and other stakeholders would think creatively about how to take advantage of AI’s potential benefits. On July 7, 2020, the CFPB issued an Innovation spotlight providing guidance on adverse action notices when using artificial intelligence (AI) and machine learning (ML) models. For example, how do complex AI models satisfy the adverse action notice requirements in the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA)? These notices serve important anti-discrimination, educational, and accuracy purposes. ECOA requires creditors to provide consumers with the main reasons for a denial of credit or other adverse action. FCRA imposes similar adverse action notice requirements.
The Bureau noted then that the existing regulatory framework had ‘built-in flexibility.’ While a creditor must provide the specific reasons for an adverse action, the creditor need not describe how or why a disclosed factor adversely affected an application. So, a creditor must tell the consumer the application was denied because of late payment history, but it does not need to explain how that late payment history affected the application.
AI – Full of Possibilities but Now We Are Concerned!
On March 29, 2021, the Federal Reserve Board, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, National Credit Union Administration, and the Office of the Comptroller of the Currency announced a Request for Information to gather input from financial institutions, trade associations, consumer groups, and other stakeholders on the growing use of AI by financial institutions.
Collaboration among multiple federal agencies – strike up the tempo…. rightfully so.
AI has brought an acceleration of automated decision-making across our daily lives. Generative AI, which can produce voices, images, and videos designed to simulate real-life human interaction, is raising the question of whether we are ready to deal with a wide range of potential harms – from consumer fraud to privacy to fair competition.
The CFPB along with other federal agencies wants to make it very clear to everyone:
There is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination.
CFPB Director Rohit Chopra stated that “Technology marketed as AI has spread to every corner of the economy and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability.” The Bureau is concerned that unchecked AI poses threats to fairness and to our civil rights in ways that are already being felt. Technology companies are amassing massive amounts of data and using it to make more and more decisions about our lives, including whether we get a loan or what advertisements we see.
You know the cruise special that just popped up on your phone after you were talking about needing a vacation….
So what is the CFPB doing about it?
Digital Redlining – The Bureau is proposing rules to make sure AI and automated valuation models have basic safeguards against discrimination, protecting homebuyers and homeowners from algorithmic bias in home valuations and appraisals.
Scrutinizing Algorithmic Advertising – Advertising and marketing that uses sophisticated analytic techniques could subject firms to legal liability. When digital marketers are involved in identifying or selecting prospective customers, or in selecting or placing content to affect consumer behavior, they are typically service providers under the Consumer Financial Protection Act. If their use of an algorithm to determine who to market products and services to violates federal consumer protection laws, they will be held accountable.
Black Box Credit Models – Companies are required to tell you why you were denied credit, and using a complex algorithm is not a defense against providing specific and accurate explanations. In other words, companies cannot claim that the technology used to make credit decisions is too complex, opaque, or new to explain adverse credit decisions as a defense against violations of the Equal Credit Opportunity Act.
Repeat Offenders’ Use of AI Technology – A proposed registry would require covered nonbanks to report certain agency and court orders connected to consumer financial products and services, and would allow the Bureau to track companies whose repeat offenses involve the use of automated systems.
Whistleblowers – The Bureau encourages those who have detailed knowledge of the algorithms and technologies companies use, and who know of potential discrimination or other misconduct within the Bureau’s authority, to report it. In 2021, the Bureau redesigned its whistleblower webpage and provided guidance based on user research with tech workers.
So, there you have it. The CFPB has deployed resources to ensure that civil rights are not violated by the use of AI, which has become part of our daily lives.
In Spring 2023, the CFPB is set to release a white paper about the chatbot market, the technology’s limitations, and the ways the Bureau is already seeing chatbots interfere with consumers’ ability to interact with financial institutions.
We will keep you informed as we navigate the AI World together!
Til next time, Countess!