This dude should have called Troutman Amin, LLP before deploying an A.I. model…
So last week I did a quick TCPAWorld After Dark piece on the effects of A.I. on the legal world and everyone went nuts.
The article garnered a ton of attention and engagement and so now I am stuck taking the lead for the legal industry on A.I. issues. Great.
But that’s fine. I’m here for it.
So this week will be “A.I. Week” on TCPAWorld–which is fine since we don’t really have anything else going on.
That was what kids call “sarcasm” of course–this is a HUGE week at the FCC since comments to the “lead gen” NPRM are due today!!! (And the R.E.A.C.H. comment is a real doozy…)
We’re going to start the A.I. discussion this week with a quick look at the DOJ’s work in this space–which is a bit of a two-headed hydra.
First, and most importantly for A.I. users, the DOJ is absolutely committed to assuring A.I. usage is not treated as an excuse to discriminate against folks in an illegal manner.
As the DOJ spokesperson recently put it:
“As social media platforms, banks, landlords, employers and other businesses that choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result…”
Pretty stern warning.
In the DOJ’s view, the common problems with AI are these: it relies on data and datasets that incorporate historical bias; some systems are “black boxes” whose internal workings are not clear to most people; and the design of automated systems may not fully contemplate their ultimate use.
The DOJ’s statement comes not long after it filed a “statement of interest”–a filing that basically puts forth the DOJ’s position–in a recent case alleging biased algorithm usage in a fair housing suit.
In Louis et al. v. SafeRent et al., it was alleged that the defendants’ use of an algorithm-based scoring system discriminated against Black and Hispanic rental applicants in violation of the FHA. The DOJ made it clear:
“Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities… We must fiercely protect the rights and protections promulgated in the Fair Housing Act. Today’s filing recognizes that our 20th century civil rights laws apply to 21st century innovations.”
It doesn’t get much clearer than that folks. If you are using A.I. systems to make decisions–particularly, but not exclusively, in the lending and housing contexts– your algorithms must not discriminate–or result in discrimination–and “the A.I. made me do it” is not going to fly as an excuse.
And make no mistake the DOJ is really doubling down on understanding and regulating AI. For instance the Justice Department’s antitrust chief Jonathan Kanter told a crowd at the South by Southwest festival–of all places– this year that the DOJ has “hired data scientists and are bringing in expertise to make sure we have the ability to understand [A.I.] technology.”
The DOJ is apparently calling its AI effort “Project Gretzky” after hockey legend Wayne Gretzky–famous for skating to where the puck is going, not where it has been. I don’t think that bodes well for folks who are expecting the DOJ to be limp on A.I. issues.
On the other hand–second head of the hydra–the DOJ has stated since 2020 that it is highly committed to using A.I. as part of its own processes.
Specifically, the DOJ is committed to “cultivating an AI-ready workforce, aligning activities with the DOJ Data Strategy, building a governance structure, and supporting Department-wide AI adoption—with implementation designed to adapt to the evolving technology landscape.”
Department-wide AI adoption? At the DOJ?
The DOJ goes on to reference goals designed to “navigate the different challenges posed by AI including build versus buy, commercial products with embedded AI, standalone models and algorithms, and the scope of technology considered to be AI. Implementation of this strategy will position the Department to adopt AI effectively, efficiently, and in a manner that fosters public trust and confidence.”
But there is no mention of transparency or non-discrimination.
Still, the DOJ does pledge to “promote ethical and efficient governance of AI in accordance with established law, guidance, principles, and best practices to provide clear guardrails for DOJ Offices, Boards, Divisions, and Bureaus (collectively known as Components) as they apply AI to their missions.”
I guess that’s comforting, but I haven’t seen any such best practices yet. So while the DOJ is hot and heavy prosecuting the private sector, it has been slow to roll out its own self-regulation on A.I. usage.
Ironic. But perhaps not surprising.
You can read more on the DOJ strategy for using A.I. here:
And you can read more about the FTC, CFPB and FCC responses to A.I. on TCPAWorld.com later this week!