CHATGPT AS THE ENEMY: New Sanctions Against Lawyers Relying on ChatGPT Likely as Jay Edelson Speaks of ChatGPT Encouraging Suicide

ChatGPT is not your friend.

Not your co-worker.

Not a reliable tool.

It is nothing but a fraud and thief.

And all the GenAI products out there are cut of the same cloth.

Enjoying your new clothes, Emperor?

Here’s the latest.

A lawyer in Kansas is facing massive potential sanctions and reputational damage after using ChatGPT to fill in cites in a brief because he was dealing with a personal emergency.

Pause.

When you are dealing with an emergency, tell opposing counsel, tell the court, and ask for more time. In my experience, 99% of the time YOU WILL GET IT. The practice of law remains an honorable profession. There is no dishonor in seeking additional time where life requires it.

There is dishonor in lying and cheating, however.

And that’s what ChatGPT empowers.

But I want you to see just how extensive the problems with ChatGPT are.

In the case of Lexos Media v. Overstock, Case No. 2:22-cv-02324-JAR (D. Kan.), ChatGPT didn’t just spit out one problematic cite but a TON OF THEM.

It’s as if the program were intentionally misleading the lawyer using it, creating a MASSIVELY incorrect legal brief. According to the declaration of Sandeep Seth–containing another sniveling apology of the sort I have grown tired of reading–ChatGPT produced the following fake cites, which he then used in his brief:

Liquid Dynamics Corp. v. Vaughan Co., Inc., 449 F.3d 1209, 1224 (Fed. Cir. 2006):

“Expert testimony should not be excluded simply because the expert applied an incorrect
claim construction, so long as the expert’s analysis can be understood and evaluated in
light of the court’s proper construction.”

AVM Technologies, LLC v. Intel Corp., 927 F.3d 1364, 1370–71 (Fed. Cir. 2019):
“[T]he appropriate response to a potential flaw in an expert’s methodology is cross
examination, not exclusion.”

Hockett v. City of Topeka, No. 19-4037-DDC, 2020 WL 6796766, at *3 (D. Kan. Nov.
19, 2020):
“The exclusion of evidence is an extreme sanction, and courts should prefer less severe
remedies, particularly where the error appears inadvertent or can be cured without
prejudice.”

Woodworker’s Supply, Inc. v. Principal Mut. Life Ins. Co., 170 F.3d 985, 993 (10th
Cir. 1999):
Courts consider “(1) the prejudice or surprise to the party against whom the testimony is
offered; (2) the ability of the party to cure the prejudice; (3) the potential for disruption;
and (4) the bad faith or willfulness involved.”

i4i Ltd. Partnership v. Microsoft Corp., 598 F.3d 831, 854 (Fed. Cir. 2010), aff’d, 564
U.S. 91 (2011):
“[T]he question of whether the expert is credible or whether his theories are correct given
the partial reliance on an incorrect claim construction is for the jury to decide after cross
examination.”

See how extensively ChatGPT just makes stuff up. 

This is what folks fail to understand about GenAI– it is NOT a tool, unless you are seeking to create fantasy and make-believe. Sure, it can help you make up NONSENSE. But that’s it.

In the Lexos Media case the court required ALL the lawyers on the signature block to account for their misconduct– not just the offending attorney who used the program. This included lawyers from Fisher, Patterson, Sayler & Smith, LLP and Buether Joe & Counselors, LLC who did not draft the brief but co-signed onto it. They all denied having any knowledge of the use of GenAI by the primary drafter, Sandeep Seth.

You can read the entirety of these filings here: Lexos Media IP LLC v Overstock.com Inc. Response to Show Cause Order (1.5.26)

It will be very interesting to see how this all turns out– but, again, ChatGPT is absolutely and completely untrustworthy. If you are using GenAI in the practice of law, STOP.

Yet all of this is TINY potatoes compared to what ChatGPT allegedly did to a sixteen-year-old kid who committed suicide after a lengthy back-and-forth with the program. You have to watch this to get a sense of what we’re dealing with here:

AI is the enemy of the good folks. Stay away.
