AI Chats are Discoverable in Court. Why That's Worse Than You Think.
There is no privacy expectation when using AI. That doesn't stop some employees from setting a trap for themselves.
Every day, employees across every industry are typing prompts into AI tools that would be embarrassing, at best, if the chats ever got released.
They draft contracts, brainstorm strategies, summarize depositions, and pressure-test arguments. Most assume there’s some degree of privacy.
They're wrong.
Your Chats are Fair Game
Courts are now treating AI chatbot interactions as discoverable electronic records, no different from emails, Slack messages, or text threads. A recent analysis from the National Law Review lays out the emerging case law: in Fortis Advisors LLC v. Krafton, Inc., the Delaware Court of Chancery cited a CEO's ChatGPT interactions in its decision, handling them like any other internal business communication. Other cases, including Warner v. Gilbarco Inc. and United States v. Heppner, reinforce the same premise: AI chats fall squarely within existing discovery frameworks.
Not every ruling cuts the same way — a Michigan federal magistrate broke from the trend in February, protecting a pro se plaintiff's ChatGPT queries as work product. The pattern is still forming.
The practical takeaway is clear. Once a company reasonably anticipates litigation, it has to preserve relevant AI conversations just like it preserves email.
And that's where things get messy. Unlike enterprise email or Slack, AI tools often aren't centrally managed. Employees use personal accounts, consumer-tier platforms, and tools with limited retention windows. Chat histories get deleted. Data lives in places IT never inventoried. The spoliation risk is real, and courts aren't inclined to cut companies slack for failing to manage a technology they chose to deploy.
AI Makes the Problem Worse
But the problem doesn’t end with retention. The AI tools themselves may be actively working against the kind of rigor that legal exposure demands.
Writing in Above the Law, Olga V. Mack describes findings from empirical classroom pilots using an AI legal coach. The more "helpful" an AI system tried to be, the less lawyers trusted it. Systems that repeated guidance in slightly different words, offered generic checklists regardless of context, and steered users toward safe, obvious answers were perceived as shallow and inattentive.
What built trust instead was resistance. AI that challenged assumptions, surfaced competing considerations, and forced users to wrestle with ambiguity earned credibility. Difficulty wasn't the problem. Repetition and overstructure were.
Most commercial AI tools are optimized for agreeableness. They want to be helpful, reassuring, and frictionless. But that very design philosophy produces the kind of shallow, pattern-matched outputs that lawyers instinctively distrust. And those same shallow outputs are now discoverable records that opposing counsel can use to reconstruct how decisions were made.
Imagine this scenario: an employee uses AI to evaluate a contractual risk. The AI gives a generic, reassuring answer, and the employee relies on it.
Now that chat is sitting in a discovery production, and opposing counsel is using it to argue the company didn't take the risk seriously.
My Thoughts
Outside of health, legal may be the most dangerous domain in which to use AI right now, and that's especially true for non-legal professionals.
We all know AI hallucinates, gives bad advice, and can be easily convinced to agree with you. But sometimes it’s just too damn easy to type in the question.
I wish I could say I would be fine with all of my AI chats on the front page of the Wall Street Journal. But I do think “how bad would this be if/when it gets leaked?” before I start any conversation.
Personally, I think it's ridiculous that there's no legal protection for your AI chats. You mean if I tell my lawyer something, it's covered by privilege, but if I ask my computer about the strategy my lawyer is using, it isn't?
Privilege protections are recognized in U.S. courts; AI conversations have no equivalent.
I don’t like it. But I don’t make the rules, I just try to play by them.