AI Conversations Are Not Protected


If you’ve been using ChatGPT, Claude, or any other public AI tool to research legal questions, prepare for litigation, or organize sensitive information — a federal judge just confirmed that none of it is protected. AI attorney-client privilege does not exist, and everything you’ve typed could be used against you.

We watch the AI platforms; you keep the profits. Cleverly Genius provides the foresight your business needs to thrive in a changing AI market. Not a client yet? Click Here.


On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York issued a ruling that should alarm every person who has ever typed sensitive information into an AI chatbot. In United States v. Heppner, the judge held that documents a criminal defendant created using Anthropic’s AI tool Claude were not protected by attorney-client privilege or the work product doctrine. The defendant’s entire AI conversation history was ruled discoverable by the government.

This is the first ruling of its kind in the United States, and it establishes a precedent that could reshape how attorneys, businesses, and individuals approach AI tools in any legal context. The concept of AI attorney-client privilege — the idea that conversations with an AI tool might be shielded from disclosure the way conversations with a lawyer would be — was definitively rejected.

The Case: United States v. Heppner

The facts of the case are straightforward. Bradley Heppner, a Dallas financial services executive, was charged with securities fraud, wire fraud, conspiracy, and other offenses arising from an alleged scheme to defraud investors of approximately $300 million. According to Dorsey & Whitney’s analysis of the ruling, after Heppner received a grand jury subpoena and knew he was the target of a federal investigation, he turned to Claude to research legal questions related to the charges he anticipated.

Without direction from his attorneys, Heppner entered information he had learned from them into Claude and used the AI tool to prepare reports outlining his defense strategy and potential legal arguments. He generated approximately 31 documents this way and later shared them with his legal team at Quinn Emanuel. When federal agents executed a search warrant at Heppner’s residence, they seized electronic devices containing those AI-generated documents.

Heppner’s attorneys argued the documents were protected on two grounds: attorney-client privilege (because the documents incorporated information conveyed during the attorney-client relationship) and the work product doctrine (because they were created in anticipation of litigation). Judge Rakoff rejected both arguments entirely.


Why AI Attorney-Client Privilege Was Rejected

Judge Rakoff dismantled the privilege claim by applying the traditional three-part test. For attorney-client privilege to apply, a communication must be (1) between a client and an attorney, (2) intended to be and kept confidential, and (3) made for the purpose of obtaining or providing legal advice. As detailed in Chapman and Cutler’s legal analysis, the court found that Heppner’s AI conversations failed all three elements.

First, Claude is not an attorney. The court stated plainly that communications between two non-attorneys cannot be privileged, regardless of how legal the subject matter is. The government compared Heppner’s use of Claude to asking a friend for input on legal matters — an act that has never created privileged communications. As the Harvard Law Review noted in its analysis, Judge Rakoff wrote that “because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.”

Second, there was no expectation of confidentiality. The court pointed directly to Anthropic’s privacy policy, which explicitly notifies users that the company collects data on prompts and outputs, may use that data to train its AI models, and may disclose user data to governmental regulatory authorities and third parties in connection with legal claims or disputes. According to Jones Walker’s breakdown of the decision, the court concluded that when you type sensitive information into a publicly available AI platform, you are voluntarily sharing it with a third party — and courts have consistently held that this destroys confidentiality.

Third, the communications were not made for the purpose of obtaining legal advice from an attorney. Claude’s own terms of service and public materials expressly disclaim the ability to provide legal advice and urge users to consult qualified lawyers. As the Epstein Becker Green analysis explained, the court determined that the “predominant purpose” of Heppner’s use of Claude could not have been to obtain legal advice when the tool itself warns against treating its output as legal advice.

The Work Product Doctrine Failed Too

Heppner’s fallback argument — that the documents were protected under the work product doctrine — fared no better. The work product doctrine shields materials prepared by or at the direction of an attorney in anticipation of litigation. But Heppner created the Claude documents entirely on his own initiative. His own defense counsel conceded that they “did not direct” him to run the AI searches, according to Jones Walker’s reporting.

Without attorney direction, the work product doctrine does not attach. The 31 AI-generated documents did not reflect any attorney’s thought process or litigation strategy — they reflected Heppner’s independent use of a consumer AI tool. As Fenwick’s analysis noted, had Heppner’s attorneys directed him to use Claude as part of their legal strategy, the analysis might have been different. But that wasn’t what happened.


The Waiver Problem: It Gets Worse

Perhaps the most alarming aspect of the ruling is its implication for privilege waiver. Heppner didn’t just ask Claude generic legal questions — he fed information he had received from his attorneys into the AI tool. The government argued, and Judge Rakoff agreed, that sharing privileged attorney-client communications with a third-party AI platform may constitute a waiver of the privilege over the original communications themselves.

This means that using a public AI tool doesn’t just fail to create new privilege — it can actively destroy privilege that already existed. As the Debevoise Data Blog explained, sending pre-existing privileged information to a public AI tool could retroactively strip the protection from those original attorney-client communications. The privilege belongs to the client, but so does the responsibility to maintain it.

This Isn’t an Isolated Ruling

The Heppner decision didn’t emerge in a vacuum. It builds on a broader trend in the same federal court. In January 2026, Judge Sidney Stein of the Southern District of New York upheld discovery orders requiring OpenAI to produce 20 million de-identified ChatGPT conversation logs as evidence in copyright litigation brought by The New York Times, the Chicago Tribune, and other publishers. According to the National Law Review’s coverage, the court found that ChatGPT users who “voluntarily submitted their communications” to OpenAI had a diminished privacy interest in those conversations.

As Jones Walker noted, the Heppner ruling builds directly on this trend: courts are increasingly treating AI conversation logs as discoverable electronically stored information, no different from emails, text messages, or any other digital record.

OpenAI CEO Sam Altman has himself acknowledged this reality. As reported by GBlock’s analysis, Altman has stated that people are talking to ChatGPT as though it were a therapist, lawyer, or priest — but those conversations can be subpoenaed. The law treats AI conversations the same as any other electronic record.


What This Means for You

The implications of the AI attorney-client privilege ruling extend far beyond criminal defendants. Anyone who uses a public AI tool — ChatGPT, Claude, Gemini, or any consumer-tier platform — to discuss, analyze, or research legal matters should understand the following:

- Anything you type into a public AI tool may be discoverable in litigation. It is almost certainly not privileged.
- If you input information you received from your attorney into an AI tool, you may waive the privilege over those original attorney-client communications.
- Documents generated by AI after you input sensitive information are also discoverable.
- Sending AI-generated documents to your lawyer after the fact does not retroactively make them privileged.
- Only enterprise-tier AI agreements — such as ChatGPT Enterprise or Claude’s commercial and government plans — exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.

As Dorsey & Whitney concluded, Judge Rakoff’s ruling teaches that while AI can be a powerful litigation tool, it is one best left in the hands of attorneys operating in a secure, closed environment specifically designed to keep client information confidential and privileged. Otherwise, you may end up prompting AI to write your own smoking gun exhibit.

The Bottom Line on AI Attorney-Client Privilege

AI attorney-client privilege does not exist under current law. The Heppner ruling makes this unambiguous. A public AI tool is not your attorney. It does not owe you confidentiality. Its privacy policy explicitly reserves the right to share your data. And a federal court has now confirmed that everything you type into it — including information your actual attorney shared with you in confidence — can be handed to the government.

If you are involved in litigation, under investigation, or dealing with any sensitive legal matter, talk to your attorney before you type anything into an AI tool. You are paying your attorney to protect your interests. Don’t let a chatbot undo that protection.

Cleverly Genius Monitors AI Platforms

At Cleverly Genius, we watch the platforms so you can keep the profits. By providing the strategic foresight your business needs to thrive in an ever-changing AI world, we ensure you stay ahead of shifts rather than reacting to them. Our proactive monitoring transforms digital volatility into a competitive advantage, safeguarding your growth and keeping your brand's momentum uninterrupted.

Not a client yet? Click Here.