NYLJ “Don’t Tell Claude About Your Fraud: Court Finds No Privilege in AI-Generated Communications”
By Steve Kramarsky
Let’s say your client is involved in a series of complex corporate transactions that may or may not constitute securities fraud. He hires you, and the government informs you that he is a target in a federal prosecution. A few days later, he meets a well-spoken stranger at a dinner party. The stranger’s name is Claude. Claude warns your client that he (Claude) has a big mouth and might share their conversation with anyone who asks, and that he may use what he learns for his own purposes.
He also informs your client that he is not a lawyer (and that he tends to make things up, though that is not especially relevant here). Despite these warnings, your client enjoys talking to Claude. He tells his new friend all the details of his alleged criminal activity, including the advice you have given him. Impressed by Claude’s way with words, your client asks him to write up some factual summaries and potential defense strategies that he plans to share with you.
When your client presents you with this material, you are probably not too happy. Leaving aside the possibility that your client’s “new friend” is an FBI agent, there is no argument that his dinner conversation, or any notes on that conversation, are protected by the attorney-client privilege or work product doctrine. If the government seeks this material, your client (or Claude) will have to provide it. Any of your advice that your client shared at the dinner party may also have lost the protection of the privilege.
Readers of this column know where this is going: If Claude is not a human dinner guest, but a generative AI chat product offered to the public with the same caveats, does the analysis change? Should it?
Surveys suggest that well over half of the American public has had at least some regular interaction with generative AI products, and usage is rapidly growing. There is no question that these products are in extremely widespread use, but their legal implications are only beginning to reach the courts. Until very recently, issues of privacy and privilege remained untested, though some warning flags had been raised.
On Dec. 22, 2025, the Professional Ethics Committee of the New York City Bar issued Formal Opinion 2025-6, Ethical Issues Affecting Use of AI to Record, Transcribe, and Summarize Conversations with Clients. In the opinion, the Committee addresses the growing use of generative AI to transcribe and summarize calls, videoconferences, and meetings. It notes (among other ethical concerns) that because the provider of the AI tool (not the lawyer) generally controls the use and sharing of the data, “attorneys should advise clients of the risks of the loss of confidentiality and privilege, particularly… where clients are using their own AI tools.” Just a few months later, the issue reached the courts.
In United States v. Heppner, No. 25 CR. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), Judge Rakoff considered and rejected a claim by a criminal defendant that his use of Anthropic’s chatbot “Claude” could be protected by attorney-client or work-product privilege. The Court’s analysis appears to be the first of its kind and contains a number of important lessons for practitioners.
The Heppner Opinion: Background
In Heppner, the court addressed essentially the facts set out at the start of this article, with a generative AI chatbot in place of the dinner guest. Defendant Heppner is an executive of several companies, one of which is publicly traded, who was indicted by a grand jury for securities fraud, wire fraud, and other related charges. The charges arise out of various transactions among companies defendant controlled, through which he is alleged to have defrauded investors out of more than $150 million.
Defendant was arrested and released on bond after pleading not guilty. In connection with defendant’s arrest, the FBI searched his home and seized various documents and electronic devices, including “approximately thirty-one documents that memorialize communications that Heppner had with the generative AI platform ‘Claude,’ which is operated by the private company Anthropic.”
According to defendant’s counsel, the documents represent communications between Heppner and Claude that took place after Heppner had received a grand jury subpoena, and after it was clear that he was the target of the investigation, but not at the explicit direction of counsel. “Without any suggestion from counsel that he do so, Heppner prepared reports that outlined his defense strategy, that outlined what he might argue with respect to the facts and the law that we anticipated that the government might be charging.”
Defendant’s counsel asserted privilege over the 31 documents reflecting his conversations with Claude (the AI Documents), arguing that (1) defendant’s conversations with Claude included, among other things, information that he learned from counsel; (2) defendant created the materials for the purpose of obtaining legal advice from counsel; and (3) defendant subsequently shared the contents with counsel. The government moved for a ruling that the documents are not protected by the attorney-client privilege or the work-product doctrine, and the court granted the government’s motion.
Attorney-Client Privilege
The court’s analysis of the attorney-client privilege issue is relatively straightforward. The court notes that the privilege protects “communications (1) between a client and his or her attorney (2) that are intended to be, and in fact were, kept confidential (3) for the purpose of obtaining or providing legal advice.” The privilege is narrowly construed, because it operates as an exception to the rule that all relevant proof is essential for a fair and just trial. Here, the court held that defendant’s communications with Claude satisfied none of the three requirements for protection.
First, the court noted that Heppner’s communications with Claude fail the first prong of the test, because Claude is not an attorney, and the discussion of legal issues between two non-lawyers is not protected by the attorney-client privilege. The court notes that “when the government asked Claude whether it could give legal advice, it responded that ‘I’m not a lawyer and can’t provide formal legal advice or recommendations’ and went on to recommend that a user ‘should consult with a qualified attorney who can properly assess your specific circumstances.’” It is perhaps notable that the government put this question to Claude, not to Anthropic, but there is no argument that Claude was, for these purposes, acting as counsel.
Second, the court held that, under Anthropic’s written policies governing the use of the Claude product, defendant’s communications with the AI model were not confidential. The opinion cites the Anthropic privacy policy, under which users consent to Anthropic collecting data on both their “inputs” and Claude’s “outputs.” Users also consent to the use of their data for model training, and Anthropic reserves the right to disclose their data to third parties, including “governmental regulatory authorities” even in the absence of a subpoena.
Noting this, the court held that AI users “do not have substantial privacy interests” in their conversations with AI platforms, which the platform retains in the normal course of its business. (Citing In re OpenAI, Inc., Copyright Infringement Litig., No. 25 MD 3143, ECF No. 1021 at 3 (Jan. 5, 2026)). Thus, the court held that defendant could not have had a “reasonable expectation of confidentiality in his communications” with Claude. The summaries that Claude outputted were “not like confidential notes that a client prepares with the intent of sharing them with an attorney because Heppner first shared the equivalent of his notes with a third-party, Claude.”
Third, the court held that defendant did not communicate with Claude “for the purpose of obtaining legal advice.” This, the court noted, is a “closer call” because counsel asserted that Heppner communicated with Claude for the purpose of communicating Claude’s outputs to counsel; he was trying (misguidedly) to strategize with Claude and pass the results on to counsel.
The court found this insufficient to support protection. The court noted that, if counsel had directed Heppner to use Claude, the AI tool might be characterized as “akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” Absent that direction, the court found there was no basis for the assertion of privilege over defendant’s communications.
Work-Product Doctrine
Similarly, the court held that defendant’s communications with Claude were not protected by the work-product doctrine. The work-product doctrine is designed to protect materials that reflect the strategic decisions and “mental processes” of an attorney. Again, it is narrowly construed and generally limited to “materials prepared by or at the behest of counsel in anticipation of litigation or for trial.”
Here, the court held that the doctrine could not apply because counsel did not direct defendant to use Claude. Even if Heppner’s intention was to prepare materials in anticipation of litigation and use them to guide his counsel’s strategy, those materials could not reflect the strategic decisions of counsel, because counsel was not involved in their preparation. At oral argument, defendant’s counsel conceded this point, agreeing that while the materials affected defense strategy going forward, they did not “reflect” counsel’s strategy at the time they were created.
The opinion suggests (without saying) that the work-product analysis might be different if defendant had been acting at counsel’s specific direction, but based on the court’s privilege analysis that might not be so. If users are not entitled to any expectation of privacy in Claude’s inputs or outputs whatsoever, those materials should not be subject to protection, whether they include counsel’s strategic decisions and mental processes or not. In any case, here the court held that neither the attorney-client privilege nor work-product doctrine protected defendant’s communications with Claude.
Takeaways
Judge Rakoff’s opinion in Heppner is a wake-up call for attorneys, reinforcing the warnings set out in the City Bar’s December Opinion, but it should not be read to suggest that the use of generative AI tools automatically negates confidentiality. Generally, courts examining privilege issues in electronic communications (whether in the AI context or otherwise) have focused on whether the user has a reasonable expectation of privacy in the system. Based on Anthropic’s privacy policies, the Heppner court held that a Claude user has no such expectation.
Many other general-purpose, consumer-facing generative AI products (such as ChatGPT and Gemini) have similar policies and would likely face similar issues. Enterprise-grade generative AI tools, particularly those designed for legal environments, typically have more restrictive policies intended to address confidentiality concerns. Some can be configured to run on local machines or company-owned servers to further improve privacy. In short, it is possible to contract around many of the confidentiality issues raised by Heppner (though at substantial cost) and wherever possible practitioners should stick to those tools that do so.
Issues will tend to arise when clients (or attorneys) use consumer tools in the legal context, and attorneys should be aware of these concerns and make their clients aware of them as well. It is worth keeping in mind that these models are mind-bendingly costly to operate, so the free (or less expensive) consumer products are almost all supported by user data collection. As the saying goes: “If you’re getting something for free on the Internet, you’re not the customer, you’re the product.”
This article first appeared in the New York Law Journal on March 12, 2026.