Federal Courts Apply Traditional Privilege Principles to Generative AI Use - With Divergent Results
Introduction
Two federal district court decisions issued in February 2026 addressed a question that is increasingly relevant to modern litigation: how the use of generative artificial intelligence in legal research and case preparation affects claims of attorney‑client privilege and work‑product protection. In United States v. Heppner, the Southern District of New York ordered production of AI‑generated materials, finding that neither privilege nor work‑product protection applied. In contrast, in Warner v. Gilbarco, the Eastern District of Michigan denied a motion to compel discovery of AI‑assisted litigation materials, concluding that the work‑product doctrine remained intact.
Read together, the decisions underscore a consistent theme rather than a doctrinal split: courts are not creating new privilege rules for artificial intelligence. Instead, they are applying established privilege and work‑product principles to new factual settings. The divergent outcomes show that who used the AI, for what purpose, and whether counsel directed that use can be outcome‑determinative.
Background
Generative AI tools are now routinely used by lawyers, clients, and self‑represented litigants to research legal issues, organize facts, and draft arguments. That reality has raised questions about whether a litigant's interactions with AI platforms should be treated like the use of traditional legal research tools or like disclosures to third parties.
Attorney‑client privilege protects confidential communications between a client and counsel made for the purpose of obtaining legal advice. The work‑product doctrine, by contrast, shields materials prepared by or at the direction of counsel in anticipation of litigation, particularly where disclosure would reveal attorney mental impressions or strategy. Both doctrines are narrowly construed, and both can be waived through disclosure to third parties in certain circumstances.
The Heppner and Warner decisions each applied these settled principles to AI‑assisted litigation activity, but reached different conclusions based on the specific facts before them.
The Southern District of New York: Independent AI Use Defeated Privilege Claims
In United States v. Heppner, the court addressed whether materials generated by a criminal defendant using a publicly available AI platform were protected from government inspection. The defendant had used a consumer generative AI tool to prepare documents analyzing potential defenses and legal exposure after receiving a grand jury subpoena but without direction from counsel. Those materials were later seized pursuant to a search warrant, and the defendant asserted attorney‑client privilege and work‑product protection.
The court rejected both claims. With respect to attorney‑client privilege, the court emphasized three points. First, communications between the defendant and the AI platform were not communications with an attorney or an attorney’s agent. Second, the court found that the communications were not confidential, relying heavily on the AI provider’s publicly available privacy policy, which permitted data collection, model training, and disclosure to third parties. Third, the court concluded that the defendant did not communicate with the AI platform for the purpose of obtaining legal advice from counsel, noting that the AI use was not attorney‑directed and that the platform expressly disclaimed providing legal advice.
The work‑product doctrine fared no better. Although the defendant argued that the materials were prepared in anticipation of litigation, the court held that work‑product protection requires preparation by or at the behest of counsel and a connection to counsel’s mental impressions or strategy. Because the defendant acted independently and the documents did not reflect counsel’s thought processes at the time they were created, the doctrine did not apply.
The Eastern District of Michigan: AI as a Litigation Tool, Not a Waiver
In Warner v. Gilbarco, the Eastern District of Michigan considered whether civil defendants could compel discovery into a pro se plaintiff's use of generative AI tools in preparing her case. The defendants sought broad discovery of the plaintiff's AI‑related materials and argued that any work‑product protection had been waived through disclosure to a third‑party AI platform.
The court rejected that argument. Focusing on the work‑product doctrine, the court reasoned that the plaintiff—acting as her own advocate—had used AI tools as part of her litigation preparation. The materials reflected her mental impressions and litigation strategy developed in anticipation of litigation.
Critically, the court declined to treat the use of an AI tool as a disclosure to a third party that automatically waives work‑product protection. The court emphasized that waiver generally requires disclosure to an adversary or in a manner likely to place the material in an adversary’s hands. Generative AI tools, the court explained, are “tools, not persons,” and compelling disclosure on that basis alone would risk eroding work‑product protection in nearly every modern drafting environment.
Unlike Heppner, the Warner court did not analyze attorney‑client privilege, in part because the plaintiff was proceeding without counsel. The decision instead turned on whether AI‑assisted materials retained their character as protected litigation work product.
Why the Outcomes Diverged
Although the results differed, the courts’ analytical frameworks were largely aligned. Both decisions applied traditional privilege doctrines without suggesting that AI requires new or special legal rules. The divergence stemmed from factual differences that courts have long treated as critical in privilege analysis.
In Heppner, the AI use was independent, not directed by counsel, and involved a consumer platform whose terms undermined any reasonable expectation of confidentiality. In Warner, the AI use occurred as part of litigation preparation, the materials reflected protected mental impressions, and there was no meaningful risk that the information would reach an adversary.
These distinctions illustrate that AI use does not automatically waive privilege or work‑product protection—but it also does not automatically preserve it.
Practical Implications
For in‑house counsel, employers, and other sophisticated legal consumers, these decisions reinforce several practical points:
Using AI does not automatically waive privilege, but context matters. Courts will look closely at who used the tool, for what purpose, and under what confidentiality conditions.
Independent AI use carries greater risk. Where clients or employees use public AI tools without direction from counsel, privilege and work‑product claims may be difficult to sustain.
Existing doctrine governs. Courts are applying familiar privilege principles, not inventing AI‑specific rules, which means traditional risk analysis remains essential.
As AI tools become further embedded in legal workflows, courts are likely to continue evaluating privilege claims through this fact‑specific lens, applying established doctrine to evolving technology rather than reshaping the doctrine itself.