

      The Dangers of Asking AI Legal Questions (2026 Court Decision)

Author: Benson Varghese
Category: Latest News
Reading Time: 5 minutes

When you turn to AI for help, it can feel like a private conversation, one where you are free to think out loud, explore ideas, and ask questions you might not ask anyone else. However, a recent ruling out of a federal court in New York highlights the dangers of asking AI legal questions or of using AI as a sounding board for your legal strategy.


      The full opinion came in United States v. Heppner, No. 25-Cr-503 (S.D.N.Y.) and was issued on February 17, 2026. But the legal reasoning extends far beyond federal criminal court. The same arguments can be made in civil lawsuits and in state court. If you are using AI to think through a legal situation, you may be creating evidence you cannot take back.


      The Case: What Happened in Heppner

Bradley Heppner was a Dallas-based financial executive and CEO of GWG Holdings, a publicly traded company. In October 2025, a federal grand jury indicted him on securities fraud, wire fraud, and related charges. The government alleged he defrauded investors out of more than $150 million.

      On his own initiative, Heppner opened Claude and had extended conversations with the AI. (Big Mistake #1) He asked it to help him analyze the anticipated charges, assess the facts, and think through possible defenses. He generated 31 documents from those sessions and later sent them to his defense team.

      When FBI agents executed a search warrant at his home, they seized those printed transcripts. (Big Mistake #2) His lawyers asserted both attorney-client privilege and work product protection. Judge Jed S. Rakoff of the Southern District of New York denied both arguments in full. One week after ruling from the bench, he issued a 12-page written memorandum explaining precisely why, applying the settled privilege doctrine that governs civil and criminal proceedings alike.


      Why the Law Does Not Protect AI Conversations

      Attorney-client privilege requires three things:

      1. a communication between a client and a licensed attorney,
      2. that is kept confidential, and
      3. for the purpose of obtaining legal advice.

      AI conversations fail on every element.

      An AI platform is not a lawyer. It cannot form an attorney-client relationship. The court noted that Claude itself, when asked by the government whether it could give legal advice, responded that it was not a lawyer, could not provide formal legal advice, and recommended that users consult a qualified attorney. The AI disclaimed the very role the defendant was claiming it had performed, and that response was submitted as an exhibit and cited in the ruling.

      There was also no confidentiality. Heppner shared his communications with Anthropic, a third-party company whose privacy policy expressly states that it collects both user inputs and AI outputs, may use that data to train its models, and reserves the right to disclose user data to third parties, including governmental regulatory authorities. The court held that any reasonable reader of that policy would understand there is no confidentiality in those conversations. The same is true of ChatGPT (OpenAI), Gemini (Google), and Copilot (Microsoft). They all have substantially similar policies.

      Defense counsel also argued the documents should be protected as attorney work product, materials prepared in anticipation of litigation. The court rejected that too, because the documents were not created by or at the direction of an attorney. Heppner’s own lawyers confirmed they never directed him to use Claude. Work product protection exists to shield an attorney’s mental processes and litigation strategy. It does not extend to documents a client independently creates, however strategically motivated. And critically: sending those documents to your lawyer afterward does not change their legal status. As the court put it, non-privileged communications are not alchemically transformed into privileged ones simply by being handed to a lawyer.


      Does Paying for AI Make Your Conversations Confidential?

      Many attorneys and litigants assume that upgrading from a free AI account to a paid subscription changes the legal picture around confidentiality. It does not. Payment changes the commercial relationship. You receive more features, higher usage limits, and in some cases stronger data handling commitments. But it does not alter the fundamental legal reality that you are transmitting information to a third-party company. The platform’s privacy policy, not your billing tier, governs how your data is handled, and those policies apply regardless of what you pay.

      Every major AI platform’s paid consumer tier, including Claude Pro, ChatGPT Plus, and Gemini Advanced, still collects user inputs and outputs, still reserves the right to use that data consistent with its policies, and still remains subject to legal compulsion. Payment alone creates no expectation of confidentiality. Courts applying the Heppner analysis would reach the same conclusion whether the user was on a free account or a paid one.

      What About Specialized Legal AI Tools and Workflow Platforms?

      Clients and potential clients are increasingly drawn to specialized AI tools, including coding and workflow platforms, desktop automation software, and AI systems with legal-specific plug-ins. The appeal is understandable. But the confidentiality analysis does not change simply because the tool is more sophisticated or marketed toward legal professionals.

      These platforms still route your inputs through third-party servers. A legal plug-in that helps you draft a motion or analyze case facts does not create attorney-client privilege between you and the software vendor. The underlying data handling architecture remains the same: your communications are processed externally, stored on systems outside your control, and subject to the vendor’s own legal obligations. If a government authority issues a subpoena to the company operating that platform, your legal plug-in agreement provides no protection. The vendor’s obligation to comply with lawful legal process supersedes any contractual privacy commitment made to you as a user.

      What About LLMs with Enterprise or Business Plans?

Enterprise and business-tier agreements with AI companies do offer meaningful improvements over standard consumer accounts, and it is important to be precise about what they provide and where they fall short. Most enterprise agreements, including those offered by Anthropic, OpenAI, and Google, include commitments that the vendor will not use your inputs and outputs to train its models. They may also offer enhanced data retention controls and, in some cases, Business Associate Agreements for HIPAA compliance. These are real and commercially significant protections.

      However, none of them create confidentiality in the legal sense, and none of them protect against compelled disclosure. The critical distinction is between a contractual commitment about use and a legal guarantee of confidentiality. When an enterprise agreement says your data will not be used for training, that is a promise about what the vendor will do with your information under normal circumstances. It is not a promise that the vendor can resist a federal subpoena, a government investigation, or a national security demand.

      The AI company remains a third party. Its systems still process your data, and its own legal obligations govern its response to lawful process, not your contract. Courts examining privilege waiver ask whether information was voluntarily disclosed to a third party. An enterprise agreement does not change the answer to that question. The disclosure occurred the moment you transmitted the information to the platform.

      Government Compulsion, Subpoenas, and the Limits of Vendor Promises

      Even the most robust enterprise privacy agreement cannot protect your data from government compulsion. Federal and state authorities can subpoena AI companies directly, and the company’s obligation to respond is governed by law, not by its contract with you. National security letters issued under federal law may not only compel disclosure but prohibit the vendor from even notifying you that your data was requested. In civil litigation, opposing counsel can issue third-party subpoenas to AI platforms seeking your inputs and outputs as part of discovery.

      Some enterprise agreements include notification provisions that require the vendor to alert you before complying with legal process, giving you a window to seek a protective order. That is a genuinely useful contractual feature. But it is a procedural opportunity, not a substantive protection. A court that finds your communications non-privileged will not be persuaded that the vendor’s notification clause retroactively restored confidentiality you waived at the moment of transmission.

      Your Devices Can Be Seized and Searched

      A separate and equally serious vulnerability exists much closer to home. Even setting aside what AI companies store on their servers, the devices you use to access these tools, including your laptop, your phone, and your tablet, are themselves potential evidence. Law enforcement can seize devices pursuant to a warrant, and courts have consistently upheld broad searches of electronic devices in criminal and civil contexts. Your AI conversation history, cached locally or accessible through a logged-in browser session, is fully within the scope of a device search.

      This is not a theoretical risk. Device seizures in legal proceedings are increasingly common, and the data recovered from them is often far more comprehensive than individuals expect. Browser history, locally stored files, application data, and session logs can all reveal AI usage and its contents even when a user believes those conversations are private. The Stored Communications Act and related federal statutes provide some procedural protections, but they are not absolute and have significant exceptions for law enforcement access. If your device contains a record of what you asked an AI, and in most cases it does, that record is potentially accessible to anyone who lawfully obtains your device.

      Printing It Out Makes It Easy Pickings

      Perhaps the most straightforward vulnerability requires no technical analysis at all. If you print out AI-generated content, you have created a physical document that can be seen, copied, seized, and introduced as evidence by anyone with access to it. In Heppner itself, the fact that the defendant printed out his AI transcripts and later handed them to counsel was central to the government’s ability to obtain them. Printing AI output and leaving it on a desk, in a file, or in a briefcase eliminates even the modest procedural protections that might apply to data stored on encrypted devices or third-party servers.

      Physical documents have no password protection. They can be photographed, photocopied, or simply read by anyone who encounters them. In a legal context, printed AI communications could be obtained through discovery, observed during a law enforcement search, or accessed by an employer with legitimate access to office spaces. Users who are careful about their digital privacy sometimes forget that the moment they produce a physical copy of an AI conversation, they have made that information dramatically more accessible. The lesson is not simply that AI conversations lack confidentiality at the platform level. It is that every step in the chain of handling that information, from transmission to storage to printing, creates additional exposure that any reasonable person should understand and account for.

      Why This Matters Just as Much in Civil Litigation

      The Heppner case involved a federal criminal prosecution, but the privilege doctrine Judge Rakoff applied is not criminal law. It is evidence law, and it governs civil cases just as fully. Texas Rule of Evidence 503 mirrors the federal framework. Federal Rule of Civil Procedure 26 governs work product in civil federal litigation on the same terms it governs criminal cases. Any Texas court, state or federal, civil or criminal, would likely apply the same analysis and reach the same result.

      That means the risk extends to a wide range of people who have never been accused of a crime and may never be.

      Personal Injury Plaintiffs

      You were seriously injured in a car accident or truck crash. Before you hired an attorney, or even after, you opened an AI chatbot to understand your situation. You described the accident in detail, asked whether you might be at fault, asked what your injuries might be worth, asked whether the other driver’s insurance was likely to fight the claim.

      Every one of those prompts is potentially discoverable in your personal injury case. The insurance company’s lawyers can request your AI conversation records in discovery. If you gave the AI a version of events that differs from what you later tell your attorney, that inconsistency becomes a weapon. If you speculated about your own fault, those words can be quoted in a deposition. If you asked the AI to estimate your damages, opposing counsel can argue your expectations were inflated from the start.

      This is not theoretical. Standard civil discovery requests ask for all documents reflecting communications about the incident at issue. An AI conversation about your accident is a document reflecting a communication about the incident at issue.

      Accident Victims Who Did Their Own Research

      Many people injured in accidents turn to AI before they contact a lawyer. They describe the crash, ask who might be liable, ask what “pain and suffering” means, ask whether they should accept the insurance company’s first offer. That research instinct is understandable. But the record of it, the prompts you typed and the outputs you read, exists in a platform that has no obligation to protect your confidentiality and every right to produce your records in response to a subpoena.

      The moment you retain an attorney and that attorney begins advising you, your conversations with your lawyer are protected. Your earlier AI conversations are not, and retaining a lawyer does not change that retroactively.

      Business Owners and Executives in Commercial Disputes

      A contract dispute is pending. You ask AI to analyze the contract language and tell you whether the other side breached. You describe internal communications. You ask the AI to help you draft a response to a demand letter. You run through your potential liability exposure.

      In the ensuing lawsuit, opposing counsel requests all documents discussing the contract and the dispute. Your AI conversations, with their detailed prompts describing the facts from your perspective, are responsive to that request. They may reveal your internal analysis of your own weaknesses. They may reflect admissions you did not intend to make. They exist in a third-party platform with a privacy policy that permits disclosure to courts and legal process.

      HR Professionals and Employers

      Before escalating to outside counsel, an HR director asks an AI to evaluate whether a termination decision is legally defensible. A manager asks AI to help draft a performance improvement plan. A business owner asks whether a non-compete agreement is enforceable after an employee resigns. All of those conversations could surface in the wrongful termination lawsuit, the breach of contract claim, or the employment discrimination case that follows.

      Family Law Matters

      A spouse who is considering divorce asks an AI to explain community property rules, estimate what a court might award, or analyze a prenuptial agreement. A parent asks AI to help understand custody standards in Texas. Those conversations, stored in a consumer AI platform, are subject to the same discovery rules as any other document. In contested family law proceedings, parties frequently seek broad document discovery, and courts in Texas have made clear that relevant documents mean relevant documents, regardless of where they were created or stored.

      The Multiple Ways Your AI Conversations Can Surface

      Understanding how these records get discovered matters as much as understanding that they can be discovered.

      Civil discovery requests are the most common route in non-criminal litigation. Standard document requests in personal injury, employment, commercial, and family law cases ask for all communications about the subject matter of the lawsuit. An AI conversation is a communication. If it is stored on your device, in a browser, or in the platform’s servers, and it almost certainly is in all three places, it is responsive to a standard document request.

      Subpoenas to the platforms themselves are also available. Anthropic, OpenAI, Google, and Microsoft all have policies allowing them to respond to legal process. A subpoena to one of those companies for your conversation history in connection with a specific litigation matter is legally available, and the companies can and do respond. You have no privilege to assert on your own behalf once you have voluntarily shared your information with a third party under a broad privacy policy.

      Device forensics are a factor in cases involving serious enough stakes, including business divorces, high-value commercial litigation, and contested custody matters. Forensic examination of phones and computers can recover browser history, application data, cached AI conversations, and recently deleted files. Deleting your conversation history from the platform’s interface does not ensure it is gone from your devices.

      And many people copy AI outputs into their own documents, including word processing files, emails, and notes sent to colleagues or family members. Those downstream documents carry the same exposure as the original AI conversation and may be even easier for opposing counsel to find because they are in familiar document formats.


      How Legal Practice Management Platforms Are Different

There is a fundamental difference between a general-purpose AI platform and a dedicated legal practice management system, and that difference matters both doctrinally and practically. Platforms built specifically for law firms, such as MyCase, Clio, and Lawft, operate within a legal and contractual framework specifically designed to preserve the privilege that attaches to attorney-client communications and attorney work product. Critically, because these are tools built for lawyers, the work product doctrine applies to materials created within them, and conversations with clients remain protected by the attorney-client privilege.

      When a law firm subscribes to a legal practice management platform, the relationship is governed by a vendor agreement that expressly acknowledges the privileged and confidential nature of the data being stored. These agreements typically include provisions that the vendor will not access, use, or disclose client data except as necessary to provide the service, that all stored communications will be treated as confidential, and that the vendor will notify the firm immediately upon receiving any legal process seeking access to firm data. This is materially different from the privacy policies of consumer AI platforms, which reserve broad rights to access, use, and disclose data.

      The Necessary Third-Party Doctrine

      One of the most important doctrinal tools available to legal practice management platforms is the necessary third-party doctrine. Under this principle, sharing privileged communications with a third party does not waive privilege if that third party’s involvement is necessary to facilitate the attorney-client relationship. Courts have applied this doctrine to legal secretaries, paralegals, co-counsel, expert witnesses, and technology vendors whose services are necessary to the modern practice of law.

      A law firm that stores client files, communications, and case strategy in a practice management platform can argue, with substantial doctrinal support, that the platform is a necessary third party whose access to that information does not constitute a waiver of privilege. The attorney is not choosing to share confidential information with a stranger. The attorney is using a tool that is functionally equivalent to a filing cabinet or a legal pad, except that it happens to be cloud-based. Courts have increasingly accepted this framing, and bar associations in numerous states have issued ethics opinions confirming that attorneys may store client data in cloud-based systems without waiving privilege, provided they take reasonable steps to ensure confidentiality.

      Work Product Protection in a Closed Environment

      Attorney work product doctrine adds another critical layer of protection. Work product includes the mental impressions, legal theories, strategies, and analyses that attorneys develop in anticipation of litigation, and it receives protection that is in some respects stronger than attorney-client privilege itself. When attorneys use a practice management platform to draft case strategy, annotate documents, build timelines, or develop legal theories, that material is work product generated within a closed professional environment.

      The closed environment architecture of legal practice management systems is legally significant. Unlike a general AI tool where inputs and outputs flow through open commercial infrastructure, a practice management platform is a walled system accessible only to the firm and its authorized users. Communications within that system do not flow to third parties in any meaningful sense. The platform operator is not reading your case strategy. It is storing it on your behalf, under a contractual obligation of confidentiality, in an environment designed around the professional obligations that govern legal practice.

How Legal Practice Management Platforms Resist Subpoenas

      When a subpoena is directed at a legal practice management company seeking client data, the company has both contractual and legal grounds to resist disclosure. Contractually, the vendor agreement typically requires the company to notify the law firm immediately and to cooperate with the firm’s efforts to quash or limit the subpoena. Legally, the firm itself can intervene, assert privilege and work product protection over the requested materials, and seek a protective order. Because the data was generated and stored in a privileged context, by attorneys, for clients, in anticipation of legal representation, the privilege assertion is substantively strong.

      This is the procedural posture that makes legal practice management platforms categorically different from general AI tools. When Anthropic or OpenAI receives a subpoena for a user’s AI conversations, the user’s ability to resist is severely compromised because they have already conceded, by agreeing to the platform’s terms, that no confidentiality was promised and that the company may disclose data as required by law. When a dedicated legal practice management platform receives a subpoena, the law firm can step into that proceeding with a well-established privilege claim and litigate it from a position of strength. The platform’s architecture and vendor agreement support that claim rather than undermining it.

      What to Expect Going Forward

      Judge Rakoff’s opinion is methodical and well-grounded in existing law. The three elements of attorney-client privilege have been stable for generations. The work product doctrine has been settled since Hickman v. Taylor in 1947. Rakoff did not invent new rules. He simply asked whether AI conversations fit the existing framework and found that they do not.

      The court did leave one path open: if a lawyer directs a client to use an AI tool as part of the representation, that use might qualify for work product protection. The theory is that the AI would be functioning as the lawyer’s agent, similar to how an accountant or investigator working at counsel’s direction can fall within the work product umbrella. That is a narrow opening, and it requires the attorney to have affirmatively directed the client’s AI use as part of the legal strategy. It does not help anyone who turned to AI on their own before or during a legal matter.

      Benson Varghese is the founder and managing partner of Varghese Summersett, where he has built a distinguished career championing the underdog in personal injury, wrongful death, and criminal defense cases. With over 100 jury trials in Texas state and federal courts, he brings exceptional courtroom experience and a proven record with Texas juries to every case.

      Under his leadership, Varghese Summersett has grown into a powerhouse firm with dedicated teams across three core practice areas: criminal defense, family law, and personal injury. Beyond his legal practice, Benson is recognized as a legal tech entrepreneur as the founder of Lawft and a thought leader in legal technology.

      Benson is also the author of Tapped In, the definitive guide to law firm growth that has become essential reading for attorneys looking to scale their practices.

Benson also serves as an adjunct faculty member at Baylor Law School.
