[Image: a person photographing a legal document while an AI assistant runs on a laptop in a dimly lit office. Generated by Grok.]

According to Clio's 2025 Legal Trends Report, 79% of legal professionals use AI in their practice, up from just 19% in 2023. The productivity gains are real, but so are the risks: ILTA's 2025 Technology Survey found that only 45% of law firms have an official policy on generative AI use. That gap between adoption and governance leaves partners exposed to a growing risk called shadow AI.

What Is Shadow AI?

If you're a partner, you just see things getting done. You're in court, on calls with clients, reviewing work product, and managing the business, all at the same time. What you don't see is that your associate is pasting a deposition summary into ChatGPT for a quick analysis while your paralegal is running client intake documents through AI to pull key dates. This is shadow AI. Are your employees being reckless or lazy? Opinions may vary, but the reality is they're trying to keep their heads above water, and the water is getting deeper by the hour. The tech industry threw them a life raft and they're using it. The problem is that without governance, that life raft leaves the firm exposed.

The Ban Strategy Doesn't Work

The initial reaction is to ban AI outright. This seems effective on the surface, but the defining problem with shadow AI is that you don't know it's being used. Blocked ChatGPT on your network? Your associate just took a photo of the document, uploaded it to their personal account, and typed the output back into the document on your network. Fired the paralegal who kept using Claude after you told them not to? They got a job at another firm that allows AI, and they're now more productive. And when your clients find out they can get faster, better service somewhere else, don't be surprised when they follow. The reality is that you can't enforce an AI ban without watching over your staff's shoulders every minute of the day. Rather than turning the firm into a totalitarian dictatorship, partners should be open to AI use, so long as there are controls in place.

Why This Matters Now, Not Later

On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled from the bench in United States v. Heppner that approximately 31 documents a defendant created using Anthropic's Claude chatbot were protected by neither attorney-client privilege nor the work-product doctrine. The court found that Claude is not an attorney (who knew?), that the platform's terms did not ensure confidentiality sufficient to support privilege, and that the work was not directed by counsel. A written memorandum followed on February 17, 2026, and the decision has been widely described as the first federal court opinion to directly address AI-generated materials and privilege.

The ruling was narrow, and it applied to a specific set of facts involving a defendant using a consumer AI tool on his own initiative. But the reasoning has broad implications. As multiple law firm alerts have noted, the same logic applies to any situation where staff use consumer AI tools on confidential matters without attorney direction or confidentiality safeguards.

That same day, the Eastern District of Michigan reached the opposite conclusion in Warner v. Gilbarco, denying discovery into a pro se plaintiff's ChatGPT-assisted litigation drafting and holding that the requested AI-related materials were protected by the work-product doctrine. The difference came down to who directed the AI use, the context in which it was used, and the confidentiality posture.

Two courts, same day, opposite outcomes. But both rulings agree on one thing: the details of how AI is used determine whether protections apply.

What Governance Actually Looks Like

AI governance for a law firm doesn't mean enterprise software or overpriced consultants in cheap suits showing you a fancy PowerPoint presentation. At its core, it means answering four simple questions:

  1. What tools are approved for use on client matters? This is an approved-tools list. It can be as simple as a one-page policy stating which AI platforms meet the firm's confidentiality and security requirements and which do not.
  2. Who is responsible for supervising AI-assisted work? Under the existing rules of professional conduct (see ABA Model Rule 5.3), attorneys are already responsible for supervising the work of non-lawyers, and the same duty applies to AI-assisted work. Firms need a simple, effective way to see how and when AI is used, which means a governance process and a person who knows how to interpret the usage data.
  3. Is there a record of what AI was used and how? An audit trail doesn't need to be sophisticated. At a minimum, it means logging which tool was used, on which matter, and what data was involved. This can be done with a Word document or an Excel sheet, though governance software that automates the logging and provides a live summary dashboard is worth considering. A minimal sketch of such a log appears after this list.
  4. Does the firm's AI setup maintain confidentiality? After Heppner, this is the question that matters most. Enterprise AI plans with signed data-processing agreements, no-training clauses, and zero data retention are one approach. Some firms are going further by running AI on hardware they own, often called private AI or local AI, an option showing up more frequently in bar association discussions and law firm guidance since Heppner. The concept is straightforward: if the AI runs on a device in your office and never connects to an external server, the third-party confidentiality problem that sank privilege in Heppner doesn't exist. A second sketch after this list shows what that looks like in practice.
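
To make the audit-trail idea concrete, here is a minimal sketch in Python of what automated logging could look like. Everything in it is illustrative: the file path, the column names, and the log_ai_use helper are assumptions for this example, not a standard schema or any real product's API.

```python
# Minimal AI-use audit log: appends one row per AI interaction to a CSV file.
# The file path, column names, and helper below are illustrative only.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_usage_log.csv")  # in practice, a shared or backed-up location
FIELDS = [
    "timestamp_utc", "user", "supervising_attorney",
    "tool", "matter_id", "data_description", "output_use",
]

def log_ai_use(user, supervising_attorney, tool, matter_id,
               data_description, output_use):
    """Append one AI-usage entry, writing the header row if the file is new."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "supervising_attorney": supervising_attorney,
            "tool": tool,
            "matter_id": matter_id,
            "data_description": data_description,
            "output_use": output_use,
        })

# Example entry: an associate summarizing a deposition with an approved tool.
log_ai_use(
    user="associate_01",
    supervising_attorney="partner_03",
    tool="ApprovedTool-Enterprise",  # placeholder name, not a real product
    matter_id="2026-0142",
    data_description="deposition transcript, client identifiers redacted",
    output_use="first draft of summary, reviewed by attorney before use",
)
```

Even a log this simple records the facts the Heppner and Warner courts turned on: which tool was used, on which matter, with what data, and under whose direction.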

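And to show what "never connects to an external server" means in practice, here is a minimal sketch of querying a locally hosted model. It assumes a local runtime such as Ollama, which serves models over the machine's loopback address by default; the model name and prompt are illustrative.

```python
# Minimal sketch of querying a locally hosted model so no client data leaves
# the office. Assumes a local runtime such as Ollama listening on its default
# localhost port (11434); the model name below is illustrative.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # loopback only

def ask_local_model(prompt, model="llama3"):
    """Send a prompt to the on-premises model and return its text response."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example: pulling key dates from intake notes without any third-party transfer.
print(ask_local_model("List the key dates in this intake note: ..."))
```

Because the request never leaves localhost, there is no third party to whom confidential material is disclosed, which is precisely the issue the Heppner court focused on.
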
The Question to Sit With

Every firm is going to arrive at AI governance at some point. The ones doing it now are building policies, documentation, and infrastructure before the next ruling, bar opinion, or client question forces the issue. If a client asked today, "How does your firm handle AI on my matters? What tools do you use? Where does my data go? Can you prove it?", what would the answer be?


This is not legal advice. Consult your bar association or ethics counsel for guidance specific to your jurisdiction and practice.

Jordan Crooms is the founder of Alethekor. We build AI systems that run on hardware in your office: no cloud, no third parties, full audit trail. If you want to talk about what AI governance looks like for your firm, reach out at jordan@alethekor.com.

Sources

  1. Clio, "2025 Legal Trends Report" — lawnext.com
  2. ILTA, "2025 Technology Survey" — iltanet.org
  3. United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. 2026) — akingump.com
  4. Warner v. Gilbarco, Inc., No. 2:24-cv-12333 (E.D. Mich. 2026) — dwt.com
  5. ABA Model Rules, Rule 5.3 — americanbar.org