What Is Happening

Shadow AI is not a future risk. It is a current condition.

Shadow IT has existed for decades. Employees use personal Dropbox accounts to share work files because it is easier than the approved solution. They use personal Gmail when the work email system is slow. They work around friction.

Shadow AI is the same behavior in a new context, and it is moving faster because the tools are free, incredibly useful, and right there in the browser.

A paralegal drafts a demand letter by pasting case facts into ChatGPT. A staff accountant asks Claude to help analyze a client's financial situation. An associate attorney summarizes a 200-page deposition transcript by uploading it to an AI tool. None of this was approved. None of it was logged. All of it involved client data leaving your control.

How common is this?

Multiple enterprise surveys from 2024 and 2025 found that between 55% and 75% of knowledge workers report using AI tools not sanctioned by their employer. In professional services firms with fewer than 50 people, there is typically no one watching for this behavior at all.

The Actual Risk

Three distinct problems. Not just one.

The risk is not that an AI is going to do something malicious with client data. The risk is more practical than that, and it comes from three separate directions.

1. Data is leaving your environment

When an employee pastes client information into a consumer AI tool, that data is transmitted to a third-party server. Depending on the tool and the user's account settings, that data may be used to train future models, retained by the vendor, or accessible to vendor employees.

OpenAI, Anthropic, Google, and Microsoft all have commercial and enterprise tiers with stronger data handling commitments. The free consumer tiers have different terms. Most employees are using the free tier.

A real pattern

A staff member at a small CPA firm uses their personal ChatGPT free account to help draft a client letter. The client name, financial figures, and tax situation are included in the prompt. That data is now in OpenAI's systems under consumer terms of service. The firm has no record that this happened.

2. Attorney and CPA professional responsibility rules apply

This is where shadow AI becomes a bar association issue, not just an IT issue.

The Texas Disciplinary Rules of Professional Conduct require attorneys to take reasonable measures to prevent unauthorized disclosure of client information. Texas Rule 1.05 covers confidentiality, and Rule 5.03 extends the duty to supervision of non-lawyer assistants. If a paralegal is sending client data to a consumer AI tool and you did not know, that is a supervision and confidentiality exposure.

Multiple state bars have issued formal guidance on AI use in 2024 and 2025. The consistent thread is that lawyers remain responsible for client data regardless of what tool handled it.

For CPAs, the AICPA's Code of Professional Conduct covers confidential client information under ET Section 1.700. Using tools that may retain or process that information without a proper data processing agreement is a problem.

The insurance angle

Several cyber insurance and malpractice carriers have begun asking specifically about AI tool usage policies on renewal questionnaires. If you have no policy and no controls, you are disclosing that you have no visibility into how client data is handled by your staff.

3. The output problem is separate from the data problem

Attorneys and CPAs are professionals. Their work product carries liability. AI tools hallucinate. They confidently produce incorrect citations, wrong numbers, and fabricated references.

When an employee uses an unsanctioned AI tool and incorporates that output into work product without disclosing it or carefully verifying it, the firm has a liability exposure. This is documented. Attorneys have been sanctioned for submitting AI-generated briefs with fictional case citations.

Shadow AI compounds this because there is no process for review. Approved tools with defined workflows can include verification steps. Rogue tool use in someone's personal browser has no oversight at all.

The Right Response

Banning AI tools does not work. Governing them does.

The instinct to issue a blanket prohibition is understandable. It is also ineffective. Employees who find AI genuinely useful will continue to use it regardless of policy. You will just lose visibility into the fact that it is happening.

The firms that handle this well do three things.

Get visibility first

Before you can govern AI use, you need to know what is actually happening. In Microsoft 365 environments, Microsoft Defender for Cloud Apps (included in Microsoft 365 E5 and available as an add-on) can surface unsanctioned application usage across your tenant. You can see which AI tools employees are accessing through company devices and connections, how often, and by whom.
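
There is also a trace inside your own tenant: when staff sign in to an AI tool with their work Microsoft account, that consent shows up as a service principal and an OAuth permission grant, both readable through the Microsoft Graph API. Below is a minimal sketch in Python, assuming you already have a Graph access token with Application.Read.All permission; the keyword list is illustrative, and this only surfaces work-account sign-ins, not personal free-tier accounts.

```python
# Inventory Entra ID consent grants that point at AI tools.
# Assumes: a Graph access token with Application.Read.All
# (for example, from an app registration). Token value is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}

# Illustrative keywords -- extend with the tools you care about.
AI_KEYWORDS = ("chatgpt", "openai", "claude", "anthropic", "gemini", "perplexity")

def paged(url):
    """Yield items from a Graph collection, following @odata.nextLink."""
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

# 1. Service principals whose display name suggests an AI tool.
ai_apps = {
    sp["id"]: sp.get("displayName") or ""
    for sp in paged(f"{GRAPH}/servicePrincipals?$select=id,displayName")
    if any(k in (sp.get("displayName") or "").lower() for k in AI_KEYWORDS)
}

# 2. Delegated permission grants (user consents) pointing at those apps.
for grant in paged(f"{GRAPH}/oauth2PermissionGrants"):
    if grant["clientId"] in ai_apps:
        print(ai_apps[grant["clientId"]], grant.get("consentType"), grant.get("scope"))
```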

Even without Defender for Cloud Apps, web content filtering in Microsoft Defender for Endpoint, or Entra Internet Access policies enforced through Conditional Access, can block specific domains or categories of consumer AI services on managed devices. You may not want to block them all, but you should know what is in use before deciding.
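
Even without any Microsoft licensing, a short script over an exported firewall, proxy, or DNS filter log answers the "what is in use" question. A minimal sketch, assuming a CSV export with user and domain columns; the file name and column names are placeholders that vary by vendor, and the domain list is illustrative, not exhaustive.

```python
# Count which users are reaching consumer AI domains, from a log export.
import csv
from collections import Counter

# Illustrative consumer AI domains -- extend as needed.
AI_DOMAINS = (
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
)

hits = Counter()
# Assumed file name and column names ("user", "domain") -- adjust to
# whatever your firewall, proxy, or DNS filter actually exports.
with open("dns_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["domain"].lower()
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            hits[(row["user"], domain)] += 1

for (user, domain), count in hits.most_common():
    print(f"{user:<30} {domain:<25} {count}")
```

Run against a week or two of logs, this gives you the picture you need before deciding what to sanction or block.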

Give people a sanctioned path

If you want to reduce shadow AI, give employees an approved tool with appropriate data protections. Microsoft Copilot for Microsoft 365 keeps prompts, responses, and the Microsoft 365 content it processes within your tenant's compliance boundary, under your existing data handling terms; it does not send that content to consumer AI infrastructure or use it to train foundation models.

For firms that want to allow tools like ChatGPT or Claude for specific use cases, enterprise accounts are available whose terms exclude customer data from model training, with zero-retention options in some cases. The data handling terms are materially different from the consumer free tier, and that difference matters for professional responsibility purposes.

Write a policy and actually communicate it

A one-page AI acceptable use policy does not need to be comprehensive. It needs to answer three questions for your staff: which tools are approved, what kinds of data can and cannot be used with AI tools, and what to do when they are unsure.

The policy is not primarily a legal document. It is a communication. Most employees using AI tools with client data are not doing it with bad intentions. They are solving a problem with a convenient tool. They need to know the firm has thought about this and has a preferred path.

The question is not whether your staff is using AI. They are. The question is whether you know which tools, with which data, under which terms.

Where to Start

The four things to do this month.

Shadow AI is not a reason to panic. It is a reason to get ahead of the problem before a claim, a bar complaint, or an insurance renewal forces the conversation. Concretely: get visibility into which AI tools are actually in use, stand up a sanctioned tool with appropriate data terms, write and communicate a one-page acceptable use policy, and find out what your cyber and malpractice carriers will ask about AI at your next renewal.

Want to see what AI tools are in use at your firm?

A Lowery Solutions assessment includes an AI exposure review: what tools are in use, where client data may be going, and what controls are in place.

Built for Austin professional services firms navigating AI without the hype.