
The Big Idea: Shadow AI isn’t just a sign of control gaps

David Salierno | Dec. 08, 2025

When Dan faced a big deadline, he had a choice: work late — and miss his daughter’s ballet recital — or enlist help from the new kid on the block, ChatGPT. While his employer didn’t officially allow employees to use the breakout generative artificial intelligence (GenAI) tool, he’d recently set up a personal account on the platform. Years ago, Dan had used the cloud-based storage service Dropbox at work before it was allowed. IT eventually “blessed” Dropbox, and it became a mainstay in the office. Dan didn’t see how ChatGPT was any different.

Dan is not alone. In their State of AI in Business 2025 report, Massachusetts Institute of Technology (MIT) researchers found that workers in 90% of the 300 companies studied were using personal chatbot accounts for daily tasks, while only 40% of those companies had official large language model (LLM) subscriptions.

Once employees start using a technology tool at a company, it can be hard to contain. That’s in part what drove the rapid adoption of Dropbox, whose marketing philosophy was described as “land and expand.” The unauthorized adoption of software-as-a-service apps like Dropbox — called “shadow IT” — was a predecessor to today’s “shadow AI,” and both have spurred ground-up innovation.

According to Emil Holmegaard, solution and delivery lead at consultancy 7N in Copenhagen, Denmark, shadow AI occurs when employees use a tool they can’t access through official channels. He emphasizes the significant risks of shadow AI but also sees potential rewards.

Shadow IT has pushed organizations toward new solutions that increase productivity — so can shadow AI. Still, most organizations are not handling it in a way that both discourages unauthorized use and encourages responsible innovation. “I haven’t seen many companies that have an AI strategy and a policy for what you are — and aren’t — allowed to do,” he says.

Shadow AI can create substantial headaches for a company, and the risks merit serious attention. According to IBM’s Cost of a Data Breach Report 2025, breaches involving shadow AI cost an average of $670,000 more than incidents that don’t involve it, while a recent Cybernews survey found that 59% of employees use unapproved AI tools.

But treating shadow AI merely as a threat to quash could stifle potential progress. When workers “vote with their keystrokes,” they can reveal both where governance lags and where innovative ideas within the organization may lie, Holmegaard says.

Shadowy Risks

Unauthorized use of AI creates distinct security and compliance risks. As Holmegaard explains, it can expose sensitive data to external platforms beyond the organization’s control. “Suppose you need to prepare a report, and you ask ChatGPT for help by describing your circumstances and the details you’re looking for,” he says. “Based on your prompts, OpenAI [the company that created and maintains ChatGPT] has everything it needs to understand what’s going on inside your company.” Perhaps even more concerning, customers’ or employees’ personal data could be included in a prompt. And once that data leaves an employee’s screen, it sits on the provider’s servers, where it may be retained indefinitely or even used to train future models.

Hallucinations, the tendency of AI models to produce false or fabricated information, are also a big concern, as are potential security gaps, biased outputs, and other ethical issues. When employees operate outside company policy, they expose the organization to significant liabilities.

When Employees Go Rogue

So why is this happening in workplaces? “You might think people turn to unauthorized tools, especially AI, because they’re lazy,” Holmegaard says. “But it’s actually because they want to do the best job possible.”

Employees may not even realize what they’re doing isn’t sanctioned by the organization, he adds. And that could signal another risk to the organization — failure to provide employees the tools they need to work effectively. In other words, when employees go outside of the official IT environment for better tech, it could be a sign that the organization needs to innovate.

The MIT study noted that AI tools’ ease of use allows users to “brew their own” solutions to problems quickly, whereas enterprise implementation is often slow and results in inflexible platforms. While 60% of the companies studied had investigated purpose-built enterprise AI tools and 20% had tested them, only 5% had successfully implemented them. Meanwhile, many employees operating under the radar are building personalized solutions that work for them.

Reconciling Control and Innovation

Companies face a dilemma: Move too slowly, and employees will find their own workarounds; move too quickly, and they risk unleashing powerful tools without appropriate guardrails. Holmegaard suggests finding a balance.

“One way to give employees AI access without endangering company systems is to set up ‘sandboxes’ where they can use AI outside the production environment,” he explains. And if the AI needs to interact directly with the production environment, the organization could create a test instance and give access to that.
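To make the sandbox idea concrete, here is a minimal sketch in Python, assuming a hypothetical internal chat endpoint and a simple requests-based client. The URLs, token variable, and response format are illustrative placeholders, not any specific vendor’s API; the point is that everyday prompts default to an isolated instance, and production access requires a deliberate choice.

```python
# A minimal sketch of the "sandbox" approach Holmegaard describes: employee prompts
# are routed to an isolated, non-production model endpoint rather than a public chatbot.
# Endpoint URLs, token variable, and response format are hypothetical placeholders.
import os
import requests

SANDBOX_URL = "https://ai-sandbox.internal.example.com/v1/chat"  # isolated test instance (hypothetical)
PROD_URL = "https://ai.internal.example.com/v1/chat"             # production-connected instance (hypothetical)

def ask_model(prompt: str, environment: str = "sandbox") -> str:
    """Send a prompt to the sandboxed model by default; production access must be explicit."""
    if environment not in ("sandbox", "production"):
        raise ValueError("environment must be 'sandbox' or 'production'")
    url = SANDBOX_URL if environment == "sandbox" else PROD_URL
    response = requests.post(
        url,
        json={"prompt": prompt},
        headers={"Authorization": f"Bearer {os.environ['INTERNAL_AI_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

if __name__ == "__main__":
    # Everyday use stays in the sandbox, away from production data and systems.
    print(ask_model("Summarize this quarter's audit findings template."))
```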

Showing openness to the tech can coax employees out of the shadows when supported by effective policies around what can, and cannot, be shared with the AI. And while employees like Dan will find ways around obstacles, inviting them to become collaborators in innovation can be productive. “Especially if you see someone who’s increased their work output or efficiency, ask them, ‘Tell us how you did this, so we can get everybody else to move faster, too,’” Holmegaard says.

Organizations can foster AI use and learn from employees’ discoveries while ensuring sensitive data isn’t exposed and the organization isn’t otherwise put at risk.

Lingering in the Shadows

Holmegaard says the use of shadow AI is inevitable. That assertion is supported by research from data security firm Harmonic, which studied 176,000 AI prompts from 8,000 enterprise users. The resulting report, The AI Tightrope: Balancing Innovation and Exposure in the Enterprise, found that more than 45% of “sensitive AI interactions” originated from personal email accounts. Moreover, 21% of the sensitive data was submitted to ChatGPT’s free tier, which can retain prompts to help train the LLM.

But the potential harm from unauthorized AI use may become easier to mitigate as companies’ AI infrastructure evolves. Holmegaard points to an analogy that harks back to the early days of IT. When PCs first moved into offices, they were isolated from each other. Over time, local area networks bound them together, and later, the internet opened them to the outside world, creating new pathways for malware to spread.

Much of the current risk comes from employees using public versions of LLMs, such as ChatGPT, Claude, and Gemini. And while enterprise versions of these platforms can help provide stronger data protections, Holmegaard thinks AI could be made even safer by isolating it further internally. “I see a future where you run small AI models on your own company’s data, largely unconnected to the outside world,” he predicts. “And when there is any external connection, filters and guardrails would determine how — and whether — information is shared.”
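What those filters and guardrails might look like is sketched below in Python; the patterns, routing logic, and model destinations are illustrative assumptions rather than a complete data-loss-prevention setup. The idea is simply that prompts containing anything that looks sensitive never leave the internal environment.

```python
# A minimal sketch of the "filters and guardrails" idea: before a prompt is allowed to
# reach an external AI service, it is checked for obviously sensitive patterns, and
# flagged prompts stay on an internal model. Patterns and rules here are illustrative.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical naming convention
}

def check_outbound_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, a prompt should be kept off external AI services."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def route_prompt(prompt: str) -> str:
    """Decide whether a prompt may go to an external service or must stay internal."""
    findings = check_outbound_prompt(prompt)
    if findings:
        return "Routed to internal model (blocked externally: " + ", ".join(findings) + ")"
    return "Allowed to external service"

if __name__ == "__main__":
    print(route_prompt("Draft a reply to jane.doe@example.com about PROJ-1234 budget overruns."))
    print(route_prompt("Explain the difference between shadow IT and shadow AI."))
```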

For now, shadow AI is here to stay. From the very beginning of the computer revolution, unauthorized experiments by rogue employees have eventually been tamed, accepted, and used to increase productivity. The question for leaders isn’t just how to control shadow AI — it’s whether they’re ready to learn from it.

David Salierno

David Salierno is managing partner at Nexus Brand Marketing in Winter Park, Fla.