An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and that most of them wouldn’t stop even if it were banned.
The problem is twofold: easy access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek out their own AI tools to improve their personal efficiency and maximize their potential for promotion.
The process is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.” If the official tools aren’t easy to access or feel too locked down, they’ll use whatever is available, which is often just an open tab in their browser.
There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex, but they combine a reluctance to admit that their efficiency is AI-assisted rather than natural with the knowledge that the use of personal Shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of Shadow AI use, or of the risks it may present.
Harmonic has analyzed (PDF) 176,460 AI prompts from a sample of 8,000 end users within its customer base, collected during Q1 2025 (and compared with a similar exercise in Q4 2024). The data was acquired via the Harmonic Protection browser extension deployed by its customers. The analysis does not indicate the full extent of Shadow AI usage, since the capture excludes use of gen-AI via native mobile apps or API integrations outside the browser – but it does provide insight into how employees may be using Shadow AI.
ChatGPT is the dominant gen-AI tool used by employees. Forty-five percent of data prompts occurred via personal accounts (such as Gmail). “Evidently, convenience undermines corporate governance and security,” comments the analysis. Image files dominate the file uploads to ChatGPT, accounting for 68.3% of them.
The key purpose of the analysis, however, is not to show which Shadow AI tools are being used, but what risks this use introduces.
For example, the analysis registers the growing presence of the new Chinese AI models: most obviously DeepSeek, but now also Baidu Chat, Qwen and others. Seven percent of employees are already using these Chinese AI models. It would be foolish to assume that any data input to a Chinese AI will be unavailable to the Chinese Communist Party, and naïve to believe that the CCP would not use such data to further the aims of the Chinese nation.
Overall, there has been a slight reduction in the frequency of sensitive prompts since Q4 2024 (down from 8.5% to 6.7% in Q1 2025). However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security data (from 6.9% to 2.1%) have all fallen. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (from 5.6% to 10.1%) have both increased. PII, a new category introduced in Q1 2025, was tracked at 14.9%.
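To make the arithmetic behind such category shares concrete, here is a minimal sketch of how a keyword-based classifier might tally captured prompts into risk categories. It is purely illustrative: Harmonic’s actual detection methods are not disclosed here, and the category names and keyword rules below are invented for the example.

```python
# Illustrative only: a toy keyword classifier for tallying sensitive-prompt
# categories from captured prompt text. Real DLP engines use far more
# sophisticated detection; these rules are hypothetical placeholders.
import re
from collections import Counter

CATEGORY_RULES = {
    "customer_data": re.compile(r"\b(customer|account number|order id)\b", re.I),
    "employee_data": re.compile(r"\b(salary|payroll|employee id)\b", re.I),
    "legal_financial": re.compile(r"\b(contract|invoice|revenue|merger)\b", re.I),
    "security": re.compile(r"\b(password|api key|vpn config)\b", re.I),
    "sensitive_code": re.compile(r"\b(proprietary|internal repo)\b", re.I),
    "pii": re.compile(r"\b(ssn|passport|date of birth)\b", re.I),
}

def classify(prompt: str) -> list[str]:
    """Return every risk category whose rule matches the prompt."""
    return [name for name, rule in CATEGORY_RULES.items() if rule.search(prompt)]

def tally(prompts: list[str]) -> dict[str, float]:
    """Percentage share of each category among all sensitive matches."""
    counts = Counter(cat for p in prompts for cat in classify(p))
    total = sum(counts.values()) or 1  # avoid division by zero
    return {cat: 100 * n / total for cat, n in counts.items()}

if __name__ == "__main__":
    sample = [
        "Summarize this customer account number 4521 complaint",
        "Draft an invoice for the merger contract",
        "What is a good pasta recipe?",  # benign, matches nothing
    ]
    for cat, pct in tally(sample).items():
        print(f"{cat}: {pct:.1f}%")  # e.g. customer_data: 50.0%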
Most of this data is going to ChatGPT (79.1%), with 21% going to ChatGPT’s free tier, where prompts can be retained and used for training purposes. Next in popularity is Google Gemini, followed by Perplexity.
Harmonic suggests that its Q1 2025 analysis underscores the need for enterprises to move from passive observation of Shadow AI to proactive control and intelligent enforcement. The intent should not be to eliminate employees’ personal initiative in using AI, but to ensure its safe and secure use. Targeted training and coaching on the safe use of AI are imperative.
“This isn’t a fringe issue,” says Marriott. “It’s mainstream. It’s growing. And it’s happening in nearly every enterprise, whether or not there’s a formal AI policy in place.”
Related: Aurascape Banks Hefty $50 Million to Mitigate ‘Shadow AI’ Risks
Related: How to Eliminate “Shadow AI” in Software Development
Related: Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother
Related: Shadow AI – Should I be Worried?