CISOs are having to adapt at lightning speed to the rapidly changing AI landscape. DeepSeek is just the latest example of this in practice: a new ‘latest and greatest’ tool emerges and quickly tops the download charts. Employees start using it at work despite a privacy policy that explicitly states all data will be stored in China. Even the Pentagon is forced to tell its employees to stop using it. And DeepSeek is only the first in what will be a long lineup of such AI tools from China and elsewhere.
Unauthorized AI usage is a ticking time bomb. Employees are integrating AI tools into their work, sometimes unknowingly exposing sensitive data to third-party models. And it’s also highly dynamic – a tool that wasn’t considered a risk yesterday may introduce new AI-powered features overnight. So what to do about it?
Both necessary and mandatory
It starts with building an AI asset inventory. Without one, organizations are flying blind, exposing sensitive data and missing critical compliance risks. An inventory is also increasingly mandated: regulatory frameworks and standards such as the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework (AI RMF) treat it as a foundational requirement.
Defining what constitutes AI is challenging. The EU AI Act adopts an extremely broad definition, encompassing nearly everything within its scope. Organizations must determine what applies to them: should they monitor every AI-enhanced feature, or prioritize generative AI tools, large language models, and content creation systems? Narrowing the focus to specific AI categories makes the task far more manageable.
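To make that scoping decision concrete, here is a minimal sketch of what a category-scoped inventory record could look like. The categories and field names are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a scoped inventory record, assuming the organization
# narrows tracking to a few AI categories. Names are illustrative only.
from dataclasses import dataclass

# Example scoping decision: track these categories rather than every
# AI-enhanced feature in every product.
TRACKED_CATEGORIES = {"generative_ai", "llm_api", "content_creation"}

@dataclass
class AIAsset:
    name: str               # e.g. "ChatGPT"
    vendor: str             # e.g. "OpenAI"
    category: str           # ideally one of TRACKED_CATEGORIES
    trains_on_inputs: bool  # does the vendor train models on submitted data?
    data_residency: str     # where prompts and outputs are stored

    def in_scope(self) -> bool:
        return self.category in TRACKED_CATEGORIES
```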
Identifying shadow AI
It’s not just regulation: third-party vendor assessments also increasingly require AI inventories, often referring to them as “audits” or “service catalogs.” Beyond compliance, however, organizations cannot establish meaningful governance without a clear understanding of the AI tools employees are actually using. Effective governance goes beyond officially purchased tools; it means identifying the shadow AI that has already become part of daily workflows.
Despite its importance, AI asset tracking remains difficult. Most organizations rely on outdated or ineffective methods to identify AI usage, and traditional IT governance tools fall short.
Six common approaches to cataloging
These are the six approaches I most often see organizations take:
Procurement-Based Tracking – Effective for monitoring new AI acquisitions, but fails to detect AI features added to existing tools or employee use of free tools that never pass through procurement.
Manual Log Gathering – Analyzing network traffic and logs can help identify AI-related activity, though it is difficult, time-consuming and rarely comprehensive (a minimal log-scanning sketch follows this list).
Identity and OAuth – Reviewing access logs and OAuth grants from providers like Okta or Entra can reveal which AI applications employees have signed into, at least where those sign-ins flow through the corporate identity provider (see the second sketch below).
Cloud Access Security Brokers (CASB) and DLP – Solutions like Zscaler and Netskope offer some visibility through their limited AI ‘categories’, but enforcing policies remains a challenge.
Cloud Security Posture Management (CSPM) – Wiz and others can provide good insight into AI use on AWS and Google Cloud, though visibility stops at those cloud environments.
Extending Existing Inventories – Classifying AI tools by risk keeps them aligned with enterprise governance, but AI adoption moves faster than manual inventories can be updated.
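As an example of the manual log-gathering approach, here is a minimal Python sketch that flags requests to known AI services in a proxy log. The log path and the domain list are assumptions to adapt to your environment; production tooling would cover far more services and log formats.

```python
# Minimal sketch: count requests to known AI services in a proxy access log.
# Both the log path and the domain list are assumptions; extend them for
# your own environment.
from collections import Counter

# Illustrative starter list of AI service domains to watch for.
AI_DOMAINS = {
    "api.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "chat.deepseek.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count log lines that mention a watched AI domain."""
    hits = Counter()
    with open(path) as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    # Path is an example; point it at your own proxy or DNS logs.
    for domain, count in scan_proxy_log("/var/log/squid/access.log").most_common():
        print(f"{domain}: {count} requests")
```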
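And as a sketch of the identity-based approach, the snippet below lists delegated OAuth grants via the Microsoft Graph API and flags app names that look AI-related. It assumes you already hold a Graph access token with sufficient directory read permissions; the keyword list is an illustrative guess, and pagination and error handling are omitted for brevity.

```python
# Sketch: flag AI-looking apps among delegated OAuth grants in Entra ID.
# Assumes a valid Microsoft Graph token with directory read permissions.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
AI_KEYWORDS = ("gpt", "openai", "claude", "copilot", "deepseek", "gemini")  # illustrative

def find_ai_grants(token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Pagination (@odata.nextLink) omitted for brevity.
    grants = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=headers).json()["value"]
    for grant in grants:
        # Resolve the client service principal to a human-readable app name.
        sp = requests.get(
            f"{GRAPH}/servicePrincipals/{grant['clientId']}", headers=headers
        ).json()
        name = sp.get("displayName", "")
        if any(k in name.lower() for k in AI_KEYWORDS):
            print(f"{name}: scopes granted = {grant.get('scope', '').strip()}")
```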
Automating manual efforts
While the methods outlined above can provide differing levels of visibility into AI usage, they are highly manual and time-consuming. An AI asset inventory is more than a compiled list; it's an assessment of the risks that come with AI adoption. Security leaders must ask key questions: Are these tools learning from employee-provided data? What are their data retention policies? How do they address privacy obligations under regulations like GDPR, HIPAA, or others?
That’s why there is a shift underway toward specialized tools with more automated, repeatable methods for cataloging AI use in the enterprise. These tools provide continuous monitoring to detect AI usage, including personal and free accounts, and identify which apps train on your data.
Furthermore, after gaining visibility into AI usage, these tools help organizations safeguard sensitive data from unapproved AI systems. Security teams should evaluate existing protections to prevent employees from unintentionally sharing confidential information. Do employees know which AI tools are safe to use?
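As a rough illustration of what such a safeguard checks for, the sketch below scans outbound text for a few obvious sensitive patterns before it reaches an unapproved AI tool. The patterns are examples only; a real DLP policy needs far broader coverage and context-aware detection.

```python
# Illustrative pre-submission check: scan text for obvious sensitive patterns
# before it is sent to an unapproved AI tool. These few regexes are examples;
# real DLP tooling needs many more patterns plus contextual analysis.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_before_submit(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

findings = check_before_submit("Contact jane@example.com, key AKIAABCDEFGHIJKLMNOP")
if findings:
    print(f"Blocked: contains {', '.join(findings)}")  # Blocked: contains aws_access_key, email
```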
Secure innovation and compliance
AI governance should be seen as an opportunity, not just a risk management task. Organizations that stay ahead in AI tracking can engage employees where they are, identify unmet needs and use cases, and steer them toward secure, approved AI solutions. Security leaders who share this data with AI committees and executives offer valuable insight into real-world AI usage, moving the conversation beyond theoretical policy discussions.
With AI adoption accelerating, organizations that fail to act now risk being left behind. A well-executed AI asset inventory provides visibility, mitigates risk, and establishes a strong foundation for responsible AI governance, enabling CISOs to guide their organizations toward AI adoption that secures both innovation and compliance in the AI era.