The CIO of a mid-market financial services firm thought she had a handle on AI adoption. Her team had sanctioned three tools, trained 200 employees, and built a governance policy around them. Then she ran an AI visibility audit. The audit found 23 AI tools running across the organization — more than seven times the number her team had sanctioned. Customer service had adopted a chatbot through a free trial. Marketing was using three different content generators. Two engineering teams were running code assistants that had never been security-reviewed. And an entire business unit had been piping client data through an AI summarization tool that stored data on external servers.
She’s not unusual. According to the Torii 2026 Benchmark Report, 84% of organizations consistently discover more AI tools than expected during audits. And 31% find new unsanctioned tools every single month.
This is the SEE problem — the first and most foundational step in the SEE, MEASURE, DECIDE, ACT framework for proving AI ROI. You cannot measure what you cannot see. And in most enterprises today, the AI landscape is far larger, more fragmented, and more exposed than anyone in the C-suite realizes.
The Visibility Crisis by the Numbers
The scale of unsanctioned AI usage has grown faster than most security and IT teams anticipated. A 2025 UpGuard study found that more than 80% of workers — including nearly 90% of security professionals — use unapproved AI tools on the job. That last part bears repeating: the people responsible for protecting the organization are themselves using tools that haven’t been vetted.
Deloitte’s 2026 State of AI survey tells the supply side of this story. Workforce access to AI tools expanded by 50% in a single year, from fewer than 40% of employees to roughly 60%. But that figure only counts sanctioned tools. The actual adoption rate — including shadow AI — is far higher. Research from Portal26 found that 73.8% of ChatGPT accounts used in the workplace are non-corporate accounts that lack enterprise security and privacy controls. For Gemini, that figure is 94.4%.
The result is an AI ecosystem that leadership cannot see, security cannot govern, and finance cannot account for. Only 38% of organizations report knowing which AI applications their employees actually use.
What Invisibility Actually Costs
The cost of this visibility gap isn’t hypothetical. IBM’s 2025 Cost of a Data Breach report found that breaches involving shadow AI add $670,000 to the average breach cost compared to organizations with low or no shadow AI exposure. The average organization now experiences 223 AI-related data security incidents per month — incidents that range from sensitive data shared with external AI services to policy violations that create compliance exposure.
But security costs are only one dimension. Hitachi Vantara research estimates that data infrastructure issues — many driven by ungoverned AI tooling — contribute to $108 billion in wasted annual AI spend across enterprises. When teams adopt AI tools independently, they duplicate capabilities, fragment data flows, and create redundant infrastructure costs that nobody tracks because nobody can see the full picture.
Then there’s the opportunity cost. If you don’t know what AI your organization is running, you cannot measure whether it’s working. You cannot identify which tools deliver value and which ones burn budget. You cannot rationalize spending, consolidate licenses, or negotiate enterprise agreements. And you cannot answer the one question the board increasingly cares about — what’s the return on our AI investment — because you don’t even know what the investment includes.
Why Traditional Discovery Fails
Most IT organizations approach AI discovery the same way they approach software asset management: check the procurement records, run a network scan, send out a survey. None of these methods work for AI.
Procurement records miss AI tools that employees adopt through free tiers, browser extensions, or personal accounts. Network scans miss browser-based AI tools that look like regular web traffic. Surveys depend on employees self-reporting usage they may not think of as “AI” — or usage they know isn’t sanctioned and don’t want to disclose.
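To make the network-scan blind spot concrete, here is a toy sketch of a domain-match scan against proxy logs. Every domain, log entry, and list here is hypothetical; real discovery tooling is far more sophisticated, but the structural limitation is the same:

```python
# Toy sketch: flag outbound requests whose destination is on a known-AI list.
# All domains and log entries below are hypothetical illustrations.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "api.anthropic.com"}

def flag_ai_traffic(log_entries):
    """Return entries whose destination matches a known AI domain."""
    return [e for e in log_entries if e["domain"] in KNOWN_AI_DOMAINS]

logs = [
    {"user": "alice", "domain": "chat.openai.com"},     # caught: known AI tool
    {"user": "bob",   "domain": "mail.example.com"},    # missed: embedded AI features
    {"user": "carol", "domain": "newtool-ai.example"},  # missed: not on the list yet
]

flagged = flag_ai_traffic(logs)
print([e["domain"] for e in flagged])  # only the already-known domain is caught
```

The blind spots are built in: the scan catches only domains already on the list, so brand-new tools and AI features embedded inside already-approved services pass through unflagged.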
The deeper problem is velocity. According to industry surveys, 83% of organizations report that employees install AI tools faster than security teams can evaluate them. A quarterly discovery audit is fundamentally mismatched against a weekly adoption cycle.
And the challenge is getting more complex, not simpler. Embedded AI features — AI capabilities built into tools employees already use, like email clients, CRM platforms, and productivity suites — fly under the radar entirely. An employee isn’t “adopting a new AI tool” when their email client adds AI-powered reply suggestions. But the data exposure risk is real, and the cost shows up in per-seat licensing increases that finance sees but can’t attribute.
What a Real AI Visibility Audit Looks Like
A proper AI visibility audit goes beyond inventory. It answers four questions that are prerequisites to everything else in the AI ROI playbook:
What AI is running? A complete catalog of AI tools, models, and capabilities across the organization — including assistive AI (copilots, chatbots, content generators), agentic AI (autonomous agents executing workflows), and embedded AI (features within existing software). This isn’t a one-time list. It’s a continuously updated inventory that captures new tools as they appear.
Who is using it? Usage patterns by team, department, role, and individual. Not to police employees, but to understand where AI adoption is concentrated, where training gaps exist, and where usage patterns suggest risk or opportunity. If 60% of your customer success team uses an AI tool daily but 5% of your sales team does, that’s a signal worth understanding.
What data is it touching? The critical question from both a security and compliance perspective. Which AI tools have access to customer data, financial records, intellectual property, or regulated information? Are employees sharing sensitive data with external AI services? The shadow AI risk isn’t just that unauthorized tools exist — it’s that unauthorized tools often handle the most sensitive data, because employees turn to AI precisely when they’re working with complex, high-value information.
What is it costing? The total cost of AI across the organization, including sanctioned licenses, API consumption, infrastructure, and the hidden costs of shadow AI — duplicate tools, wasted capacity, and the remediation costs when things go wrong. Until you can see the full cost picture, you cannot calculate ROI.
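The four questions above can be captured as a single inventory record per tool. The sketch below uses hypothetical fields, tool names, and figures — it illustrates the shape of the data, not any specific product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in a continuously updated AI inventory (hypothetical schema)."""
    name: str
    sanctioned: bool                                   # What AI is running, and is it approved?
    active_users: int                                  # Who is using it?
    data_classes: list = field(default_factory=list)   # What data is it touching?
    monthly_cost: float = 0.0                          # What is it costing?

inventory = [
    AIToolRecord("code-assistant",  True,  120, ["source code"], 2400.0),
    AIToolRecord("free-summarizer", False,  35, ["client data"],    0.0),
    AIToolRecord("content-gen",     False,  12, ["marketing"],    300.0),
]

# Surface the highest-risk combination: unsanctioned tools touching sensitive data.
risky = [t.name for t in inventory
         if not t.sanctioned and "client data" in t.data_classes]
print(risky)
```

Note what the record makes visible that a procurement spreadsheet would not: the free-tier tool costs nothing on paper, yet it is the one handling client data without sanction.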
From Visibility to Value
The SEE step isn’t an end in itself. It’s the foundation that makes everything else possible. Once you have visibility into your AI ecosystem, you can move to MEASURE — connecting AI activity to business outcomes. You can identify which tools are delivering value and which are creating risk. You can rationalize spending, consolidate tooling, and negotiate from a position of knowledge rather than ignorance.
The enterprises that close the AI revenue gap — the 20% who prove AI drives results, according to Deloitte’s 2026 survey — start here. Not with measurement. Not with governance. With visibility. Because every dollar of AI ROI you can prove is built on a foundation of knowing what AI you have, who’s using it, what data it touches, and what it costs.
The visibility audit typically reveals three immediate value opportunities: tool consolidation (reducing redundant AI spending by 20-30%), risk reduction (identifying unvetted tools handling sensitive data), and measurement readiness (instrumenting high-value AI workflows for ROI tracking). Most enterprises find that the audit pays for itself through spend rationalization alone.
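As a back-of-the-envelope illustration of the consolidation math (all tool names and prices below are made up), suppose the audit groups tools by capability and keeps only the cheapest tool in each duplicated category:

```python
from collections import defaultdict

# Hypothetical monthly spend, grouped by the capability each tool provides.
tools = [
    ("writer-a",     "content generation", 500.0),
    ("writer-b",     "content generation", 300.0),
    ("helper-x",     "code assistance",   2000.0),
    ("summarizer-a", "summarization",      400.0),
    ("summarizer-b", "summarization",      200.0),
]

by_capability = defaultdict(list)
for name, capability, cost in tools:
    by_capability[capability].append(cost)

total = sum(cost for _, _, cost in tools)
# Consolidate: retain one (the cheapest) tool per capability.
consolidated = sum(min(costs) for costs in by_capability.values())
savings = total - consolidated
print(f"{savings / total:.0%} of AI spend recovered")  # prints "26% of AI spend recovered"
```

Even in this tiny example, eliminating duplicate capabilities recovers roughly a quarter of total spend — consistent with the 20–30% range the audit typically surfaces.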
Ready to see what AI is actually running across your organization? Schedule a demo and we’ll show you how Olakai provides unified visibility across your entire AI ecosystem — sanctioned and shadow, assistive and agentic.
