Category: Industry Analysis

Market trends, acquisitions, and competitive landscape

  • NVIDIA Surveyed 3,200 Leaders. 30% Still Can’t Measure AI ROI.

    NVIDIA’s 2026 State of AI report reads like a victory lap. Eighty-eight percent of enterprise leaders say AI increased revenue. Eighty-seven percent report cost reductions. Eighty-six percent are growing their AI budgets this year. If you stopped there — and most of the coverage did — you’d conclude that enterprise AI has crossed every finish line that matters.

    But buried in page after page of bullish data from 3,200 enterprise leaders is a finding that should give every CFO pause: 30% of those same leaders admit they still can’t quantify the ROI of their AI investments. Nearly one in three organizations spending more on AI this year cannot say, with any rigor, whether last year’s spending worked.

    Source: All data in this analysis comes from NVIDIA’s 2026 State of AI Report, a survey of 3,200+ enterprise leaders across six industries conducted August–December 2025. Charts reproduced from the original report.

    The Headline Numbers vs. the Fine Print

    The optimistic numbers are real, and they’re striking. Among the 3,200+ respondents surveyed between August and December 2025, 30% reported revenue increases exceeding 10%, with another 33% seeing gains between 5% and 10%. Cost reductions were nearly as widespread — 87% reported savings, and a quarter achieved cuts greater than 10%. In retail and consumer packaged goods, that figure climbed to 37%. These aren’t pilot results. They’re self-reported impacts from organizations with active AI deployments across financial services, healthcare, manufacturing, telecommunications, and retail.

    [Chart: AI impact on annual revenue by industry vertical — 88% of organizations report AI-driven revenue increases; healthcare leads with 44% seeing gains over 10%. Source: NVIDIA State of AI 2026.]
    [Chart: AI impact on annual costs by industry vertical — 87% report cost reductions; retail leads with 37% achieving cuts exceeding 10%. Source: NVIDIA State of AI 2026.]

    The trouble is how those numbers were generated. Self-reported revenue impact and measured revenue impact are different things. When NVIDIA’s own survey shows that 48% of respondents struggle with data sufficiency, 38% lack the AI expertise they need, and 30% can’t quantify ROI at all, you have to ask: how confident are the other 70% in their numbers? The report doesn’t say. And that gap between perception and measurement is where billions of dollars in AI spending disappear every year.

    This isn’t a reason to dismiss the progress. Organizations like PepsiCo, which achieved a 20% throughput increase and 10-15% reduction in capital expenditure through AI-powered digital twins, demonstrate that AI can deliver measurable returns. Clinomic’s medical ICU assistant reduced documentation errors by 68% and clinical workload by 33%. These are concrete, instrumented outcomes. The question is why so many organizations can’t produce similar evidence.

    The Measurement Problem Gets More Expensive Every Quarter

    Budget season makes the stakes concrete. Eighty-six percent of organizations are increasing AI budgets in 2026, with 40% planning increases of 10% or more. North American enterprises are even more aggressive — 48% are pushing budgets up by double digits. These aren’t exploratory budgets. They’re operational commitments that will demand operational proof.

    [Chart: 2026 AI budget changes by region — 86% of organizations are increasing AI budgets; nearly half of North American enterprises plan increases of 10% or more. Source: NVIDIA State of AI 2026.]

    The organizations that instrumented early have a structural advantage. Lowe’s built AI digital twins for more than 1,750 stores and can point to the cost of generating 3D product models — under $1 each — as a clear efficiency metric. Nasdaq constructed a proprietary AI platform and can trace its impact across both internal operations and external products. These companies didn’t stumble into measurement. They designed for it from the start, choosing specific outcomes to track before deploying the technology.

    The 30% without measurement infrastructure face a different budget conversation entirely. When the CFO asks what last quarter’s AI spend produced, “we believe it’s helping” isn’t an answer that protects next quarter’s allocation. As AI budgets grow, the metrics that matter to financial leadership become non-negotiable — and organizations without baselines can’t even begin that conversation.

    Agentic AI Makes the Gap Worse

    Perhaps the most forward-looking finding in the report: 44% of organizations are already deploying or actively assessing AI agents, with telecommunications (48%) and retail (47%) leading adoption. These aren’t chatbots answering customer questions. Agentic AI systems operate autonomously — making decisions, executing multi-step workflows, and interacting with other systems without human approval at every stage.

    That autonomy creates a measurement blind spot that assistive AI never did. When a human uses ChatGPT to draft an email, the value chain is short and visible. When an AI agent autonomously triages customer issues, escalates edge cases, updates CRM records, and triggers follow-up workflows, the value chain branches and compounds in ways that are nearly impossible to track without purpose-built instrumentation. If 30% of organizations can’t measure ROI on their existing AI tools, they’re not going to suddenly develop that capability for systems that are orders of magnitude more complex. Every autonomous decision an agent makes without measurement infrastructure is both an ROI blind spot and a risk exposure.

    What the Winners Did Differently

    A pattern emerges when you study the case studies NVIDIA highlights. PepsiCo didn’t deploy digital twins and hope for the best — they defined throughput improvement and capital expenditure reduction as target metrics before writing a line of code. Clinomic didn’t build a medical assistant and then try to figure out if it worked — they instrumented documentation error rates and clinician workload from day one. Lowe’s didn’t scale to 1,750 stores on intuition — they tracked cost-per-model as a unit-economics metric that justified each expansion.

    The 30% who can’t quantify ROI didn’t fail at AI. They failed at instrumentation. They deployed capable technology without building the measurement layer that turns activity into evidence. And now, with budgets growing and boards asking harder questions, that gap is becoming a strategic liability. McKinsey’s latest research echoes this finding — organizations that establish clear performance metrics before scaling AI are significantly more likely to capture value across multiple business functions.

    The framework that separates measurement leaders from the rest follows a consistent pattern: first, establish visibility into what AI is actually doing across the organization; then, define the business metrics that matter for each use case; next, run structured evaluations against those baselines; and finally, use the evidence to make scale-or-sunset decisions. This See-Measure-Decide-Act cycle is what transforms AI investment from a faith-based initiative into a data-driven program.
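    The See-Measure-Decide-Act cycle reduces to a small decision loop in code. The sketch below is purely illustrative — the metric fields, the `decide` function, and the 10% improvement threshold are assumptions for this example, not part of any vendor framework:

```python
# Illustrative See-Measure-Decide-Act check for one AI use case.
# All names, fields, and thresholds here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    baseline: float      # metric value before AI (e.g., hours per task)
    current: float       # same metric, measured after deployment
    monthly_cost: float  # what the deployment costs to run

def decide(uc: UseCase, min_improvement: float = 0.10) -> str:
    """Return 'scale' or 'sunset' based on measured improvement vs. baseline."""
    if uc.baseline == 0:
        return "sunset"  # no baseline means no evidence either way
    improvement = (uc.baseline - uc.current) / uc.baseline
    return "scale" if improvement >= min_improvement else "sunset"

triage_bot = UseCase("ticket-triage", baseline=12.0, current=7.5, monthly_cost=4_000)
print(decide(triage_bot))  # -> scale (37.5% improvement clears the 10% bar)
```

    The point of the sketch is the order of operations: the baseline exists before the deployment does, so the scale-or-sunset decision is mechanical rather than faith-based.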

    What to Do Before Your Next Budget Review

    The NVIDIA report makes one thing clear: the AI investment wave is not slowing down. With 86% of organizations increasing budgets and 44% moving into agentic AI, the volume of spending that needs justification is only growing. The organizations that thrive in this environment won’t be the ones with the most sophisticated models or the largest GPU clusters. They’ll be the ones that can answer, with precision, what their AI investments produced.

    If you’re in the 30% today, the path forward isn’t complicated — but it is urgent. Instrument before you scale. Establish baselines before your next deployment. Build measurement into your AI architecture the same way you’d build security into your cloud infrastructure: not as an afterthought, but as a foundational layer. The CFO will ask for proof. The board will ask for proof. Make sure you have it.

    Ready to close the measurement gap? Schedule a demo to see how Olakai gives you unified visibility, business-aligned KPIs, and ROI evidence across every AI tool in your enterprise.

  • JP Morgan Spent $2B on AI. Here’s What They Measured.

    In February 2026, Jamie Dimon made a claim that most enterprise leaders can only dream of: JP Morgan Chase’s $2 billion AI investment had “paid for itself.” Not “shows promise.” Not “is on track.” Paid for itself — $2 billion in measured benefits offsetting $2 billion in costs.

    Break-even isn’t a moonshot return. But in a landscape where only 20% of enterprises can prove AI drives any revenue at all, the fact that JP Morgan can put a dollar figure on AI’s contribution — and defend it to shareholders — puts them in rare company. The question worth studying isn’t how much they spent. It’s what they measured, and how.

    The Scale of the Bet

    JP Morgan’s $2 billion annual AI investment sits within a $17 billion technology budget that grew to $19.8 billion in 2026 — a 10% increase year over year. The bank employs more than 2,000 AI and machine learning specialists, including 900 data scientists, 600 machine learning engineers, and 200 AI researchers. This isn’t a skunkworks team running experiments. It’s a division-level commitment that treats AI as core infrastructure alongside payment systems and risk controls.

    The centerpiece is the LLM Suite, an internal platform built on models from OpenAI and Anthropic that 150,000 employees use weekly. The platform is updated every eight weeks with new enterprise data, and by late 2025, roughly half of all LLM Suite users were using it daily. At that adoption rate, AI interactions at JP Morgan generate measurement data at a scale most enterprises can’t match.

    But scale alone doesn’t prove value. What makes JP Morgan’s approach worth studying is that they measure AI at the use case level — not just at the platform level. Every one of their 600-plus production AI use cases has specific metrics tied to specific business outcomes.

    What They Actually Measured

    JP Morgan’s AI measurement spans four categories, each connecting AI activity to a different type of business outcome:

    Time recovery. The most widely cited metric: LLM Suite users report saving an average of four hours per week. At 150,000 weekly users, that’s potentially 600,000 hours per week of recovered employee time — time that gets redirected to higher-value work, client engagement, and analysis that was previously crowded out by routine tasks. Time recovery is the most accessible AI metric because it’s easy to measure and easy to understand, but it’s also the most dangerous if not connected to downstream outcomes. Four hours “saved” only creates value if those hours are deployed productively.
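    A quick sketch makes that caveat concrete. The 600,000 hours/week figure follows from roughly four saved hours per weekly user; the redeployment rate and per-hour value below are hypothetical assumptions, which is exactly why time recovery alone is a dangerous headline number:

```python
# Time-recovery value only counts hours that are productively redeployed.
# The user count and hours-saved figure come from the article; the
# redeployment rate and hourly value are hypothetical assumptions.
users = 150_000
hours_saved_per_user_per_week = 4
redeployment_rate = 0.5    # assumed share of saved hours put to productive use
value_per_hour = 80.0      # assumed blended value of a redeployed hour

gross_hours = users * hours_saved_per_user_per_week  # 600,000 per week
realized_value = gross_hours * redeployment_rate * value_per_hour
print(f"{gross_hours:,} hours/week; realized value ${realized_value:,.0f}/week")
```

    Halve the assumed redeployment rate and the realized value halves with it — the gross hours never change, which is why the downstream-outcome link matters more than the headline.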

    Cost reduction. The COiN (Contract Intelligence) platform provides the clearest cost reduction case study. Before AI, JP Morgan’s legal team manually reviewed commercial loan agreements — a process that consumed approximately 360,000 hours annually. COiN now reviews 12,000 documents in seconds rather than weeks, reducing legal operations costs by 30% and cutting compliance errors by 80%. The cost reduction is measured against a known baseline (manual review hours and error rates), making the calculation straightforward and defensible.
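    That baseline-driven arithmetic is easy to reproduce. In the sketch below, the 360,000-hour annual baseline comes from the article; the hourly rate and remaining review hours are hypothetical inputs, chosen here so the result lands on the reported 30% reduction:

```python
# Back-of-the-envelope cost reduction against a known baseline.
# baseline_hours is from the article; hourly_cost and residual_hours
# are hypothetical assumptions for illustration.
baseline_hours = 360_000   # manual contract review, per year (reported)
hourly_cost = 150.0        # assumed fully loaded legal-ops cost per hour
residual_hours = 252_000   # assumed human review hours remaining after AI

baseline_cost = baseline_hours * hourly_cost
current_cost = residual_hours * hourly_cost
savings = baseline_cost - current_cost
reduction_pct = savings / baseline_cost * 100

print(f"Annual savings: ${savings:,.0f} ({reduction_pct:.0f}% reduction)")
# -> Annual savings: $16,200,000 (30% reduction)
```

    The calculation is only defensible because the baseline was measured before the deployment; without the 360,000-hour figure, neither the savings nor the percentage can be computed.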

    Revenue impact. JP Morgan’s AI trading algorithms illustrate revenue-side measurement. The bank reported that AI-driven trading systems improved win rates from 52% to 63% and saved $25 million in slippage costs. Revenue impact is harder to measure than cost reduction because attribution is more complex — markets move for many reasons, and isolating AI’s contribution requires careful methodology. But JP Morgan’s approach of measuring specific trading performance metrics (win rate, slippage) rather than aggregate revenue provides a more defensible attribution model.

    Value creation mapping. Across customer personalization, trading, fraud detection, and credit decisioning, JP Morgan identified $1 to $1.5 billion in value creation from AI. This portfolio-level view is what enables the “$2 billion investment paid for itself” claim — it aggregates use case-level measurements into an enterprise-wide picture that can be presented to shareholders.

    Lessons for the Rest of Us

    Most enterprises aren’t JP Morgan. They don’t have 2,000 AI specialists or a $17 billion technology budget. But the measurement principles that underpin JP Morgan’s ability to claim ROI are applicable at any scale.

    Measure at the use case level, not the platform level. JP Morgan doesn’t report a single “AI ROI” number derived from aggregate spending and aggregate benefits. They track 600-plus individual use cases, each with defined metrics. This granularity is what makes the portfolio-level claim credible — it’s built bottom-up from measured outcomes, not estimated top-down from spending. Even an enterprise with five AI use cases can apply this discipline: define the success metric for each use case, measure it against a baseline, and report results individually before aggregating.
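    The bottom-up discipline can be expressed in a few lines. The use cases and dollar figures below are hypothetical placeholders; the point is the shape — report each use case against its own baseline first, then aggregate into the portfolio view:

```python
# Bottom-up portfolio view: aggregate individually measured use cases.
# Every use case and dollar figure here is a hypothetical placeholder.
use_cases = {
    "contract-review": {"annual_benefit": 16_200_000, "annual_cost": 3_000_000},
    "fraud-triage":    {"annual_benefit": 9_500_000,  "annual_cost": 2_200_000},
    "doc-summarizer":  {"annual_benefit": 1_100_000,  "annual_cost": 900_000},
}

total_benefit = sum(u["annual_benefit"] for u in use_cases.values())
total_cost = sum(u["annual_cost"] for u in use_cases.values())

# Report results individually before aggregating, as the text suggests.
for name, u in use_cases.items():
    net = u["annual_benefit"] - u["annual_cost"]
    print(f"{name}: net ${net:,.0f}")
print(f"portfolio net: ${total_benefit - total_cost:,.0f}")
```

    A top-down estimate would produce only the last line; the bottom-up version also tells you which use cases to scale and which to sunset.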

    Track multiple metric categories. Time recovery alone doesn’t prove ROI. Cost reduction alone doesn’t capture the full picture. Revenue impact alone is too hard to attribute without supporting data. JP Morgan tracks all four categories (time, cost, revenue, value creation) and presents them together. This multi-dimensional view is more credible to boards and CFOs than any single metric, because it demonstrates that the organization has instrumented AI measurement comprehensively.

    Build governance alongside measurement. JP Morgan’s Model Risk Governance function and Firmwide Chief Data Officer aren’t separate from AI measurement — they’re integral to it. Governance forces the organization to define what each AI system does, which creates the accountability structure that measurement requires. As we’ve seen across 100-plus AI agent deployments, the enterprises with the strongest ROI data are the ones with the most rigorous governance frameworks.

    Treat AI as infrastructure, not R&D. JP Morgan reclassified AI from an innovation investment to core infrastructure — the same category as payment processing and risk management. This shift has measurement implications: infrastructure has uptime, performance, and cost-efficiency metrics that are reviewed continuously, not evaluated in quarterly innovation reviews. When AI becomes infrastructure, measurement becomes operational rather than experimental.

    The ServiceNow Parallel

    JP Morgan isn’t the only enterprise betting big on AI measurement. ServiceNow’s AI business reached $600 million in annual contract value in 2025 and expects to exceed $1 billion by the end of 2026. Like JP Morgan, ServiceNow measures AI at the product level — tracking adoption, usage patterns, and customer value creation for each AI capability rather than reporting a single aggregate number.

    The pattern is consistent across enterprises that prove AI ROI: measurement happens at the individual use case or product level, governance provides the accountability structure, and results are aggregated into a portfolio view for executive and board reporting. The enterprises stuck in pilot purgatory do the opposite — they measure at the platform level, lack governance infrastructure, and can’t connect aggregate spending to specific outcomes.

    What This Means for 2026

    McKinsey projects that AI could unlock $200 to $340 billion annually in value for financial services alone, and the industry is responding — more than 70% of financial institutions were using AI at scale by late 2025, up from 30% in 2023. But the gap between “using AI” and “proving AI ROI” remains wide. JP Morgan is one of the few financial institutions that can put specific dollar figures on specific AI outcomes.

    The lesson isn’t that enterprises need to spend $2 billion. It’s that the measurement infrastructure JP Morgan built — use case-level tracking, baseline metrics, multi-category measurement, governance integration — is what enables the ROI claim. That infrastructure can be built at any scale, for any number of AI initiatives. The cost of building it is a fraction of the cost of running AI without it.

    If your organization is investing in AI but can’t answer “what’s the return?” with specific numbers, the problem isn’t your AI. It’s your measurement. Our AI ROI framework provides the methodology, and Olakai’s platform provides the instrumentation to track AI value the way JP Morgan does — at the use case level, against baselines, across time, cost, revenue, and risk.

    Ready to measure your AI like JP Morgan? Schedule a demo and we’ll show you how enterprises track AI ROI across every initiative — without needing a $17 billion technology budget.

  • What JP Morgan’s Agentic AI Rollout to 250,000 Employees Tells Us

    JP Morgan Chase just gave 250,000 employees access to AI — and then announced it was moving to agentic AI, where those systems won’t just answer questions but execute complex multistep tasks autonomously. The bank’s vision: “Every employee will have their own personalized AI assistant; every process is powered by AI agents.”

    It’s the largest enterprise AI deployment publicly disclosed. And it raises a question every enterprise will face as AI scales: can governance keep pace with adoption?

    The Numbers Behind the Rollout

    The scale is worth pausing on. JP Morgan’s LLM Suite — an internal AI platform built on models from OpenAI and Anthropic — is available to the bank’s entire workforce except branch and call center staff. Of the 250,000 employees with access, 150,000 use it weekly. Half of those use it daily. The platform is updated every eight weeks with new enterprise data, creating a continuously evolving tool that gets more capable and more embedded in daily work with each cycle.

    Behind the platform sit 2,000 AI and machine learning specialists — 900 data scientists, 600 machine learning engineers, and 200 AI researchers — managing 600-plus production use cases. That’s a specialist-to-user ratio of roughly 1:125. Even with one of the largest enterprise AI teams in the world, JP Morgan has far more AI usage than any team can manually oversee.

    And that’s before agentic AI enters the picture.

    From Assistive to Agentic: A Governance Leap

    There’s a fundamental difference between assistive AI and agentic AI from a governance perspective. Assistive AI — the copilots, chatbots, and summarization tools that make up most enterprise AI today — operates in a request-response pattern. An employee asks a question, the AI responds, the employee decides what to do with the response. The human is in the loop for every decision.

    Agentic AI operates differently. An AI agent receives a goal, decomposes it into subtasks, executes those subtasks autonomously (potentially across multiple systems), and delivers a result. The human may not be in the loop for intermediate decisions. The agent might query a database, analyze the results, draft a recommendation, and send it to a stakeholder — all without human review of each step.

    JP Morgan described itself as “early in the next phase” of deploying agentic AI in late 2025. The use cases in banking are obvious: automated fraud investigation that pulls transaction records, cross-references patterns, and generates suspicious activity reports. Loan underwriting that collects applicant data, runs credit analysis, and produces risk assessments. Compliance monitoring that scans communications, identifies potential violations, and initiates review workflows.

    Each of these use cases involves an AI system making decisions — or heavily influencing decisions — about sensitive data, regulated activities, and consequential outcomes. Governing these systems requires fundamentally different controls than governing a chatbot that summarizes meeting notes.

    The Governance Challenge at 250,000 Users

    Scale compounds every governance challenge. Consider the numbers: 150,000 weekly active AI users generating millions of interactions. Each interaction potentially involves customer data, financial records, proprietary strategies, or regulated communications. The average organization already experiences 223 data policy violations involving generative AI per month. At JP Morgan’s scale, that number could be orders of magnitude higher.

    Then there’s the shadow AI problem. Research shows that 9% of employees now switch between personal and enterprise AI accounts — up from 4% in the previous period. At 250,000 employees, 9% represents 22,500 potential data leakage vectors. These are employees whose company-approved tools don’t meet their needs for convenience or functionality, driving them to seek alternatives that lack enterprise security and privacy controls.

    And the challenge deepens as AI moves from assistive to agentic. With assistive AI, governance can focus on what employees share with AI tools — input monitoring. With agentic AI, governance must also cover what AI systems do autonomously — output monitoring, decision accountability, and cascading action controls. An agentic AI system that autonomously initiates a trade, approves a credit application, or files a regulatory report introduces governance requirements that most enterprises haven’t even begun to design.

    How JP Morgan Is Approaching Governance

    JP Morgan’s governance structure provides a template — though it’s one built with resources few organizations can match.

    Platform centralization. Rather than allowing AI tool sprawl, JP Morgan channeled adoption through a single platform (LLM Suite). This centralization creates a single point of governance: access controls, data policies, usage monitoring, and audit trails all flow through one system. The alternative — employees using dozens of independent AI tools — makes governance exponentially harder because each tool has different data handling practices, different security models, and different audit capabilities.

    Model governance. The LLM Suite uses models from multiple vendors (OpenAI and Anthropic), updated every eight weeks. This multi-model strategy diversifies vendor risk while the regular update cycle ensures the platform stays current. But it also means continuous governance validation — every update cycle requires re-evaluation of model behavior, security posture, and compliance alignment.

    Use case-level oversight. Each of JP Morgan’s 600-plus production use cases is individually tracked and governed. This granularity matters because the governance requirements for a document summarization tool are fundamentally different from the governance requirements for a fraud detection agent. Use case-level governance allows risk-proportional controls — more oversight for high-stakes, high-sensitivity applications and lighter controls for low-risk productivity tools. Our AI risk heatmap approach follows the same principle.
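    Risk-proportional control assignment can be sketched as a simple scoring function. The axes, scoring, and tier thresholds below are illustrative assumptions — not JP Morgan’s actual rubric — but they show how a heatmap-style approach maps each use case to a control tier:

```python
# Risk-proportional oversight: score each use case, assign a control tier.
# The scoring axes and tier thresholds are illustrative assumptions only.
def control_tier(data_sensitivity: int, autonomy: int, impact: int) -> str:
    """Each axis is scored 1 (low) to 5 (high); returns an oversight tier."""
    score = data_sensitivity * autonomy * impact  # product ranges 1..125
    if score >= 60:
        return "high: human approval per action, full audit trail"
    if score >= 20:
        return "medium: sampled review, automated policy checks"
    return "low: usage logging only"

# A fraud-detection agent vs. a meeting-notes summarizer:
print(control_tier(data_sensitivity=5, autonomy=4, impact=5))  # high tier
print(control_tier(data_sensitivity=2, autonomy=1, impact=1))  # low tier
```

    The design choice is the one the text describes: heavy oversight concentrates on high-stakes, high-autonomy use cases, while low-risk productivity tools get lightweight logging instead of blanket controls.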

    Dedicated governance functions. JP Morgan maintains a Model Risk Governance function that assesses each AI use case and a Firmwide Chief Data Officer responsible for data governance, quality, and access control. These aren’t part-time responsibilities added to existing roles. They’re dedicated functions with the organizational authority to block or modify AI deployments that don’t meet governance standards.

    The Workforce Reality

    Jamie Dimon has been unusually candid about AI’s impact on employment. In February 2026, he acknowledged that AI is already displacing workers at JP Morgan, stating the bank has “huge redeployment plans” to move affected employees to other roles. He’d previously called AI “possibly as transformational as the printing press, steam engine, computing, and the Internet.”

    JP Morgan isn’t alone. Bank of America, Citigroup, and Wells Fargo all projected lower headcounts for 2026, with AI-driven efficiency gains cited as a key factor. The financial services industry, with its high proportion of knowledge work and data-intensive processes, is one of the sectors most exposed to AI-driven workforce changes.

    This workforce dimension adds another governance requirement that most frameworks ignore: the responsibility to plan for displacement before it happens. JP Morgan’s approach — retrain and redeploy — is more thoughtful than many enterprises, but it still requires knowing which roles AI will affect, on what timeline, and at what scale. That knowledge comes from the same measurement infrastructure that proves AI ROI: if you can’t measure what AI is doing, you can’t project what it will change.

    What This Means for Other Enterprises

    Most organizations won’t deploy AI to 250,000 users overnight. But every organization scaling AI faces the same governance challenges JP Morgan faces — just at a different magnitude. The principles translate:

    Centralize AI access before you lose control. Once employees adopt dozens of independent AI tools, governing the ecosystem becomes nearly impossible. A platform strategy — even a modest one — creates the governance foundation that fragmented tool adoption destroys.

    Design agentic governance before deploying agentic AI. The gap between assistive AI governance (monitoring inputs) and agentic AI governance (monitoring autonomous decisions) is substantial. Organizations that deploy agentic AI with assistive-era governance will face accountability gaps when agents make consequential decisions without human oversight.

    Treat shadow AI as inevitable, not preventable. At any scale, some employees will use unauthorized AI tools. The governance response should be detection and redirection (identifying unauthorized tools and channeling users to approved alternatives), not just prohibition. Prohibition fails at scale because it creates friction that drives adoption underground.

    Start workforce planning now. If AI is going to change roles at your organization — and it will — the time to plan for redeployment is before displacement occurs, not after. That planning requires visibility into where AI is being used and how it’s changing work patterns — data that only comes from measurement infrastructure.

    JP Morgan’s rollout is a preview of what every enterprise AI program will eventually face: the governance challenge of AI operating at scale, across an entire workforce, with increasing autonomy. The question isn’t whether your organization will get there. It’s whether your governance infrastructure will be ready when it does.

    For a deeper look at the frameworks and readiness assessments for agentic AI deployment, Future of Agentic’s governance hub provides comprehensive resources. And our CISO governance checklist offers a practical starting point for security leaders.

    Scaling AI and need governance that scales with it? Schedule a demo to see how Olakai provides unified AI governance across assistive and agentic AI — before risk scales with adoption.

  • What ServiceNow’s $8B AI Acquisition Spree Tells Us About the Future of Enterprise AI

    ServiceNow just spent $7.75 billion to solve a problem most enterprises don’t know they have yet.

    In January 2026, ServiceNow announced its largest acquisition ever: Armis, a cyber exposure management platform, for $7.75 billion in cash. But this wasn’t an isolated move. It was the culmination of an acquisition strategy that signals a fundamental shift in how the enterprise software market views AI governance.

    When a $200 billion platform company makes its largest purchase in history, it’s worth paying attention to what they’re buying—and why.

    The Acquisition Timeline

    ServiceNow’s 2025 spending spree tells a coherent story. In January 2025, they acquired Cuein, an AI-native conversation data analysis platform. In April, they announced the acquisition of Logik.ai, an AI-powered configure-price-quote solution. Then came Moveworks for $2.85 billion, Data.World for data governance, and Veza for identity security.

    The Armis deal dwarfs them all. At $7.75 billion in cash—more than twice the Moveworks price—it represents a massive bet on the convergence of AI, security, and operational technology. Combined with the earlier acquisitions, ServiceNow is assembling capabilities that span AI conversation analysis, data governance, identity management, and now comprehensive exposure management across IT, OT, and IoT environments.

    This isn’t a collection of opportunistic purchases. It’s a deliberate construction of an AI governance stack.

    The AI Control Tower Vision

    ServiceNow has been explicit about their strategic direction. They’re positioning themselves not just as an AI platform, but as what they call an “AI Control Tower”—a unified system that governs and manages AI across the enterprise.

    In the Armis announcement, ServiceNow President Amit Zavery stated it directly: “In the agentic AI era, intelligent trust and governance that span any cloud, any asset, any AI system, and any device are non-negotiable if companies want to scale AI for the long-term.”

    That framing matters. ServiceNow isn’t just saying AI governance is important. They’re saying it’s non-negotiable for scaling AI—and they’re willing to spend nearly $8 billion to prove the point.

    The Armis acquisition specifically addresses a visibility gap that most organizations haven’t fully reckoned with. Without knowing what’s connected across IT, operational technology, IoT, and physical environments, ServiceNow argues that “workflow automation, AI governance, and risk prioritization all collapse into theatre.” You can write policies all day, but if you can’t see what’s actually happening across your technology footprint, those policies are aspirational at best.

    Why This Matters for Every Enterprise

    ServiceNow’s acquisition strategy validates a market reality that’s been emerging for the past two years. AI governance isn’t a nice-to-have feature for compliance teams to worry about later. It’s becoming a core enterprise capability—one that established platform companies are racing to own.

    Consider what this signals. A company with ServiceNow’s market intelligence—they see how their 8,100+ enterprise customers are actually deploying technology—has concluded that AI governance is worth a multi-billion dollar bet. This aligns with the broader trajectory we traced in The Evolution of Enterprise AI, where each era demands more robust governance. They’re not experimenting. They’re going all-in.

    This has several implications for enterprise leaders.

    First, the governance problem is real and urgent. If you’ve been treating agentic AI governance as a future concern, the market is moving faster than that timeline allows. ServiceNow, Microsoft, Salesforce, and other major platforms are all investing heavily in AI governance capabilities. They’re building for a future where governance is expected, not optional.

    Second, visibility is the foundation. Every acquisition ServiceNow made connects to visibility in some way—seeing AI conversations, understanding data flows, tracking identities, monitoring connected devices. You can’t govern what you can’t see, and the platform leaders are racing to be the ones who provide that visibility layer.

    Third, the vendor landscape is consolidating. When large platforms acquire specialized governance capabilities, they’re signaling an intent to own that layer of the stack. Organizations that wait too long may find themselves choosing between platform lock-in and building custom solutions from scratch.

    The Broader Pattern

    ServiceNow isn’t alone in this recognition. Microsoft has been embedding governance capabilities across its Copilot ecosystem. Salesforce is building AI controls into its platform. AWS, Google Cloud, and Azure are all developing AI governance tooling.

    The pattern is clear: every major platform company has concluded that AI governance will be a battleground for enterprise relationships. They’re not just selling AI capabilities—they’re selling the ability to control, secure, and measure those capabilities.

    This creates both opportunity and risk for enterprises. The opportunity is that governance capabilities will become more accessible as platform providers compete to offer them. The risk is that governance becomes another vector for platform lock-in, with organizations finding themselves dependent on a single vendor not just for AI capabilities but for their ability to manage and measure those capabilities.

    What This Means for Your AI Strategy

    The ServiceNow acquisitions should prompt several strategic questions for enterprise leaders.

    If you’re still waiting for AI governance, the market isn’t. The leading platform companies are spending billions to build governance capabilities. They’re doing this because they see demand from their largest customers—the enterprises that are furthest along in AI deployment. If you’re behind the curve on AI governance, you’re increasingly in the minority.

    Enterprise-grade governance is becoming table stakes. Two years ago, AI governance was a differentiator. Organizations that had it were ahead. Today, it’s moving toward baseline expectation. The question is shifting from “Do you have AI governance?” to “How mature is your AI governance?” Organizations without any governance infrastructure will increasingly struggle to pass security reviews, satisfy regulators, and win enterprise deals.

    You don’t need $8 billion to get started. ServiceNow is building for a world where they’re the governance layer for their entire customer base. Your organization has different needs. You need visibility into what AI is doing, measurement of what value it’s delivering, and controls that scale with your risk profile. That doesn’t require a platform acquisition strategy—it requires the right tools applied to your specific environment.

    The Vendor-Neutral Alternative

    Olakai was built on the same insight that’s driving ServiceNow’s acquisition strategy: enterprises need unified visibility, governance, and ROI measurement across their AI deployments. The difference is in how we deliver it.

    Rather than locking customers into a single platform, Olakai provides a vendor-neutral control plane that works across AI tools, models, and infrastructure. We integrate with whatever AI systems you’re using—whether that’s chatbots from one vendor, copilots from another, or agent frameworks from a third. The goal is the same visibility, governance, and ROI measurement that ServiceNow is assembling through acquisitions, without requiring you to commit to their ecosystem.

    This matters because most enterprises don’t have a single-vendor AI environment, and they’re unlikely to in the foreseeable future. Different teams have different needs. Different use cases have different requirements. A governance layer that only works within one platform leaves gaps that shadow AI will fill.

    Looking Ahead

    The ServiceNow acquisition spree marks a turning point. AI governance has moved from emerging concern to validated market category, with billions of dollars of M&A activity confirming its importance. This shift is playing out across every industry vertical.

    For enterprise leaders, the message is clear. The organizations that figure out AI governance in 2026 will have a significant advantage over those that don’t — a theme we explore across all eight trends in our AI Predictions for 2026. They’ll scale AI programs faster because they can prove value and manage risk. They’ll win more enterprise deals because they can satisfy security and compliance requirements. They’ll retain talent because they can offer AI tools with appropriate guardrails rather than blanket prohibitions.

    ServiceNow is betting that AI governance will be non-negotiable for enterprises that want to scale AI. Based on what we’re seeing in the market, that bet looks correct.

    The only question is whether you’ll build that governance capability before your competitors do.

    The market has validated AI governance. Schedule a demo to see how Olakai delivers it without platform lock-in.

  • AI Predictions for 2026: What Enterprise Leaders Need to Know

    AI Predictions for 2026: What Enterprise Leaders Need to Know

    As 2025 draws to a close, enterprise AI has reached an inflection point. Chatbots and copilots proved the technology works — a progression we trace in The Evolution of Enterprise AI. Agentic AI is demonstrating the power of autonomous action. But the gap between AI experimentation and AI value remains stubbornly wide for most organizations.

    The stakes are higher than ever. According to Gartner, 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026, up from less than 5% in 2025. That’s an 8x increase in a single year. But the same Gartner research warns that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The difference between the winners and the laggards won’t be who has the most AI—it’ll be who extracts the most value from it.

    Here are the trends we see shaping enterprise AI in 2026—and what they mean for business leaders.

    1. From Pilots to Production

    2026 will be the year enterprises finally move beyond pilot purgatory. Organizations that have been experimenting for 2-3 years will face a “ship or kill” moment: either prove ROI and scale, or acknowledge the experiments failed. The era of open-ended experimentation is ending.

    This shift has real consequences. Expect pressure to quantify AI value in business terms, not just technology metrics. Governance and measurement become requirements, not nice-to-haves. Vendors will face harder questions about real-world results, not demo magic. According to McKinsey, high-performing organizations are three times more likely to scale agents than their peers—but success requires more than technical excellence. The key differentiator isn’t the sophistication of the AI models; it’s the willingness to redesign workflows rather than simply layering agents onto legacy processes.

    If you’ve been running pilots, define success criteria and set a deadline. Either demonstrate value or reallocate resources to use cases that can. For a structured approach to proving value, see our AI ROI measurement framework.

    2. The Rise of Multi-Agent Systems

    Single-purpose agents will give way to coordinated multi-agent systems. Just as microservices transformed software architecture, agent ecosystems will transform how enterprises automate complex workflows. Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025—a clear signal that enterprises are thinking beyond standalone agents.

    This shift enables complex processes like order-to-cash and hire-to-retire to become fully autonomous. Agents will hand off work to other agents, creating agent-to-agent workflows that mirror how human teams collaborate. But governance complexity increases as agent interactions multiply—you’ll need visibility not just into individual agents but into the handoffs and decisions across the entire system.

    Forrester predicts that 30% of enterprise app vendors will launch their own MCP (Model Context Protocol) servers in 2026, enabling external AI agents to collaborate with vendor platforms. Gartner outlines five stages in enterprise AI evolution: Assistants for Every Application (2025), Task-Specific Agents (2026), Collaborative Agents Within Apps (2027), Ecosystems Across Apps (2028), and “The New Normal” (2029) where at least half of knowledge workers will be expected to create, govern, and deploy agents on demand.

    Design your agent architecture with coordination in mind now. Establish standards for how agents communicate and hand off work before the complexity becomes unmanageable.
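    Those communication standards can start as something as simple as a shared handoff record that every agent emits and consumes. Below is a minimal sketch in Python; the field names and the order-to-billing example are hypothetical illustrations, not any specific framework’s schema.

    ```python
    # Minimal sketch of a standardized agent-to-agent handoff record.
    # Field names are hypothetical -- adapt them to your own conventions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    import uuid

    @dataclass(frozen=True)
    class Handoff:
        """One auditable unit of work passed between agents."""
        task: str         # what the receiving agent must do
        from_agent: str
        to_agent: str
        payload: dict     # inputs the next agent needs
        trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # Example: an order-intake agent hands validated data to a billing agent.
    h = Handoff(
        task="generate_invoice",
        from_agent="order_intake",
        to_agent="billing",
        payload={"order_id": "SO-1042", "amount": 129.99},
    )
    print(h.trace_id, h.task)
    ```

    The `trace_id` is the part that pays off later: it lets governance tooling reconstruct the full chain of handoffs behind any decision, which is exactly the cross-agent visibility described above.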

    3. Governance Becomes Competitive Advantage

    Organizations with mature AI governance will scale faster than those without. While governance has been seen as a brake on innovation, 2026 will reveal it’s actually an accelerator—enabling confident deployment of higher-risk, higher-value use cases that competitors can’t touch.

    Companies with governance in place can move to production faster because security and compliance aren’t blocking deployment at the last minute. Regulatory pressure will increase with the EU AI Act fully in effect, state laws emerging in the U.S., and industry standards solidifying. Customers and partners will ask about your AI governance posture. Forrester predicts 60% of Fortune 100 companies will appoint a head of AI governance in 2026—organizations ramping up agentic exploration will especially benefit from this increased focus.

    Build governance foundations now. Start with visibility (what AI is running?), then add controls (who can do what?), then measurement (is it working?). Our CISO governance checklist provides a comprehensive framework.

    4. The ROI Reckoning

    CFOs will demand clear AI ROI numbers. The days of “we’re investing in AI for the future” are ending. 2026 will require concrete evidence that AI investments are paying off.

    McKinsey estimates generative AI could add between $2.6 and $4.4 trillion annually to global GDP, with AI productivity gains in areas like security potentially unlocking up to $2.9 trillion in economic value by 2030. But that’s the macro picture. At the individual enterprise level, AI leaders will need to connect AI metrics to business outcomes. Activity metrics like conversations and completions won’t be enough—you’ll need cost savings, revenue impact, and time-to-value calculations. Some AI projects will be cut when they can’t prove value.

    Establish baselines before deploying AI. Define what success looks like in business terms. Track outcomes, not just activity.
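    The baseline-versus-outcome comparison is ultimately simple arithmetic. Here is a minimal sketch; every number below is a hypothetical illustration, not data from any report, and the real work is measuring your own baselines honestly.

    ```python
    # Illustrative ROI calculation with hypothetical numbers -- replace
    # each figure with your own measured baselines and outcomes.

    def ai_roi(annual_value: float, annual_cost: float) -> float:
        """Simple ROI: net value returned per dollar spent."""
        return (annual_value - annual_cost) / annual_cost

    # Hypothetical use case: AI-assisted support-ticket triage.
    baseline_minutes_per_ticket = 12.0   # measured before deployment
    ai_minutes_per_ticket = 7.0          # measured after deployment
    tickets_per_year = 150_000
    loaded_cost_per_minute = 0.75        # fully loaded labor cost, $

    time_saved_value = (
        (baseline_minutes_per_ticket - ai_minutes_per_ticket)
        * tickets_per_year
        * loaded_cost_per_minute
    )

    annual_ai_cost = 180_000.0  # licenses + API usage + maintenance

    print(f"Annual value: ${time_saved_value:,.0f}")
    print(f"ROI: {ai_roi(time_saved_value, annual_ai_cost):.0%}")
    ```

    Note that the calculation is only as credible as the baseline: without the pre-deployment measurement, the value term is a guess.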

    5. Shadow AI Backlash

    A major data breach or compliance violation caused by shadow AI will force enterprises to take unauthorized AI use seriously. What’s been tolerated as employee experimentation will become a recognized security risk.

    Enterprises will invest in shadow AI detection and governance. Policies will shift from “don’t use AI” (which doesn’t work) to “use approved AI” (which gives employees a sanctioned path). Security teams will add AI-specific controls to their toolkit. Gartner’s warning about “agent washing”—vendors rebranding existing products without substantial agentic capabilities—adds another dimension: you’ll need to distinguish real AI tools from marketing rebadging.

    Understand your shadow AI exposure now. Provide sanctioned alternatives that meet employee needs. Build detection capabilities before an incident forces your hand.
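    Detection can begin with something as blunt as an allowlist of sanctioned AI endpoints. The toy sketch below uses hypothetical domains; real enforcement would sit in a proxy, secure web gateway, or CASB rather than application code.

    ```python
    # Toy allowlist check for AI service endpoints.
    # Domains are hypothetical examples only.
    from urllib.parse import urlparse

    APPROVED_AI_DOMAINS = {
        "api.approved-ai.example",
        "copilot.internal.example",
    }

    def is_sanctioned(url: str) -> bool:
        """Flag whether a request targets an approved AI service."""
        return urlparse(url).hostname in APPROVED_AI_DOMAINS

    print(is_sanctioned("https://api.approved-ai.example/v1/chat"))  # True
    print(is_sanctioned("https://random-genai.example/upload"))      # False
    ```

    The policy shift described above is visible even in this sketch: the check doesn’t block AI use, it distinguishes sanctioned paths from unsanctioned ones.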

    6. Industry-Specific Agents Emerge

    Vertical AI solutions will outperform horizontal ones. Pre-built agents for specific industries—healthcare claims processing, financial underwriting, legal document review—will deliver faster time-to-value than general-purpose platforms that require extensive customization.

    Industry expertise becomes as important as AI capability. The build vs. buy calculus shifts toward buy for common workflows, with differentiation coming from proprietary data and processes rather than technology. Gartner estimates only about 130 of the thousands of agentic AI vendors are real—the rest are rebranding without substance.

    Evaluate industry-specific AI solutions for common workflows in your sector. Reserve custom development for truly differentiating use cases where your unique processes create competitive advantage. The Future of Agentic use case library provides examples across industries.

    7. The Talent Shift

    AI will change the skills organizations need—but not in the ways people expect. Demand will grow for AI governance, integration, and change management expertise. Pure AI/ML research talent will remain concentrated at large labs; most enterprises won’t build models, they’ll integrate and govern them.

    Change management and training become critical for adoption—technology that people don’t use delivers zero value. New roles are emerging: AI Ethics Officer, AI Governance Lead, Agent Operations. Gartner predicts that through 2026, atrophy of critical-thinking skills due to GenAI use will push 50% of global organizations to require “AI-free” skills assessments. The top five HCM platforms will offer digital employee management capabilities, treating AI agents as part of the workforce requiring HR oversight.

    Invest in governance and integration capabilities. Build change management into every AI project. Upskill existing staff on AI governance rather than competing for scarce model-building talent.

    8. Cost Optimization Pressure

    AI costs will come under scrutiny. Early implementations often overspend on model API calls, infrastructure, and maintenance. 2026 will bring a sharper focus on AI unit economics and cost optimization.

    Cost per transaction becomes a key metric alongside accuracy and time savings. Model selection will consider cost/performance tradeoffs—not every task needs the most powerful model. Right-sizing becomes standard practice: using simpler, faster, cheaper models where appropriate, reserving expensive frontier models for tasks that truly require them.

    Track AI costs at the use-case level so you understand where money is going. Experiment with smaller models for routine tasks. Optimize prompts and workflows for efficiency—often the cheapest improvement is making fewer API calls through better prompt engineering.
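    The unit economics behind model right-sizing come down to a small calculation. The sketch below uses hypothetical per-token prices and a hypothetical workload; substitute your provider’s actual rates and your measured token counts.

    ```python
    # Cost-per-transaction sketch. All prices and volumes below are
    # hypothetical -- replace with your provider's real rates.

    # Price per 1M tokens (input, output), two illustrative tiers.
    PRICES = {
        "small_model":    (0.15, 0.60),
        "frontier_model": (3.00, 15.00),
    }

    def cost_per_call(model: str, input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of one API call at the given token counts."""
        p_in, p_out = PRICES[model]
        return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

    # A routine task: classify a support ticket (~800 tokens in, ~50 out).
    for model in PRICES:
        c = cost_per_call(model, input_tokens=800, output_tokens=50)
        monthly = c * 500_000  # at 500k calls per month
        print(f"{model}: ${c:.6f}/call, ${monthly:,.2f}/month")
    ```

    Run at volume, the gap between tiers compounds quickly, which is why routing routine tasks to a smaller model is usually the first optimization worth testing.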

    The Path Forward

    2026 will separate AI leaders from AI laggards. The difference won’t be technology—it will be execution. Leaders will prove ROI, scale successful pilots, and build governance that enables rather than blocks. Laggards will remain stuck in experimentation, unable to prove value or manage risk.

    Gartner’s best-case scenario projects that agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion—up from 2% in 2025. By 2028, Gartner predicts 90% of B2B buying will be intermediated by AI agents, pushing over $15 trillion of B2B spend through AI agent exchanges. The enterprises that build the capabilities to participate in that future will thrive; those that don’t will struggle to compete.

    The enterprises that succeed will treat AI not as a technology project but as a business transformation. They’ll measure what matters, govern what’s risky, and scale what works. The future of enterprise AI is measurable, governable, and valuable. 2026 is the year to make it real.

    Ready to move from experimentation to execution? Schedule a demo to see how Olakai helps enterprises measure ROI, govern risk, and scale AI with confidence.