Category: Industry Analysis

Market trends, acquisitions, and competitive landscape

  • Anthropic’s Mythos Crisis: What a $900B Raise Tells Enterprise AI Buyers

    The most safety-conscious frontier AI lab on earth just told the market it cannot economically serve 120 customers on a single model. Then it set out to raise the largest private funding round in history to fix the math, and the White House looked at the numbers and said no. If you are an enterprise leader being pressured to chase the next frontier capability, this is the data point that should land on your desk this week.

    The story so far

    On April 7, Anthropic announced Claude Mythos, a frontier model so capable at finding and exploiting software vulnerabilities that the company chose not to release it publicly. Instead, it stood up Project Glasswing, a controlled-access program of about 50 vetted partners. The list reads like a Fortune 50 cybersecurity wishlist: Apple, Microsoft, Google, AWS, Nvidia, JPMorgan Chase. Within days of the announcement, Bloomberg reported that unauthorized users had already accessed Mythos through a third-party vendor environment, leveraging publicly available techniques and information from the earlier Mercor breach. They reportedly also had access to other unreleased Anthropic models. Week one, fifty partners, breached.

    What changed this week

    Then came two stories that, together, reframe the entire conversation. First, The Wall Street Journal reported, with Bloomberg confirmation, that Anthropic proposed expanding Mythos access to roughly 70 additional companies, bringing the total to about 120. The White House privately told Anthropic that it opposes the move.

    The first reason is the obvious one: security. The system was compromised in week one with 50 partners, and adding 70 more increases the attack surface in ways the administration is not comfortable with. The second reason has not had nearly enough oxygen in the coverage. The administration also told Anthropic, in plain English, that the company does not have enough computing power to serve 120 customers without degrading the U.S. government’s own access to the model.

    Read that sentence twice, because it is the entire story.

    The compute math

    Now follow the money on the same news cycle. Bloomberg reported the same week that Anthropic is in early talks for a funding round that would value the company at over $900 billion, more than double its current $350 billion mark and enough to leapfrog OpenAI’s $850 billion valuation and make Anthropic the most valuable AI startup in the world. What is the capital for? Per reports tied to the WSJ coverage, part of the raise is specifically aimed at funding the compute capacity required to scale Mythos.

    So here is the math, the way it actually reads. You have the most safety-conscious frontier lab on earth, what the lab itself describes as the most powerful model it has ever built, and a customer list of 50 hand-picked partners, every one of them a Fortune 50 or critical-infrastructure player. And you cannot serve them, plus 70 more, without one of two things happening: either the U.S. government’s access gets degraded, or you raise potentially the largest private funding round in history to buy enough compute to make the math work. The administration looked at that math and said no, and that is not a security-only objection. It is a market signal.

    What this means for your enterprise AI roadmap

    If you are a CIO, CISO, CHRO, or CFO anywhere near AI strategy right now, this story should land on your desk with one question attached. If Anthropic, with Google’s $40 billion commitment and Amazon’s $25 billion commitment behind it, cannot economically serve 120 customers on one model, what makes you think you should be in line to chase the next frontier capability layer right now?

    There is a story about compute scarcity that the AI vendor narrative has been quietly papering over for two years. The pitch decks talk about agentic this and frontier that and capability the other, while the compute reality is that even the leaders of the field cannot meet demand at the scale they have already promised, let alone the scale they are selling you for tomorrow. When the seller of the most powerful model on earth says, even at a $900 billion valuation, that it still cannot serve more than a few dozen customers without rationing, the buyer-side translation is direct: you are not behind, you are not missing out, you are being sold AI futures the vendor cannot deliver compute for.

    Foundation first, again

    The Mythos story is not anti-innovation, and it is not even anti-frontier. The capability is real, the breakthroughs are real, and the cybersecurity implications, both defensive and offensive, will reshape the next decade. The pace of stacking is the question, because every enterprise leader I talk to is being pressured, from above by boards and from below by ambitious teams, to be on the next thing. The Mythos story is a hard data point for pushing back, because even the people building the next thing cannot economically deliver it at the scale they are promising. The market is rationing this capability whether you want to participate or not.

    The right move, the move I see actually working when I look at every guest who has come on the main show, is the unsexy one. Build a measurement layer that tells you what your current AI is actually doing — that is the SEE step from our enterprise AI ROI playbook: full visibility before any scaling conversation. Build a governance posture that includes your third-party vendor chain, because that is exactly where Mythos itself was breached, and the same logic that flags unauthorized shadow AI inside your walls applies to vendor environments outside them — Olakai’s governance layer exists for that reason. Build a strategy that names what AI is for in your business, not what AI is in the market, because the MEASURE step tells you what to track when the board asks “is any of this paying off.” Prove value in 30 to 60 days with a structured pilot before any scaling commitment, so the unit economics are real numbers and not slide-deck promises. Then, when the next frontier capability becomes economically deliverable at the scale you actually need, you will be ready to stack it on a foundation that holds — and the measurement layer you built will tell you which capabilities are actually worth stacking.

    If you are running ahead of that, the question is no longer whether you are taking on too much risk; the question is whether your vendor can even serve you. This week’s news suggests, increasingly, that the answer is no.

    Coming next on Enterprise AI Unlocked

    I sat down last week with Jason Smith, AI Lead EMEA at Publicis Groupe, and Rob Saltrese, Co-Founder and COO of Lyra Labs, for a full Roundtable on the Mythos breach and what it tells every enterprise about foundation-first AI strategy. The White House news arrived after we hit stop on the recording, and we could not have planned a sharper data point if we had tried. The episode is now live on Enterprise AI Unlocked, and the full conversation goes deeper than this article on vendor-chain risk, board-level AI literacy, and what foundation-first looks like in practice.

    In the meantime, the math is on the table. It is not pretty, and it is telling you something important about where enterprise AI actually is, versus where the headlines say it should be.

    Want help building the measurement and governance foundation before you stack the next frontier capability? Talk to an Expert about how Olakai measures AI ROI and governs risk across your stack.

  • Meta’s $53B AI Capex Bet vs. 14,000 Layoffs: When the Market Stops Cheering

    I closed the layoff trilogy last week with Disney. The plan for the next installment was to step back from headline reactions and get back to measurement frameworks that actually move enterprise decisions, because that is where Show Me The Math belongs. Then Meta dropped a memo on Thursday afternoon, and the math demanded one more episode.

    The announcement

    Meta will lay off approximately 8,000 employees on May 20, 2026, and close another 6,000 open roles, for a total of 14,000 careers affected by a single Thursday afternoon memo from Chief People Officer Janelle Gale. The memo does not mention AI once and explains the decision as an effort to run the company more efficiently and to offset other investments Meta is making.

    The other investments are not subtle.

    The capex picture

    In its January 2026 earnings report, Meta guided 2026 capital expenditures to a range of $115 billion to $135 billion. In 2025, the actual figure was $72.2 billion. Taking the midpoint of the 2026 guidance at $125 billion, the year-over-year increase is approximately $53 billion, with the spending going toward AI infrastructure, data centers, custom chips, and the company’s superintelligence research lab, which has been writing widely reported nine-figure compensation packages for top researchers. For scale, Meta also disclosed total 2026 expense guidance of $162 billion to $169 billion, which means the capex line alone is now nearly the size of the entire operating expense base.

    The savings picture

    Meta has not publicly disclosed the average fully-loaded cost of an employee, so we have to work with estimates. Using a generous figure of $400,000 per head for a workforce concentrated in the Bay Area and dominated by engineers, the 8,000 layoffs translate to approximately $3.2 billion in annual payroll savings, and including the 6,000 closed open roles at the same blended rate brings the total avoided cost to roughly $5.6 billion per year. That figure represents about 10 to 11 percent of the year-over-year capex increase.

    In other words, even if every dollar saved from headcount reductions and unfilled roles were redirected to capex, it would cover barely a tenth of the new AI spend. The remainder has to come from somewhere, and in Meta’s case that somewhere is the advertising business, which generated $59.89 billion in revenue in Q4 2025 alone, up 24 percent year over year. So when the memo says efficiency, what it actually means is that the AI bill is bigger than the savings, someone still has to pay the difference, and the advertising business is paying it. The buffer in that equation is human beings.
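
    The back-of-envelope math above is small enough to reproduce in a few lines. This is a sketch using the article’s own estimates, including the assumed $400,000 blended cost per head; apart from the publicly guided capex range and the 2025 actual, none of these inputs are Meta-disclosed figures.

```python
# Back-of-envelope: Meta headcount savings vs. the 2026 AI capex increase.
# All inputs are this article's estimates, not Meta disclosures
# (except the guided 2026 capex range and the 2025 actual).

capex_2026_midpoint = (115e9 + 135e9) / 2    # midpoint of guided range: $125B
capex_2025_actual = 72.2e9
capex_increase = capex_2026_midpoint - capex_2025_actual

blended_cost_per_head = 400_000              # assumed fully-loaded annual cost
layoffs = 8_000
closed_open_roles = 6_000

layoff_savings = layoffs * blended_cost_per_head
total_avoided_cost = (layoffs + closed_open_roles) * blended_cost_per_head
coverage = total_avoided_cost / capex_increase

print(f"Capex increase: ${capex_increase / 1e9:.1f}B")          # $52.8B
print(f"Layoff savings: ${layoff_savings / 1e9:.1f}B")          # $3.2B
print(f"Total avoided cost: ${total_avoided_cost / 1e9:.1f}B")  # $5.6B
print(f"Share of capex increase covered: {coverage:.1%}")       # 10.6%
```

    Change the blended cost assumption and the conclusion barely moves: even at $500,000 per head, the avoided cost covers well under a fifth of the increase.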

    The market reaction is the real story

    This is the part I did not expect, and it is the part that motivated me to write at all. Meta shares were down approximately 2 percent in afternoon trading on Thursday, broadly tracking the market, and by Friday the stock had recovered most of the move. Effectively flat.

    Two or three years ago, a layoff memo of this scale paired with an efficiency narrative would likely have driven a multi-billion-dollar bump in market capitalization by the closing bell, and the 2023 “Year of Efficiency” framing added meaningful value to Meta’s stock at the time. It worked then. Today, the market shrugged.

    That muted response is arguably the most important signal in the entire story. I noticed the same thing with Disney earlier in the week. Investors have now watched the same playbook executed by Block in February (40 percent), Snap earlier this month (16 percent), Oracle in waves through last quarter, Amazon’s 16,000 cuts in January, and on the same Thursday afternoon as Meta, Microsoft offered voluntary buyouts to roughly 8,750 US employees. The layoff-funds-AI memo is no longer news, it is a quarterly ritual, and when the market stops rewarding the action, the action stops being a strategy. It becomes a tax.

    This is the signal Olakai was built for. When the market stops accepting “we built it” and starts demanding “show me what it returned,” the gap between AI investment and measurable AI outcome becomes the most important number in your finance stack. The companies that close that gap before the next earnings call will not need a layoff memo to balance the AI capex line. The companies that do not close it will keep funding AI by subtracting people, and the market has now told them, in the most polite way possible, that the trick has stopped working.

    What this means for your AI ROI math

    Show Me The Math is a financial discipline at its core, and the discipline only works if it includes the full picture. For enterprise leaders watching this play out at the trillion-dollar scale, the buyer-side translation is direct: the largest, most well-capitalized AI spenders on earth cannot make their own AI capex math work without dipping into headcount and ad revenue, which means the assumption that AI investment self-funds through measurable productivity gains is being stress-tested in public, and the results are not yet conclusive.

    That is exactly the gap Olakai exists to close. We are the vendor-neutral Enterprise AI Intelligence Platform — the system of record that sits across every AI agent, copilot, and embedded tool in your stack, telling you what each one costs, what it returns, and where the unit economics actually break even. The thesis at trillion-dollar scale is the same as the thesis at enterprise scale: AI does not pay for itself by default, it pays for itself when you can measure it. Without that measurement layer, every CIO and CFO is running the same script Meta just ran in public, except with smaller numbers and less margin to absorb the miss.

    The four-step playbook keeps applying. See what your AI is actually doing across the stack, including the shadow AI you do not yet know is running. Measure the metrics that matter to a CFO — cost per task, completion rate, revenue impact, cycle-time reduction — not the activity metrics that look good on an internal dashboard. Decide within 30 to 60 days whether a pilot is generating the unit economics it promised, because every quarter you spend funding an unverified deployment is a quarter you cannot redirect to one that works. Act on what the data tells you, including killing pilots that are not delivering. That is the entire AI ROI playbook, and it scales from a single agent in a single department to the $125 billion capex line at Meta.
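
    The Decide step in that playbook is, at bottom, a unit-economics comparison, and it is small enough to sketch. The numbers, field names, and thresholds below are hypothetical illustrations, not Olakai product output or Meta figures.

```python
# Sketch of the "Decide" step: compare a pilot's measured unit economics
# against the business case it promised. All names and numbers here are
# hypothetical illustrations.

def decide(pilot: dict, max_cost_per_task: float, min_completion_rate: float) -> str:
    """Return 'scale' if the pilot beats its promised unit economics, else 'sunset'."""
    cost_per_task = pilot["monthly_cost"] / pilot["tasks_completed"]
    meets_cost = cost_per_task <= max_cost_per_task
    meets_quality = pilot["completion_rate"] >= min_completion_rate
    return "scale" if meets_cost and meets_quality else "sunset"

# A 60-day pilot: $18,000/month of spend, 12,000 completed tasks, 94% completion.
pilot = {"monthly_cost": 18_000, "tasks_completed": 12_000, "completion_rate": 0.94}

# The business case promised at most $2.00 per task at a 90%+ completion rate.
print(decide(pilot, max_cost_per_task=2.00, min_completion_rate=0.90))  # scale
```

    Either outcome is a defensible decision, because there are real measurements on both sides of the comparison. What is not defensible is a renewal without the comparison.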

    The human factor

    When 14,000 careers at one company are upended in a single afternoon, the line items on the income statement do not capture what is actually moving. There are mortgages, school enrollments, visa statuses on different and more urgent timelines than the headlines suggest, and partners with their own careers in the same compressed labor market. That is a lot of real lives compressed into a 27-day countdown to May 20.

    The severance package is comparatively generous, and that is worth saying plainly: 16 weeks of base pay, two additional weeks per year of service, and 18 months of healthcare coverage for US employees. That cushion matters and is better than most. But severance is a parachute, not a destination, and it is not a strategy. The cost of living is at multi-year highs, the tech hiring market has been compressed for two years, and the carry cost of being between roles in 2026 is materially higher than it was during the 2022 wave. The people receiving an email on May 20 are entering a labor market where the same pattern is being repeated by the very companies they would naturally apply to next.

    This is the part the math does not capture, and it is the part that matters most.

    What to do with this

    If you are a CFO, the Meta memo is your future-state preview. AI capex is going to eat budget you did not know was edible, and the answer is not to wait for Q3 surprises but to audit your AI spend against measurable outcomes now, ideally in the same quarter you read this. Olakai’s CFO use case walks through the specific framing — what to measure, what to ignore, and what a board-ready AI ROI scorecard actually looks like.

    If you are a sales or operations leader, the question is not whether AI replaces your team. The question is whether the AI you are already paying for is actually moving unit economics, or just adding another seat license to your stack. Map every AI tool to a measurable outcome before the next renewal cycle, because the measurement gap is what makes the layoff-funds-AI playbook so easy to default to. Olakai’s job is to surface that mapping automatically, so the renewal conversation starts with data rather than vibes.

    If you are an individual contributor watching this, the response is not panic, it is leverage. Innovate. Use AI to make your own work better. Become the person on the team who shows up with measurable output the rest of the team cannot match. The era in which headcount equaled value is closing, and the era in which measurable, accountable AI value defines organizational worth is opening. Both eras are tough, and the second one is at least one we can prepare for.

    The next episode and the bigger picture

    I will stress-test the Meta capex bet directly in a future installment, once there is more public information to work with. Q1 2026 earnings drop on April 29, and that call should give us something concrete to model: at what level of AI-driven revenue growth or operational savings does the $53 billion year-over-year increase actually break even, and what does Meta need to show to justify the cost the workforce is being asked to absorb? The short preview is that the spreadsheet does not yet justify the memo, and whether it will is the question Meta has to answer to investors next week, and to the 14,000 people whose lives have already been answered for them.

    The bigger picture is the one Olakai keeps pushing on every guest who comes on the podcast and every CFO we talk to: AI investment without an intelligence layer underneath it is a bet on faith, and faith is the most expensive form of capex on the books. Foundation first, measurement before scaling, governance that extends to your vendor chain, and an honest scorecard that survives a board review. Build that, and the next AI capex decision is grounded in data your CFO can defend. Skip it, and the only lever left is the one Meta just pulled.

    If you want help building that measurement and governance foundation before your own capex math forces a memo of its own, talk to an Expert. And if you want the longer-form conversations behind the analysis, the Enterprise AI Unlocked podcast goes deeper than the weekly Show Me The Math notes.

  • NVIDIA Surveyed 3,200 Leaders. 30% Still Can’t Measure AI ROI.

    NVIDIA’s 2026 State of AI report reads like a victory lap. Eighty-eight percent of enterprise leaders say AI increased revenue. Eighty-seven percent report cost reductions. Eighty-six percent are growing their AI budgets this year. If you stopped there — and most of the coverage did — you’d conclude that enterprise AI has crossed every finish line that matters.

    But buried in page after page of bullish data from 3,200 enterprise leaders is a finding that should give every CFO pause: 30% of those same leaders admit they still can’t quantify the ROI of their AI investments. Nearly one in three organizations spending more on AI this year cannot say, with any rigor, whether last year’s spending worked.

    Source: All data in this analysis comes from NVIDIA’s 2026 State of AI Report, a survey of 3,200+ enterprise leaders across six industries conducted August–December 2025. Charts reproduced from the original report.

    The Headline Numbers vs. the Fine Print

    The optimistic numbers are real, and they’re striking. Among the 3,200+ respondents surveyed between August and December 2025, 30% reported revenue increases exceeding 10%, with another 33% seeing gains between 5% and 10%. Cost reductions were nearly as widespread — 87% reported savings, and a quarter achieved cuts greater than 10%. In retail and consumer packaged goods, that figure climbed to 37%. These aren’t pilot results. They’re self-reported impacts from organizations with active AI deployments across financial services, healthcare, manufacturing, telecommunications, and retail.

    Chart: 88% of organizations report AI-driven revenue increases, broken out by industry vertical. Healthcare leads with 44% seeing gains over 10%. Source: NVIDIA State of AI 2026.
    Chart: 87% report cost reductions from AI, with retail leading at 37% achieving cuts exceeding 10%. Source: NVIDIA State of AI 2026.

    The trouble is how those numbers were generated. Self-reported revenue impact and measured revenue impact are different things. When NVIDIA’s own survey shows that 48% of respondents struggle with data sufficiency, 38% lack the AI expertise they need, and 30% can’t quantify ROI at all, you have to ask: how confident are the other 70% in their numbers? The report doesn’t say. And that gap between perception and measurement is where billions of dollars in AI spending disappear every year.

    This isn’t a reason to dismiss the progress. Organizations like PepsiCo, which achieved a 20% throughput increase and 10-15% reduction in capital expenditure through AI-powered digital twins, demonstrate that AI can deliver measurable returns. Clinomic’s medical ICU assistant reduced documentation errors by 68% and clinical workload by 33%. These are concrete, instrumented outcomes. The question is why so many organizations can’t produce similar evidence.

    The Measurement Problem Gets More Expensive Every Quarter

    Budget season makes the stakes concrete. Eighty-six percent of organizations are increasing AI budgets in 2026, with 40% planning increases of 10% or more. North American enterprises are even more aggressive — 48% are pushing budgets up by double digits. These aren’t exploratory budgets. They’re operational commitments that will demand operational proof.

    Chart: 86% of organizations are increasing AI budgets in 2026, broken out by region. Nearly half of North American enterprises plan increases of 10% or more. Source: NVIDIA State of AI 2026.

    The organizations that instrumented early have a structural advantage. Lowe’s built AI digital twins for more than 1,750 stores and can point to the cost of generating 3D product models — under $1 each — as a clear efficiency metric. Nasdaq constructed a proprietary AI platform and can trace its impact across both internal operations and external products. These companies didn’t stumble into measurement. They designed for it from the start, choosing specific outcomes to track before deploying the technology.

    The 30% without measurement infrastructure face a different budget conversation entirely. When the CFO asks what last quarter’s AI spend produced, “we believe it’s helping” isn’t an answer that protects next quarter’s allocation. As AI budgets grow, the metrics that matter to financial leadership become non-negotiable — and organizations without baselines can’t even begin that conversation.

    Agentic AI Makes the Gap Worse

    Perhaps the most forward-looking finding in the report: 44% of organizations are already deploying or actively assessing AI agents, with telecommunications (48%) and retail (47%) leading adoption. These aren’t chatbots answering customer questions. Agentic AI systems operate autonomously — making decisions, executing multi-step workflows, and interacting with other systems without human approval at every stage.

    That autonomy creates a measurement blind spot that assistive AI never did. When a human uses ChatGPT to draft an email, the value chain is short and visible. When an AI agent autonomously triages customer issues, escalates edge cases, updates CRM records, and triggers follow-up workflows, the value chain branches and compounds in ways that are nearly impossible to track without purpose-built instrumentation. If 30% of organizations can’t measure ROI on their existing AI tools, they’re not going to suddenly develop that capability for systems that are orders of magnitude more complex. Every autonomous decision an agent makes without measurement infrastructure is both an ROI blind spot and a risk exposure.

    What the Winners Did Differently

    A pattern emerges when you study the case studies NVIDIA highlights. PepsiCo didn’t deploy digital twins and hope for the best — they defined throughput improvement and capital expenditure reduction as target metrics before writing a line of code. Clinomic didn’t build a medical assistant and then try to figure out if it worked — they instrumented documentation error rates and clinician workload from day one. Lowe’s didn’t scale to 1,750 stores on intuition — they tracked cost-per-model as a unit economic that justified each expansion.

    The 30% who can’t quantify ROI didn’t fail at AI. They failed at instrumentation. They deployed capable technology without building the measurement layer that turns activity into evidence. And now, with budgets growing and boards asking harder questions, that gap is becoming a strategic liability. McKinsey’s latest research echoes this finding — organizations that establish clear performance metrics before scaling AI are significantly more likely to capture value across multiple business functions.

    The framework that separates measurement leaders from the rest follows a consistent pattern: first, establish visibility into what AI is actually doing across the organization; then, define the business metrics that matter for each use case; next, run structured evaluations against those baselines; and finally, use the evidence to make scale-or-sunset decisions. This See-Measure-Decide-Act cycle is what transforms AI investment from a faith-based initiative into a data-driven program.
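
    Under stated assumptions, that cycle can be sketched as a loop over a use-case portfolio. The names, metrics, and thresholds below are hypothetical illustrations, not the report’s methodology or any vendor’s API.

```python
# Minimal sketch of the See-Measure-Decide-Act cycle over a portfolio of
# AI use cases. All names and numbers are hypothetical illustrations.

# "See": inventory what is actually running across the organization.
# "Measure": pair each use case with a baseline and an observed business metric.
use_cases = [
    {"name": "support-triage-agent", "baseline_cost_per_task": 4.00, "observed_cost_per_task": 2.50},
    {"name": "contract-summarizer",  "baseline_cost_per_task": 1.20, "observed_cost_per_task": 1.90},
]

def decide(case: dict) -> str:
    """Scale use cases that beat their pre-deployment baseline; sunset the rest."""
    beats_baseline = case["observed_cost_per_task"] < case["baseline_cost_per_task"]
    return "scale" if beats_baseline else "sunset"

# "Act": apply the scale-or-sunset decision to each use case.
for case in use_cases:
    print(case["name"], "->", decide(case))
# support-triage-agent -> scale
# contract-summarizer -> sunset
```

    The structure matters more than the code: the baseline is defined before deployment, so the scale-or-sunset decision is evidence against a promise rather than a retroactive justification.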

    What to Do Before Your Next Budget Review

    The NVIDIA report makes one thing clear: the AI investment wave is not slowing down. With 86% of organizations increasing budgets and 44% moving into agentic AI, the volume of spending that needs justification is only growing. The organizations that thrive in this environment won’t be the ones with the most sophisticated models or the largest GPU clusters. They’ll be the ones that can answer, with precision, what their AI investments produced.

    If you’re in the 30% today, the path forward isn’t complicated — but it is urgent. Instrument before you scale. Establish baselines before your next deployment. Build measurement into your AI architecture the same way you’d build security into your cloud infrastructure: not as an afterthought, but as a foundational layer. The CFO will ask for proof. The board will ask for proof. Make sure you have it.

    Ready to close the measurement gap? Talk to an expert to see how Olakai gives you unified visibility, business-aligned KPIs, and ROI evidence across every AI tool in your enterprise.

  • What JP Morgan’s Agentic AI Rollout to 250,000 Employees Tells Us

    JP Morgan Chase just gave 250,000 employees access to AI — and then announced it was moving to agentic AI, where those systems won’t just answer questions but execute complex multistep tasks autonomously. The bank’s vision: “Every employee will have their own personalized AI assistant; every process is powered by AI agents.”

    It’s the largest enterprise AI deployment publicly disclosed. And it raises a question every enterprise will face as AI scales: can governance keep pace with adoption?

    The Numbers Behind the Rollout

    The scale is worth pausing on. JP Morgan’s LLM Suite — an internal AI platform built on models from OpenAI and Anthropic — is available to the bank’s entire workforce except branch and call center staff. Of the 250,000 employees with access, 150,000 use it weekly. Half of those use it daily. The platform is updated every eight weeks with new enterprise data, creating a continuously evolving tool that gets more capable and more embedded in daily work with each cycle.

    Behind the platform sit 2,000 AI and machine learning specialists — 900 data scientists, 600 machine learning engineers, and 200 AI researchers — managing 600-plus production use cases. That’s a specialist-to-user ratio of roughly 1:125. Even with one of the largest enterprise AI teams in the world, JP Morgan has far more AI usage than any team can manually oversee.

    And that’s before agentic AI enters the picture.

    From Assistive to Agentic: A Governance Leap

    There’s a fundamental difference between assistive AI and agentic AI from a governance perspective. Assistive AI — the copilots, chatbots, and summarization tools that make up most enterprise AI today — operates in a request-response pattern. An employee asks a question, the AI responds, the employee decides what to do with the response. The human is in the loop for every decision.

    Agentic AI operates differently. An AI agent receives a goal, decomposes it into subtasks, executes those subtasks autonomously (potentially across multiple systems), and delivers a result. The human may not be in the loop for intermediate decisions. The agent might query a database, analyze the results, draft a recommendation, and send it to a stakeholder — all without human review of each step.

    JP Morgan described itself as “early in the next phase” of deploying agentic AI in late 2025. The use cases in banking are obvious: automated fraud investigation that pulls transaction records, cross-references patterns, and generates suspicious activity reports. Loan underwriting that collects applicant data, runs credit analysis, and produces risk assessments. Compliance monitoring that scans communications, identifies potential violations, and initiates review workflows.

    Each of these use cases involves an AI system making decisions — or heavily influencing decisions — about sensitive data, regulated activities, and consequential outcomes. Governing these systems requires fundamentally different controls than governing a chatbot that summarizes meeting notes.

    The Governance Challenge at 250,000 Users

    Scale compounds every governance challenge. Consider the numbers: 150,000 weekly active AI users generating millions of interactions. Each interaction potentially involves customer data, financial records, proprietary strategies, or regulated communications. The average organization already experiences 223 data policy violations involving generative AI per month. At JP Morgan’s scale, that number could be orders of magnitude higher.

    Then there’s the shadow AI problem. Research shows that 9% of employees now switch between personal and enterprise AI accounts — up from 4% in the previous period. At 250,000 employees, 9% represents 22,500 potential data leakage vectors. These are employees whose company-approved tools don’t meet their needs for convenience or functionality, driving them to seek alternatives that lack enterprise security and privacy controls.
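
    The exposure arithmetic is worth making explicit. The rates are the industry figures cited above; applying them to JP Morgan’s headcount is an illustration of scale, not a measured figure for the bank.

```python
# Scale arithmetic behind the shadow-AI exposure estimate. The rates are
# industry-wide research figures; applying them to one company's headcount
# is an illustration, not a measurement.

employees = 250_000
shadow_ai_rate = 0.09   # share switching between personal and enterprise AI accounts
prior_rate = 0.04       # the same rate in the previous period

potential_leak_vectors = round(employees * shadow_ai_rate)
growth = shadow_ai_rate / prior_rate

print(f"Potential leakage vectors: {potential_leak_vectors:,}")  # 22,500
print(f"Rate growth vs. prior period: {growth:.2f}x")            # 2.25x
```

    The point of the exercise is the order of magnitude: a single-digit percentage of behavior, at this headcount, is a five-figure population of exposure points.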

    And the challenge deepens as AI moves from assistive to agentic. With assistive AI, governance can focus on what employees share with AI tools — input monitoring. With agentic AI, governance must also cover what AI systems do autonomously — output monitoring, decision accountability, and cascading action controls. An agentic AI system that autonomously initiates a trade, approves a credit application, or files a regulatory report introduces governance requirements that most enterprises haven’t even begun to design.

    How JP Morgan Is Approaching Governance

    JP Morgan’s governance structure provides a template — though it’s one built with resources few organizations can match.

    Platform centralization. Rather than allowing AI tool sprawl, JP Morgan channeled adoption through a single platform (LLM Suite). This centralization creates a single point of governance: access controls, data policies, usage monitoring, and audit trails all flow through one system. The alternative — employees using dozens of independent AI tools — makes governance exponentially harder because each tool has different data handling practices, different security models, and different audit capabilities.

    Model governance. The LLM Suite uses models from multiple vendors (OpenAI and Anthropic), updated every eight weeks. This multi-model strategy diversifies vendor risk while the regular update cycle ensures the platform stays current. But it also means continuous governance validation — every update cycle requires re-evaluation of model behavior, security posture, and compliance alignment.

    Use case-level oversight. Each of JP Morgan’s 600-plus production use cases is individually tracked and governed. This granularity matters because the governance requirements for a document summarization tool are fundamentally different from the governance requirements for a fraud detection agent. Use case-level governance allows risk-proportional controls — more oversight for high-stakes, high-sensitivity applications and lighter controls for low-risk productivity tools. Our AI risk heatmap approach follows the same principle.

    Dedicated governance functions. JP Morgan maintains a Model Risk Governance function that assesses each AI use case and a Firmwide Chief Data Officer responsible for data governance, quality, and access control. These aren’t part-time responsibilities added to existing roles. They’re dedicated functions with the organizational authority to block or modify AI deployments that don’t meet governance standards.

    The Workforce Reality

    Jamie Dimon has been unusually candid about AI’s impact on employment. In February 2026, he acknowledged that AI is already displacing workers at JP Morgan, stating the bank has “huge redeployment plans” to move affected employees to other roles. He’d previously called AI “possibly as transformational as the printing press, steam engine, computing, and the Internet.”

    JP Morgan isn’t alone. Bank of America, Citigroup, and Wells Fargo all projected lower headcounts for 2026, with AI-driven efficiency gains cited as a key factor. The financial services industry, with its high proportion of knowledge work and data-intensive processes, is one of the sectors most exposed to AI-driven workforce changes.

    This workforce dimension adds another governance requirement that most frameworks ignore: the responsibility to plan for displacement before it happens. JP Morgan’s approach — retrain and redeploy — is more thoughtful than that of many enterprises, but it still requires knowing which roles AI will affect, on what timeline, and at what scale. That knowledge comes from the same measurement infrastructure that proves AI ROI: if you can’t measure what AI is doing, you can’t project what it will change.

    What This Means for Other Enterprises

    Most organizations won’t deploy AI to 250,000 users overnight. But every organization scaling AI faces the same governance challenges JP Morgan faces — just at a different magnitude. The principles translate:

    Centralize AI access before you lose control. Once employees adopt dozens of independent AI tools, governing the ecosystem becomes nearly impossible. A platform strategy — even a modest one — creates the governance foundation that fragmented tool adoption destroys.

    Design agentic governance before deploying agentic AI. The gap between assistive AI governance (monitoring inputs) and agentic AI governance (monitoring autonomous decisions) is substantial. Organizations that deploy agentic AI with assistive-era governance will face accountability gaps when agents make consequential decisions without human oversight.

    Treat shadow AI as inevitable, not preventable. At any scale, some employees will use unauthorized AI tools. The governance response should be detection and redirection (identifying unauthorized tools and channeling users to approved alternatives), not just prohibition. Prohibition fails at scale because it creates friction that drives adoption underground.
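One minimal way to express that detect-and-redirect posture is an allow/redirect policy evaluated over observed tool usage. The tool names and mapping below are hypothetical, purely to illustrate the shape of the control:

```python
# Hypothetical detect-and-redirect policy for shadow AI: allow approved
# tools, redirect known unapproved tools to a sanctioned alternative,
# and flag anything unrecognized for review rather than silently blocking.
APPROVED = {"enterprise-llm-suite", "approved-coding-assistant"}
REDIRECT = {
    "personal-chatbot": "enterprise-llm-suite",
    "free-code-helper": "approved-coding-assistant",
}

def evaluate(tool: str) -> str:
    if tool in APPROVED:
        return "allow"
    alt = REDIRECT.get(tool)
    if alt is not None:
        return f"redirect:{alt}"
    return "flag-for-review"

print(evaluate("personal-chatbot"))      # redirect:enterprise-llm-suite
print(evaluate("enterprise-llm-suite"))  # allow
```

The design choice worth noting is the third branch: an unknown tool is flagged, not blocked, which preserves the low-friction path that keeps adoption from going underground.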

    Start workforce planning now. If AI is going to change roles at your organization — and it will — the time to plan for redeployment is before displacement occurs, not after. That planning requires visibility into where AI is being used and how it’s changing work patterns — data that only comes from measurement infrastructure.

    JP Morgan’s rollout is a preview of what every enterprise AI program will eventually face: the governance challenge of AI operating at scale, across an entire workforce, with increasing autonomy. The question isn’t whether your organization will get there. It’s whether your governance infrastructure will be ready when it does.

    For a deeper look at the frameworks and readiness assessments for agentic AI deployment, Future of Agentic’s governance hub provides comprehensive resources. And our CISO governance checklist offers a practical starting point for security leaders.

    Scaling AI and need governance that scales with it? Talk to an expert to see how Olakai provides unified AI governance across assistive and agentic AI — before risk scales with adoption.

  • JP Morgan Spent $2B on AI. Here’s What They Measured.

    JP Morgan Spent $2B on AI. Here’s What They Measured.

    In February 2026, Jamie Dimon made a claim that most enterprise leaders can only dream of: JP Morgan Chase’s $2 billion AI investment had “paid for itself.” Not “shows promise.” Not “is on track.” Paid for itself — $2 billion in measured benefits offsetting $2 billion in costs.

    Break-even isn’t a moonshot return. But in a landscape where only 20% of enterprises can prove AI drives any revenue at all, the fact that JP Morgan can put a dollar figure on AI’s contribution — and defend it to shareholders — puts them in rare company. The question worth studying isn’t how much they spent. It’s what they measured, and how.

    The Scale of the Bet

    JP Morgan’s $2 billion annual AI investment sits within a technology budget that grew from $17 billion to $19.8 billion in 2026. The bank employs more than 2,000 AI and machine learning specialists, including 900 data scientists, 600 machine learning engineers, and 200 AI researchers. This isn’t a skunkworks team running experiments. It’s a division-level commitment that treats AI as core infrastructure alongside payment systems and risk controls.

    The centerpiece is the LLM Suite, an internal platform built on models from OpenAI and Anthropic that 150,000 employees use weekly. The platform is updated every eight weeks with new enterprise data, and by late 2025, roughly half of all LLM Suite users were using it daily. At that adoption rate, AI interactions at JP Morgan generate measurement data at a scale most enterprises can’t match.

    But scale alone doesn’t prove value. What makes JP Morgan’s approach worth studying is that they measure AI at the use case level — not just at the platform level. Every one of their 600-plus production AI use cases has specific metrics tied to specific business outcomes.

    What They Actually Measured

    JP Morgan’s AI measurement spans four categories, each connecting AI activity to a different type of business outcome:

    Time recovery. The most widely cited metric: LLM Suite users report saving an average of four hours per day. At 150,000 weekly users, that’s potentially 600,000 hours per day of recovered employee time — time that gets redirected to higher-value work, client engagement, and analysis that was previously crowded out by routine tasks. Time recovery is the most accessible AI metric because it’s easy to measure and easy to understand, but it’s also the most dangerous if not connected to downstream outcomes. Four hours “saved” only creates value if those hours are deployed productively.
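A quick sanity check on that arithmetic, using only the figures above; the loaded hourly cost is an illustrative assumption, not a JP Morgan number:

```python
# Back-of-the-envelope time-recovery check. User count and hours saved
# come from the article; the loaded hourly cost is an illustrative assumption.
weekly_users = 150_000
hours_saved_per_user_per_day = 4

# Upper bound: every weekly user realizes the full saving on a given workday.
hours_recovered_per_day = weekly_users * hours_saved_per_user_per_day

assumed_loaded_cost_per_hour = 75  # USD, assumed for illustration
daily_value_usd = hours_recovered_per_day * assumed_loaded_cost_per_hour

print(hours_recovered_per_day)  # 600000
print(daily_value_usd)          # 45000000
```

The exercise makes the caveat in the text concrete: a headline hours figure only becomes value once you attach a defensible hourly rate and confirm the recovered hours were actually redeployed.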

    Cost reduction. The COiN (Contract Intelligence) platform provides the clearest cost reduction case study. Before AI, JP Morgan’s legal team manually reviewed commercial loan agreements — a process that consumed approximately 360,000 hours annually. COiN now reviews 12,000 documents in seconds rather than weeks, reducing legal operations costs by 30% and cutting compliance errors by 80%. The cost reduction is measured against a known baseline (manual review hours and error rates), making the calculation straightforward and defensible.

    Revenue impact. JP Morgan’s AI trading algorithms illustrate revenue-side measurement. The bank reported that AI-driven trading systems improved win rates from 52% to 63% and saved $25 million in slippage costs. Revenue impact is harder to measure than cost reduction because attribution is more complex — markets move for many reasons, and isolating AI’s contribution requires careful methodology. But JP Morgan’s approach of measuring specific trading performance metrics (win rate, slippage) rather than aggregate revenue provides a more defensible attribution model.

    Value creation mapping. Across customer personalization, trading, fraud detection, and credit decisioning, JP Morgan identified $1 to $1.5 billion in value creation from AI. This portfolio-level view is what enables the “$2 billion investment paid for itself” claim — it aggregates use case-level measurements into an enterprise-wide picture that can be presented to shareholders.

    Lessons for the Rest of Us

    Most enterprises aren’t JP Morgan. They don’t have 2,000 AI specialists or a $17 billion technology budget. But the measurement principles that underpin JP Morgan’s ability to claim ROI are applicable at any scale.

    Measure at the use case level, not the platform level. JP Morgan doesn’t report a single “AI ROI” number derived from aggregate spending and aggregate benefits. They track 600-plus individual use cases, each with defined metrics. This granularity is what makes the portfolio-level claim credible — it’s built bottom-up from measured outcomes, not estimated top-down from spending. Even an enterprise with five AI use cases can apply this discipline: define the success metric for each use case, measure it against a baseline, and report results individually before aggregating.
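That discipline is easy to prototype even for a small portfolio. The sketch below uses invented use cases, baselines, and dollar figures, purely to illustrate measuring each use case against its own baseline and then aggregating bottom-up:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    baseline_cost_usd: float  # annual cost before AI (the measured baseline)
    current_cost_usd: float   # annual cost with AI in production
    ai_spend_usd: float       # annual AI spend attributed to this use case

    @property
    def net_benefit_usd(self) -> float:
        # Gross saving against baseline, minus what the AI itself costs.
        return (self.baseline_cost_usd - self.current_cost_usd) - self.ai_spend_usd

# Invented example portfolio — not JP Morgan's actual numbers.
portfolio = [
    UseCase("contract_review",   4_000_000, 2_800_000, 400_000),
    UseCase("fraud_triage",      2_500_000, 2_000_000, 300_000),
    UseCase("doc_summarization",   600_000,   450_000, 200_000),
]

# Report each use case individually before aggregating.
for uc in portfolio:
    print(f"{uc.name}: net {uc.net_benefit_usd:+,.0f} USD/yr")

total = sum(uc.net_benefit_usd for uc in portfolio)
print(f"portfolio net benefit: {total:+,.0f} USD/yr")
```

Note that the third use case comes out negative in this invented data — which is exactly the point of per-use-case reporting: a portfolio total can look healthy while individual deployments quietly lose money.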

    Track multiple metric categories. Time recovery alone doesn’t prove ROI. Cost reduction alone doesn’t capture the full picture. Revenue impact alone is too hard to attribute without supporting data. JP Morgan tracks all four categories (time, cost, revenue, value creation) and presents them together. This multi-dimensional view is more credible to boards and CFOs than any single metric, because it demonstrates that the organization has instrumented AI measurement comprehensively.

    Build governance alongside measurement. JP Morgan’s Model Risk Governance function and Firmwide Chief Data Officer aren’t separate from AI measurement — they’re integral to it. Governance forces the organization to define what each AI system does, which creates the accountability structure that measurement requires. As we’ve seen across 100-plus AI agent deployments, the enterprises with the strongest ROI data are the ones with the most rigorous governance frameworks.

    Treat AI as infrastructure, not R&D. JP Morgan reclassified AI from an innovation investment to core infrastructure — the same category as payment processing and risk management. This shift has measurement implications: infrastructure has uptime, performance, and cost-efficiency metrics that are reviewed continuously, not evaluated in quarterly innovation reviews. When AI becomes infrastructure, measurement becomes operational rather than experimental.

    The ServiceNow Parallel

    JP Morgan isn’t the only enterprise betting big on AI measurement. ServiceNow’s AI business reached $600 million in annual contract value in 2025 and expects to exceed $1 billion by the end of 2026. Like JP Morgan, ServiceNow measures AI at the product level — tracking adoption, usage patterns, and customer value creation for each AI capability rather than reporting a single aggregate number.

    The pattern is consistent across enterprises that prove AI ROI: measurement happens at the individual use case or product level, governance provides the accountability structure, and results are aggregated into a portfolio view for executive and board reporting. The enterprises stuck in pilot purgatory do the opposite — they measure at the platform level, lack governance infrastructure, and can’t connect aggregate spending to specific outcomes.

    What This Means for 2026

    McKinsey projects that AI could unlock $200 to $340 billion annually in value for financial services alone, and the industry is responding — more than 70% of financial institutions were using AI at scale by late 2025, up from 30% in 2023. But the gap between “using AI” and “proving AI ROI” remains wide. JP Morgan is one of the few financial institutions that can put specific dollar figures on specific AI outcomes.

    The lesson isn’t that enterprises need to spend $2 billion. It’s that the measurement infrastructure JP Morgan built — use case-level tracking, baseline metrics, multi-category measurement, governance integration — is what enables the ROI claim. That infrastructure can be built at any scale, for any number of AI initiatives. The cost of building it is a fraction of the cost of running AI without it.

    If your organization is investing in AI but can’t answer “what’s the return?” with specific numbers, the problem isn’t your AI. It’s your measurement. Our AI ROI framework provides the methodology, and Olakai’s platform provides the instrumentation to track AI value the way JP Morgan does — at the use case level, against baselines, across time, cost, revenue, and risk.

    Ready to measure your AI like JP Morgan? Talk to an expert and we’ll show you how enterprises track AI ROI across every initiative — without needing a $17 billion technology budget.

  • What ServiceNow’s $8B AI Acquisition Spree Tells Us About the Future of Enterprise AI

    What ServiceNow’s $8B AI Acquisition Spree Tells Us About the Future of Enterprise AI

    ServiceNow just spent $7.75 billion to solve a problem most enterprises don’t know they have yet.

    In January 2026, ServiceNow announced its largest acquisition ever: Armis, a cyber exposure management platform, for $7.75 billion in cash. But this wasn’t an isolated move. It was the culmination of an acquisition strategy that signals a fundamental shift in how the enterprise software market views AI governance.

    When a $200 billion platform company makes its largest purchase in history, it’s worth paying attention to what they’re buying—and why.

    The Acquisition Timeline

    ServiceNow’s 2025 spending spree tells a coherent story. In January 2025, they acquired Cuein, an AI-native conversation data analysis platform. In April, they announced the acquisition of Logik.ai, an AI-powered configure-price-quote solution. Then came Moveworks for $2.85 billion, Data.World for data governance, and Veza for identity security.

    The Armis deal dwarfs them all. At $7.75 billion in cash—more than twice the Moveworks price—it represents a massive bet on the convergence of AI, security, and operational technology. Combined with the earlier acquisitions, ServiceNow is assembling capabilities that span AI conversation analysis, data governance, identity management, and now comprehensive exposure management across IT, OT, and IoT environments.

    This isn’t a collection of opportunistic purchases. It’s a deliberate construction of an AI governance stack.

    The AI Control Tower Vision

    ServiceNow has been explicit about their strategic direction. They’re positioning themselves not just as an AI platform, but as what they call an “AI Control Tower”—a unified system that governs and manages AI across the enterprise.

    In the Armis announcement, ServiceNow President Amit Zavery stated it directly: “In the agentic AI era, intelligent trust and governance that span any cloud, any asset, any AI system, and any device are non-negotiable if companies want to scale AI for the long-term.”

    That framing matters. ServiceNow isn’t just saying AI governance is important. They’re saying it’s non-negotiable for scaling AI—and they’re willing to spend nearly $8 billion to prove the point.

    The Armis acquisition specifically addresses a visibility gap that most organizations haven’t fully reckoned with. Without knowing what’s connected across IT, operational technology, IoT, and physical environments, ServiceNow argues that “workflow automation, AI governance, and risk prioritization all collapse into theatre.” You can write policies all day, but if you can’t see what’s actually happening across your technology footprint, those policies are aspirational at best.

    Why This Matters for Every Enterprise

    ServiceNow’s acquisition strategy validates a market reality that’s been emerging for the past two years. AI governance isn’t a nice-to-have feature for compliance teams to worry about later. It’s becoming a core enterprise capability—one that established platform companies are racing to own.

    Consider what this signals. A company with ServiceNow’s market intelligence—they see how their 8,100+ enterprise customers are actually deploying technology—has concluded that AI governance is worth a multi-billion dollar bet. This aligns with the broader trajectory we traced in The Evolution of Enterprise AI, where each era demands more robust governance. They’re not experimenting. They’re going all-in.

    This has several implications for enterprise leaders.

    First, the governance problem is real and urgent. If you’ve been treating agentic AI governance as a future concern, the market is moving faster than that timeline allows. ServiceNow, Microsoft, Salesforce, and other major platforms are all investing heavily in AI governance capabilities. They’re building for a future where governance is expected, not optional.

    Second, visibility is the foundation. Every acquisition ServiceNow made connects to visibility in some way—seeing AI conversations, understanding data flows, tracking identities, monitoring connected devices. You can’t govern what you can’t see, and the platform leaders are racing to be the ones who provide that visibility layer.

    Third, the vendor landscape is consolidating. When large platforms acquire specialized governance capabilities, they’re signaling an intent to own that layer of the stack. Organizations that wait too long may find themselves choosing between platform lock-in and building custom solutions from scratch.

    The Broader Pattern

    ServiceNow isn’t alone in this recognition. Microsoft has been embedding governance capabilities across its Copilot ecosystem. Salesforce is building AI controls into its platform. AWS, Google Cloud, and Azure are all developing AI governance tooling.

    The pattern is clear: every major platform company has concluded that AI governance will be a battleground for enterprise relationships. They’re not just selling AI capabilities—they’re selling the ability to control, secure, and measure those capabilities.

    This creates both opportunity and risk for enterprises. The opportunity is that governance capabilities will become more accessible as platform providers compete to offer them. The risk is that governance becomes another vector for platform lock-in, with organizations finding themselves dependent on a single vendor not just for AI capabilities but for their ability to manage and measure those capabilities.

    What This Means for Your AI Strategy

    The ServiceNow acquisitions should prompt several strategic questions for enterprise leaders.

    If you’re still waiting for AI governance, the market isn’t. The leading platform companies are spending billions to build governance capabilities. They’re doing this because they see demand from their largest customers—the enterprises that are furthest along in AI deployment. If you’re behind the curve on AI governance, you’re increasingly in the minority.

    Enterprise-grade governance is becoming table stakes. Two years ago, AI governance was a differentiator. Organizations that had it were ahead. Today, it’s moving toward baseline expectation. The question is shifting from “Do you have AI governance?” to “How mature is your AI governance?” Organizations without any governance infrastructure will increasingly struggle to pass security reviews, satisfy regulators, and win enterprise deals.

    You don’t need $8 billion to get started. ServiceNow is building for a world where they’re the governance layer for their entire customer base. Your organization has different needs. You need visibility into what AI is doing, measurement of what value it’s delivering, and controls that scale with your risk profile. That doesn’t require a platform acquisition strategy—it requires the right tools applied to your specific environment.

    The Vendor-Neutral Alternative

    Olakai was built on the same insight that’s driving ServiceNow’s acquisition strategy: enterprises need unified visibility, governance, and ROI measurement across their AI deployments. The difference is in how we deliver it.

    Rather than locking customers into a single platform, Olakai provides a vendor-neutral control plane that works across AI tools, models, and infrastructure. We integrate with whatever AI systems you’re using—whether that’s chatbots from one vendor, copilots from another, or agent frameworks from a third. The goal is the same governance visibility and ROI measurement that ServiceNow is assembling through acquisitions, without requiring you to commit to their ecosystem.

    This matters because most enterprises don’t have a single-vendor AI environment, and they’re unlikely to in the foreseeable future. Different teams have different needs. Different use cases have different requirements. A governance layer that only works within one platform leaves gaps that shadow AI will fill.

    Looking Ahead

    The ServiceNow acquisition spree marks a turning point. AI governance has moved from emerging concern to validated market category, with billions of dollars of M&A activity confirming its importance. This shift is playing out across every industry vertical.

    For enterprise leaders, the message is clear. The organizations that figure out AI governance in 2026 will have a significant advantage over those that don’t — a theme we explore across all eight trends in our AI Predictions for 2026. They’ll scale AI programs faster because they can prove value and manage risk. They’ll win more enterprise deals because they can satisfy security and compliance requirements. They’ll retain talent because they can offer AI tools with appropriate guardrails rather than blanket prohibitions.

    ServiceNow is betting that AI governance will be non-negotiable for enterprises that want to scale AI. Based on what we’re seeing in the market, that bet looks correct.

    The only question is whether you’ll build that governance capability before your competitors do.

    The market has validated AI governance. Talk to an expert to see how Olakai delivers it without platform lock-in.

  • AI Predictions for 2026: What Enterprise Leaders Need to Know

    AI Predictions for 2026: What Enterprise Leaders Need to Know

    As 2025 draws to a close, enterprise AI has reached an inflection point. Chatbots and copilots proved the technology works — a progression we trace in The Evolution of Enterprise AI. Agentic AI is demonstrating the power of autonomous action. But the gap between AI experimentation and AI value remains stubbornly wide for most organizations.

    The stakes are higher than ever. According to Gartner, 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026, up from less than 5% in 2025. That’s an 8x increase in a single year. But the same Gartner research warns that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The difference between the winners and the laggards won’t be who has the most AI—it’ll be who extracts the most value from it.

    Here are the trends we see shaping enterprise AI in 2026—and what they mean for business leaders.

    1. From Pilots to Production

    2026 will be the year enterprises finally move beyond pilot purgatory. Organizations that have been experimenting for 2-3 years will face a “ship or kill” moment: either prove ROI and scale, or acknowledge the experiments failed. The era of open-ended experimentation is ending.

    This shift has real consequences. Expect pressure to quantify AI value in business terms, not just technology metrics. Governance and measurement become requirements, not nice-to-haves. Vendors will face harder questions about real-world results, not demo magic. According to McKinsey, high-performing organizations are three times more likely to scale agents than their peers—but success requires more than technical excellence. The key differentiator isn’t the sophistication of the AI models; it’s the willingness to redesign workflows rather than simply layering agents onto legacy processes.

    If you’ve been running pilots, define success criteria and set a deadline. Either demonstrate value or reallocate resources to use cases that can. For a structured approach to proving value, see our AI ROI measurement framework.

    2. The Rise of Multi-Agent Systems

    Single-purpose agents will give way to coordinated multi-agent systems. Just as microservices transformed software architecture, agent ecosystems will transform how enterprises automate complex workflows. Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025—a clear signal that enterprises are thinking beyond standalone agents.

    This shift enables complex processes like order-to-cash and hire-to-retire to become fully autonomous. Agents will hand off work to other agents, creating agent-to-agent workflows that mirror how human teams collaborate. But governance complexity increases as agent interactions multiply—you’ll need visibility not just into individual agents but into the handoffs and decisions across the entire system.

    Forrester predicts that 30% of enterprise app vendors will launch their own MCP (Model Context Protocol) servers in 2026, enabling external AI agents to collaborate with vendor platforms. Gartner outlines five stages in enterprise AI evolution: Assistants for Every Application (2025), Task-Specific Agents (2026), Collaborative Agents Within Apps (2027), Ecosystems Across Apps (2028), and “The New Normal” (2029) where at least half of knowledge workers will be expected to create, govern, and deploy agents on demand.

    Design your agent architecture with coordination in mind now. Establish standards for how agents communicate and hand off work before the complexity becomes unmanageable.

    3. Governance Becomes Competitive Advantage

    Organizations with mature AI governance will scale faster than those without. While governance has been seen as a brake on innovation, 2026 will reveal it’s actually an accelerator—enabling confident deployment of higher-risk, higher-value use cases that competitors can’t touch.

    Companies with governance in place can move to production faster because security and compliance aren’t blocking deployment at the last minute. Regulatory pressure will increase with the EU AI Act fully in effect, state laws emerging in the U.S., and industry standards solidifying. Customers and partners will ask about your AI governance posture. Forrester predicts 60% of Fortune 100 companies will appoint a head of AI governance in 2026—organizations ramping up agentic exploration will especially benefit from this increased focus.

    Build governance foundations now. Start with visibility (what AI is running?), then add controls (who can do what?), then measurement (is it working?). Our CISO governance checklist provides a comprehensive framework.

    4. The ROI Reckoning

    CFOs will demand clear AI ROI numbers. The days of “we’re investing in AI for the future” are ending. 2026 will require concrete evidence that AI investments are paying off.

    McKinsey estimates generative AI could add between $2.6 and $4.4 trillion annually to global GDP, with AI productivity gains in areas like security potentially unlocking up to $2.9 trillion in economic value by 2030. But that’s the macro picture. At the individual enterprise level, AI leaders will need to connect AI metrics to business outcomes. Activity metrics like conversations and completions won’t be enough—you’ll need cost savings, revenue impact, and time-to-value calculations. Some AI projects will be cut when they can’t prove value.

    Establish baselines before deploying AI. Define what success looks like in business terms. Track outcomes, not just activity.

    5. Shadow AI Backlash

    A major data breach or compliance violation caused by shadow AI will force enterprises to take unauthorized AI use seriously. What’s been tolerated as employee experimentation will become a recognized security risk.

    Enterprises will invest in shadow AI detection and governance. Policies will shift from “don’t use AI” (which doesn’t work) to “use approved AI” (which gives employees a sanctioned path). Security teams will add AI-specific controls to their toolkit. Gartner’s warning about “agent washing”—vendors rebranding existing products without substantial agentic capabilities—adds another dimension: you’ll need to distinguish real AI tools from marketing rebadging.

    Understand your shadow AI exposure now. Provide sanctioned alternatives that meet employee needs. Build detection capabilities before an incident forces your hand.

    6. Industry-Specific Agents Emerge

    Vertical AI solutions will outperform horizontal ones. Pre-built agents for specific industries—healthcare claims processing, financial underwriting, legal document review—will deliver faster time-to-value than general-purpose platforms that require extensive customization.

    Industry expertise becomes as important as AI capability. The build vs. buy calculus shifts toward buy for common workflows, with differentiation coming from proprietary data and processes rather than technology. Gartner estimates only about 130 of the thousands of agentic AI vendors are real—the rest are rebranding without substance.

    Evaluate industry-specific AI solutions for common workflows in your sector. Reserve custom development for truly differentiating use cases where your unique processes create competitive advantage. The Future of Agentic use case library provides examples across industries.

    7. The Talent Shift

    AI will change the skills organizations need—but not in the ways people expect. Demand will grow for AI governance, integration, and change management expertise. Pure AI/ML research talent will remain concentrated at large labs; most enterprises won’t build models, they’ll integrate and govern them.

    Change management and training become critical for adoption—technology that people don’t use delivers zero value. New roles are emerging: AI Ethics Officer, AI Governance Lead, Agent Operations. Gartner predicts that through 2026, atrophy of critical-thinking skills due to GenAI use will push 50% of global organizations to require “AI-free” skills assessments. The top five HCM platforms will offer digital employee management capabilities, treating AI agents as part of the workforce requiring HR oversight.

    Invest in governance and integration capabilities. Build change management into every AI project. Upskill existing staff on AI governance rather than competing for scarce model-building talent.

    8. Cost Optimization Pressure

    AI costs will come under scrutiny. Early implementations often overspend on model API calls, infrastructure, and maintenance. 2026 will bring a sharper focus on AI unit economics and cost optimization.

    Cost per transaction becomes a key metric alongside accuracy and time savings. Model selection will consider cost/performance tradeoffs—not every task needs the most powerful model. Right-sizing becomes standard practice: using simpler, faster, cheaper models where appropriate, reserving expensive frontier models for tasks that truly require them.

    Track AI costs at the use-case level so you understand where money is going. Experiment with smaller models for routine tasks. Optimize prompts and workflows for efficiency—often the cheapest improvement is making fewer API calls through better prompt engineering.
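Use-case-level cost tracking can be as simple as attributing token spend per call. The sketch below is a minimal illustration of the idea; the model names, blended per-1K-token prices, and the `CostTracker` class are all hypothetical, not actual vendor rates or any particular platform's API.

```python
# Minimal sketch of per-use-case AI cost tracking and a cost-per-transaction
# metric. Prices below are illustrative placeholders, not real vendor rates.
from collections import defaultdict

PRICE_PER_1K_TOKENS = {      # hypothetical blended $/1K tokens
    "frontier-model": 0.030,
    "small-model": 0.002,
}

class CostTracker:
    def __init__(self):
        self.spend_by_use_case = defaultdict(float)
        self.calls_by_use_case = defaultdict(int)

    def record(self, use_case, model, tokens):
        # Attribute the cost of one API call to its business use case.
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self.spend_by_use_case[use_case] += cost
        self.calls_by_use_case[use_case] += 1

    def cost_per_transaction(self, use_case):
        return self.spend_by_use_case[use_case] / self.calls_by_use_case[use_case]

tracker = CostTracker()
tracker.record("support-triage", "small-model", tokens=1_200)
tracker.record("support-triage", "small-model", tokens=800)
tracker.record("contract-review", "frontier-model", tokens=5_000)

print(round(tracker.cost_per_transaction("support-triage"), 4))   # → 0.002
print(round(tracker.cost_per_transaction("contract-review"), 4))  # → 0.15
```

Even this crude breakdown makes the right-sizing argument concrete: routine triage on a small model costs a fraction of a cent per transaction, while frontier-model calls cost orders of magnitude more and should be reserved for the work that needs them.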

    The Path Forward

    2026 will separate AI leaders from AI laggards. The difference won’t be technology—it will be execution. Leaders will prove ROI, scale successful pilots, and build governance that enables rather than blocks. Laggards will remain stuck in experimentation, unable to prove value or manage risk.

    Gartner’s best-case scenario projects that agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion—up from 2% in 2025. By 2028, Gartner predicts 90% of B2B buying will be intermediated by AI agents, pushing over $15 trillion of B2B spend through AI agent exchanges. The enterprises that build the capabilities to participate in that future will thrive; those that don’t will struggle to compete.

    The enterprises that succeed will treat AI not as a technology project but as a business transformation. They’ll measure what matters, govern what’s risky, and scale what works. The future of enterprise AI is measurable, governable, and valuable. 2026 is the year to make it real.

    Ready to move from experimentation to execution? Talk to an expert to see how Olakai helps enterprises measure ROI, govern risk, and scale AI with confidence.