Resource Type: Blog

  • Prioritizing User Engagement for Strategic Success

    In the rapidly evolving landscape of artificial intelligence, it’s crucial to recognize that the use cases for Enterprise AI differ significantly from B2C applications like ChatGPT. While consumer interfaces focus on providing quick answers, the enterprise environment demands a more nuanced approach—one that prioritizes user engagement to drive meaningful outcomes.

    According to a recent survey, 75% of enterprises believe that AI will be critical to their success in the next few years. However, without robust user engagement, businesses risk falling behind in their AI adoption and may struggle to achieve their desired outcomes. In the enterprise space, we aren’t just looking for everyday answers; we need to get things done!

    To truly harness the power of AI, organizations must rethink the user experience. This means designing interfaces that not only facilitate interaction but also encourage adoption and sustained use. Here are a few key areas where the experience must, and will, be rethought:

    • Contextual interfaces: Provide users with relevant information and tools to solve problems efficiently. This might include presenting relevant data, offering suggested next steps, or even automating routine tasks.
    • Personalization: Tailor the user experience to each individual or group based on their jobs, roles, and responsibilities. This could involve customizing the interface, providing relevant training or support, or even offering personalized recommendations.
    • Avoid repetition: Don’t force users to re-enter the same information or complete the same tasks multiple times. Instead, use AI to automate routine tasks, reduce friction, and make the experience more seamless. This is a key topic I will continue to write about, because having to repeat, over and over, what we want from an LLM is, without a doubt, the next great frustration.
    • Advanced prompting: Help users craft accurate and effective prompts, which is critical in the enterprise for achieving accurate results. Or better yet, remove the need for prompting altogether. For many of us, prompting is hard.
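    The “avoid repetition” and “advanced prompting” points can be made concrete with a small sketch. The example below is purely illustrative (the profile fields and template are assumptions, not any product’s API): stored user context is injected into every prompt, so the user never has to restate who they are or how they want answers.

```python
# Hypothetical sketch: pre-filling an LLM prompt with stored user context
# so users never repeat their role, region, or formatting preferences.
from string import Template

# Assumed user profile, e.g. loaded from an enterprise identity system
USER_CONTEXT = {
    "role": "financial analyst",
    "region": "EMEA",
    "preferred_format": "bullet summary",
}

PROMPT = Template(
    "You are assisting a $role covering $region. "
    "Answer in the user's preferred format ($preferred_format).\n"
    "Task: $task"
)

def build_prompt(task: str, context: dict = USER_CONTEXT) -> str:
    """Combine the user's stored context with their current request."""
    return PROMPT.substitute(task=task, **context)

print(build_prompt("Summarize Q3 revenue drivers."))
```

    The user types only the task; everything else is carried by the system, which is exactly the “contextual interface” idea above.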

    By focusing on user engagement and rethinking the experience, businesses can unlock the full potential of AI, transforming it from a mere tool into a strategic asset that drives productivity gains. Let’s prioritize user engagement as a key factor in our strategies.

  • AI Agents in Cybersecurity: From 30 to 4 Minutes!

    When a phishing email hits your inbox and you click “report,” what happens next?

    Until recently, that click triggered a cascade: a ticket gets created, a security analyst picks it up, spends 30 minutes to 2.5 hours investigating, checking links, detonating attachments in sandboxes, querying tools, hunting for leads across systems. Then they write it up and close the ticket.

    Seventy percent of the time? It’s nothing. False positive. Wasted effort.

    That analyst you just hired, the one who got into cybersecurity to hunt down bad guys and protect the organization? They’re spending most of their day doing repetitive investigation work that leads nowhere.

    Nathan Burke, CMO at 7AI, just showed me what happens when you solve this differently.

    The Four-Minute Investigation

    On this week’s episode of Enterprise AI Unlocked, Nathan walked me through something that should make every CISO pause: 7AI’s autonomous security agents are completing investigations that used to take 30 minutes to 2.5 hours in four minutes. Same quality. Same thoroughness. But without burning analyst time on non-human work.

    Here’s what’s different: instead of one generalist AI trying to do everything, 7AI built specialized agents: small, purpose-built LLM-driven components, each with a specific mission. One agent checks links. Another detonates file attachments. Another queries endpoint data. They work in parallel, handling the entire investigation chain end-to-end, then hand results to humans only when something actually needs attention.
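    To make the parallel-specialist idea concrete, here is a minimal sketch in Python. It is not 7AI’s implementation; the agent functions, email fields, and escalation rule are invented for illustration, with each “agent” reduced to a stub:

```python
# Illustrative sketch (not 7AI's actual code): specialist agents running
# in parallel on one phishing report, escalating to a human only if any
# agent flags something suspicious.
from concurrent.futures import ThreadPoolExecutor

def check_links(email):          # e.g. reputation lookup against threat feeds
    return {"agent": "link-checker", "suspicious": "evil.example" in email["links"]}

def detonate_attachments(email): # e.g. sandbox detonation of attachments
    return {"agent": "detonator", "suspicious": bool(email["attachments"])}

def query_endpoints(email):      # e.g. endpoint telemetry query
    return {"agent": "endpoint", "suspicious": False}

def investigate(email):
    agents = [check_links, detonate_attachments, query_endpoints]
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(lambda fn: fn(email), agents))
    # Humans see the case only when at least one specialist flags it
    return {"findings": findings,
            "escalate_to_human": any(f["suspicious"] for f in findings)}

email = {"links": ["https://safe.example"], "attachments": []}
print(investigate(email)["escalate_to_human"])  # False: closed automatically
```

    The design point is the fan-out: each narrow agent is simple to build and audit, and the human is only in the loop for the cases that deserve one.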

    The result? Investigations that complete in four minutes. Security teams getting 25-50% of their time back. And analysts finally doing the work they signed up for: threat hunting, proactive defense, strategic security initiatives.

    But here’s what caught my attention: this isn’t about technology. It’s about trust.

    The Trust Curve

    Nathan said something that reframed how I think about enterprise AI adoption:

    “What I’m selling you is not a new piece of software to do things better or faster. I’m saying I’ve got this thing that’s gonna do the work you don’t want your people to do.”

    That’s a fundamentally different value proposition than traditional enterprise software.

    When you buy an expense management platform, you’re buying a process. You’re implementing a tool and teaching users how to follow workflows. The work shifts, maybe gets marginally faster, but humans are still doing it.

    When you deploy AI agents, you’re replacing the work itself. The agents look at receipts, OCR them, categorize them, check your calendar for context, submit them on your behalf. They don’t shift work, they eliminate it.

    That’s a much bigger trust hurdle to cross.

    Nathan’s customers go through a predictable journey:

    1. Skepticism: “This can’t be real. AI can’t handle this.”
    2. Validation: “Wait, it actually works in our production environment.”
    3. Trust: “I can see every decision it made, and I can give feedback.”
    4. Strategic thinking: “Where else can we apply this?”

    The moment that shift happens – from “Can your technology solve my problem?” to “Where else can we use this?” – is when AI moves from experimentation to transformation.

    The One Metric That Actually Matters

    I asked Nathan the question I ask every guest on Enterprise AI Unlocked: If you had to pick one metric to prove AI’s value to your board, what would it be?

    His answer was immediate: Time saved.

    Not “alerts processed faster.” Not “reduction in false positives.” Time saved on people.

    Here’s why that matters in cybersecurity specifically: there are between one and three million unfilled cybersecurity jobs in the US alone. It’s almost impossible to find good security analysts. You’ve got increasing attacks and alerts on one side, and a massive shortage of skilled people on the other.

    So you’ve got these precious resources spending 70% of their time on manual, repetitive work that turns out to be nothing most of the time. Meanwhile, they want to be threat hunting, upskilling, doing strategic work, but they’re drowning in tickets.

    Time saved isn’t just an efficiency metric. It’s unlocking capacity for work that actually matters. It’s retention in a talent pool where people can leave for another $20K across the street tomorrow. It’s the difference between responding and being strategic.

    Nathan gave me an example that stuck: One of their customers challenged a security analyst to spend a week automating one thing that was wasting her time. She picked security questionnaires, those vendor compliance forms that are repetitive and time-consuming. A week later, she had it automated with AI. Done. In production. She got her bonus.

    The CISO used that story to show the entire team what was possible. Now they’re all asking: “What else can we automate that we shouldn’t be doing manually?”

    The Real Problem: Non-Human Work

    This is where the conversation gets uncomfortable for a lot of organizations.

    Nathan introduced me to a framework that reframes everything: human work vs. non-human work.

    Non-human work is essential. It has to be done. But it doesn’t require human creativity, judgment, or strategic thinking. It’s checking links against threat databases. It’s detonating files in sandboxes. It’s writing queries and jumping between tools to gather data.

    Human work is threat hunting. It’s investigating complex, nuanced security incidents that require context only humans have. It’s building relationships with the business so security isn’t “the team that sits in a dark corner saying no to stuff.”

    The problem? Most organizations haven’t asked the foundational question:

    What work do you want your people to do?

    Not “what work needs to be done.” Not “what can AI handle.” What do you actually want your security team spending time on?

    Once you answer that question, the prioritization becomes obvious. You look at everything else and ask: Why are humans doing this?

    The 90% Failure Rate

    Here’s the harsh reality: MIT research says that more than 90% of AI initiatives fail.

    Nathan’s seen it happen. Organizations see the promise of AI, try to do everything at once, spiral out of control, burn budget, and have nothing to show for it.

    The ones that succeed follow a simple pattern:

    1. Start with one painful area that has clear impact
    2. Scope it tightly so you can measure success
    3. Know your qualification criteria before you begin
    4. Prove it works, then expand

    Don’t start with “What can AI do?” Start with “What are we doing that humans shouldn’t have to?”

    Find that one pain point. The one that’s critical, high-value, clearly measurable, and achievable quickly. Solve it. Prove the ROI. Then expand.

    J.P. Morgan didn’t deploy AI to 250,000 employees by trying to do everything at once. They started with clear use cases, measured obsessively, governed tightly, and scaled what worked.

    7AI’s customers follow the same playbook. They start with one or two security use cases – phishing emails, endpoint investigations. They prove it works. They build trust. Then CISOs start asking: “What about cloud identity? What about this other workflow?”

    The trust curve accelerates. But only if you build it right from the start.

    Governance as an Enabler, Not a Barrier

    I pushed Nathan on governance because in cybersecurity, mistakes can be catastrophic. How do you balance innovation with control when AI is literally on the front lines making security decisions?

    His answer flipped the narrative I usually hear about governance:

    “AI agents can actually be more governable than human analysts because every step is visible and auditable.”

    Here’s what that looks like in practice:

    When a 7AI agent investigates a phishing email, you can see:

    • The agent that handled each step
    • The input it received
    • The query it sent to each tool
    • The response it got back
    • The determination it made
    • Why it made that determination

    Everything is auditable. Everything is immutable. You can copy-paste queries and validate them yourself.
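    The audit trail described above can be imagined as an immutable record per investigation step. The sketch below is a hypothetical schema (the field names are mine, not 7AI’s) showing how each decision point could be captured in a tamper-resistant, exportable form:

```python
# A minimal sketch of the kind of immutable, auditable record described above.
# Field names are illustrative, not any vendor's schema.
from dataclasses import dataclass, field, asdict
import json, time

@dataclass(frozen=True)  # frozen = the record cannot be mutated after creation
class AgentStep:
    agent: str            # which specialist agent handled this step
    input_ref: str        # what it received
    query: str            # the exact query sent to the tool
    response: str         # the raw response, so it can be re-validated
    determination: str    # e.g. benign / suspicious / needs-human
    rationale: str        # why the determination was made
    timestamp: float = field(default_factory=time.time)

step = AgentStep(
    agent="link-checker",
    input_ref="email-4821",
    query="reputation('hxxp://login-portal.example')",
    response="score=0.02 (clean)",
    determination="benign",
    rationale="Domain registered 2014, clean reputation across feeds.",
)
print(json.dumps(asdict(step), indent=2))  # exportable audit trail
```

    Because every step records the query verbatim, an auditor can re-run it independently, which is the “copy-paste queries and validate them yourself” property.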

    Compare that to human analysts. If every person documented every step and every reasoning path for every investigation, they’d complete maybe one investigation per day. It’s not scalable. It’s not realistic.

    With AI agents, you get more transparency, more consistency, and a complete audit trail that was never possible with humans alone.

    Governance isn’t slowing down deployment. When you architect it right, governance enables scale because it builds the trust required to deploy autonomously.

    The Uncomfortable Truth About Jobs

    I asked Nathan to paint the picture: What does work look like when we’ve eliminated all the non-human work?

    His answer was honest and thought-provoking.

    At his last company, he managed a team of 43 people globally. At 7AI, his team is two people. As they scale, he’s asking: How many do I actually need?

    It’s not that AI eliminates jobs. It’s that AI fundamentally changes what jobs look like.

    In the past, you’d build specialist teams: product marketing here, demand gen here, brand here. Everyone stays in their lane. But that creates bottlenecks. Specialists end up waiting or saying “I only do this one thing.”

    AI is enabling the rise of what Nathan calls the super generalist: someone who’s curious, capable of doing more than one thing, and empowered by AI to move across disciplines without needing to master every technical detail first.

    You don’t need to learn all the little things to do something pretty well anymore. AI handles the repetitive, technical, rote work. You focus on creativity, judgment, strategy, and connecting dots across domains.

    That’s the shift happening in the next 12 months. Fewer individual specialist roles. More ultra-generalists who can tackle broad challenges because AI removed the barriers.

    Is that threatening? For some, yes. For others, it’s liberating. It depends on whether you’re someone who wants to stay narrowly specialized or someone who’s been dying to do more but couldn’t because of time constraints.

    What J.P. Morgan and 7AI Have in Common

    Here’s the pattern I’m seeing across enterprises that are actually making AI work:

    1. They treat AI as an ecosystem, not a tool. J.P. Morgan built LLM Suite as a centralized platform. 7AI architected specialized agents with governance built in. Both understood that scattered tools without infrastructure fail.

    2. They measure from day one. J.P. Morgan knows which use cases deliver ROI. 7AI’s customers can see time saved, false positive reduction, and analyst capacity unlocked. You can’t scale what you can’t measure.

    3. They build trust through transparency. J.P. Morgan updates LLM Suite every eight weeks and feeds it proprietary data with clear guardrails. 7AI shows every agent decision and makes everything auditable. Trust isn’t assumed; it’s engineered.

    4. They start small and scale what works. J.P. Morgan didn’t go from zero to 250,000 employees overnight. 7AI starts with one or two use cases. Both prove value, build trust, then expand.

    The Real Playbook

    If you’re deploying AI – especially agentic AI – in your organization, here’s the framework:

    Start with the work you don’t want people doing. Not “what can AI do?” but “what are humans doing that wastes their potential?”

    Pick one painful area and prove it. Don’t boil the ocean. Find the highest-value, most measurable pain point. Solve it. Show ROI.

    Build visibility and governance from day one. You can’t scale what you can’t see. You can’t govern what you don’t understand. Architect transparency into the system.

    Measure time saved, not tasks completed. The metric that matters is human capacity unlocked for higher-value work. That’s what executives care about. That’s what drives retention.

    Scale based on data, not intuition. J.P. Morgan knows which agents work. 7AI’s customers know which workflows deliver ROI. Build the intelligence layer so you can make decisions based on evidence.

    What Comes Next

    Nathan’s answer to where enterprise AI is headed in the next 12 months was refreshingly honest:

    “It’s early innings. Twelve months from now is a lifetime away.”

    What he’s seeing: adoption happening faster than anything he’s experienced in 20+ years in tech. 7AI went from launch to large enterprise customers in production in under a year. One Fortune 350 customer went from first conversation to production deployment in eight weeks, and now they’re bundling 7AI as a service offering to their global customers.

    That acceleration is unprecedented.

    But the companies winning aren’t the ones with the best AI. They’re the ones who solved the trust problem, the measurement problem, and the governance problem first.

    Your Move

    Here’s the uncomfortable question every enterprise leader needs to answer:

    How many autonomous agents are running in your organization right now?

    Can you name them? Can you measure their ROI? Can you prove to your CFO which ones justify continued investment?

    If the answer is no, you’re building in the dark. And while you’re figuring it out, your competitors are measuring, optimizing, and scaling based on intelligence.

    Nathan said it clearly: “Don’t start with ‘what can AI do?’ Start with ‘what are we doing that humans shouldn’t have to?’”

    That question changes everything.


    This article is based on Episode 2 of Enterprise AI Unlocked, where I sat down with Nathan Burke, CMO at 7AI, to discuss deploying autonomous AI agents in cybersecurity operations. Watch the full 47-minute episode on YouTube, or catch the first 15 minutes here on LinkedIn.

    https://youtu.be/WjYJjdxCk8k

  • J.P. Morgan’s AI Journey Is a Masterclass for Enterprise Leadership

    The Twist in the Tale

    Back in 2023, J.P. Morgan famously blocked ChatGPT across its operations, citing legitimate concerns around data leakage and compliance risks. This move echoed across boardrooms globally, as executives grappled with shadow AI infiltrating their organizations.

    Fast forward to today, and that same institution leads one of the most advanced, secure AI programs in global finance:

    • 200,000 employees now operate with a dedicated LLM suite
    • Over 450 generative AI use cases are deployed across the bank
    • Results include 3× efficiency gains in advisory services, 30% reduction in servicing costs, and over $1.5 billion in fraud detection savings

    In short: the company that once banned ChatGPT now runs it, at scale, safely, and strategically.

    The Hidden Risk for Enterprise AI Today

    A staggering 95% of GenAI pilots are failing to move the needle on P&L, according to MIT’s ‘The GenAI Divide: State of AI in Business 2025’ report.

    Why? The study highlights three recurring failure modes:

    • Flawed integration with workflows
    • Misaligned expectations and poor use-case focus
    • Inadequate customization to business context

    In my own work advising global enterprises, I see similar patterns every day:

    • Shadow AI is rampant, with IT seeing only 10–20% of tools actually in use
    • Executives are stuck between “enable productivity” and “avoid compliance disasters”
    • Teams are waiting for regulations, rather than building governance frameworks proactively

    What JPMorgan Did Differently

    The truth they recognized early on is subtle but critical: governance enables scale, it doesn’t block it.

    By investing in visibility, controls, and accountability, they created the foundation for enterprise-grade AI, and unlocked the upside that many others are still chasing.

    Traits That Distinguish the Winners

    From my work across industries, the organizations succeeding with AI share three common traits:

    1. Centralized AI governance frameworks, covering generative AI, agentic workflows, and embedded AI apps
    2. Clear productivity metrics aligned to adoption, so ROI isn’t a mystery
    3. Playbooks that balance enablement with accountability, not just experimentation

    These companies aren’t just piloting AI, they’re operationalizing it.

    Looking Ahead: A Test for Enterprise Leadership

    With AI adoption accelerating, and regulation advancing, the question for every leadership team is no longer “Can we use AI?” but “Can we use it safely, and measurably?”

    Those who build robust frameworks now will reap the rewards of scaled innovation. Those who delay will likely end up trying to catch up.

  • J.P. Morgan Chase deployed Agentic AI

    Here’s the problem no one’s talking about.

    Back in August, I wrote about J.P. Morgan’s transformation from blocking ChatGPT to deploying AI across 200,000 employees. The stats were impressive: 450+ use cases, $2B in annual value, 3× efficiency gains.

    Three months later, the story’s evolved in a way that should make every enterprise leader pause.

    The Scale Play

    J.P. Morgan now has 250,000 employees on LLM Suite, their internal AI platform built on OpenAI and Anthropic models. They’re updating it every eight weeks, feeding it proprietary data from across the bank’s operations. Half the workforce uses it daily.

    But here’s where it gets interesting: they’ve moved beyond GenAI productivity tools into agentic AI, autonomous agents handling complex, multi-step tasks across the enterprise.

    Derek Waldron, their Chief Analytics Officer, showed CNBC a demo: the system generated a complete investment banking deck in 30 seconds. Waldron put it plainly: “As those agents become increasingly powerful and increasingly connected into J.P. Morgan, they can take on more and more responsibilities.”

    The vision? Every employee with an AI assistant. Every process powered by AI agents. Every client experience managed by AI concierges.

    The Uncomfortable Gap

    Here’s what J.P. Morgan figured out early that most enterprises are missing:

    You can’t scale what you can’t see.

    They didn’t just deploy AI. They built the infrastructure to measure it, govern it, and optimize it. LLM Suite isn’t scattered tools. It’s a centralized platform connected to their data ecosystem, updated on a fixed cadence, with clear governance guardrails.

    Most enterprises I talk to are running in the opposite direction:

    • Shadow agentic workflows proliferating across teams with zero visibility
    • Pilots everywhere, but no framework for measuring what’s working
    • GenAI tools deployed, but no way to quantify productivity gains or ROI
    • C-suite asking for proof, and teams scrambling to manufacture metrics after the fact

    MIT’s study from this year said it clearly: 95% of GenAI projects show no measurable impact on P&L. J.P. Morgan is in the 5% because they solved the measurement problem first.

    The Agentic Blindspot

    The shift to agentic AI compounds this gap exponentially.

    With GenAI tools like ChatGPT or Copilot, you at least know what employees are using. With agentic workflows, you’re deploying autonomous systems that make decisions, trigger actions, and operate across multiple applications without human intervention.

    Ask yourself:

    • How many agentic workflows are running in your organization right now?
    • Which ones are delivering ROI vs. burning budget?
    • What’s the cost per task? Success rate? Rework rate?
    • Can you prove to your CFO which agents justify continued investment?
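    For teams that do log their agent runs, those questions reduce to simple aggregations. A hedged sketch, assuming a log format with per-run cost, success, and rework flags (the format and figures are invented for illustration):

```python
# Hypothetical sketch: computing cost per task, success rate, and rework
# rate per agent from a log of agent runs. The log schema is assumed.
from collections import defaultdict

runs = [  # each entry: agent name, $ cost, outcome, whether a human redid it
    {"agent": "phishing-triage", "cost": 0.04, "success": True,  "rework": False},
    {"agent": "phishing-triage", "cost": 0.05, "success": True,  "rework": True},
    {"agent": "endpoint-investigator", "cost": 0.12, "success": False, "rework": True},
]

def agent_scorecard(runs):
    by_agent = defaultdict(list)
    for r in runs:
        by_agent[r["agent"]].append(r)
    report = {}
    for agent, rs in by_agent.items():
        n = len(rs)
        report[agent] = {
            "cost_per_task": sum(r["cost"] for r in rs) / n,
            "success_rate": sum(r["success"] for r in rs) / n,
            "rework_rate": sum(r["rework"] for r in rs) / n,
        }
    return report

print(agent_scorecard(runs))
```

    The hard part isn’t the math; it’s having the log at all. That is the “intelligence layer” J.P. Morgan built before scaling.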

    J.P. Morgan can answer these questions. They built LLM Suite as a governed, measurable platform from day one. Every agent, every workflow, every use case is connected to their intelligence layer.

    Most enterprises can’t answer any of these questions because they’re building in the dark.

    What This Actually Means

    The gap between J.P. Morgan and everyone else isn’t just about spend. It’s about intelligence.

    They’re not guessing which AI investments work. They’re measuring, optimizing, and scaling based on data. That’s why they can confidently project $2B in annual value while other banks are still debating pilot programs.

    The enterprises that win in this phase will share three traits:

    1. Visibility across the AI stack – not just GenAI, but agentic workflows, embedded AI in SaaS tools, and every AI touchpoint in the enterprise
    2. Measurement tied to business outcomes – cost per task, productivity gains, success rates, ROI quantification
    3. Governance that enables scale – centralized intelligence platforms that let you see what’s running, what’s working, and where spend is wasted

    The Real Playbook

    J.P. Morgan’s blueprint isn’t complicated:

    • Build or buy a centralized AI platform
    • Connect it to your data and systems
    • Measure everything from day one
    • Govern before you scale
    • Optimize based on intelligence, not intuition

    The problem? Most enterprises are still treating AI like a collection of tools instead of an ecosystem that requires intelligence infrastructure.

    You wouldn’t run a company without a CRM, ERP, or data warehouse. Why are you scaling AI without a system of record to measure it?

    What Comes Next

    Here’s the reality: your competitors are already deploying agentic AI. Some are measuring it. Most aren’t.

    The ones who figure out the intelligence layer first will have a compounding advantage. They’ll know which agents to scale, which to kill, and how to prove ROI to the board. Everyone else will keep running pilots and hoping something sticks.

    J.P. Morgan proved you can go from banning ChatGPT to running one of the world’s most sophisticated AI programs in under three years. But the key wasn’t the tech. It was building the infrastructure to see, measure, and govern it.

    Your move.

  • Shadow AI Is Evolving. Are Your Metrics Evolving Too?

    1. The Hidden Challenge: Shadow AI Is Still Alive

    Surveys reveal that over half of employees (54%) use unauthorized AI tools, with 57% actively hiding this usage from leadership. In effect, shadow AI has transformed into a hidden productivity network, quietly active and largely unmanaged.

    Yet enterprises have responded by deploying licensed AI tools and internal systems, raising the question: Has shadow AI declined or simply become smarter and more nuanced?

    2. From Risk to Fragmentation: The New Governance Challenge

    Earlier fears centered on unauthorized usage. Today, as formal AI strategies emerge, the problem isn’t usage, it’s governance across a fragmented AI stack. Enterprises run internal bots, LLMs, agentic workflows, plus native AI features in their SaaS tools.

    TechRadar notes that strategic success now requires making AI safe, reliable, and brand-aligned through disciplined systems, not chaos. Meanwhile, agentic AI’s rise adds another layer of complexity: agents that plan, adapt, and act across workflows – powerful, but demanding oversight.

    3. AI Adoption Isn’t Enough. It Needs Measurable Value.

    Enterprises are embracing AI, but real growth hinges on measuring its business impact. Reports show:

    • 78% of organizations use AI in some capacity, with 85% integrating agents into workflows, yet many lack holistic insights.
    • Throw in cultural transformation and integration challenges, and the path ahead requires more than tech deployment; it needs strategic execution.

    4. A Framework for the AI-First Future

    Here’s how AI-First leaders can lead, not follow:

    1. Map AI usage to outcomes: Track usage across tools, from chat prompts to automated agents, and tie them to business KPIs like efficiency gains, revenue lift, or risk mitigation.
    2. Unify governance and analytics: Break down the silos and consolidate dashboards for backend audits, usage and adoption telemetry, and, most importantly, ROI evaluation.
    3. Prioritize agentic context and transparency: Agent workflows demand clarity. Adopt open standards like the Model Context Protocol (MCP) for seamless interoperability and traceability.
    4. Close the feedback loop: Let observed usage inform strategy, feeding insights into change management, governance, and reinvestment cycles.
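    As a minimal illustration of step 1 – tying tracked usage to a business KPI – here is a back-of-the-envelope ROI calculation. All figures and parameter names are assumptions for the sketch, not benchmarks:

```python
# Illustrative only: valuing one AI workflow by the analyst time it frees up.

def roi(hours_saved_per_task, tasks_per_month, hourly_cost, monthly_ai_spend):
    """Monthly ROI of one AI workflow: value of time saved vs. what it costs."""
    value = hours_saved_per_task * tasks_per_month * hourly_cost
    return (value - monthly_ai_spend) / monthly_ai_spend

# e.g. an agent saving 0.5h on each of 400 investigations per month,
# at a $90/h fully loaded analyst cost, for $4,000/month in AI spend:
print(f"{roi(0.5, 400, 90, 4000):.0%}")  # 350%
```

    Even a crude model like this turns “we use AI a lot” into a number a CFO can interrogate, which is the point of closing the feedback loop.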

    5. Ready for the Next Chapter?

    If shadow AI now signals creative deviation rather than defiance, what’s your read on the new frontier?

    • What frameworks or dashboards are making AI a sustainable lever, not just a compliance challenge?
    • Have shadow usage dynamics changed with internal tools in place?
    • How are you measuring AI’s ROI, beyond usage stats to real business value?