Is your enterprise adaptive to AI?
Our take
As organizations embark on their AI journeys, many discover that simply deploying individual solutions does not lead to transformative impact. The next phase requires a shift from automation to continuous adaptation, especially for complex, globally distributed entities such as Global Business Services. Embracing adaptive AI ecosystems allows organizations to orchestrate workflows intelligently and respond to evolving business needs. Below, we explore this critical transition and what it means for your enterprise.
The recent article from EdgeVerve highlights a crucial evolution in AI adoption for enterprises: the shift from simple automation to a more sophisticated adaptive approach. Initially, organizations embraced AI with the primary goal of speeding up processes and reducing costs. However, as they roll out individual AI solutions, many are realizing that these isolated implementations do not yield the enterprise-wide impact they anticipated. Instead, companies find themselves in a state of stagnation, with pilots proliferating but value plateauing. This realization calls for a paradigm shift toward adaptive AI ecosystems that can continuously evolve with changing business needs and conditions.
For Global Business Services (GBS) organizations in particular, this transition is not merely strategic; it's essential for survival in a complex, dynamic landscape. The article emphasizes that static automation is insufficient in environments characterized by diverse regulations and varying customer behaviors. By leveraging adaptive AI, GBS teams can orchestrate comprehensive workflows that respond intelligently to real-time signals. This flexibility enables organizations to maintain a competitive edge, as they can rapidly pivot to meet new challenges and opportunities. Such adaptability is increasingly vital as businesses face heightened scrutiny and complexity, underscoring the importance of frameworks that facilitate seamless integration and collaboration across functions.
The barriers to successful AI deployment are well-documented, ranging from poor data quality to budget constraints and inadequate skills. Yet, at the heart of these challenges lies a more profound issue: fragmentation. Many enterprises operate in silos, where data ownership is unclear, and AI initiatives are pursued locally without a cohesive strategy. This disjointed approach not only limits the effectiveness of AI deployments but also jeopardizes the integrity of decision-making processes. The shift towards adaptive AI ecosystems seeks to address these structural challenges by promoting interoperability and shared governance across the enterprise. It is not simply about having advanced tools; it is about ensuring that these tools can work in concert to deliver meaningful outcomes.
As organizations navigate this transformative journey, the role of an adaptive AI platform becomes pivotal. Such platforms must facilitate real-time data harmonization, dynamic process orchestration, and robust decision governance. By establishing a unified foundation for AI capabilities, enterprises can move beyond incremental improvements towards achieving enterprise-wide impacts. This evolution is particularly relevant in light of the ongoing discussions around responsible AI practices, as companies increasingly seek to build trust in their systems. As the article suggests, the future does not belong to those who merely implement AI but to those who embrace it as a continuously evolving part of their operational fabric.
Looking ahead, the question remains: how will enterprises adapt to this new reality? As organizations strive to build trust and integrate adaptive AI into their operations, they must remain vigilant against the risks of complacency. The ability to harness AI's full potential will depend not only on technological advancements but also on a cultural shift towards collaboration, transparency, and shared accountability. As we observe these developments, it will be fascinating to see which organizations successfully navigate this landscape and emerge as leaders in AI adoption. The path is clear, but the journey requires commitment and foresight.

Presented by EdgeVerve
For most enterprises, AI adoption began with a straightforward ambition: automate work faster, cheaper, and at scale. Chatbots replaced basic service requests, machine‑learning models optimized forecasts, and analytics dashboards promised sharper insights. Yet many organizations are now discovering that deploying individual AI solutions does not automatically translate into enterprise‑level impact. Pilots proliferate, but value plateaus.
The next phase of AI maturity is no longer about deploying more models. It is about adapting AI continuously to changing business objectives, regulatory expectations, operating conditions, and customer contexts. This shift is particularly critical for complex, globally distributed organizations such as Global Business Services (GBS), where outcomes depend on orchestrating work across functions, regions, systems, and stakeholders.
From automation to adaptation
AI can no longer be treated as a standalone tool to accelerate discrete tasks. To remain competitive, enterprises must move from isolated, single‑purpose models toward systems that can sense context, coordinate actions, and evolve over time.
This is where adaptive AI ecosystems come into play. An adaptive AI ecosystem is a network of interoperable AI agents, models, data sources, and decision services that work together dynamically. These ecosystems integrate capabilities such as natural language processing, computer vision, predictive analytics, and autonomous decision‑making, while remaining grounded in human oversight and enterprise governance.
For GBS organizations, the relevance is clear. GBS operates at the intersection of scale, standardization, and variation, managing high‑volume processes across markets that differ in regulation, customer behavior, and operational constraints. Static automation struggles in such environments. Adaptive AI, by contrast, allows GBS teams to orchestrate end‑to‑end processes, intelligently route work, and continuously improve outcomes based on real‑time signals.
Why enterprise AI deployments stall
Despite strong intent, scaling AI remains a challenge. Research consistently shows that while many organizations invest in generative and agentic AI initiatives, far fewer succeed in operationalizing them across workflows and business units. The issue is rarely ambition; it is fragmentation.
SSON Research highlights several persistent barriers to generative AI adoption in GBS, including poor data quality, lack of specialized skills, data privacy concerns, unclear ROI, and budget constraints. Beneath these symptoms lies a common root cause: siloed environments. Data is fragmented, ownership is unclear, and AI initiatives are driven locally rather than through a shared enterprise strategy.
As a result, enterprises accumulate AI solutions that cannot easily work together. Models lack shared context, decisions are hard to explain, and governance becomes an afterthought rather than a design principle.
Adaptive AI ecosystems and platforms: Clarifying the relationship
An adaptive AI ecosystem describes the enterprise‑wide outcome: how AI capabilities collaborate across the organization. An adaptive AI platform is the foundation that makes this possible.
The platform provides common services and guardrails that allow AI agents and models to:
access harmonized, trusted data
orchestrate end‑to‑end processes
enable intelligent agent handoffs between systems and humans
interoperate with both agentic and legacy applications through out‑of‑the‑box connectors
operate within defined security, compliance, and ethical boundaries
Without this platform layer, adaptive ecosystems remain theoretical. With it, AI becomes composable, governable, and scalable.
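The common-services idea above can be sketched in code. The following is a minimal, illustrative sketch only — the class names, policy fields, and agent are all invented for this example, not an actual platform API — showing how registering every agent behind shared guardrails gives orchestration and governance a single view of the estate:

```python
# Minimal sketch of a platform layer that registers AI agents behind
# shared guardrails. All names here are illustrative, not a real API.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Guardrails:
    allowed_data_domains: set            # security/compliance boundary
    requires_human_review: bool = False  # human-in-the-loop flag

@dataclass
class PlatformRegistry:
    """Common services layer: every agent is registered with its policy,
    so orchestration and governance share one view of the estate."""
    agents: Dict[str, Callable] = field(default_factory=dict)
    policies: Dict[str, Guardrails] = field(default_factory=dict)

    def register(self, name: str, handler: Callable, policy: Guardrails) -> None:
        self.agents[name] = handler
        self.policies[name] = policy

    def invoke(self, name: str, payload: dict) -> dict:
        policy = self.policies[name]
        # Enforce the data-domain boundary before the agent ever runs.
        if payload.get("domain") not in policy.allowed_data_domains:
            return {"status": "blocked", "reason": "domain not permitted"}
        result = self.agents[name](payload)
        if policy.requires_human_review:
            result["status"] = "pending_human_review"
        return result

registry = PlatformRegistry()
registry.register(
    "invoice_matcher",
    lambda p: {"status": "done", "match": p["invoice_id"]},
    Guardrails(allowed_data_domains={"finance"}),
)
```

The point of the sketch is the ordering: policy checks sit in front of every agent invocation, so governance is a design principle rather than an afterthought.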
What an adaptive AI platform must enable
To meet the demands of modern enterprises, and especially GBS organizations, an adaptive AI platform must deliver a set of core capabilities.
Real‑time data harmonization is foundational. Adaptive decisions require access to both structured and unstructured data across functions and regions. Platforms must provide a unified data foundation, with observability built in, so AI systems understand not just the data itself but its quality, lineage, and relevance. Edge‑to‑cloud architectures play a role here, ensuring insights are available where decisions occur, whether at the point of interaction or within a centralized decision engine.
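One way to picture "data that carries its own observability" is a record type that bundles quality and lineage metadata with the payload. This is a hypothetical sketch — the field names and thresholds are assumptions for illustration, not a defined schema:

```python
# Illustrative only: a harmonized record that carries its own quality and
# lineage metadata, so a downstream AI consumer can weigh the data,
# not just read it. Field names and thresholds are invented.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class HarmonizedRecord:
    key: str              # canonical entity key after harmonization
    value: dict           # the payload itself
    source_system: str    # lineage: where the record originated
    quality_score: float  # 0.0-1.0, from upstream validation checks
    as_of: datetime       # freshness, for relevance decisions

def usable_for_decisioning(rec: HarmonizedRecord, min_quality: float = 0.8,
                           max_age_hours: float = 24.0) -> bool:
    """An AI consumer checks quality and freshness before trusting a record."""
    age_h = (datetime.now(timezone.utc) - rec.as_of).total_seconds() / 3600
    return rec.quality_score >= min_quality and age_h <= max_age_hours

rec = HarmonizedRecord(
    key="customer:42", value={"region": "EMEA"},
    source_system="crm_eu", quality_score=0.93,
    as_of=datetime.now(timezone.utc),
)
```

The design choice worth noting: the quality gate lives with the consumer, so a stale or low-confidence record is rejected at decision time rather than silently propagating into an automated action.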
Adaptive process orchestration is equally critical. GBS organizations increasingly rely on AI platforms that can orchestrate workflows dynamically across business units and systems. This includes coordinating multiple AI agents, enabling seamless agent‑to‑agent and human‑in‑the‑loop handoffs, and adjusting process paths in response to real‑time conditions.
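Dynamic routing of this kind can be sketched as a small decision function. The agent names, signal fields, and the 0.2 exception-rate threshold below are all invented for illustration; a real orchestrator would learn or configure these rather than hard-code them:

```python
# A sketch of adaptive routing: work items go to specialist agents based
# on real-time signals, with a human handoff when conditions demand it.
# Agent names and thresholds are illustrative assumptions.
def route_work_item(item: dict) -> str:
    """Pick a handler from live signals, not a fixed rule table."""
    if item.get("regulatory_flag"):
        # Jurisdiction-sensitive work goes to a compliance specialist.
        return "compliance_agent"
    if item.get("exception_rate", 0.0) > 0.2:
        # The process is misbehaving right now: human-in-the-loop handoff.
        return "human_review_queue"
    if item.get("language", "en") != "en":
        return "multilingual_agent"
    return "standard_processing_agent"
```

Because the routing reads current signals (like a live exception rate) rather than a static rule table, the same process path can change from one hour to the next — which is the difference between static automation and adaptation.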
Cognitive automation with governance moves beyond rule‑based automation. AI systems must be able to make context‑aware decisions with minimal human intervention, while still providing explainability, confidence indicators, and ethical constraints. The goal is not to remove humans from the loop, but to elevate their role from manual execution to oversight and judgment.
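The "minimal human intervention, but humans elevated to oversight" pattern usually reduces to a confidence gate. A minimal sketch, assuming an invented threshold value of 0.9:

```python
# Illustrative: a decision service returns an answer plus a confidence
# indicator, and low-confidence cases escalate to a human rather than
# executing autonomously. The threshold is an assumption, not a standard.
AUTO_APPROVE_THRESHOLD = 0.9

def decide(prediction: str, confidence: float) -> dict:
    """Context-aware decisioning with the human elevated to oversight:
    the system acts alone only when its confidence clears the bar."""
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return {"action": prediction, "actor": "ai", "confidence": confidence}
    return {"action": "escalate", "actor": "human", "confidence": confidence}
```

The returned confidence value doubles as the explainability hook: every outcome records how sure the system was, so oversight reviews can focus on the borderline cases.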
Decision governance and observability tie these capabilities together. Enterprises must be able to trace how decisions are made, understand which models contributed, and audit outcomes across markets. As regulatory expectations around AI risk management, data protection, and accountability increase globally, embedding governance into the platform becomes essential rather than optional.
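Traceability of this kind typically means appending a structured audit entry for every automated decision. This sketch uses invented field names, not a standard schema:

```python
# Sketch of decision observability: every automated decision appends an
# audit entry recording which models contributed and in which market.
# Field names are illustrative, not a standard schema.
from datetime import datetime, timezone

AUDIT_LOG: list = []

def record_decision(decision_id: str, outcome: str,
                    contributing_models: list, market: str) -> dict:
    entry = {
        "decision_id": decision_id,
        "outcome": outcome,
        "models": contributing_models,  # which models contributed
        "market": market,               # enables cross-market audits
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(entry)
    return entry

entry = record_decision("dec-001", "approved",
                        ["credit_scorer_v3", "fraud_filter_v1"], "DE")
```

With model identities and market captured per decision, an auditor can answer "which models contributed, and where" after the fact — the core of the traceability requirement described above.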
Establishing trust at scale
Trust is the foundation of scalable AI. Enterprises that lack confidence in their AI systems across data integrity, model behavior, and regulatory compliance will struggle to move beyond experimentation into sustained adoption.
Building this trust requires deliberate investment. Organizations must ensure explainable AI, so decision logic is transparent to business and risk stakeholders, alongside privacy‑ and security‑by‑design principles that protect sensitive data from the outset. Continuous bias detection, model reliability, performance management, and clearly defined responsible AI guardrails are critical to maintaining consistent and ethical outcomes.
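Continuous bias detection, at its simplest, means comparing outcome rates across segments and alerting when they drift apart. A minimal sketch, assuming an invented 10-percentage-point tolerance:

```python
# A minimal sketch of continuous bias monitoring: compare approval rates
# across segments and flag a gap past a tolerance. The tolerance value
# is an illustrative assumption, not a regulatory figure.
def approval_rate_gap(outcomes: dict) -> float:
    """outcomes maps segment -> (approved, total); returns max-min rate gap."""
    rates = [approved / total for approved, total in outcomes.values() if total > 0]
    return max(rates) - min(rates)

def bias_alert(outcomes: dict, tolerance: float = 0.1) -> bool:
    """True when the approval-rate gap between segments exceeds tolerance."""
    return approval_rate_gap(outcomes) > tolerance
```

Run continuously over production decisions, a check like this turns "bias detection" from a one-off pre-launch audit into an operational metric that can trigger review before outcomes drift.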
Equally important is a clear Target Operating Model. This model defines ownership across the AI lifecycle, clarifies roles and escalation paths, and aligns accountability from frontline teams to executive leadership. In GBS environments, where AI‑driven decisions often span functions, geographies, and regulatory regimes, these trust mechanisms are not optional. They are essential.
The road ahead
Enterprises that continue to rely on fragmented AI deployments and siloed operating models will find it increasingly difficult to keep pace. The future belongs to organizations that adopt a platform‑based approach — one that enables them to move from incremental efficiency gains to transformational, enterprise‑wide impact.
Success will not be defined by a single model or use case. It will be defined by adaptive AI ecosystems built on strong agent architectures, interoperable connectors across agentic and legacy landscapes, and shared foundations for data, orchestration, and governance. For GBS organizations in particular, this approach provides a clear path to scale AI responsibly, delivering agility, trust, and sustained value in an increasingly complex world. In an era where change is constant and scrutiny is rising, the real question is no longer whether enterprises use AI but whether they are truly adaptive to it.
N. Shashidar is SVP & Global Head, Product Management at EdgeVerve.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
Related Articles
- The consequential AI work that actually moves the needle for enterprisesPresented by OutSystems After two years of flashy AI demos, rushed agent prototypes, and breathless predictions, enterprise technology leaders are striking a more pragmatic tone in 2026. In a recent webinar hosted by OutSystems, a panel of software executives and enterprise practitioners made the case that the most consequential AI work happening now is focused on the practical matters of governance, orchestration, and iteration, along with integrating agents into the systems they've spent decades building. Enterprise leaders are increasingly focused on fundamentals. The priority is using new AI technologies to accelerate productivity, improve delivery, and produce measurable business results. Three elements shape this work: The move from AI agent prototypes to agentic systems that deliver measurable ROI in production The growing role of enterprise platforms in governing, orchestrating, and scaling AI agents safely The rise of the generalist developer and enterprise architect as the most valuable technical profiles in an era of AI-generated code Against this backdrop, the panel discussed governance frameworks, the economics of enterprise AI, and the limits of large language models without orchestration. The conversation ultimately turned to how leading organizations are building multi-agent systems grounded in existing enterprise data and workflows. Agents in the real world Enabling agents to work in production across the enterprise is best accomplished with a unified platform that handles development, iteration, and deployment. And that'swhere capabilities like the Agent Workbench in the OutSystems platform matter, said Rajkiran Vajreshwari, senior manager of app development at Thermo Fisher Scientific. It provides the infrastructure to learn, iterate, and govern agents at scale. 
His team at Thermo Fisher has moved away from single-task AI assistants in customer service to building a coordinated team of specialized agents using the workbench. When a support case arrives, a triage assistant classifies the request and dynamically routes it to the right specialist agent, whether that’s an intent and priority agent, a product context agent, a troubleshooting agent, or a compliance agent. "We don’t have to think about what will work and how. It’s all pre-built," he explained. "Each agent has a narrow role and clear guardrails. They stay accurate and auditable.” Governing the risks of shadow AI A new category of risk emerges when AI makes it possible for anyone in a company to generate production-level code without IT oversight. Basically, this is ungoverned shadow AI. These homegrown products are prone to hallucinations, data leakage, policy violations, model drift, and agents taking actions that were never formally approved. To get ahead of the risk, leading organizations need to do three things, said Luis Blando, CPTO of OutSystems. "Give users guardrails. They’re going to use AI whether you like it or not. Companies that seem to be getting ahead are using AI to govern AI across their full portfolio,” he explained. “That is the difference between shadow AI chaos and enterprise-grade scale.” Eric Kavanagh, CEO of The Bloor Group, noted that governance requires a layered set of disciplines that includes securing data, monitoring models for drift, and making deliberate choices about where AI connects to existing business processes. “Companies don’t have to be manually creating these controls," he added. "A lot of those guardrails and levers are baked in to platforms like OutSystems.” Why the real orchestration challenge is models vs. platforms Much of the early excitement around enterprise AI focused on selecting the right large language model. Now the harder challenge, and far more durable source of value, is orchestration. 
This includes routing tasks, coordinating workflows, governing execution, and integrating AI into existing enterprise systems. Scott Finkle, VP of development at McConkey Auction Group, noted that LLMs, however impressive, are pieces of complex workflows, not final solutions. Organizations should be ready to hot-swap between Gemini, ChatGPT, Claude, and whatever emerges next without having to rebuild the agentic system around it. A platform with orchestration capabilities makes that possible. It manages the lifecycle, provides visibility, and ensures processes execute reliably, even as AI handles the reasoning layer on top. “The AI and the models change, the workflows can change, but the orchestration remains the same," Finkle said. "That’s how we’re going to extract value out of AI.” The economics of enterprise AI investing Security, compliance, governance, and platform-level AI capabilities will all command greater investment in 2026, particularly as AI moves into core workflows like finance and supply chain. Enterprises should favor incremental wins rather than expect big, immediate gains. “We’re focusing on base hits," Finkle said. "The way it counts is by getting something into production and having it make an impact. Big investments in pilot projects that don’t make it into production don’t save any money. It’s not going to happen overnight, but over time I think we’ll see tremendous savings.” There's still a split in how enterprises are approaching AI transformation. Some start from scratch and reimagine every process. Others, especially those with billions of dollars in existing infrastructure depreciating in-house, want AI to integrate with their systems. They want agentic systems to reuse data, APIs, and proven processes while speeding up delivery. The agent platform approach serves both camps, but particularly the latter. Organizations can deploy agents where they add clear value while preserving the integrity of established, deterministic workflows. 
The rise of the enterprise architect and the generalist developer As AI accelerates code generation, bottlenecks in software delivery are dissolving. In its place is a premium on systems thinking. This is the ability to understand the broader enterprise architecture, decompose complex business problems, and reason about how AI integrates with existing infrastructure. Kavanagh pointed to enterprise architects specifically as the professionals best positioned to capitalize on this moment. “We’re entering a very interesting age of the generalist," he explained. "The better you know your enterprise architecture and your business architecture and how those things align, the better off you’re going to be. ” “The result is faster delivery with fewer interruptions and fewer bugs," Kavanaugh said. "You can focus on the non-repetitive tasks. It’s a benefit to the developer, to the business, and to the whole IT organization.” Catch the entire webinar here. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
- As models converge, the enterprise edge in AI shifts to governed data and the platforms that control itPresented by Box As frontier models converge, the advantage in enterprise AI is moving away from the model and toward the data it can safely access. For most enterprises, that advantage lives in unstructured data: the contracts, case files, product specifications, and internal knowledge. For enterprise leaders, the question is no longer which model to use, but which platform governs the content those models are allowed to reason over. "It's not what the model does anymore, it's the enterprise's own unstructured data – their content, how it's organized, how it's governed, and how it's made accessible to the AI." says Yash Bhavnani, head of AI at Box. "The organizations that will lead in AI are the ones that built the governance infrastructure to make any model trustworthy, with the right permissions in place, the right content accessible, and a clear audit trail for every action taken," says Ben Kus, CTO of Box. Enterprise AI must be grounded in secure systems of record As the advantage in AI shifts from models to governed content, systems of record are becoming the foundation that makes enterprise AI trustworthy. Employees use frontier models to summarize documents, draft reports, answer questions, but when those tools are disconnected from authoritative internal repositories, the results are difficult to trust, impossible to audit, and potentially dangerous. AI that cannot trace its outputs back to a governed source of record becomes a liability. "It's not a theoretical concern," Bhavnani says. "For an insurance enterprise using AI to analyze client claims, low accuracy is simply not acceptable, and untraceable output can't be acted upon." 
Systems of record provide authoritative, version-controlled content with embedded permissions and compliance controls already built in, and RAG pipelines retrieve data from live repositories at inference time, connecting responses directly to current, traceable sources. Without integration into systems of record, employees build their own workarounds, content gets duplicated across tools that don't talk to each other, and shadow knowledge stores accumulate outside the visibility of IT and compliance teams. "Customers tell us employees are uploading sensitive documents to personal accounts and running their own AI workflows, with no visibility from the enterprise into what is being shared or what is being generated," he says. "It's not just a security risk, it's an organizational one." Permission-aware access is a requirement for agentic AI As AI moves into agentic territory, executing multi-step tasks autonomously across documents, workflows, and enterprise systems, the risk profile changes entirely. Agents act faster than humans, often without the contextual judgment needed to decide what data they should access, making permissions-aware access essential. "An AI platform without permissions-aware access is too dangerous to use," Kus says. "It's a precondition for safe enterprise AI deployment, and the more it appears to have been added after the fact rather than built into the foundation, the more it should concern the enterprise considering it." In regulated industries, frameworks like HIPAA, FedRAMP High, and SOC 2 demand audit trails, policy enforcement, and demonstrable controls over who and what has accessed sensitive data. "The audit trail should cover not only the source files but the AI session that used them, and accessed only with the same controls and the same encryption mechanism," Kus says. "We don't want customers to end up with a compliance breach because the agent was looking at sensitive data and the agent records got stored somewhere unexpected." 
Content platforms are evolving into AI control planes Enterprise content platforms are evolving from repositories into orchestration layers — an AI control plane that sits between models, agents, and enterprise data. Rather than just storing documents, the platform governs how content is accessed, routes it to the right reasoning engine, enforces permissions, and maintains a complete audit trail of every action. "An AI-ready content platform needs to support human navigation and use in the way platforms always have, and it needs its own AI agents that understand the platform's data structures deeply enough to get the best out of them," Kus says. "It also needs to be open enough that any external agent can reach into it. An open agent ecosystem is the future of how these platforms will work." When content, permissions, audit trails, and application access are all handled by the same platform, governance stays attached to the content itself. More than any capability of the models on top of it, a unified governance layer is what allows enterprise AI to scale safely. Turning unstructured content into structured intelligence Unstructured data has long been a sticking point for organizations, which had to build specialized models to handle every subtype of unstructured data. "What's changed is that general-purpose large language models now bring enough intelligence to extract structured data from unstructured content without that level of bespoke investment," Kus says. "Box Extract applies this capability at scale, automatically pulling key information from contracts, forms, claims, and reports and applying it as structured metadata within Box. The content that previously had to be read by a person to yield its value can now be processed, structured, and made queryable across an entire repository." 
And once that data is extracted and operational logic lives in the system, users can visualize, search, and act on that extracted information through custom dashboards and no-code tools. Box Agents take this further by enabling multi-step reasoning and task execution grounded directly in enterprise content, with persistent sessions that support iterative knowledge work with simple, natural language direction. And because agent sessions in Box are persistent, the work is not lost between interactions. The practical result is that end-to-end workflows that previously required human coordination across multiple systems can be orchestrated directly on systems of record. "When those workflows are built on Box agents and automation operating directly on governed content, the handoffs become automated, the audit trail is built in, and the system of record remains the authoritative source throughout," Bhavani says. "Nothing falls through the cracks between systems, because there is only one system." The enterprises seeing real returns are not the ones that simply plugged in a frontier model and waited for results. They are the ones that connected AI to their systems of record, governed what it can access, and built the operational layer that makes its outputs trustworthy enough to use at scale. Platforms that bring together content management, security, automation, and AI integration in a single layer are emerging as the foundation for enterprise AI, because model capability alone is not enough. Without governance built into the platform, the gaps between systems become the point of failure. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
- Why AI breaks without context — and how to fix itPresented by Zeta Global The gap between what AI promises and what it delivers is not subtle. The same model can produce precise, useful output in one system and generic, irrelevant results in another. The issue is not the model. It's the context. Most enterprise systems were not built for how AI operates. Data is scattered across tools. Identity is inconsistent. Signals arrive late or not at all. Systems record events but fail to connect them into a continuous view. AI depends on that continuity. Without it, the model fills in the gaps so the result looks polished but lacks relevance. This is where most teams get stuck. A better model does not fix fragmented, stale, or commoditized data. Gartner estimates organizations lose an average of $12.9 million annually due to poor data quality. AI does not solve that problem, it surfaces it faster and at a greater scale. The mirror test There is a fast diagnostic test for this. Give your AI a perfect, high-intent customer signal and see what comes back. If the output is generic or irrelevant, the model needs work. But if the model produces something sharp and useful on clean data, and then falls apart on real production data, the problem is the data. In practice, it is almost always the second scenario. AI functions like a magnifying glass, so strong data systems become dramatically more powerful, and the weak ones become dramatically more visible. Organizations that have been coasting on fragmented, poorly integrated customer data can no longer hide behind reporting lag and manual interpretation. The AI renders the problem in plain sight. Context is the new identity layer This is really where the next evolution gets interesting. Even after you solve the data quality problem, there is still a second shift underway in how customer profiles are built and used. 
For years, enterprise data systems stored content: transactions in CRMs, demographics in data warehouses, campaign responses in marketing platforms. These records described what had already happened. They were useful for reporting but were not built for AI. AI requires context. Context is not a static record. It is a current view of the customer including recent behavior, cross-channel signals, and emerging intent. The thread that connects one interaction to the next. Identity tells you who someone is. Context tells you what they are doing and what they are likely to do next. Consider a simple example: ask an AI to recommend a beach vacation destination, and it might suggest Hawaii or Florida. Tell it you have three children, and it surfaces family-friendly options. Give it access to your recent search patterns, your affordability signals, and where you have been searching over the past year, and the recommendation changes entirely because the model is no longer working from demographic categories but from a live picture of who you are and what you are doing right now. Most enterprise systems were built to store state, not maintain context. They capture events, but they don’t maintain continuity between them. That’s the gap AI exposes. But for practitioners, the challenge is not conceptual; it is architectural. Context does not live in a single system. It is fragmented across event streams, product analytics tools, CRMs, data warehouses, and real-time pipelines. Stitching that into something an AI system can actually use requires moving from batch-oriented data models to streaming or near-real-time architectures, where signals are continuously ingested, resolved, and made available at inference time. This is where many AI initiatives stall. The model is ready, but the context layer is not operationalized. Systems are not designed to retrieve the right signals within milliseconds, or to resolve identity across channels in real time. 
Without that, “context” remains theoretical rather than actionable. Architectures like Model Context Protocol (MCP) are accelerating this shift by giving AI systems a way to pass memory about a user between applications, essentially threading a continuous line of context around an individual across different interactions. The result is a profile that becomes richer and more predictive over time, one that creates a line of continuity between what someone has done, what they are doing now, and what they are likely to do next. When that identity layer is strong, the same model produces better outcomes. When it is weak, no model can compensate. The compounding advantage Organizations that built first-party data systems and durable identity infrastructure before the AI wave are now benefiting from a compounding effect. Better data trains smarter models. Smarter models attract more consented users. More consented users generate richer behavioral signals. Competitors without that foundation cannot replicate this, regardless of which model they are running. The gap is structural, not algorithmic, and because identity systems improve incrementally over time, the organizations that started investing earlier have advantages that are genuinely hard to close. What this means in practice The practical implication is a shift in where AI investment goes. The organizations getting consistent results from AI are treating it as a processing layer for a living data system, not as a standalone capability to be bolted onto existing infrastructure. For builders and operators, this translates into a different set of priorities than the last two years of AI experimentation: First, instrument for real-time signals. Batch pipelines and nightly refreshes are not sufficient when AI systems are expected to respond to user intent as it happens. Teams need event-driven architectures that capture and surface behavioral signals in near real time. Second, make context retrievable at inference time. 
It is not enough to store data in a warehouse. Systems must be designed so that relevant context can be resolved and injected into prompts, or retrieved by agents, within milliseconds.

Third, invest in identity resolution as infrastructure. Connecting fragmented signals across devices and channels, so the system understands real individuals rather than anonymous interactions, is foundational, not optional.

Fourth, treat governance and consent as part of system design. First-party data built on trust is not just safer; it is more durable and ultimately more valuable than third-party data that competitors can also access.

These investments are less visible than a new model launch, and they are also far harder to copy.

The real race

Models are now interchangeable. The difference will come from who can operationalize context at scale and treat the model as a processing layer rather than the advantage itself. That advantage comes from years of investment in identity infrastructure, first-party data, and systems that keep customer context current. The organizations that win won’t be the ones with better prompts. They’ll be the ones whose systems understand the customer before the prompt is ever written.

Neej Gore is Chief Data Officer at Zeta Global.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
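The four priorities in the article (real-time signals, inference-time context, identity resolution, consent) can be illustrated with a minimal sketch. Every class, method, and identifier below is hypothetical, invented for illustration; this is not an MCP implementation or any vendor's API, just a toy showing how the pieces relate:

```python
import time
from collections import defaultdict

class IdentityStore:
    """Toy identity layer: resolves device/channel signals to one person
    and keeps their behavioral context current (all names hypothetical)."""

    def __init__(self):
        self.alias_to_person = {}        # device_id or email -> person_id
        self.events = defaultdict(list)  # person_id -> recent events
        self.consent = {}                # person_id -> bool

    def link(self, alias, person_id):
        # Identity resolution: fragmented aliases map to one individual.
        self.alias_to_person[alias] = person_id

    def ingest(self, alias, event):
        # Event-driven ingestion: signals captured as they happen,
        # not in a nightly batch refresh.
        person = self.alias_to_person.get(alias)
        if person is not None:
            self.events[person].append((time.time(), event))

    def context_for(self, alias, limit=5):
        # Inference-time retrieval: resolve the person and return their
        # most recent events, gated on recorded consent.
        person = self.alias_to_person.get(alias)
        if person is None or not self.consent.get(person, False):
            return []  # no consented identity -> no personal context
        return [e for _, e in self.events[person][-limit:]]

store = IdentityStore()
store.link("device-123", "person-42")
store.link("ana@example.com", "person-42")
store.consent["person-42"] = True  # consent recorded at signup
store.ingest("device-123", "viewed pricing page")
store.ingest("ana@example.com", "opened renewal email")

# Context resolved across channels and injected into a prompt:
prompt = "Recent activity: " + "; ".join(store.context_for("device-123"))
```

The design choice the sketch highlights is that the model never sees raw warehouse tables: it sees a per-person, consent-gated slice of context, resolved at the moment of inference.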
- Are we getting what we paid for? How to turn AI momentum into measurable value

Enterprise AI is entering a new phase, one where the central question is no longer what can be built, but how to get the most out of existing AI investments. At VentureBeat’s latest AI Impact Tour session, Brian Gracely, director of portfolio strategy at Red Hat, described the operational reality inside large organizations: AI sprawl, rising inference costs, and limited visibility into what those investments are actually returning. It’s the “Day 2” moment, when pilots give way to production and cost, governance, and sustainability become harder problems than building the system in the first place.

"We've seen customers who say, 'I have 50,000 licenses of Copilot. I don't really know what people are getting out of that. But I do know that I'm paying for the most expensive computing in the world, because it's GPUs,'" Gracely said. "'How am I going to get that under control?'"

Why enterprise AI costs are now a board-level problem

For much of the past two years, cost was not the primary concern for organizations evaluating generative AI. The experimental phase gave teams cover to spend freely, and the promise of productivity gains justified aggressive investment. That dynamic is shifting as enterprises enter their second and third budget cycles with AI: the focus has moved from "can we build something?" to "are we getting what we paid for?" Enterprises that made large, early bets on managed AI services are conducting hard reviews of whether those investments are delivering measurable value. The issue isn’t just that GPU computing is expensive; it is that many organizations lack the instrumentation to connect spending to outcomes, making it nearly impossible to justify renewals or scale responsibly.
The strategic shift from token consumer to token producer

The dominant AI procurement model of the past few years has been straightforward: pay a vendor per token, per seat, or per API call, and let someone else manage the infrastructure. That model made sense as a starting point, but it is increasingly being questioned by organizations with enough experience to compare alternatives. Enterprises that have been through one AI cycle are starting to rethink it.

"Instead of being purely a token consumer, how can I start being a token generator?" Gracely said. "Are there use cases and workloads that make sense for me to own more? It may mean operating GPUs. It may mean renting GPUs. And then asking, 'Does that workload need the greatest state-of-the-art model? Are there more capable open models or smaller models that fit?'"

The decision is not binary. The right answer depends on the workload, the organization, and the risk tolerance involved, but the math is getting more complicated as the number of capable open models, from DeepSeek to models now available through cloud marketplaces, grows. Enterprises now have real alternatives to the handful of providers that dominated the landscape two years ago.

Falling AI costs and rising usage create a paradox for enterprise budgets

Some enterprise leaders argue that locking into infrastructure investments now could mean significantly overpaying in the long run, pointing to Anthropic CEO Dario Amodei's statement that AI inference costs are declining roughly 60% per year. Over the last three years, the emergence of open-source models such as DeepSeek has meaningfully expanded the strategic options available to enterprises willing to invest in the underlying infrastructure. But while cost per token is falling, usage is accelerating at a pace that more than offsets the efficiency gains.
It's a version of Jevons paradox, the economic principle that improvements in resource efficiency tend to increase total consumption rather than reduce it, because lower cost enables broader adoption. For enterprise budget planners, this means declining unit costs do not translate into declining total bills: an organization that triples its AI usage while unit costs fall by half still ends up spending 50% more than it did before. The question becomes which workloads genuinely require the most capable, most expensive models, and which can be handled just as well by smaller, cheaper alternatives.

The business case for investing in AI infrastructure flexibility

The prescription isn't to slow down AI investment, but to build with flexibility top of mind. The organizations that win won't necessarily be the ones that move fastest or spend the most; they'll be the ones building infrastructure and operating models capable of absorbing the next unexpected development.

"The more you can build some abstractions and give yourself some flexibility, the more you can experiment without running up costs, but also without jeopardizing your business. Those are as important as asking whether you're doing everything best practice right now," Gracely explained.

But despite how entrenched AI discussions have become in enterprise planning cycles, the practical experience most organizations have is still measured in years, not decades. "It feels like we've been doing this forever. We've been doing this for three years," Gracely added. "It's early and it's moving really fast. You don't know what's coming next. But the characteristics of what's coming next — you should have some sense of what that looks like."

For enterprise leaders still calibrating their AI investment strategies, that may be the most actionable takeaway: the goal is not to optimize for today's cost structure, but to build the organizational and technical flexibility to adapt when, not if, it changes again.
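The budget paradox described in the article is easy to verify with arithmetic. Taking the cited ~60% annual decline in per-token cost as given, any usage growth faster than 2.5x per year drives total spend up; the 3x usage-growth figure below is an illustrative assumption, not a number from the article:

```python
def annual_spend(base_tokens, base_price, years,
                 price_decline=0.60, usage_growth=3.0):
    """Yearly AI bill when unit cost falls but usage grows faster.
    price_decline: fraction by which cost per token drops each year (~60%, per Amodei).
    usage_growth: multiplier on yearly token usage (3x is an illustrative assumption)."""
    spend = []
    tokens, price = base_tokens, base_price
    for _ in range(years):
        spend.append(tokens * price)
        tokens *= usage_growth        # adoption broadens (Jevons effect)
        price *= (1 - price_decline)  # per-token cost keeps falling
    return spend

# Hypothetical start: 1B tokens/year at $10 per million tokens = $10,000/year.
bills = annual_spend(base_tokens=1_000_000_000, base_price=10 / 1_000_000, years=4)
# Each year's bill grows by a factor of 3 * 0.4 = 1.2 despite cheaper tokens.
```

With these inputs the bill rises from $10,000 to roughly $17,280 over four years even as tokens get 60% cheaper annually, which is exactly the planning trap the article warns about.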