The consequential AI work that actually moves the needle for enterprises
Presented by OutSystems
After two years of flashy AI demos, rushed agent prototypes, and breathless predictions, enterprise technology leaders are striking a more pragmatic tone in 2026. In a recent webinar hosted by OutSystems, a panel of software executives and enterprise practitioners made the case that the most consequential AI work happening now is focused on the practical matters of governance, orchestration, and iteration, along with integrating agents into the systems they've spent decades building.
Enterprise leaders are increasingly focused on fundamentals. The priority is using new AI technologies to accelerate productivity, improve delivery, and produce measurable business results.
Three elements shape this work:
The move from AI agent prototypes to agentic systems that deliver measurable ROI in production
The growing role of enterprise platforms in governing, orchestrating, and scaling AI agents safely
The rise of the generalist developer and enterprise architect as the most valuable technical profiles in an era of AI-generated code
Against this backdrop, the panel discussed governance frameworks, the economics of enterprise AI, and the limits of large language models without orchestration. The conversation ultimately turned to how leading organizations are building multi-agent systems grounded in existing enterprise data and workflows.
Agents in the real world
Enabling agents to work in production across the enterprise is best accomplished with a unified platform that handles development, iteration, and deployment. And that's where capabilities like the Agent Workbench in the OutSystems platform matter, said Rajkiran Vajreshwari, senior manager of app development at Thermo Fisher Scientific. It provides the infrastructure to learn, iterate, and govern agents at scale.
His team at Thermo Fisher has moved away from single-task AI assistants in customer service to building a coordinated team of specialized agents using the workbench. When a support case arrives, a triage assistant classifies the request and dynamically routes it to the right specialist agent, whether that’s an intent and priority agent, a product context agent, a troubleshooting agent, or a compliance agent.
“We don’t have to think about what will work and how. It’s all pre-built,” he explained. “Each agent has a narrow role and clear guardrails. They stay accurate and auditable.”
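The routing pattern Vajreshwari describes can be sketched in a few lines. The classification logic and agent names below are illustrative placeholders, not the actual Thermo Fisher workflow or the OutSystems Agent Workbench API:

```python
# Hypothetical sketch of triage-based routing to specialist agents.
# Keyword matching stands in for the real triage assistant's classifier.

def classify(case_text: str) -> str:
    """Triage step: map an incoming support case to one specialist agent."""
    keywords = {
        "compliance": "compliance_agent",
        "error": "troubleshooting_agent",
        "product": "product_context_agent",
    }
    for word, agent in keywords.items():
        if word in case_text.lower():
            return agent
    return "intent_priority_agent"  # default specialist

# Each specialist has a narrow role; here they are simple stubs.
SPECIALISTS = {
    "compliance_agent": lambda case: f"[compliance] reviewed: {case}",
    "troubleshooting_agent": lambda case: f"[troubleshooting] diagnosed: {case}",
    "product_context_agent": lambda case: f"[product] contextualized: {case}",
    "intent_priority_agent": lambda case: f"[intent/priority] triaged: {case}",
}

def route(case_text: str) -> str:
    """Classify a case, then hand it to exactly one narrowly scoped agent."""
    agent = classify(case_text)
    return SPECIALISTS[agent](case_text)
```

The value of the pattern is that each specialist stays small and auditable; the triage step is the only component that needs to know the full roster of agents.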
Governing the risks of shadow AI
A new category of risk emerges when AI makes it possible for anyone in a company to generate production-level code without IT oversight: ungoverned shadow AI. These homegrown products are prone to hallucinations, data leakage, policy violations, model drift, and agents taking actions that were never formally approved.
To get ahead of the risk, leading organizations must lead with guardrails rather than prohibition, said Luis Blando, CPTO of OutSystems.
"Give users guardrails. They’re going to use AI whether you like it or not. Companies that seem to be getting ahead are using AI to govern AI across their full portfolio,” he explained. “That is the difference between shadow AI chaos and enterprise-grade scale.”
Eric Kavanagh, CEO of The Bloor Group, noted that governance requires a layered set of disciplines that includes securing data, monitoring models for drift, and making deliberate choices about where AI connects to existing business processes.
“Companies don’t have to be manually creating these controls," he added. "A lot of those guardrails and levers are baked in to platforms like OutSystems.”
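Governance as a layer around agent actions can be pictured as a wrapper that checks policy and records an audit entry before anything executes. This is a minimal, hypothetical sketch of the idea, not a description of any platform's built-in controls:

```python
# Hypothetical guardrail sketch: an allow-list plus an audit log wrapped
# around agent actions. Real platforms bake far richer controls into the runtime.

APPROVED_ACTIONS = {"summarize", "classify", "draft_reply"}  # policy allow-list
audit_log: list[dict] = []

def governed(action: str):
    """Decorator: block unapproved actions and record every attempt."""
    def wrap(fn):
        def inner(*args, **kwargs):
            allowed = action in APPROVED_ACTIONS
            audit_log.append({"action": action, "allowed": allowed})
            if not allowed:
                raise PermissionError(f"action '{action}' is not approved")
            return fn(*args, **kwargs)
        return inner
    return wrap

@governed("summarize")
def summarize(text: str) -> str:
    return text[:40]  # placeholder for a model call

@governed("delete_records")
def delete_records(table: str) -> None:
    pass  # never runs: the action is not on the allow-list
```

The point is architectural: the policy check and the audit trail live outside the agent's own logic, so every action is governed the same way regardless of which model or agent performs it.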
Why orchestration, not model selection, is the real challenge
Much of the early excitement around enterprise AI focused on selecting the right large language model. Now the harder challenge, and a far more durable source of value, is orchestration: routing tasks, coordinating workflows, governing execution, and integrating AI into existing enterprise systems.
Scott Finkle, VP of development at McConkey Auction Group, noted that LLMs, however impressive, are pieces of complex workflows, not final solutions. Organizations should be ready to hot-swap between Gemini, ChatGPT, Claude, and whatever emerges next without having to rebuild the agentic system around it.
A platform with orchestration capabilities makes that possible. It manages the lifecycle, provides visibility, and ensures processes execute reliably, even as AI handles the reasoning layer on top.
“The AI and the models change, the workflows can change, but the orchestration remains the same," Finkle said. "That’s how we’re going to extract value out of AI.”
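One common way to keep models hot-swappable, as Finkle suggests, is to route every model call through a thin interface so the orchestration layer never depends on a specific provider. The class names below are stubs for illustration, not real provider SDK calls:

```python
from typing import Protocol

class Model(Protocol):
    """Any LLM backend the orchestrator can call."""
    def complete(self, prompt: str) -> str: ...

# Stub backends standing in for real provider SDKs (no actual API calls).
class StubGemini:
    def complete(self, prompt: str) -> str:
        return f"gemini:{prompt}"

class StubClaude:
    def complete(self, prompt: str) -> str:
        return f"claude:{prompt}"

class Orchestrator:
    """Workflow logic stays the same no matter which model is plugged in."""
    def __init__(self, model: Model):
        self.model = model

    def run_step(self, task: str) -> str:
        return self.model.complete(task)

    def swap_model(self, model: Model) -> None:
        self.model = model  # hot-swap without rebuilding the workflow
```

Because the orchestrator depends only on the `Model` interface, swapping Gemini for Claude (or whatever emerges next) is a one-line change rather than a rebuild of the agentic system.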
The economics of enterprise AI investing
Security, compliance, governance, and platform-level AI capabilities will all command greater investment in 2026, particularly as AI moves into core workflows like finance and supply chain. Enterprises should favor incremental wins rather than expect big, immediate gains.
“We’re focusing on base hits," Finkle said. "The way it counts is by getting something into production and having it make an impact. Big investments in pilot projects that don’t make it into production don’t save any money. It’s not going to happen overnight, but over time I think we’ll see tremendous savings.”
There's still a split in how enterprises are approaching AI transformation. Some start from scratch and reimagine every process. Others, especially those with billions of dollars in existing infrastructure depreciating in-house, want AI to integrate with their systems. They want agentic systems to reuse data, APIs, and proven processes while speeding up delivery. The agent platform approach serves both camps, but particularly the latter. Organizations can deploy agents where they add clear value while preserving the integrity of established, deterministic workflows.
The rise of the enterprise architect and the generalist developer
As AI accelerates code generation, the code-writing bottleneck in software delivery is dissolving. In its place is a premium on systems thinking: the ability to understand the broader enterprise architecture, decompose complex business problems, and reason about how AI integrates with existing infrastructure. Kavanagh pointed to enterprise architects specifically as the professionals best positioned to capitalize on this moment.
“We’re entering a very interesting age of the generalist," he explained. "The better you know your enterprise architecture and your business architecture and how those things align, the better off you’re going to be.”
“The result is faster delivery with fewer interruptions and fewer bugs," Kavanagh said. "You can focus on the non-repetitive tasks. It’s a benefit to the developer, to the business, and to the whole IT organization.”
Catch the entire webinar here.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.