Google and AWS split the AI agent stack between control and execution
Our take

The era of enterprises stitching together prompt chains and shadow agents is nearing its end as more options for orchestrating complex multi-agent systems emerge. As organizations move AI agents into production, the question remains: "How will we manage them?"
Google and Amazon Web Services offer fundamentally different answers, illustrating a split in the AI stack. Google’s approach runs agent management at the system layer, while AWS’s harness method operates at the execution layer.
The debate over how to manage and control agents gained new energy this past month as competing companies released or updated their agent builder platforms—Anthropic with the new Claude Managed Agents and OpenAI with enhancements to the Agents SDK—giving developer teams more options for managing agents.
AWS, with new capabilities added to Bedrock AgentCore, is optimizing for velocity—relying on harnesses to bring agents to production faster—while still offering identity and tool management.
Meanwhile, Google’s Gemini Enterprise adopts a governance-focused approach using a Kubernetes-style control plane. Each method offers a glimpse into how agents move from short-burst task helpers to longer-running entities within a workflow.
Upgrades and umbrellas
To understand where each company stands, here’s what’s actually new.
Google released a new version of Gemini Enterprise, bringing its enterprise AI agent offerings—Gemini Enterprise Platform and Gemini Enterprise Application—under one umbrella.
The company has rebranded Vertex AI as Gemini Enterprise Platform, though it insists that, aside from the name change and new features, it’s still fundamentally the same interface.
“We want to provide a platform and a front door for companies to have access to all the AI systems and tools that Google provides,” Maryam Gholami, senior director, product management for Gemini Enterprise, told VentureBeat in an interview. “The way you can think about it is that the Gemini Enterprise Application is built on top of the Gemini Enterprise Agent Platform, and the security and governance tools are all provided for free as part of Gemini Enterprise Application subscription.”
AWS, on the other hand, added a new managed agent harness to Bedrock AgentCore. The company said in a press release shared with VentureBeat that the harness “replaces upfront build with a config-based starting point powered by Strands Agents, AWS’s open source agent framework.”
Users define what the agent does, the model it uses and the tools it calls, and AgentCore does the work to stitch all of that together to run the agent.
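To make the harness model concrete, here is a minimal sketch of what such a config-style definition can look like with the Strands Agents Python SDK. The model ID, system prompt and ticket-lookup tool are illustrative stand-ins, not part of AWS's announcement; the point is that the developer declares the pieces and the runtime handles the orchestration loop.

```python
# Minimal Strands Agents sketch (illustrative): declare the model, prompt
# and tools; the managed harness runs the agent loop. The ticket-lookup
# tool is a hypothetical stub.
from strands import Agent, tool

@tool
def lookup_ticket(ticket_id: str) -> str:
    """Return the status of a support ticket (stubbed for illustration)."""
    return f"Ticket {ticket_id}: open, awaiting customer reply"

agent = Agent(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",  # example Bedrock model ID
    system_prompt="You are a support triage agent. Use your tools to check tickets.",
    tools=[lookup_ticket],
)

result = agent("What is the status of ticket 4821?")
print(result)
```

The appeal of the config-based approach is visible here: nothing in the definition describes how the loop runs, retries, or calls tools, only what the agent is made of.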
Agents are now becoming systems
The shift toward stateful, long-running autonomous agents has forced a rethink of how AI systems behave. As agents move from short-lived tasks to long-running workflows, a new class of failure is emerging: state drift.
As agents continue operating, they accumulate state—memory, tool responses and evolving context. Over time, that state becomes outdated: data sources change, or tools return conflicting responses. The agent grows more vulnerable to inconsistencies, and its outputs become less accurate.
Agent reliability becomes a systems problem, and managing that drift may need more than faster execution; it may require visibility and control.
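One way to picture the problem is to treat every piece of accumulated state as perishable. The sketch below is purely illustrative and reflects neither platform's internals: it attaches a time-to-live to each cached fact and re-fetches anything stale before the agent reasons over it.

```python
# Illustrative sketch (hypothetical, not from Gemini Enterprise or AgentCore):
# cached tool responses carry a timestamp; anything past its TTL is refreshed
# from the source of truth before the agent's next step, limiting state drift.
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CachedFact:
    value: str
    fetched_at: float
    refresh: Callable[[], str]  # how to re-derive the fact from its source
    ttl_seconds: float = 300.0

    def is_stale(self) -> bool:
        return time.time() - self.fetched_at > self.ttl_seconds

@dataclass
class AgentState:
    facts: dict[str, CachedFact] = field(default_factory=dict)

    def fresh_context(self) -> dict[str, str]:
        """Return context for the next step, refreshing anything stale."""
        for key, fact in self.facts.items():
            if fact.is_stale():
                fact.value = fact.refresh()  # re-query the source of truth
                fact.fetched_at = time.time()
        return {key: fact.value for key, fact in self.facts.items()}
```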
It’s this failure point that platforms like Gemini Enterprise and AgentCore try to prevent.
Though this shift is already happening, Gholami admitted that customers will dictate how they want to run and control any long-running agent.
“We are going to learn a lot from customers where they would be using long-running agents, where they just assign a task to these autonomous agents to just go ahead and do,” Gholami said. “Of course, there are tricks and balances to get right and the agent may come back and ask for more input.”
The new AI stack
What’s becoming increasingly clear is that the AI stack is separating into distinct layers, each solving a different problem.
AWS and, to a certain extent, Anthropic and OpenAI, optimize for faster deployment. Claude Managed Agents abstracts much of the backend work for standing up an agent, while the Agents SDK now includes support for sandboxes and a ready-made harness. These approaches aim to lower the barrier to getting agents up and running.
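For a sense of how low that barrier has become, here is a minimal agent built with OpenAI's Agents SDK (the openai-agents Python package). The agent name and instructions are hypothetical, and the newer sandbox and harness features mentioned above are not shown.

```python
# Minimal OpenAI Agents SDK example: a few lines stand up a working agent
# with no orchestration code. Name and instructions are hypothetical.
from agents import Agent, Runner

agent = Agent(
    name="Report drafter",
    instructions="Summarize the user's notes into a short status report.",
)

result = Runner.run_sync(agent, "Notes: shipped v2 API; onboarding bug fixed.")
print(result.final_output)
```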
Google, by contrast, offers a centralized control plane to manage identity, enforce policies and monitor long-running behaviors.
Enterprises likely need both.
As some practitioners see it, businesses have to have a serious conversation about how much risk they are willing to take.
“The main takeaway for enterprise technology leaders considering these technologies at the moment may be formulated this way: while the agent harness vs. runtime question is often perceived as build vs. buy, this is primarily a matter of risk management. If you can afford to run your agents through a third-party runtime because they do not affect your revenue streams, that is okay. On the contrary, in the context of more critical processes, the latter option will be the only one to consider from a business perspective,” Rafael Sarim Oezdemir, head of growth at EZContacts, told VentureBeat in an email.
Iterating quickly lets teams experiment and discover what agents can do, while centralized control adds a layer of trust. What enterprises need is to ensure they are not locked into systems designed purely for a single way of executing agents.