AI Agents Are Infiltrating Enterprises, But Security Lags Far Behind
13 May, 2026
Cybersecurity
We're on the cusp of an AI revolution, with intelligent agents poised to transform industries from healthcare to manufacturing. Imagine AI seamlessly updating patient records in real-time or inspecting factory lines at speeds humans can only dream of. While the capabilities of these AI agents are breathtaking, a significant hurdle is preventing them from moving beyond pilot programs into widespread production: identity governance and a fundamental lack of trust.
According to Cisco President Jeetu Patel, a staggering 85% of enterprises are currently running AI agent pilots, yet a mere 5% have successfully deployed them into production. This 80-point gap isn't due to a lack of advanced AI models or computing power. Instead, it stems from a critical security and trust deficit. CISOs are rightfully asking: which agents have access to sensitive systems, and who is accountable when things go wrong? As it turns out, most organizations are still grappling with robust access controls for their human employees, let alone the complex challenge of managing non-human AI identities.
The Architectural Trust Gap
Michael Dickman, SVP and GM of Cisco's Campus Networking business, highlights that the issue is deeply architectural. "The network sees what other telemetry sources miss: actual system-to-system communications rather than inferred activity," he explained. This means understanding not just what systems *should* be talking to each other, but what they *actually* are. This network-level visibility is crucial for enforcing policies at the lightning speed required by AI agents.
Dickman argues that AI agents break a long-standing tech paradigm: "deploy for productivity first, bolt on security later." With AI agents, trust can't be an afterthought; it must be a foundational requirement from day one. When agents execute actions, such as updating patient records or processing financial transactions, the potential impact of a compromised identity expands exponentially. This shifts the question from "who has the right to do what" to an even more complex "who (or what) has the right to do what."
Dickman's Four Conditions for Trust:
Secure Delegation: Clearly defining an agent's permissions and establishing a chain of human accountability.
Cultural Readiness: Adapting to new workflows and addressing issues like alert fatigue, which can be exacerbated by agents processing vast amounts of data.
Token Economics: Understanding the computational cost of each agent action and potentially leveraging hybrid architectures where AI reasons and traditional tools execute.
Human Judgment: Recognizing that AI can assist but not fully replace human oversight, especially in complex or nuanced tasks.
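To make the first condition concrete, secure delegation can be modeled as a scoped, time-boxed grant that names the accountable human for every agent. The sketch below is purely illustrative; all class and field names are assumptions for this article, not any vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class DelegationGrant:
    """A scoped, expiring permission delegated to an agent by a human."""
    agent_id: str
    delegated_by: str           # the accountable human in the chain
    allowed_actions: frozenset  # explicit allowlist, not a cloned user profile
    expires_at: datetime

    def permits(self, action: str) -> bool:
        # Deny by default: the action must be listed and the grant unexpired.
        return (action in self.allowed_actions
                and datetime.now(timezone.utc) < self.expires_at)


grant = DelegationGrant(
    agent_id="records-agent-01",
    delegated_by="alice@example.com",
    allowed_actions=frozenset({"patient_record:update"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(grant.permits("patient_record:update"))  # within the delegated scope
print(grant.permits("billing:refund"))         # outside it, so denied
```

The key design choice is that permissions are enumerated per agent and expire automatically, so accountability and scope are explicit from day one rather than bolted on later.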
Why Siloed Data Fails AI Agents
A major pitfall is the fragmentation of data across different enterprise systems. Team A builds an agent using their data, and Team B builds another using theirs. While each might offer incremental automation, the lack of cross-domain visibility prevents holistic insights and robust security enforcement. The network, Dickman emphasizes, provides the crucial unifying layer by observing actual data communications, not just inferred activity.
This is why organizations are struggling. Many default to cloning human user profiles for AI agents, leading to permission sprawl from the outset. The flat authorization plane of many AI models also fails to respect existing user permissions, creating significant vulnerabilities. As Etay Maor from Cato Networks puts it, "We need an HR view of agents: onboarding, monitoring, offboarding." This mirrors the need for a structured, human-centric approach to managing non-human entities.
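Maor's "HR view of agents" can be sketched as an explicit lifecycle: agents are onboarded with least-privilege permissions, monitored via an audit trail, and offboarded with access revoked. This is a minimal illustration under assumed names, not a description of any real identity product:

```python
class AgentRegistry:
    """Tracks non-human identities through onboarding, monitoring, offboarding."""

    def __init__(self):
        self._agents = {}

    def onboard(self, agent_id, owner, permissions):
        # Least privilege: permissions are granted explicitly, never copied
        # from the owner's human profile (the source of permission sprawl).
        self._agents[agent_id] = {
            "owner": owner,
            "permissions": set(permissions),
            "active": True,
            "audit_log": [],
        }

    def record_action(self, agent_id, action):
        # Monitoring: every attempted action is logged, allowed or not.
        agent = self._agents[agent_id]
        allowed = agent["active"] and action in agent["permissions"]
        agent["audit_log"].append((action, "allowed" if allowed else "denied"))
        return allowed

    def offboard(self, agent_id):
        # Offboarding revokes access but keeps the audit trail for accountability.
        agent = self._agents[agent_id]
        agent["active"] = False
        agent["permissions"].clear()


reg = AgentRegistry()
reg.onboard("invoice-bot", owner="bob@example.com", permissions=["invoice:read"])
reg.record_action("invoice-bot", "invoice:read")    # allowed and logged
reg.record_action("invoice-bot", "invoice:delete")  # denied and logged
reg.offboard("invoice-bot")
```

Note that offboarding deactivates the identity without deleting its history, mirroring how HR retains records after an employee departs.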
The Path to Production: Five Priorities
To bridge the trust gap and enable AI agents to move into production, Dickman outlines five key priorities:
Cross-functional Alignment: Ensure business, IT, and security leaders are aligned on AI agent expectations.
Production-Ready IAM/PAM: Mature Identity and Access Management (IAM) and Privileged Access Management (PAM) to handle agent identities effectively.
Platform Approach to Networking: Adopt infrastructure that facilitates data sharing across domains for better correlation.
Hybrid Architectures: Design systems where AI handles reasoning and traditional tools execute, balancing intelligence with efficiency.
Bulletproof Trust for First Use Cases: Start with a few high-value applications, ensuring robust security controls like RBAC and microsegmentation are in place from day one.
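The hybrid-architecture and RBAC priorities can be combined in one sketch: a model proposes an action, and a deterministic role-based gate decides whether a traditional tool may execute it. Everything here is a hypothetical illustration (the "model" is a stub standing in for an LLM call):

```python
# Role-based access control table: roles map to explicit action allowlists.
ROLE_PERMISSIONS = {
    "records-writer": {"patient_record:update"},
    "auditor": {"patient_record:read"},
}


def propose_action(task: str) -> str:
    """Stand-in for the AI reasoning step; a real system would call a model."""
    return "patient_record:update" if "update" in task else "patient_record:read"


def execute(action: str) -> str:
    """Stand-in for a traditional, deterministic execution tool."""
    return f"executed {action}"


def run_agent(task: str, role: str) -> str:
    # Hybrid split: the model only *proposes*; RBAC gates what actually runs.
    action = propose_action(task)
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return f"denied {action} for role {role}"
    return execute(action)


print(run_agent("update allergy list", role="records-writer"))
print(run_agent("update allergy list", role="auditor"))
```

Keeping the authorization check outside the model means a compromised or hallucinating reasoning step can still only trigger actions its role explicitly permits.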
Ultimately, the enterprises that succeed with AI agents will be those that prioritize building a strong foundation of trust and governance. The technology is advancing rapidly, but without a solid security framework, the full potential of agentic AI will remain out of reach, stuck in endless pilot phases.