Mar 11, 2026
Where we're looking to invest: Securing AI agents

AI agents are no longer copilots in a chat box. They now execute code, access production databases, make authenticated API calls, orchestrate workflows across software and internal tools, and invoke new capabilities dynamically through MCP servers. This shift in capability, from answers to actions, is why we believe agent security will be a defining infrastructure category, one necessary to unlock fully autonomous enterprise agents.
Agent security goes beyond LLM security
AI/ML is not a new concept in security. Enterprises have run AI/ML-based fraud detection, anomaly detection, and behavioral analytics models in production for over a decade. LLMs, however, changed the underlying value proposition of AI in the enterprise. Instead of AI being embedded into products for analytics, AI became the product: a system that could reason across unstructured information and generalize across tasks.
When productionizing LLMs meant a chatbot, security was largely about content filtering: keeping bad inputs out and sensitive outputs in. In the 2024/25 AI security M&A wave, large security incumbents quickly acquired the first generation of LLM security tools built around guardrails, prompt scanning, and input/output filtering:
- Protect AI → Palo Alto Networks for ~$650M
- Lakera → Check Point for ~$300M
- Prompt Security → SentinelOne for ~$250M
- And more LLM security 1.0 acquisitions in the space, like Aim Security → Cato Networks, CalypsoAI → F5, and Pangea → CrowdStrike
However, these acquisitions were more about strategic positioning than validated enterprise adoption (most acquired companies had well below $10M in ARR). And just a year later, the landscape is already different. The shift from LLMs as chatbots to LLMs powering agents changed everything.
Agents have moved beyond chat into execution, and enterprises now have growing fleets of “Shadow AI” embedded across teams. In nearly every enterprise, the same tension exists: leaders want to safely move sanctioned agents from read-only to write access, but they’re equally concerned about the unsanctioned agents already operating outside their visibility.
The adoption paradox: sanctioned agents are stuck, shadow agents explode
In our conversations with large enterprises, a consistent pattern emerged: everyone has active agent pilots, but almost all are stuck in read-only mode. Agents can retrieve and summarize information, but organizations are hesitant to grant write access to production systems. The read-to-write transition should generate an order-of-magnitude difference in value, but the controls needed to enable autonomous agents are still immature.
Paradoxically, that caution only applies to sanctioned third-party agents from vendors. Internal agent sprawl has become the new shadow IT: developers and non-developers alike can build agents with access to production data in an afternoon, with no security review, identity assignment, or recorded intent. The result is the worst of both worlds: sanctioned agents are bottlenecked, while unsanctioned agents sprawl with no controls at all.
Agents require a blend of traditional security and observability
Traditional security often assumes misbehavior and enforces deterministic controls across humans and non-human identities. AI safety focuses on improving behavior through training, guardrails, and evaluation. AI agents require both because what is “secure” and “correct” is highly context dependent.
When agents become executional, they reason probabilistically, merge data with instructions, chain tools across systems, and act autonomously inside trusted environments. Behavioral guardrails alone won't contain them, and legacy controls lack visibility into task-level intent. Securing agents will require both improving agent reasoning and constraining agent authority through a deterministic control plane purpose-built for autonomous agents.
Where we're actively looking to invest:
We're excited to back teams building the next wave of AI Security: the infrastructure that gives organizations confidence to grant agents autonomous write access to production systems.
This includes:
- Agent identity, governance, and real-time enforcement: Most agents today run on long-lived API keys and inherit the full permissions of whoever deployed them, with no task scoping, no expiry, and no accountability chain. When agents spawn sub-agents, permissions will escalate silently across delegation hops in ways that existing PAM and secrets providers (e.g., CyberArk, BeyondTrust, Vault) simply weren't built to handle. Getting this right requires a new control plane purpose-built for agents: one that treats credentials as ephemeral, traces every action back to a human owner, and evaluates what an agent is trying to do, not just what endpoint it's calling.
- Pre-production agent red teaming: Distinct from AI pentesting, we see a specific opportunity in continuous, agent-specific attack simulation to red-team agent workflows before production. Threat models differ meaningfully by agent category (e.g., customer support, financial ops, etc.) and modality, but existing tools were not built for the probabilistic, context-dependent behavior that defines how agents actually fail. Runtime and governance controls remain essential, but pre-production testing is what keeps known failure modes from reaching production in the first place.
- AI workload runtime security: To effectively constrain an AI agent without bottlenecking its utility, we need tools that offer contextual decisions at runtime. The next control platform for AI must capture the context to understand when an agent's decision has system-wide impact. We believe this will occur at runtime because runtime is a durable control point: models and frameworks will evolve rapidly, but agent workloads will still execute as processes with observable side effects.
- Agent rewind: Enterprises won't grant agents write access until they believe "undo is possible." Backup and recovery tooling exists, but it was built for infrastructure failure and tracking what changed, not why. When an agent chains mutations across systems of record, workflow configs, and production databases in minutes, existing tools struggle to identify the right rollback boundary. The failure often looks like success: no crash, no error, just semantically wrong outcomes across systems. We're looking for intent-aware recovery that is purpose-built for how agents actually fail.
- Multi-agent system security: Messages between collaborating agents skip the checks applied to user input. IBM Research confirmed agent-to-agent prompt injection as a key attack vector. The permission chain problem (e.g., propagating and auditing authorization from user to agent to sub-agent) is largely unmodeled. As multi-agent architectures move from demos to production, we expect securing multi-agent communication to become urgent fast.
What we’re looking for in AI security founders
We're looking for teams who understand that agent security is not an extension of LLM security. Many of the underlying security primitives (sandboxing, isolation, permissioning, auditing) have existed for decades, but the pacing, non-determinism, and autonomy of agents change the equation enough that porting over existing solutions won't work.
We're at the beginning of securing the agentic era, and we believe it's a massive opportunity that will need new security solutions. If you're building in this space, we'd love to hear from you. Reach out at rohan@cowboy.vc.