Aug 20, 2025
Where we’re looking to invest: the evolution of the MCP ecosystem

Since the Model Context Protocol (MCP) was announced by Anthropic, we have watched with excitement. But, there’s a big gap between promise and broader deployment. Why?
MCP is exciting, as it can enable LLMs to act as agents
However … MCP amplifies existing LLM vulnerabilities. This limits production use, if sensitive data or actions might be involved.
For builders, current limitations represent significant opportunities.
What is MCP & Why It’s Exciting
MCP is a protocol that standardizes how LLMs communicate with external tools and systems. Instead of building separate integrations for each AI model, developers can create a single “MCP server” that works with any MCP-compatible LLM (any LLM with “function-calling” or “tool-use” support).
How MCP works:
- Discovery: When you open an "MCP host" application (like Cursor or Claude Desktop), which contains both access to an LLM and an "MCP client", the MCP client connects to configured "MCP servers" and can retrieve their available tools (like filesystem/read_file, github/create_pull_request, postgres/query, notes/create_note).
- Request: When you message an LLM through an MCP host application, the LLM will identify which tools to use and generate function calls to those tools.
- Execution: The MCP client within a host application routes function calls to the appropriate MCP servers, which executes operations.
- Response: Results flow back through the MCP client to the LLM, which incorporates them into a response you see in the host application (like in GitHub Copilot)
The promise of MCP is to enable LLMs to interact with multiple tools and data sources easily. For an example, suppose you submit a photo of a receipt to Claude Desktop. Claude’s “MCP client” would send a request to an expense management system e.g. Ramp’s "MCP server", which would then process the image, extract the relevant information to create the expense entry, check it against company policy, add it to a report, and then route it for approval - eliminating many currently manual steps.
MCP catalyzes three big shifts:
- Interconnecting systems without custom integrations. LLMs that support MCP can read and interpret information across multiple software systems (e.g., Drata, Jira, Confluence, Github, Outlook, Salesforce)
- Digital assistants become possible. LLMs that support MCP can create and take actions that humans currently drive. A chatbot can reply to emails, commit code, update tickets, or write files.
- AI-Driven workflows to replace human-defined ones. Aspirationally, AI can plan and orchestrate entire workflows by deciding which tools to use and when. A current question is "will AI replace programmers?" But the real shift might be AI replacing programs. Engineers would still build tools, but AI would eliminate the need to write code that connects systems and coordinates data flows (what engineers call the “control plane”).
Moving from traditional human or "Engineer-defined" workflows to "AI-defined" workflows, or “agentic workflows” will allow people to quickly cut through messy layers and make data immediately usable.
Suppose you want to track weekly sales pipeline health via a Slack channel. Here are some examples of how a weekly GTM reporting workflow would be built in different ways:
Human-defined workflow:
- Set up Zapier workflows with trigger-action chains
- Create separate zaps for each data source (CRM, Tickets)
- Use Google Sheets as makeshift database to manually join data
- Build formatting templates that break when fields change
- Add webhook to post to Slack on schedule
- Every new metric = rebuild the entire automation flow
Engineer-defined workflow:
- Write complex and brittle data queries in a script
- Cross-reference CRM, tickets, docs, understand schemas, join tables
- Reformat into a doc & Slack summary
- Wire everything above together, code API calls to Slack
- Every new requirement -> write more code
AI-defined (via MCP) workflow:
Tell Claude Desktop: "By 9am Monday, prepare our GTM pack with whatever metrics best show our business health this week, and share insights to #exec-weekly."
The AI dynamically orchestrates MCP servers to:
- Discover available data sources (Salesforce, HubSpot, Zendesk, Docs)
- Decide which metrics matter most based on current context
- Adapt analysis to data quality and business conditions
- Execute data pulls, joins, and analysis across systems
- Publish findings in optimal format (Slides, CSV, Slack summary)
That's the shift: from pre-defining every path to giving AI the tools and letting it find the path.
However, we aren’t quite ready for the shift to agentic or AI-defined workflows, or deploying MCP to production. That means opportunity for builders:
Opportunity 1: Current LLMs Were Trained for Generating Text, Not Orchestrating Workflows
When AI, rather than code, decides which tools to call, we lose the ability to audit and debug workflows. Software guarantees such as reproducibility, traceability, and determinism disappear. These challenges are compounded by fundamental gaps in today's LLMs, which are designed for text generation, not action-taking or workflow orchestration. While LLMs can behave as agents, they lack reliable state management and error handling that traditional software relies on for predictable multi-step execution.
The non-deterministic nature of LLMs also means the same prompt can produce different results, making outcomes unpredictable (though temperature settings can partially mitigate this). Perhaps most critically, current LLMs struggle when faced with ambiguity and large contexts. Whether they're orchestrating complex workflows or configuring multiple MCP servers, performance degrades as context length increases, compromising both planning and task execution quality.
Potential Solutions: Agentic LLMs & Models, Hybrid Workflows and Tooling
Today’s limitations may not be permanent, and there's optimism around LLMs eventually orchestrating complex workflows with the right tools and advancements:
- New and/or re-trained LLMs & Models for Agentic Behavior: Build AI for planning, disambiguating user instructions, tool selection, and state tracking
- MCP Host Applications with Traditional Software Guarantees: bring reproducibility, traceability and determinism with human checkpoints to focused AI-defined workflows
- Solutions for Complex Context Management: to prevent performance degradation when LLMs handle multiple MCP servers and large contexts simultaneously
- Testing, Monitoring, and Auditing for AI: simulate runs, enable rollbacks, and add granular overrides when AI makes decisions
Opportunity 2: Security Gaps in MCP & LLM-Orchestrated Systems
When you expose sensitive data to an LLM via a MCP server, you create multiple attack surfaces. An MCP server connects an LLM directly to your backends and databases. A successful prompt injection wouldn’t just leak information, it could execute actions on your systems.
Implementing proper IAM with MCP is also complex. Existing systems built for human users assume coarse-grained permissions with implicit boundaries that AI won’t respect. MCP servers can also introduce new attack vectors for existing bugs like '0.0.0.0 day.'
The combination of LLMs' inherent vulnerabilities (like prompt injection and data extraction) with MCP's broad system access creates a perfect storm: attackers can manipulate AI into becoming an insider threat with legitimate access to your infrastructure.
Potential Solutions: Assume Breach, Design for Safety
These security challenges create opportunities for defensive architectures and tools:
- Defense-in-Depth App Architectures: MCP servers are isolated from potentially malicious user input (even by human review of external text input)
- MCP Security Solutions: especially for servers that touch sensitive data
- Productize LLM Security Research: solutions or models that package structured prompting, privilege separation, and adversarial training for open weight models
- AI-Native Identity and Access Management: control which AI agents can execute specific actions on behalf of users, with fine-grained permissions and audit trails for both human and AI-initiated actions
Opportunity 3: MCP Configuration and Deployment Friction
Setting up an MCP server in Claude requires users to configure JSON files, which inhibits mainstream adoption. Just like users don't need to know what database powers their favorite app, they shouldn't have to set up the MCP servers behind AI-native apps. Furthermore, the open nature of MCP means anyone can publish a server. This is a problem for non-technical users who struggle to evaluate how to safely select and host MCP servers. For developers, even though MCP is model agnostic, different models may call MCP servers differently, meaning that prompts, tools, and resources may need to be tuned for different models.
Potential Solutions: Making MCP Invisible Infrastructure
The solution isn’t only to make MCP servers easier to deploy, but also to make them completely invisible in applications:
- MCP Host Applications: safely embedding MCP for delightful, high quality, focused user experiences
- Tooling for MCP: making MCP servers portable across models and hosts, plus testing and debugging frameworks
- MCP servers-as-a-service: like Twilio and Plaid productized highly valuable services, we see opportunity for servers built for AI- orchestrated workflows
Reach Us at Cowboy VC
We have a lot more thoughts on what will be possible when the MCP-driven ecosystem continues to mature, and about the missing puzzle pieces to make it happen. If you’re building or thinking about any of the potential solutions above, please reach out to arman@cowboy.vc or aileen@cowboy.vc. The opportunities above are not exhaustive - and we'd love to connect with you to share perspectives or learn about what you are building!
For Further Reading
If you’re interested in learning more check out these great pieces:
- MCP Explained: The New Standard Connecting AI to Everything
- The Security Risks of Model Context Protocol (MCP)
- A Deep Dive Into MCP and the Future of AI Tooling
- Why MCP Is Mostly Bullshit
- Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions
- A Survey of Agent Interoperability Protocols
Heading 1
Heading 2
Heading 3
Heading 4
Heading 5
Heading 6
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Block quote
Ordered list
- Item 1
- Item 2
- Item 3
Unordered list
- Item A
- Item B
- Item C
Bold text
Emphasis
Superscript
Subscrinolkm;pvdidunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in represdasc
