A deeper look into using MCP in the enterprise

A universal "USB-C" for AI?

Some usage considerations and current limitations

Background

As discussed previously, we can think of MCP much like a USB‑C port for AI agents: it allows your system and models to grow modularly. This brings several benefits:

  • No retraining needed for new capabilities: The model just needs to know how and where to access new tools.
  • Decoupled evolution: External tools can be updated separately from the model’s or agent’s logic.
  • Extensibility: New tools can be easily added and integrated.

Essentially, MCP offers a flexible, universal framework for tool-enabled agentic workflows.

Major changes across the versions

The initial MCP specification was released by Anthropic in November 2024.

OAuth 2.1 framework support was added in the March 2025 revision, together with the move from SSE to Streamable HTTP.

The latest revision as of writing is June 2025, which removes JSON-RPC batching support and adds further security enhancements.
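Under the hood, every revision of the protocol exchanges JSON-RPC 2.0 messages; note that with batching removed in the June 2025 revision, each message on the wire is a single JSON object rather than an array. A minimal sketch of a `tools/call` request and a matching response (the tool name, arguments, and result values are illustrative):

```python
import json

# A JSON-RPC 2.0 request an MCP client might send to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"location": "Singapore"},
    },
}

# The server replies with a result carrying content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "31°C, partly cloudy"}],
        "isError": False,
    },
}

# Messages are serialised as plain JSON on the wire — one object
# per message, since batching (JSON arrays) is no longer supported.
wire = json.dumps(request)
```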

Implications for usage in the enterprise

To better understand the current limitations of MCP in an enterprise setting, we need to look at it from a few angles: deployment patterns, tool categories, connection methods, and usage scenarios.

Deployment Patterns

We previously spoke about some of the common ways MCP can be deployed. Let’s briefly recap them here:

1. Local

In this setup, tools are executed within the same runtime or environment as the agent. This is ideal for scenarios where:

  • The tools are simple scripts or processes that don’t need to be replicated or shared
  • You require low latency
  • The tool is tied to the lifecycle of the application (does not need to be available when the application is not)

Example: An agent that uses a filesystem tool that enables scanning and reading of documents in your local folder in order to summarise them.
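As a sketch of the local pattern (plain Python rather than the MCP SDK; the function names are illustrative): the tool lives in the same process as the agent, so invocation is a direct function call with no network hop, and the tool disappears when the application exits.

```python
import pathlib
import tempfile

# Hypothetical local "filesystem" tool: runs in the same process as
# the agent, so it shares the application's lifecycle and privileges.
def read_documents(folder: str) -> dict[str, str]:
    """Return the text of every .txt file in `folder`."""
    return {
        p.name: p.read_text()
        for p in pathlib.Path(folder).glob("*.txt")
    }

def summarise(text: str, max_chars: int = 50) -> str:
    """Stand-in for a model call: truncate the text to a 'summary'."""
    return text[:max_chars]

# Demo against a throwaway folder.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "notes.txt").write_text("MCP decouples tools from models.")
    docs = read_documents(d)
    summaries = {name: summarise(body) for name, body in docs.items()}
```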

2. Pull-remote-run-local

In this hybrid pattern, tool definitions are pulled from a remote registry, but executed locally. Useful when:

  • Central management of tools is needed for trust and reliability
  • The tools need to be updated more frequently than the application

Example: A curated tool registry provides updated SQL execution tools, but all code is sandboxed and executed on the client machine.

3. Remote

Tools are abstracted entirely behind remote servers, decoupling tool execution from model hosting. This is relevant when:

  • The tool requires significant compute that may not be available on the agent machine
  • The tool code is proprietary
  • The tool serves multiple users concurrently

Example: A remote financial forecasting tool used by agents to generate business insights.

This table summarises the deployment patterns outlined above:

  Pattern                  Tool definition    Execution   Typical use case
  Local                    Local              Local       Simple scripts tied to the application's lifecycle
  Pull-remote-run-local    Remote registry    Local       Centrally managed tools updated more often than the application
  Remote                   Remote server      Remote      Compute-heavy, proprietary, or multi-user tools

(Source: https://keyconf.dev/2025/assets/files/takashi-norimatsu-keycloak-meets-ai.pdf)

Tool categories

Generally, all tools fall into one of two categories:

1. Actionable Tools

These tools perform actions as directed by the agent:

  • send_email(subject, body)
  • create_ticket(project, summary)
  • execute_sql(query)

2. Retrieval Tools

These tools fetch external information for context:

  • search_docs(query)
  • lookup_customer(customer_id)
  • get_weather(location)

Tools that process and return information would also count as retrieval tools (e.g. calculate_square_root(number)).

Connection Methods

MCP currently supports a few connection methods:

1. Stdio

  • Simple and reliable for local tools
  • Input/output via standard input/output streams
  • Easy to debug

2. Server-Sent Events (SSE) [Deprecated]

  • Enables streaming responses
  • Useful for long-running tools or real-time data sources
  • Requires persistent HTTP connections and dual channels for bidirectional communication, which can be problematic in some enterprise networks
  • This is currently being deprecated in favour of Streamable HTTP

3. Streamable HTTP

  • Only needs a single /mcp endpoint, vs the separate /sse and /messages endpoints of the old transport
  • Can dynamically switch between one-off and continuous streaming connections
  • Enables bidirectional communication over a single channel
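The dynamic switching can be sketched as a single handler behind /mcp that inspects the Accept header: a plain JSON body for one-off calls, or a Server-Sent Events stream for long-running ones. This is a simplified illustration of the idea, not the full transport:

```python
import json
from typing import Iterator, Union

def handle_mcp(accept: str, rpc: dict) -> Union[str, Iterator[str]]:
    """One /mcp route: single JSON response for simple calls,
    an SSE event stream for long-running ones."""
    if "text/event-stream" in accept:
        def stream() -> Iterator[str]:
            # Continuous streaming: emit SSE frames as work progresses
            # (the progress payloads here are illustrative).
            for status in ("partial", "done"):
                payload = {"jsonrpc": "2.0", "id": rpc["id"],
                           "result": {"status": status}}
                yield f"data: {json.dumps(payload)}\n\n"
        return stream()
    # One-off request/response: a single JSON body on the same route.
    return json.dumps({"jsonrpc": "2.0", "id": rpc["id"],
                       "result": {"ok": True}})

one_off = handle_mcp("application/json", {"id": 1})
streamed = list(handle_mcp("text/event-stream", {"id": 2}))
```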

Limitations and Mitigations

After looking at where MCP can be used and via what methods, we can then take a look at areas where there may be risks. Many of MCP's current limitations involve security and scalability issues. We discuss some of the more common issues and recommend mitigation measures for each.

As stdio is more suited for prototyping, we’ll omit it from consideration here.

On the security side, the more common topics are:

1. Tool trust

Many MCP servers today offer tools for either the local or pull-remote-run-local modes of deployment. This means the agent decides whether to use a tool based on its description alone, with no verification that the code is free of malicious behaviour or that the source remains trustworthy.

As you can imagine, this opens up the attack surface significantly. The risk is especially pronounced with coding assistants, which typically need filesystem access to read and understand your code while also executing with the same privileges as the user.

Mitigations:

  • Validate tool code using cryptographic hashes or digital signatures to guarantee integrity.
  • Verify tool sources through signed registries and version pinning, preventing “rug pull” scenarios similar to MITM attacks.
  • Sanitise tool descriptions using prompt shields and semantic filtering to guard against hidden instructions.
  • Run tools in sandboxed or containerised environments with restricted permissions.

In the enterprise context, this could be in the form of having a platform that provides a sandboxed code execution environment as well as a managed tool repository that implements the above mitigation measures. Agents could then be limited to only have access to these approved tools and sources.
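A minimal sketch of version pinning with cryptographic hashes (the registry format is hypothetical): the client refuses to run tool code whose digest no longer matches the pinned value, which is exactly the silent swap a "rug pull" relies on.

```python
import hashlib

# Hypothetical pinned registry: tool name -> expected SHA-256 of its code,
# recorded when the tool was reviewed and approved.
PINNED = {
    "execute_sql": hashlib.sha256(b"def execute_sql(query): ...").hexdigest(),
}

def verify_tool(name: str, code: bytes) -> bool:
    """Reject tool code whose digest differs from the pinned hash."""
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(code).hexdigest() == expected

# The approved code passes; silently swapped code does not.
ok = verify_tool("execute_sql", b"def execute_sql(query): ...")
tampered = verify_tool("execute_sql", b"def execute_sql(query): leak(query)")
```

In practice the registry would carry digital signatures rather than bare hashes, so that the registry itself cannot be tampered with undetected.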

2. Implementation of OAuth

For remote tools, having MCP servers acting as both authorisation and resource servers also poses another set of risks.

Mitigations:

Several methods to mitigate these problems are listed in the MCP documentation on Security Best Practices; generally they involve:

  • Disabling token passthrough
  • Having a dedicated OAuth authorisation server
  • Having dynamic client registration and explicit re-consent

This could be implemented by requiring MCP servers to use a central IdP service for authentication, and leveraging a compatible APIM solution that supports Streamable HTTP to act as a proxy. The APIM can enforce strict policies such as token validation, scope checks, and rate limiting, while centralising authentication across all MCP services.
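As a sketch of the kind of policy such a proxy could enforce (the claim values and scope names are illustrative; a real deployment would first validate the JWT's signature against the central IdP):

```python
# Illustrative decoded token claims, as they might look after the
# APIM layer has validated the token signature against the IdP.
claims = {"sub": "agent-42", "aud": "mcp-server",
          "scope": "tools:read tools:call"}

def authorise(claims: dict, audience: str, required_scope: str) -> bool:
    """Enforce audience and scope checks before forwarding to the MCP
    server. Checking `aud` is what blocks token passthrough: a token
    minted for a different service must not be accepted here."""
    if claims.get("aud") != audience:
        return False
    return required_scope in claims.get("scope", "").split()

allowed = authorise(claims, "mcp-server", "tools:call")
blocked = authorise(claims, "mcp-server", "tools:admin")
```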

3. Excessive write permissions for tools

MCP systems may allow sensitive or destructive operations to be executed fully autonomously, posing high risk if the AI's behaviour is compromised or misaligned.

Mitigations:

  • Apply workflow designs that require explicit human confirmation for high-risk actions, such as database deletions or file writes.
  • Provide transparency to users, such as clear indicators that the AI is invoking a tool or attempting an external operation, together with avenues for intervention if something unexpected occurs.
  • Implement rollback capabilities so that if a tool misbehaves, the system or data can easily be reverted to a previous state.

Human-in-the-loop patterns are perfectly viable if the situation calls for it, and tools do not need to be fully automated if they carry significant risk. By incorporating HITL mechanisms, organisations can maintain a critical layer of oversight in high-stakes actions, ensuring that automated systems do not inadvertently cause harm.
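A human-in-the-loop gate can be as simple as a wrapper that routes high-risk tool calls through a confirmation callback before execution (the tool names and risk list are illustrative):

```python
from typing import Callable

# Illustrative list of tools considered destructive enough to gate.
HIGH_RISK = {"execute_sql", "delete_records", "write_file"}

def guarded_call(tool_name: str, tool: Callable[..., str],
                 confirm: Callable[[str], bool], **kwargs) -> str:
    """Run low-risk tools directly; ask a human before high-risk ones."""
    if tool_name in HIGH_RISK and not confirm(f"Allow {tool_name}({kwargs})?"):
        return "blocked: user declined"
    return tool(**kwargs)

# Demo with a stubbed confirmation callback that declines everything.
decline = lambda prompt: False
result = guarded_call("execute_sql",
                      lambda query: f"ran: {query}",
                      confirm=decline,
                      query="DROP TABLE users")
# Low-risk tools bypass the gate entirely.
weather = guarded_call("get_weather",
                       lambda location: f"sunny in {location}",
                       confirm=decline,
                       location="SG")
```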

For scalability, some common issues are:

1. State and connection management

MCP is not stateless, and stateful systems are typically hard to load balance and scale. Prior to the March 2025 revision, MCP used SSE for remote deployments, and SSE brings its own host of scalability problems.

Mitigation:

  • Adopt stateless design patterns where context is stored separately from processing
  • Adopt stateless Streamable HTTP instead of SSE
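Separating context from processing can look like this sketch: session state lives in an external store keyed by the session identifier each request carries (an in-memory dict stands in for something like Redis here), so any replica behind a load balancer can serve any request.

```python
# External session store, keyed by session id. In production this
# would be Redis or similar, shared by all server replicas.
store: dict[str, list[str]] = {}

def handle(session_id: str, message: str) -> int:
    """Stateless handler: load context, process, write context back.
    No state lives in-process, so any replica can run this."""
    history = store.get(session_id, [])
    history.append(message)
    store[session_id] = history
    return len(history)

# Two calls for the same session could land on different replicas
# and still see consistent context.
handle("sess-1", "tools/list")
turns = handle("sess-1", "tools/call")
```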

2. Tool paralysis and excessive token use

Agents may be overwhelmed by an excessive number of tool definitions (tool overload), leading to context/prompt bloat and slower decision-making.

Mitigations:

  • Consider the overall ecosystem or platform design, such as multi-agent architectures that employ patterns such as agents-as-tools
  • Specialise agents for specific tasks rather than a general do-it-all agent
  • Implement MCP routing or pruning that filters appropriate tools for the situation
  • Use centralised orchestration to manage discovery and context injection to avoid clutter
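A naive version of tool pruning: score each tool description against the task and expose only the top k definitions, keeping the prompt small. Keyword overlap is a deliberate simplification; production routers often use embeddings or a dedicated routing model.

```python
# Illustrative tool catalogue: name -> description.
TOOLS = {
    "send_email": "send an email with a subject and body",
    "execute_sql": "run a SQL query against the warehouse",
    "get_weather": "fetch the current weather for a location",
    "search_docs": "search internal documentation",
}

def prune_tools(task: str, tools: dict[str, str], k: int = 2) -> list[str]:
    """Keep only the k tool names most relevant to the task,
    scored by keyword overlap with each tool's description."""
    task_words = set(task.lower().split())
    scored = sorted(
        tools,
        key=lambda name: -len(task_words & set(tools[name].lower().split())),
    )
    return scored[:k]

# Only the pruned subset would be injected into the agent's context.
selected = prune_tools("run a sql query to count orders", TOOLS)
```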

Other alternatives to MCP

Not everything needs to be MCP: just because you're building agents doesn't mean you can't use APIs anymore. There are other options too, such as webhooks for asynchronous communication and message queues for lightweight messaging. OpenAI's original function calling is also a good alternative.

It is important to understand which technology is better suited for your use case, and whether you’re using MCP just to follow the hype.

Conclusion

Ultimately, MCP remains an evolving protocol. As its adoption grows and more real-world use cases emerge, ongoing improvements will address its current limitations and adapt it to new challenges. As highlighted above, local execution (typically via coding assistants) is the more common use case today, which has prompted further thinking and rising concern about the vulnerabilities introduced by giving coding assistants or tools elevated access to a user's development environment.

While this article doesn’t cover every potential gap or scenario, it aims to provide a deeper understanding of the key considerations for implementing MCP in an enterprise context. By weighing the benefits against the potential challenges, one can make informed decisions about whether MCP is the right fit for their needs and how best to integrate it into their workflows.