Ricardo Rodrigues

From Developer Laptops to Isolated Containers — Enterprise MCP Infrastructure with MCPNest

The Problem

The MCP ecosystem is growing fast. Anthropic, Microsoft, Google, AWS, and Cloudflare are all publishing official MCP servers. Developers are connecting AI tools — Claude, Cursor, Windsurf — to databases, internal APIs, GitHub, and business systems.

The infrastructure for doing this at an individual level is mature. The infrastructure for doing this at an enterprise level does not exist.

In most engineering teams today, MCP servers run on developer laptops via npx. This means:

  • Credentials stored in JSON files on individual machines
  • No isolation between the MCP server and the host system
  • No central visibility into what is running
  • No clean offboarding when a developer leaves
  • No audit trail of what tools were called or what data was accessed

This is not a theoretical risk. It is the default state of every engineering team that has adopted MCP tooling without a governance layer.

What We Built

The MCPNest Orchestrator is the infrastructure layer that fixes this.

Instead of running MCP servers locally, the Orchestrator manages isolated Docker containers on central infrastructure. Developers deploy from the workspace dashboard. The AI client connects to the MCPNest Gateway, which authenticates the request, checks the tool allowlist, and proxies to the hosted container.

Nothing changes in the developer workflow. Everything changes in visibility and control.


The Architecture

Claude / Cursor / Windsurf
  ↓ Bearer mng_xxx (per-member token)
MCPNest Gateway (Next.js on Vercel)
  ↓ Auth + allowlist check + audit log
MCPNest Orchestrator (FastAPI on Hetzner, Nuremberg EU)
  ↓ Docker socket
MCP Bridge Container
  ↓ stdio
MCP Server (npx package)

The Gateway handles authentication and authorisation. The Orchestrator handles the container lifecycle. The Bridge handles protocol translation between HTTP and stdio.
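
The real Gateway is a Next.js API route, so the following Python sketch is only an illustration of the order of checks it performs per request; the helper names (lookup_member, forward_to_orchestrator, write_audit_log) are hypothetical, not the real implementation:

# Illustrative sketch of the Gateway's per-request checks.
# Helper functions here are hypothetical stand-ins.
import time

def handle_tool_call(request):
    # 1. Authenticate: resolve the per-member bearer token (mng_xxx)
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    member = lookup_member(token)          # hypothetical DB lookup
    if member is None:
        return {"status": 401, "error": "invalid or revoked token"}

    # 2. Authorise: the tool must be on the member's allowlist
    tool = request.json["tool"]
    if tool not in member.allowed_tools:
        return {"status": 403, "error": f"tool {tool!r} not allowed"}

    # 3. Proxy to the hosted container and record a metadata-only audit entry
    started = time.monotonic()
    response = forward_to_orchestrator(member, request.json)  # hypothetical
    write_audit_log(
        member=member.id, tool=tool,
        latency_ms=int((time.monotonic() - started) * 1000),
        status=response["status"],
    )
    return response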


The MCP Bridge

Most MCP servers speak stdio — they expect to be launched as a subprocess and communicate via stdin/stdout. The Gateway speaks HTTP.

The Bridge is a FastAPI application that wraps any stdio MCP server as an HTTP endpoint. It launches the MCP server as a subprocess on startup, performs the MCP handshake, and exposes two HTTP endpoints:

  • POST /tools/list — returns the server tool definitions
  • POST /tools/call — proxies a tool call to the subprocess

The Bridge image pre-installs 12 MCP server packages to avoid cold-start delays:
@modelcontextprotocol/server-filesystem
@modelcontextprotocol/server-github
@modelcontextprotocol/server-postgres
@modelcontextprotocol/server-memory
@modelcontextprotocol/server-everything
@modelcontextprotocol/server-sequential-thinking
@modelcontextprotocol/server-brave-search
@modelcontextprotocol/server-slack
@modelcontextprotocol/server-puppeteer
@upstash/context7-mcp
@notionhq/notion-mcp-server
mcp-server-sqlite

The server to run is passed via the MCP_COMMAND environment variable. The Bridge reads it on startup using shlex.split, launches the subprocess, performs the MCP initialize handshake, and begins listening on port 8080.

If the subprocess closes stdout before the handshake completes — which happens when a server requires credentials that were not provided — the Bridge raises a RuntimeError and the container exits. This is by design: servers that require credentials must have them configured before deploy.
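
Condensed, the Bridge's core loop looks roughly like the following sketch. It assumes the MCP stdio transport's line-delimited JSON-RPC framing and omits the locking, timeouts, and id bookkeeping a real implementation needs:

# Condensed sketch of the Bridge. Assumes line-delimited JSON-RPC over stdio
# (the MCP stdio transport); locking, timeouts, and error handling omitted.
import itertools
import json
import os
import shlex
import subprocess

from fastapi import FastAPI

app = FastAPI()
proc = None
_ids = itertools.count(1)

def rpc(method: str, params: dict) -> dict:
    """Send one JSON-RPC request to the subprocess and read one reply."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    line = proc.stdout.readline()
    if not line:
        # stdout closed before a reply, typically missing credentials
        raise RuntimeError("MCP server exited before completing the handshake")
    return json.loads(line)

@app.on_event("startup")
def start_mcp_server():
    global proc
    # e.g. MCP_COMMAND="npx -y @modelcontextprotocol/server-memory"
    cmd = shlex.split(os.environ["MCP_COMMAND"])
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    rpc("initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                       "clientInfo": {"name": "mcp-bridge", "version": "0.1"}})
    # Notifications carry no id and expect no reply
    proc.stdin.write(json.dumps({"jsonrpc": "2.0",
                                 "method": "notifications/initialized"}) + "\n")
    proc.stdin.flush()

@app.post("/tools/list")
def tools_list():
    return rpc("tools/list", {})

@app.post("/tools/call")
def tools_call(body: dict):
    # body: {"name": "<tool>", "arguments": {...}}
    return rpc("tools/call", body)

Served with uvicorn on port 8080, this is enough for the Gateway to reach /tools/list and /tools/call over HTTP.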


Container Security

Every container the Orchestrator starts is hardened at the Docker level.

cap_drop ALL
Removes all Linux capabilities from the container. The MCP server process has zero elevated permissions. It cannot bind to privileged ports, modify network interfaces, or perform any operation that requires elevated access.

no-new-privileges
Prevents the process from gaining additional privileges via setuid binaries or file capabilities. Even if the MCP server package contains a setuid binary, it cannot use it.

Resource limits
CPU and memory limits are enforced per container based on the resource profile (small / medium / large). This prevents runaway processes and resource exhaustion that could affect other containers on the same host.

Dedicated Docker network
All hosted containers run on a dedicated Docker network (mcpnest_hosted), isolated from the host network. Containers cannot reach the host network directly.

Non-root user
The Bridge runs as a non-root bridge user inside the container. The MCP server subprocess inherits this user context.

Credential isolation
Credentials are passed as environment variables to the container at deploy time. They are never logged by the Orchestrator (explicitly excluded from logs), never stored in plain text in the database, and never visible to developers after submission.
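
Taken together, these settings map onto a single containers.run call in the Python Docker SDK. A sketch with illustrative values; the image tag, limits, and credentials are placeholders, not the real ones:

# Sketch of how the hardening options map onto the Python Docker SDK.
# Image tag, limits, and environment values are illustrative.
import docker

client = docker.from_env()
container = client.containers.run(
    "mcpnest/bridge:latest",               # hypothetical image tag
    detach=True,
    cap_drop=["ALL"],                      # no Linux capabilities at all
    security_opt=["no-new-privileges"],    # setuid binaries cannot escalate
    mem_limit="512m",                      # per-profile memory cap
    nano_cpus=1_000_000_000,               # 1.0 CPU for a "medium" profile
    network="mcpnest_hosted",              # dedicated Docker network
    user="bridge",                         # non-root user baked into the image
    environment={                          # credentials injected, never logged
        "MCP_COMMAND": "npx -y @modelcontextprotocol/server-github",
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<injected at deploy time>",
    },
)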


Credential Management

Not all MCP servers start cleanly without configuration. Servers like the GitHub MCP server require a Personal Access Token. The PostgreSQL server requires a connection string. Slack requires a bot token and team ID.

Without credential management, these servers fail opaquely — the subprocess closes stdout before the handshake completes and the container exits with an error that is difficult to diagnose.

We solved this with a required_env_vars column in the mcp_allowed_images table. Each entry defines the credentials a server needs, with a key, label, type (text or password), and help text.
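
For illustration, an entry for the GitHub server might be shaped like this; the field names follow the description above, but the actual schema may differ:

[
  {
    "key": "GITHUB_PERSONAL_ACCESS_TOKEN",
    "label": "GitHub Personal Access Token",
    "type": "password",
    "help": "Token used by the GitHub MCP server to call the GitHub API."
  }
]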

When a developer clicks Deploy on a server that has required credentials, a modal opens before the deploy request is sent. The modal collects the values — clearly labelled, with help text, password inputs masked. Only when all required fields are filled can the developer proceed.

The values are passed directly to the Orchestrator in the deploy request body and injected as environment variables into the container. They are never written to disk on the host and never appear in Orchestrator logs.

When a developer's access is revoked, their Bearer token is invalidated at the Gateway level. The hosted containers continue running for other team members unaffected. The credentials inside the containers are not exposed.


Deploy Flow

The full flow from click to running container:

  1. Developer clicks Deploy in the workspace Hosted tab
  2. If the server has required_env_vars, a modal collects the credentials before the request is sent
  3. A POST /api/hosted request goes to the Next.js API route with the server slug and credentials
  4. The API route creates an instance record in Supabase with status starting and forwards the deploy request to the Orchestrator
  5. The Orchestrator checks if the Bridge image is already present locally using docker.images.get() — skips pull if found, pulls only if not cached
  6. The Orchestrator starts the container with cap_drop ALL, no-new-privileges, resource limits, and the credentials as environment variables
  7. The Orchestrator updates the instance record in Supabase with status running and the assigned host port
  8. The Next.js API route returns the instance_id to the client
  9. The workspace UI opens a terminal modal that polls /api/hosted/{id}/logs every 3 seconds
  10. When the container reaches RUNNING + HEALTHY state (Docker health check passing), the modal updates automatically and stops polling

The real-time deploy console shows the container logs as they stream. The developer sees the MCP server startup output — including any errors — in real time. There is no need to SSH into the server or use the Docker CLI.
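
From the client's side, the flow above reduces to one POST and a polling loop. A sketch using the requests library, where the base URL, request body, and response fields are assumptions rather than the real API shape:

# Client-side sketch of the deploy flow. The base URL, request body,
# and response fields are illustrative assumptions.
import time
import requests

BASE = "https://app.mcpnest.io"              # hypothetical base URL
HEADERS = {"Authorization": "Bearer mng_xxx"}

# Steps 1-3: deploy with the server slug plus any required credentials
resp = requests.post(f"{BASE}/api/hosted", headers=HEADERS, json={
    "slug": "github",
    "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."},
})
instance_id = resp.json()["instance_id"]

# Steps 9-10: poll the logs endpoint every 3 seconds until RUNNING + HEALTHY
while True:
    status = requests.get(f"{BASE}/api/hosted/{instance_id}/logs",
                          headers=HEADERS).json()
    print(status.get("logs", ""))
    if status.get("state") == "running" and status.get("healthy"):
        break
    time.sleep(3)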

The Pull-Skip Optimisation

Early in development, every deploy attempt pulled the Bridge image from the registry, even though the image is local-only and was never pushed to Docker Hub. This caused every deploy to fail with a pull access denied error.

The fix was simple but important: check if the image exists locally before attempting a pull.

# cli (Docker client), req, and logger come from the surrounding
# Orchestrator handler; docker is the Python Docker SDK.
try:
    # Skip the pull when the image is already cached on the host
    cli.images.get(req.image)
    logger.info("Image %s already present locally, skipping pull", req.image)
except docker.errors.ImageNotFound:
    try:
        # Pull in an executor thread so the event loop is not blocked
        await asyncio.get_event_loop().run_in_executor(
            None, lambda: cli.images.pull(req.image)
        )
    except Exception as exc:
        raise RuntimeError(f"Failed to pull image {req.image!r}: {exc}") from exc

This also makes deploys faster — there is no network round-trip for images that are already cached on the host.


What This Gives Enterprise Teams

Isolation
MCP servers run in sandboxed containers. A misconfigured or compromised server cannot access the host system or interfere with other containers.

Central credential management
No credentials on developer machines. No JSON files with GitHub tokens or database connection strings on laptops. Clean offboarding — when a developer leaves, their Gateway token is revoked and their access is gone immediately.

Audit trail
Every tool call is logged at the Gateway level — member, server, tool name, latency, HTTP status, timestamp. No inputs or outputs are stored. GDPR safe by design.
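
For illustration, a single audit entry might look like the following; the field values are hypothetical and the exact schema is not published, but the fields match the list above:

{
  "member": "usr_4f2a",
  "server": "github",
  "tool": "search_repositories",
  "latency_ms": 412,
  "status": 200,
  "timestamp": "2025-01-15T10:32:07Z"
}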

Consistent infrastructure
Every developer on the team deploys from the same verified catalog. No drift between machines, no manual setup, no "works on my machine" issues with MCP server configuration.

Visibility
Real-time deploy logs during startup. Per-instance log viewer for ongoing debugging. Health monitoring with automatic RUNNING + HEALTHY detection. Terminate with full cleanup — container removed, database record updated.


Current State

The Orchestrator runs on a dedicated Hetzner server in Nuremberg, EU. All state is managed in Supabase (Frankfurt, EU). The Gateway runs on Vercel's edge network.

12 verified MCP servers are available in the hosted catalog today: filesystem, GitHub, PostgreSQL, Notion, Context7, Slack, SQLite, Brave Search, Puppeteer, Memory, Sequential Thinking, Everything.

SOC 2 Type 1 is planned. Self-hosting via Docker is available for teams that require on-premise deployment.


Conclusion

The MCP ecosystem is at the same inflection point that npm was in 2012 — growing fast, with tooling that works well for individuals and not yet for enterprises.

The gap is not in the protocol. The protocol is solid. The gap is in infrastructure, governance, and auditability.

Running MCP servers on developer laptops is the path of least resistance. It works until it doesn't — until a developer leaves with credentials, until a server is misconfigured and accesses something it shouldn't, until a security team asks what tools the AI has been calling and nobody can answer.

The MCPNest Orchestrator is the infrastructure layer that closes that gap.

mcpnest.io
