If you are building AI software for enterprise clients, the model is usually not the bottleneck. Data access is.
The real challenge is getting the model to interact with proprietary systems without turning your stack into a giant security and operations liability. Model Context Protocol (MCP) is a strong interface standard for this, but the hard part is not the protocol itself. The hard part is production design.
In this post, I will walk through five practical MCP deployment patterns, when to use each one, and how to organize code so examples stay runnable instead of becoming blog-only snippets.
The framing comes from a pattern I keep seeing in enterprise AI programs:
- quarter 1: teams ship a promising MCP prototype
- quarter 2: security and platform reviews surface hidden design debt
- quarter 3: the team either hardens successfully or stalls in exceptions and rework
The difference is usually not model quality. It is architecture choices made in the first few weeks.
## Executive Summary
If you need one recommendation: start with read-only MCP, prove value in weeks, then add write paths and on-prem bridge patterns only when business requirements justify the added controls.
What executives should optimize for:
- Time to first production value (not architectural completeness)
- Controlled blast radius as capabilities expand
- Auditability from day one (so compliance does not block scale)
- A migration path from pilot design to enterprise-grade operations
## What You Are Actually Building
When teams say “we are deploying an MCP server,” what they usually need is a small platform:
- MCP server interfaces (tools and resources)
- Connectors into internal systems (DBs, APIs, files, search)
- Identity and authorization (OAuth, scopes, service accounts)
- Runtime controls (validation, allowlists, isolation)
- Observability and governance (audit logs, quotas, alerts)
The architecture matters because MCP introduces delegated power. The model can ask for actions. Your job is to make sure those actions stay inside explicit and enforceable boundaries.
## Code Organization: Keep Posts and Runnable Code Separate
I recommend this structure in this repo:
```
posts/
  <post-slug>/
    index.qmd
    assets/
examples/
  mcp-readonly-starter/
  mcp-multitenant-saas/
  mcp-write-with-approval/
  mcp-onprem-bridge/
  mcp-compliance-audit/
```

This avoids mixing Quarto content with runtime dependencies.

- `posts/...` is for prose, diagrams, and lightweight assets.
- `examples/...` is for runnable code, tests, infra manifests, and scripts.
- Each runnable example includes `scripts/seed.ts` and supports `STORAGE_MODE=file` for restart-safe demo state.
For this article, each pattern maps to one folder in examples/ so readers can move from concept to implementation immediately.
## Pattern Selection Matrix
Use this to choose a starting point fast:
| Pattern | Best for | Time to value | Operational risk | Compliance posture |
|---|---|---|---|---|
| Single-tenant read-only | First deployment, retrieval use cases | Fast | Low | Basic-to-moderate |
| Multi-tenant SaaS | Shared control plane across customers | Medium | Medium | Moderate |
| Write with approval | Side-effecting workflows | Medium | Medium-high | Moderate-high |
| On-prem bridge | Customer-controlled infrastructure | Medium-slow | Medium | High |
| Compliance-first | Regulated sectors, audit-heavy buyers | Slowest | Lowest long-term | Highest |
## Pattern 1: Single-Tenant Read-Only MCP
When to use: first production deployment, retrieval-heavy workloads, low ops budget.
Example folder: examples/mcp-readonly-starter
```
mcp-readonly-starter/
  src/
    auth/scope_guard.ts
    tools/get_invoice_status.ts
    observability/audit_log.ts
  scripts/seed.ts
  tests/security/input_validation.test.ts
```

Most teams should start here. In practice, this is the pattern that gets through security review fastest and still creates visible business value.
A narrow read-only surface with strict input schemas gets you real usage data without opening side-effect risk on day one.
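A read-only tool with strict, unknown-field-rejecting validation can be sketched as follows. The tool name, invoice id format, and in-memory lookup table are illustrative assumptions, not the repo's actual implementation:

```typescript
// Sketch of a read-only MCP tool handler with strict input validation.
// `getInvoiceStatus` and the `inv_` id format are illustrative assumptions.

type InvoiceStatus = { invoiceId: string; status: "open" | "paid" | "void" };

// Stand-in for a real read-only data source.
const DEMO_DB: Record<string, InvoiceStatus> = {
  inv_1001: { invoiceId: "inv_1001", status: "paid" },
};

function getInvoiceStatus(rawInput: unknown): InvoiceStatus {
  // Reject anything that is not exactly { invoiceId: string }.
  if (typeof rawInput !== "object" || rawInput === null) {
    throw new Error("invalid_input");
  }
  const keys = Object.keys(rawInput);
  if (keys.length !== 1 || keys[0] !== "invoiceId") {
    throw new Error("unknown_field_rejected");
  }
  const { invoiceId } = rawInput as { invoiceId: unknown };
  if (typeof invoiceId !== "string" || !/^inv_[a-zA-Z0-9]+$/.test(invoiceId)) {
    throw new Error("invalid_invoice_id");
  }
  const record = DEMO_DB[invoiceId];
  if (!record) throw new Error("not_found");
  return record; // read-only: a lookup, no side effects
}
```

The important property is that the handler can only ever return data it was explicitly built to expose; malformed or over-broad requests fail closed.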
Tradeoff: fastest path to production and lowest incident risk, but limited automation upside.
## Pattern 2: Multi-Tenant SaaS MCP
When to use: one control plane serving multiple customers with strict isolation guarantees.
Example folder: examples/mcp-multitenant-saas
```
mcp-multitenant-saas/
  src/
    middleware/tenant_context.ts
    middleware/authz.ts
    data/tenant_router.ts
  scripts/seed.ts
  tests/authz/cross_tenant_access.test.ts
```

Teams usually move to this pattern after the first customer asks for isolation guarantees, or after the second customer makes ad hoc tenant logic unmaintainable.
The non-negotiable rule is centralized tenant enforcement. If tenant checks are scattered per tool, drift is almost guaranteed over time.
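Centralized enforcement can be as simple as one chokepoint that every data access passes through. A minimal sketch, with illustrative names and an in-memory row set standing in for a real data layer:

```typescript
// Sketch of centralized tenant enforcement: one function scopes every
// query to the authenticated tenant, so individual tools never implement
// (and never forget) their own tenant checks. Names are illustrative.

type TenantContext = { tenantId: string };

function withTenantScope<T extends { tenantId: string }>(
  ctx: TenantContext,
  rows: T[],
): T[] {
  // The single enforcement point: filter by the caller's tenant, always.
  return rows.filter((r) => r.tenantId === ctx.tenantId);
}

// Stand-in for rows a tool might fetch from shared storage.
const rows = [
  { tenantId: "t_acme", doc: "invoice summary" },
  { tenantId: "t_other", doc: "another tenant's data" },
];
```

Because every tool calls the same scoping function, a missing tenant check is a visible code-review failure rather than a silent cross-tenant leak.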
Tradeoff: better infrastructure efficiency and faster customer onboarding, but more identity and policy complexity.
## Pattern 3: Write-Capable MCP with Approval Gates
When to use: tools can trigger financial, operational, or irreversible side effects.
Example folder: examples/mcp-write-with-approval
```
mcp-write-with-approval/
  src/
    tools/create_refund_request.ts
    tools/execute_refund.ts
    policy/risk_rules.ts
  scripts/seed.ts
  tests/security/scope_escalation.test.ts
```

This is where many programs either mature or accumulate operational risk.
The key design decision is splitting intent from execution. A model can create a request, but execution should pass through explicit policy and approval gates.
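The intent/execution split can be sketched as two separate tools joined by an approval gate. Function names and the in-memory request store are illustrative assumptions; in practice the gate would be a human reviewer or policy engine:

```typescript
// Sketch of splitting intent from execution for a write-capable tool.
// The model may call createRefundRequest; executeRefund refuses to run
// until an explicit approval has been recorded. Names are illustrative.

type RefundRequest = { id: string; amountCents: number; approved: boolean };

const pending = new Map<string, RefundRequest>();

function createRefundRequest(id: string, amountCents: number): RefundRequest {
  const req = { id, amountCents, approved: false };
  pending.set(id, req);
  return req; // intent recorded; nothing has executed yet
}

function approve(id: string): void {
  const req = pending.get(id);
  if (!req) throw new Error("unknown_request");
  req.approved = true; // in production: a human or policy-engine decision
}

function executeRefund(id: string): string {
  const req = pending.get(id);
  if (!req) throw new Error("unknown_request");
  if (!req.approved) throw new Error("approval_required");
  pending.delete(id);
  return `refunded ${req.amountCents} cents`;
}
```

The point of the two-step shape is that the model's blast radius stops at "request created"; the side effect requires a second, independently authorized call.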
Tradeoff: safer automation for high-impact actions, at the cost of additional workflow latency.
## Pattern 4: On-Prem Bridge MCP
When to use: customer data must remain in customer-controlled infrastructure.
Example folder: examples/mcp-onprem-bridge
```
mcp-onprem-bridge/
  src/
    control-plane-server.ts
    bridge-agent.ts
    bridge/http_handlers.ts
    workflows/job_store.ts
  scripts/seed.ts
  scripts/demo.sh
```

This pattern becomes necessary when customers say yes to AI features but no to data egress.
A pull-based bridge agent is usually easier to deploy in enterprise networks than inbound callbacks. It also simplifies security review by reducing exposed inbound surface.
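The pull-based shape can be sketched with an in-memory queue standing in for the control plane; in a real deployment the poll would be an outbound HTTPS long-poll with agent credentials. All names here are illustrative assumptions:

```typescript
// Sketch of a pull-based bridge agent. The agent reaches *out* to the
// control plane for work, so the customer network exposes no inbound
// surface. The in-memory queue stands in for an outbound HTTPS poll.

type Job = { id: string; tool: string };

// Stand-in for the control plane's job queue.
const controlPlaneQueue: Job[] = [{ id: "j1", tool: "get_invoice_status" }];

function pollForJob(): Job | undefined {
  // In production: authenticated outbound long-poll, not a shared array.
  return controlPlaneQueue.shift();
}

function runAgentOnce(results: string[]): void {
  const job = pollForJob();
  if (!job) return; // nothing queued; sleep and poll again
  // Execute the job against local systems, then push the result outbound.
  results.push(`${job.id}:done`);
}
```

Because all connections originate inside the customer network, the firewall conversation becomes "allow one outbound HTTPS destination" instead of "open an inbound port", which is usually a much shorter security review.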
Tradeoff: strongest data residency story, but highest integration and support burden.
## Pattern 5: High-Compliance MCP
When to use: regulated environments with strict audit and policy requirements.
Example folder: examples/mcp-compliance-audit
```
mcp-compliance-audit/
  src/
    server-http.ts
    tools/list_audit_events.ts
    tools/get_control_status.ts
    policy/deny_by_default.ts
    observability/redaction.ts
  tests/audit/pii_redaction.test.ts
  scripts/seed.ts
  scripts/demo.sh
```

Some teams start here because their buyers require it. Others arrive here after the first audit questionnaire exposes gaps.
This pattern is policy-first: deny by default, enforce scoped authorization everywhere, and make logs useful without leaking secrets.
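A deny-by-default check is a small amount of code with a large posture impact: access exists only where a rule explicitly grants it. Role and tool names below are illustrative assumptions:

```typescript
// Sketch of deny-by-default authorization: a (role, tool) pair is allowed
// only if an explicit policy rule grants it. Absence of a rule means deny.
// Role and tool names are illustrative.

const POLICY: Record<string, Set<string>> = {
  auditor: new Set(["list_audit_events", "get_control_status"]),
};

function isAllowed(role: string, tool: string): boolean {
  // No matching rule, unknown role, unknown tool: all fall through to deny.
  return POLICY[role]?.has(tool) ?? false;
}
```

The inversion matters for audits: instead of proving that every dangerous path is blocked, you only have to enumerate and justify the short list of grants.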
Tradeoff: strongest procurement and audit posture, but slower initial delivery.
## Security Baseline Across All Patterns
Security guidance is often presented as one long checklist. In practice, sequencing matters more than volume.
The mistake I see most often is trying to implement every control at once and shipping none. A phased rollout keeps momentum while still moving toward a defensible production posture.
### Security Rollout by Phase
| Phase | Controls to implement |
|---|---|
| Pilot (must-have) | OAuth + scopes, strict input schemas, redacted audit logs |
| Pre-production | Secrets manager + rotation, allowlists, rate limits |
| Production hardening | Sandboxing for write/execute paths, anomaly detection, governance approvals for tool changes |
Across phases, these seven controls remain the baseline:
- OAuth 2.1 + PKCE for remote transports, with strict scope checks on every tool call
- Narrow tools with allowlists for paths, domains, and command options
- Strict schema validation with unknown-field rejection
- Centralized secrets handling with short-lived credentials and rotation
- Sandboxing for any write or execute path
- Structured redacted audit logs for every invocation
- Rate limits, quotas, and anomaly alerts per identity
These controls are especially important for MCP because prompt injection and tool abuse are expected failure modes, not edge cases.
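The last baseline control, per-identity rate limiting, can be sketched as a fixed-window counter. The window length and call limit are illustrative assumptions; production systems would typically use a shared store and a sliding window or token bucket:

```typescript
// Sketch of a per-identity fixed-window rate limiter. WINDOW_MS and
// MAX_CALLS are illustrative; real limits come from quota policy.

const WINDOW_MS = 60_000; // one-minute window
const MAX_CALLS = 5;      // calls allowed per identity per window

const counters = new Map<string, { windowStart: number; count: number }>();

function allowCall(identity: string, now: number): boolean {
  const c = counters.get(identity);
  if (!c || now - c.windowStart >= WINDOW_MS) {
    // New identity or expired window: start counting fresh.
    counters.set(identity, { windowStart: now, count: 1 });
    return true;
  }
  if (c.count >= MAX_CALLS) return false; // over quota: reject (and alert)
  c.count += 1;
  return true;
}
```

Keying the counter by identity (not by IP or globally) is what makes the resulting rejections and alerts attributable during an incident review.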
## A Minimal Code Slice
The core mechanics are simple.
```typescript
// Scope enforcement: reject any tool call missing the required scope.
export function requireScope(scopes: string[], required: string) {
  if (!scopes.includes(required)) {
    throw new Error(`forbidden: missing scope ${required}`);
  }
}
```

```typescript
// Strict input schema: unknown fields are rejected, not silently ignored.
import { z } from "zod";

const Input = z.object({
  invoiceId: z.string().regex(/^inv_[a-zA-Z0-9]+$/),
}).strict();
```

```typescript
// Outbound host allowlist: only pre-approved internal hosts are reachable.
const ALLOWED_HOSTS = new Set(["api.internal.example.com"]);

export function enforceHost(url: string) {
  const host = new URL(url).hostname;
  if (!ALLOWED_HOSTS.has(host)) throw new Error("host_not_allowed");
}
```

## Closing Thoughts
MCP is one of the most practical ways to make proprietary data useful in AI products. Production success has less to do with flashy demos and more to do with controlled capability rollout, isolation, and operating discipline.
If you want a durable rollout path, treat MCP like a product line, not a one-off integration: start with read-only value, then add power in layers as requirements become explicit.
Start with the read-only pattern, keep the tool surface narrow, and instrument everything. Add multi-tenant routing, approval workflows, on-prem bridging, and compliance-first controls only when business requirements are concrete.
That path is slower than shipping a generic “super tool” on day one, but it is much faster than recovering from the first incident.