
Build vs. Buy: The Hidden Costs of Custom MCP Servers

Custom MCP servers cost $50K–$150K per integration per year. Learn when to build vs. buy MCP infrastructure, and how managed MCP providers eliminate OAuth and maintenance overhead.

Uday Gajavalli · 14 min read

The honeymoon phase of the Model Context Protocol is officially over.

If you're an engineering leader weighing whether to build your own Model Context Protocol servers or adopt a managed MCP provider, here's the short answer: build only when the MCP server is the product. For everything else—connecting your AI agents to Salesforce, Slack, Jira, HubSpot, and the next 20 tools on your roadmap—the math overwhelmingly favors buying.

MCP standardized how clients talk to tools. It did not standardize OAuth, token refresh, tenant isolation, rate limits, pagination, or prompt-injection risk. That's production work, not demo work.

The adoption curve tells the story of why this matters right now. MCP server downloads grew from roughly 100,000 in November 2024 to over 8 million by April 2025, with over 5,800 servers available. One year after launch, MCP has become the universal standard for connecting AI agents to enterprise tools, with 97M+ monthly SDK downloads and backing from Anthropic, OpenAI, Google, and Microsoft. Major platform vendors have piled in: Salesforce added MCP support in June 2025, AWS announced a preview of AWS MCP Server in November 2025, and CData launched Connect AI as a managed MCP platform. The protocol question is settled. The hard part now is operating it safely at scale.

During that honeymoon phase, developers everywhere were spinning up local SQLite databases and read-only weather APIs, connecting them to Claude Desktop, and watching the magic happen. But there is a massive gap between a local prototype and production-grade infrastructure. Organizations are realizing that while MCP standardizes the interface to the LLM, it does absolutely nothing to normalize the underlying vendor API.

This post breaks down the real numbers behind custom MCP infrastructure, the architectural nightmare of OAuth token management, and a pragmatic framework for deciding when to build versus when to buy.

The True Cost of a Custom MCP Server

Building and maintaining a single custom MCP server in-house typically costs between $50,000 and $150,000 per integration per year, a figure that comes up repeatedly across industry analyses. It covers development, QA, monitoring, and ongoing support: your team must build each integration, maintain it, adapt to vendor API changes, and handle production issues. An MCP server doesn't reduce that cost; it adds to it.

Here's what people miss: MCP standardizes the protocol, not the underlying API. Your MCP server still needs to handle every vendor's pagination style, rate limit strategy, error format, and field naming convention. Salesforce returns nextRecordsUrl for pagination. HubSpot uses paging.next.after. Jira uses startAt and maxResults. MCP wraps a tool interface around these differences—someone still has to write and maintain the code that deals with each one.
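To make that concrete, here's a hedged sketch of what normalizing just those three pagination styles looks like behind one cursor interface. The field names match each vendor's documented responses; the adapter shape and function names are illustrative, not anyone's actual implementation.

```typescript
// Three vendors, three pagination contracts, one normalized shape.
// Illustrative only: real adapters also handle rate limits and errors.
type Page = { records: unknown[]; nextCursor: string | null };

function parseSalesforcePage(body: any): Page {
  // Salesforce: a `done` flag plus a `nextRecordsUrl` to follow
  return {
    records: body.records ?? [],
    nextCursor: body.done ? null : body.nextRecordsUrl ?? null
  };
}

function parseHubSpotPage(body: any): Page {
  // HubSpot: the cursor lives at paging.next.after
  return {
    records: body.results ?? [],
    nextCursor: body.paging?.next?.after ?? null
  };
}

function parseJiraPage(body: any): Page {
  // Jira: offset-based startAt/maxResults/total
  const startAt = body.startAt ?? 0;
  const count = (body.issues ?? []).length;
  const hasMore = startAt + count < (body.total ?? 0);
  return {
    records: body.issues ?? [],
    nextCursor: hasMore ? String(startAt + count) : null
  };
}
```

Every one of those branches is code someone on your team writes, tests, and updates when the vendor changes it.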

The initial build is the cheap part. Basic MCP server development for a single-system connector typically requires 2–3 weeks of engineering time for design, implementation, and testing. That's one integration. Now multiply it.

| Cost center | What it really includes | Why teams miss it |
| --- | --- | --- |
| Tool surface | Resource discovery, descriptions, schemas, read/write scoping | The demo only exposes three happy-path tools |
| Auth | OAuth setup, refresh, reauth, secret storage, consent issues | It looks like a one-time setup until tokens start dying |
| Execution | Pagination, retries, rate-limit handling, weird vendor errors | Vendor docs usually stop right before the annoying part |
| Safety | Prompt-injection boundaries, least privilege, approval flows | Teams assume MCP is just transport, not a security boundary |
| Operations | Logs, audits, tenant isolation, incident response | Nobody budgets for on-call when planning the proof of concept |

The 3-year total cost of ownership is brutal. A complex integration might cost $80,000 to build but require $15,000 annually for support. This means the 3-year TCO is closer to $125,000—per integration. For a platform with 10 integrations, you're approaching $1M in pure integration overhead before your team writes a single line of product code. Industry data shows custom integrations require a maintenance tax of 15–30% of the initial development cost every year. And many businesses spend 60% of their time troubleshooting APIs.
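The arithmetic in that paragraph, spelled out. Note the 10x multiplier assumes every integration is as complex as the $80K example, which is why it lands above the article's more conservative $1M figure.

```typescript
// 3-year TCO for one complex integration, then a naive platform-wide
// projection. Figures are the ones quoted in the text above.
const initialBuild = 80_000;   // one complex integration, upfront
const annualSupport = 15_000;  // yearly maintenance per integration
const years = 3;

const tcoPerIntegration = initialBuild + annualSupport * years;
const platformTco = tcoPerIntegration * 10; // if all 10 were this complex

console.log(tcoPerIntegration); // 125000
console.log(platformTco);       // 1250000
```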

The teams that get burned are not bad engineers. They are usually good engineers who underestimated the surface area. One Salesforce MCP server is not one piece of work. It is OAuth app management, tenant onboarding, permission modeling, schema curation, retries, docs, monitoring, support runbooks, and a reauth UX for the day a token dies in front of a customer.

Warning

If the integration does not directly differentiate your product, every sprint you spend building it is a tax on your roadmap. You are spending engineering cycles maintaining API plumbing instead of improving your agent's cognitive architecture.

We've covered the full economics of this in our post on building integrations in-house.

The Authentication Trap: Why OAuth Breaks Custom MCPs

Authentication is where custom MCP projects go to die. It looks straightforward in a demo—hardcode an API key, call the endpoint, wrap it in a tool. Then you try to ship it to real users with real OAuth flows, and suddenly you're maintaining a distributed credentials infrastructure.

Remote MCP auth is a two-hop problem:

  • Client → MCP server: The MCP spec says servers act as OAuth 2.1 resource servers and must validate access tokens and audience.
  • MCP server → underlying SaaS API: Your MCP server still has to manage Salesforce access tokens, Google refresh tokens, Slack bot scopes, service accounts, and API keys for older vendors.

Every custom MCP server ends up implementing its own OAuth on top of the existing OAuth or token model required by the underlying API. In practice, that drags you right back into an M×N maintenance problem—the exact thing MCP was supposed to eliminate, reappearing at the auth layer.

The security data is sobering. Astrix Security analyzed over 5,200 open-source MCP server implementations and found a systemic problem: the vast majority of servers (88%) require credentials, but over half (53%) rely on insecure, long-lived static secrets such as API keys and Personal Access Tokens. Meanwhile, modern authentication methods like OAuth sit at just 8.5% adoption.

Why so few use OAuth? Because implementing it properly for MCP is a massive engineering undertaking. The MCP authorization specification requires PKCE (Proof Key for Code Exchange) for all clients—both public and confidential—to mitigate authorization code interception attacks. You also need metadata discovery (RFC 8414), dynamic client registration, token introspection, and refresh token rotation.

But the reality is bleak: most SaaS APIs don't support Dynamic Client Registration (DCR). One recent security analysis found only 4% of tested authorization servers supported it. Because the vendor doesn't support DCR, your custom MCP server has to act as a proxy, holding a single static OAuth Client ID with the vendor. This forces your team to build and maintain a highly secure token vault, map the MCP client's session to the vendor's OAuth state (get this wrong and you're exposed to CSRF attacks), and handle every vendor's token lifecycle quirks.
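For a sense of scale, PKCE is actually the easy slice of those requirements. A minimal S256 verifier/challenge pair per RFC 7636 looks like this in Node; everything around it (metadata discovery, DCR fallbacks, token vaulting, state mapping) is where the real engineering lives. This is a sketch, not a complete authorization client.

```typescript
// Minimal PKCE pair generation (RFC 7636, S256 method) using Node's crypto.
import { randomBytes, createHash } from 'node:crypto';

function makePkcePair() {
  // 32 random bytes -> 43-character base64url verifier,
  // inside the spec's required 43-128 character range
  const verifier = randomBytes(32).toString('base64url');
  // code_challenge = BASE64URL(SHA-256(code_verifier))
  const challenge = createHash('sha256').update(verifier).digest('base64url');
  return { verifier, challenge };
}
```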

```mermaid
flowchart TD
    A["AI Agent calls<br>MCP tool"] --> B{"Token valid?"}
    B -->|Yes| C["Execute API call"]
    B -->|No| D{"Refresh token<br>available?"}
    D -->|Yes| E["Refresh token flow"]
    D -->|No| F["Re-authenticate<br>user via OAuth"]
    E --> G{"Refresh<br>succeeded?"}
    G -->|Yes| C
    G -->|No| F
    F --> H["Consent screen +<br>PKCE + Callback"]
    H --> I["Store new<br>tokens securely"]
    I --> C
    C --> J["Return result<br>to agent"]

    style F fill:#f96,stroke:#333
    style H fill:#f96,stroke:#333
```

That diagram shows the flow for a single provider. Now imagine maintaining this for Salesforce (which uses a connected app with specific scopes and instance URLs), HubSpot (which has a different refresh token rotation policy), and Zendesk (which supports both OAuth and API token auth depending on the account type). Each one has edge cases the documentation doesn't mention.

Here's the token refresh snippet that shows up in conference talks:

```typescript
async function getUsableAccessToken(connection: Connection) {
  // refresh proactively if the token expires within 30 seconds
  if (connection.expiresAt < Date.now() + 30_000) {
    const refreshed = await refreshVendorToken(connection.refreshToken)

    if (refreshed.error === 'invalid_grant') {
      await markNeedsReauth(connection.id)
      throw new Error('Reauth required')
    }

    await saveConnectionTokens(connection.id, refreshed)
    return refreshed.accessToken
  }

  return connection.accessToken
}
```

The part they skip is encrypted storage, refresh locking, retry jitter, vendor-specific refresh params, audit logs, consent changes, and the UI flow that gets a user reconnected without your support team joining a Zoom at 11 PM. We've covered the gory details in our deep dive on OAuth at scale.
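As one example of what "refresh locking" means in practice, here is an illustrative single-process sketch: concurrent callers for the same connection share one in-flight refresh instead of racing the vendor's token endpoint (vendors that rotate refresh tokens will invalidate the loser of that race). A real deployment needs a distributed lock plus the persistence shown above; all names here are hypothetical.

```typescript
// Single-process refresh lock: a Map of in-flight refreshes keyed by
// connection ID. Illustrative only; production needs a distributed lock.
const inflight = new Map<string, Promise<string>>();

async function refreshWithLock(
  connectionId: string,
  doRefresh: () => Promise<string>
): Promise<string> {
  const existing = inflight.get(connectionId);
  if (existing) return existing; // piggyback on the refresh already running

  const refresh = (async () => {
    // small random jitter so retries across callers don't synchronize
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 100));
    return doRefresh();
  })().finally(() => inflight.delete(connectionId));

  inflight.set(connectionId, refresh);
  return refresh;
}
```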

You need to build and maintain every part of the OAuth flow, including login interfaces, consent screens, secure credential storage, and token management. It also increases your exposure to potential security and compliance risks since you're responsible for the entire authentication system.

Danger

OpenAI explicitly warns that one MCP can be used to exfiltrate sensitive data through another MCP after a prompt-injection attack. Anthropic warns that custom connectors give Claude the ability to access and potentially modify data in third-party services. Good auth helps, but it does not remove bad trust boundaries.

What Is a Managed MCP Provider?

A managed MCP provider is a hosted platform that gives your AI agents access to third-party SaaS APIs through pre-built MCP server endpoints—handling authentication, tool generation, rate limiting, and credential management so your team doesn't have to.

The key properties:

  • Pre-built tool definitions for hundreds of SaaS APIs, exposed as MCP-compatible endpoints
  • Automatic OAuth lifecycle management, including token storage, refresh, and rotation
  • Dynamic tool generation from API definitions rather than hand-coded tool lists
  • Self-contained endpoint URLs that embed authentication context, requiring zero client-side configuration
  • Built-in rate limiting, pagination handling, and error normalization across providers

The managed MCP market has matured quickly:

| Provider | Public positioning | What they're betting on |
| --- | --- | --- |
| Composio | Hosted MCP servers plus managed auth for 100+ tool coverage | Tool-first agent execution |
| Nango | Auth, refresh, connection management, then MCP/tool calling on top | Auth-first connectivity |
| CData Connect AI | Managed MCP with semantic enterprise data access | Enterprise context and governance |

Each platform takes a different angle. Some focus on OAuth infrastructure, others on tool catalogs. The important thing is understanding what your team actually needs.

The Trade-offs of Buying

Radical honesty requires acknowledging that managed providers are not magic bullets. You are trading internal toil for vendor dependency:

  • Abstraction Leaks: When you use a managed platform, you're bound by their schema. If a vendor releases a new, highly specific API endpoint, you may have to wait for the managed provider to support it—unless the provider offers custom endpoint passthrough.
  • Uptime Dependency: Your AI agent's reliability is now tied to a third party. If the managed MCP provider experiences an outage, your agent cannot execute tools.
  • Data Privacy: While most providers don't store payload data, the data still traverses their infrastructure. This requires rigorous security reviews and SOC 2 compliance checks.
  • Vendor Lock-in: Ask hard questions about exportability, revocation, data handling, logs, pricing under load, and whether the provider actually owns the ugly parts or just gives you a nicer dashboard.

Tip

When you evaluate vendors, count auth surfaces instead of just integrations. One Google Workspace integration can mean OAuth consent, refresh, scope drift, admin approval, webhooks, rate limits, and domain-specific edge cases.

Evaluating the ROI: When to Build vs. When to Buy

The decision framework is simpler than most articles make it.

Build your own MCP server when:

  • The integration itself is your core product differentiation
  • You need deep, custom control over exactly how an API is exposed to your models
  • You're connecting to a proprietary internal system with no public API
  • You require air-gapped security—serving defense contractors or highly regulated healthcare providers where routing traffic through a multi-tenant cloud provider is a non-starter
  • You only need one or two integrations and already have platform engineers on call

Buy a managed MCP provider when:

  • You need to support 5+ third-party integrations and growing
  • Your AI agents need access to standard SaaS tools (CRM platforms like Attio, HRIS, ticketing, and so on)
  • OAuth lifecycle management is eating engineering time
  • You want to ship agent features in weeks, not quarters
  • You want predictable costs instead of absorbing the $50K–$150K annual maintenance per integration

The data backs this up. Organizations using established MCP platforms report 3x faster deployment compared to building custom AI integration infrastructure from scratch. This acceleration comes from leveraging pre-built authentication, monitoring, and governance capabilities.

The infrastructure skills gap makes it even more urgent. McKinsey found that 47% of C-suite leaders said their organizations were developing AI tools too slowly, with talent skill gaps as the top reason. Only 14% of leaders say they have the right talent to meet their AI goals. Skills gaps are worsening: 61% cite shortages in managing specialized infrastructure, up from 53%. More than half of AI projects have been delayed or canceled within the last two years, citing complexities with AI infrastructure.

The pattern is clear: teams that treat integration infrastructure as undifferentiated commodity and buy it, ship faster. Teams that treat it as a fun engineering challenge end up maintaining it instead of building their actual product. Your best engineers should be optimizing LLM prompts, improving retrieval-augmented generation pipelines, and building better agent reasoning—not debugging why HubSpot's refresh token stopped working after a scope change.

How Truto Eliminates MCP Engineering Overhead

Truto approaches the MCP infrastructure problem with a core architectural bet: an MCP server should be a configuration artifact sitting on top of your integration layer, not a bespoke service you hand-code for every SaaS app.

Documentation-Driven Tool Generation

Most MCP platforms require hand-coded tool definitions for each integration. Truto takes a different approach: tools are derived dynamically from two existing data sources—the integration's resource definitions (what API endpoints exist) and documentation records (human-readable descriptions and JSON Schema definitions for each method).

A tool only appears if it has a corresponding documentation entry. This acts as both a quality gate and a curation mechanism—you don't accidentally expose a half-baked endpoint to an LLM. Badly described tools are worse than missing tools because they teach the model the wrong contract.

When an AI agent requests the available tools (tools/list), Truto generates them on the fly:

```json
{
  "name": "list_all_hub_spot_contacts",
  "description": "Fetch a list of all contacts from HubSpot.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "limit": { "type": "string", "description": "The number of records to fetch" },
      "next_cursor": { "type": "string", "description": "Always send back exactly the cursor value you received." }
    }
  }
}
```

Tool names are generated as descriptive identifiers: list_all_hub_spot_contacts, create_a_jira_issue, get_single_salesforce_deal_by_id. Schemas are automatically enhanced—list methods get pagination parameters, individual methods get id injection. This happens at request time, not build time, so any documentation update is reflected immediately.
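From the client's side, that tools/list call is a plain JSON-RPC 2.0 request to the MCP endpoint. A minimal sketch, where the endpoint URL is a placeholder and `listTools` is an illustrative helper rather than any SDK function:

```typescript
// Build and send a JSON-RPC 2.0 tools/list request to a remote MCP endpoint.
function buildToolsListRequest(id: number = 1) {
  return { jsonrpc: '2.0' as const, id, method: 'tools/list' as const };
}

async function listTools(endpoint: string) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json, text/event-stream'
    },
    body: JSON.stringify(buildToolsListRequest())
  });
  const { result } = await res.json();
  return result.tools; // [{ name, description, inputSchema }, ...]
}
```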

Self-Contained, Secure MCP Endpoints

Authentication is handled entirely outside of your application code.

When you create an MCP server through Truto, you get back a single URL. That URL contains a cryptographic token that encodes which account to use, what tools to expose, and optionally when the server expires. No additional client configuration needed.

```
# The URL is everything - paste it into Claude, ChatGPT, Cursor, or any MCP client
https://api.truto.one/mcp/a1b2c3d4e5f6...
```

Raw tokens are never stored. They're HMAC-hashed before being persisted to KV storage, so even a datastore compromise doesn't leak credentials. The token is returned exactly once—in the creation response. You simply pass this URL to your AI agent, and the URL alone is enough to authenticate and serve tools, bypassing the need for your team to build an OAuth proxy.
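The hash-before-store pattern is worth seeing in miniature. This sketch uses Node's crypto and is the general technique, not Truto's actual code; the secret and function names are illustrative. The key properties are that only the HMAC digest is persisted and that verification is constant-time.

```typescript
// Persist only the HMAC of the URL token; verify presented tokens in
// constant time so a compromised datastore leaks nothing usable.
import { createHmac, timingSafeEqual } from 'node:crypto';

function hashToken(rawToken: string, serverSecret: string): string {
  return createHmac('sha256', serverSecret).update(rawToken).digest('hex');
}

function verifyToken(
  presented: string,
  storedHash: string,
  serverSecret: string
): boolean {
  const candidate = Buffer.from(hashToken(presented, serverSecret), 'hex');
  const expected = Buffer.from(storedHash, 'hex');
  return candidate.length === expected.length && timingSafeEqual(candidate, expected);
}
```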

Fine-Grained Access Control

You can scope MCP servers by method type and resource tags. Want a read-only server for a reporting agent? Set methods: ["read"] and only get and list tools are exposed. Want to limit a support agent to ticket-related tools? Use tag filtering:

```typescript
const res = await fetch('https://api.truto.one/integrated-account/ia_123/mcp', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.TRUTO_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    name: 'Support Agent MCP',
    config: {
      methods: ['read', 'create'],
      tags: ['support']
    }
  })
})

const server = await res.json()
// server.url -> use this as the remote MCP endpoint
```

Method categories can be combined: ["read", "custom"] includes get, list, and any custom methods (like search or export) but excludes create, update, and delete. The system validates at creation time that the filter combination produces at least one tool—you can't accidentally create a server with zero capabilities.
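An illustrative version of that category filter, assuming the membership rules described above (read covers get and list; custom covers anything non-CRUD) and including the zero-tool validation. The types and helpers are hypothetical, not Truto's implementation.

```typescript
// Map each method to its category, then keep only tools whose category
// is allowed. Creation fails fast if the filter yields zero tools.
type Tool = { name: string; method: string };

const CRUD = new Set(['get', 'list', 'create', 'update', 'delete']);

function categoryOf(method: string): string {
  if (method === 'get' || method === 'list') return 'read';
  if (CRUD.has(method)) return method; // create | update | delete
  return 'custom';                     // e.g. search, export
}

function filterTools(tools: Tool[], allowed: string[]): Tool[] {
  const kept = tools.filter((t) => allowed.includes(categoryOf(t.method)));
  if (kept.length === 0) throw new Error('Filter produces zero tools');
  return kept;
}
```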

Built-In TTL and Automatic Cleanup

Security teams hate permanent API keys. AI agents often only need temporary access to a system to complete a specific workflow.

MCP servers can be created with an expires_at timestamp—useful for giving a contractor agent access for a week, or generating short-lived servers for automated workflows. This isn't just a database flag. Expiration is enforced at the KV layer (entries auto-expire) and backed by scheduled Durable Object alarms that remove database records and KV entries when the TTL elapses. Once the alarm fires, the MCP server ceases to exist. No stale tokens lingering.

For higher-security environments, you can enable dual-authentication mode that requires both the MCP URL token and a valid API key in the Authorization header—useful when the URL might appear in logs or config files.

Zero Integration-Specific Code

This is Truto's core architectural bet. When the AI agent calls a tool (tools/call), the caller's flat arguments object is automatically split into query parameters and body parameters based on the dynamically generated schemas. Execution is delegated to a generic pipeline that handles the actual HTTP request to the vendor—the same code path handles pagination, auth refresh, rate limits, and error normalization for every integration.

There is no "HubSpot handler" or "Salesforce handler" in the execution path. When Truto adds a new integration, MCP tools are automatically available if documentation exists. We've written more about why this zero-code architecture matters.

The honest trade-off: because MCP tool calls go through Truto's proxy API, which passes through the raw vendor API rather than the unified API, responses use the vendor's native schema. If you need normalized fields across providers, you'd use the unified API layer separately. MCP tools give you full API surface area at the cost of per-vendor response formats.

What to Do Next

Don't evaluate MCP infrastructure with a hello-world checklist. Evaluate it with production questions:

  1. Audit your integration roadmap. Count the number of third-party APIs your agents need to access in the next 12 months. If it's more than 5, the build path will almost certainly cost more than buying.

  2. Estimate your OAuth surface area. For each integration, determine the auth method (OAuth 2.0, API key, basic auth, custom). If more than half use OAuth, you're signing up for significant credential lifecycle management.

  3. Calculate opportunity cost. Take the engineering weeks required to build and maintain MCP servers, multiply by your fully loaded engineering cost, and compare that to the product features you won't ship during that time.

  4. Ask the hard operational questions. What is your reauth path when refresh tokens die? Can you revoke one tenant instantly? How do you audit tool calls per account? How do you stop prompt injection in one tool from turning into cross-tool exfiltration?

  5. Evaluate managed providers against actual requirements. Not all platforms are equal. Some focus on auth infrastructure, others on tool breadth, others on governance. Match the platform to your biggest pain point.

  6. Start with a managed provider for standard SaaS integrations. Reserve custom builds for internal systems and proprietary APIs where no managed option can help. This hybrid approach gives you speed where it matters and control where you need it.

The teams shipping AI agents fastest aren't writing MCP infrastructure code. They're letting managed platforms handle the plumbing and putting their engineering effort where it actually differentiates their product. That's not a sales pitch—it's an engineering resource allocation decision, and the math is clear. Just don't confuse a working demo with a solved architecture.

FAQ

How much does it cost to build and maintain a custom MCP server?
A basic MCP server for a single integration takes 2–3 weeks of engineering time for initial development. Including ongoing maintenance, vendor API changes, OAuth management, and QA, the total cost runs $50,000 to $150,000 per integration per year. The 3-year TCO for a complex integration approaches $125,000, and a platform with 10 integrations can hit $1M in pure integration overhead.
Why is OAuth the hardest part of building custom MCP servers?
Remote MCP auth is a two-hop problem: client to MCP server, then MCP server to the underlying SaaS API. Each custom MCP server must implement its own OAuth on top of the vendor's existing auth model, including PKCE, token refresh, secure credential storage, and consent screens. Only 8.5% of open-source MCP servers use OAuth—most fall back to insecure static API keys because proper implementation is so complex.
When should I build my own MCP server instead of buying?
Build your own MCP server when the integration is your core product differentiation, you're connecting to proprietary internal systems, you require air-gapped security for regulatory compliance, or you only need one or two integrations and already have platform engineers on staff. For standard SaaS integrations at scale, buying is almost always faster and cheaper.
What is a managed MCP provider?
A managed MCP provider is a hosted platform that gives AI agents access to third-party SaaS APIs through pre-built MCP server endpoints, handling authentication, dynamic tool generation, rate limiting, pagination, and credential management automatically so your team can focus on product features instead of integration infrastructure.
How does Truto handle MCP server authentication and tool generation?
Truto issues a self-contained cryptographic token URL that handles all auth automatically—raw tokens are HMAC-hashed before storage. Tools are generated dynamically from Truto integration documentation entries, acting as a quality gate so only well-documented endpoints are exposed. Servers can be scoped by method type and resource tags, and optional TTL expiration ensures no stale access remains.
