
Best MCP Server for Fathom in 2026

Compare the best MCP servers for Fathom in 2026. Connect Fathom meeting transcripts to Claude, ChatGPT, Cursor, and LangChain agents without managing API keys or rate limits.

Uday Gajavalli · 16 min read

If you want to connect Fathom's meeting transcripts, summaries, and action items to Claude, ChatGPT, Cursor, or a custom LangChain agent, you have three realistic paths: an open-source MCP server you host yourself, a managed platform like Composio, or Truto's managed MCP infrastructure. Just as we explored with the best MCP servers for Slack, this post breaks down the real trade-offs so you can pick the one that fits your stack and risk tolerance.

At a glance:

| Option | Best for | What to know |
|---|---|---|
| Truto | Production apps, multi-tenant agents, shared teams | Hosted remote MCP, /tools endpoint, Fathom OAuth and API-key linking, method and tag scoping |
| Open-source Fathom MCP | One developer on one machine | Clone, install, build, and place a raw FATHOM_API_KEY in local MCP config. You still live under Fathom's 60-calls-per-minute limit |
| Composio | Teams already committed to its Tool Router model | Managed Fathom MCP, but the architecture is centered on Composio sessions and consumer keys — a real platform commitment |
| Zapier | Event-driven automations after meetings | Great for flows like New AI Summary → Run Agent, but not the same as giving an LLM a broader Fathom MCP tool surface |

Why Fathom Meeting Data Is a High-Value Source for AI Agents

Meetings are black holes of unstructured data. Fathom fixes the transcription part by automatically recording, transcribing, and summarizing video calls across Zoom, Google Meet, and Microsoft Teams. With 500,000+ users saving 6+ hours weekly on meeting administration, Fathom has proven its value. But having a transcript sitting in a Fathom dashboard is only half the battle. The real value is unlocked when you pipe that data directly into an AI agent.

When you connect Fathom to a Large Language Model via the Model Context Protocol (MCP), you transform a static transcript repository into an active participant in your workflows. Here are the primary ways engineering teams are putting this data to work:

  • Automated CRM Synchronization: An agent monitors Fathom for completed sales calls, extracts qualification criteria from the transcript, and updates the corresponding deal records in Salesforce or HubSpot. This eliminates manual data entry for account executives.
  • Pre-Meeting Context Briefings (RAG): A cron job triggers an agent 15 minutes before a scheduled client call. The agent fetches the previous Fathom meeting summary, cross-references the participants in your CRM, and sends a synthesized briefing directly to the account owner via Slack or Teams.
  • Cross-Meeting Thematic Summarization: Instead of reading individual call notes, a product manager uses an MCP-connected Claude instance to query, "Summarize the feedback on the new analytics dashboard across all user interviews from last week." The agent lists the relevant meetings, fetches the transcripts, and synthesizes a single product brief.
  • Action Item Routing: A background LangChain agent extracts action items from engineering syncs and automatically creates and assigns Jira or Linear tickets based on the context of the conversation.
  • IDE Context Retrieval: A developer using Cursor asks, "What did the client say about the database migration in yesterday's call?" The IDE pulls the exact transcript snippet without breaking flow.

Fathom's public API, one of the company's most-requested features, lets users take meeting data wherever they need it, not just within the Fathom app, whether they're individuals streamlining daily activities or partners building custom integrations.

The catch? Actually wiring Fathom's API into an AI agent requires more work than most teams expect. If you are new to the underlying protocol making this possible, read our complete in-depth guide on MCPs to understand how hosts, clients, and servers interact.

What Fathom's API Actually Looks Like

Before choosing an MCP server, you need to understand what you're working with. API keys are created at the user level, meaning your key can only access meetings recorded by you or shared with your Team. Even if you're an Admin, your API key does not provide access to other users' unshared meetings.

The API surface is straightforward but limited:

| Endpoint | What it does |
|---|---|
| GET /external/v1/meetings | List meetings with filters (date, team, participants) |
| GET /recordings/{id}/transcript | Get a specific meeting's transcript |
| GET /recordings/{id}/summary | Get the AI-generated summary |
| Webhooks | Real-time notifications for new meetings, transcripts, action items |

Authentication is header-based: X-Api-Key: YOUR_KEY. Partners can register an OAuth app to be eligible for a future marketplace listing as a Fathom-approved integration.
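To make the header-based auth concrete, here is a minimal request-builder sketch. The https://api.fathom.ai base URL is an assumption; confirm it against Fathom's API documentation before relying on it.

```javascript
// Build a request descriptor for a Fathom endpoint using X-Api-Key auth.
// Base URL is an assumption — verify against Fathom's API docs.
function buildFathomRequest(apiKey, path, params = {}) {
  const qs = new URLSearchParams(params).toString();
  return {
    url: `https://api.fathom.ai${path}${qs ? `?${qs}` : ''}`,
    headers: { 'X-Api-Key': apiKey }, // header-based authentication
  };
}

// Usage with fetch:
// const { url, headers } = buildFathomRequest(key, '/external/v1/meetings');
// const res = await fetch(url, { headers });
```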

Two things to keep in mind:

  1. API rate limiting is user-based (versus key-based) — the 60 calls per minute limit remains the same, but applies across all keys created by a single user. An AI agent making concurrent tool calls for transcripts and summaries can burn through that budget fast.
  2. Write operations (creating/editing meetings, transcripts, etc.) are not available in the Fathom API. This is a read-only API. Your agent can query and extract, but it can't push data back.
Info

Fathom does not provide real-time transcription. Summaries and transcripts are available only after the meeting ends and the recording is processed. Design your agent around post-meeting workflows, not live in-call intervention.
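If you do build on raw API keys, the 60-calls-per-minute budget is worth enforcing in code rather than hoping the agent behaves. Below is an illustrative sliding-window limiter, not production queuing; the clock is injectable so the behavior can be tested deterministically.

```javascript
// Sliding-window rate limiter sketch for Fathom's 60 req/min per-user budget.
// `now` is injectable for testing; in production pass Date.now (the default).
function createRateLimiter(limit = 60, windowMs = 60_000, now = Date.now) {
  const timestamps = [];
  return {
    // Returns true if a call may proceed, false if the minute budget is spent.
    tryAcquire() {
      const t = now();
      // Drop timestamps that have aged out of the window.
      while (timestamps.length && t - timestamps[0] >= windowMs) {
        timestamps.shift();
      }
      if (timestamps.length >= limit) return false;
      timestamps.push(t);
      return true;
    },
  };
}
```

Wrap every outbound Fathom call in `tryAcquire()` and queue (or delay) calls that return false, instead of letting the agent slam into a 429.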

Why Self-Hosting a Fathom MCP Server Is a Trap

There are several open-source Fathom MCP servers on GitHub. The most popular is matthewbergvinson/fathom-mcp. It's a Model Context Protocol server that integrates Fathom.video with AI coding assistants like Cursor, letting you query meeting transcripts, export recordings to markdown, search by participant, and manage webhooks.

The setup looks simple enough:

git clone https://github.com/matthewbergvinson/fathom-mcp.git
cd fathom-mcp
npm install
npm run build

Then you add it to your ~/.cursor/mcp.json:

{
  "mcpServers": {
    "fathom": {
      "command": "node",
      "args": ["/path/to/fathom-mcp/dist/index.js"],
      "env": {
        "FATHOM_API_KEY": "your-api-key-here"
      }
    }
  }
}

Here's where it falls apart:

Your API key lives in a plaintext config file. Anyone with access to that machine — or that dotfile accidentally committed to git — has full read access to your Fathom account. Distributing raw API keys across your engineering team's local machines violates basic security principles. As discussed in our analysis of MCP Server Security Risks, storing unencrypted vendor tokens in local text files exposes your organization to severe credential theft risks.

Rate limit handling is your problem. Fathom enforces a strict global rate limit of 60 requests per minute. LLMs are incredibly aggressive when executing tool calls. If an agent decides to list the last 10 meetings and then fetches transcripts for all of them concurrently, it will instantly hit a 429 error. At roughly four calls per useful meeting, fifteen concurrent flows can burn the whole minute budget. Most open-source servers don't implement backoff or queuing. They just fail.
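For contrast, here is roughly what the missing behavior looks like: a hedged sketch of retrying on 429 with exponential backoff, honoring a Retry-After header when the server sends one. The injectable `fetchFn` and `sleep` parameters exist for testability and are not part of any real server's API.

```javascript
// Retry a request with exponential backoff on HTTP 429 — the behavior
// most open-source Fathom MCP servers skip. Illustrative sketch only.
async function fetchWithBackoff(url, options = {}, {
  fetchFn = fetch,     // injectable for testing
  maxRetries = 4,
  sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms)),
} = {}) {
  let delayMs = 1000;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetchFn(url, options);
    if (res.status !== 429) return res;
    // Honor Retry-After (seconds) if present, else back off 1s, 2s, 4s, ...
    const retryAfter = Number(res.headers?.get?.('retry-after'));
    const waitMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : delayMs;
    await sleep(waitMs);
    delayMs *= 2;
  }
  return fetchFn(url, options); // final attempt; caller handles a lasting 429
}
```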

No OAuth support. The open-source servers use raw API keys exclusively. Fathom OAuth apps require HTTPS redirect URIs, so local development with http://localhost isn't possible. If you're building a multi-tenant product where each user connects their own Fathom account, you need to manage OAuth token lifecycles yourself — and Fathom's own docs explicitly say the in-memory token store is for demos only.

It only works locally. These are stdio-based MCP servers. They run as child processes on your machine. ChatGPT's custom MCP connectors require a remote server — local servers are not supported. Claude's connectors are also built around remote endpoints. Cursor is more flexible, but once you care about more than one human or one machine, remote HTTP is the sane default.

Warning

Open-source Fathom MCP servers are fine for individual use in Cursor. They're not viable for production agent workflows, multi-tenant products, or any scenario where security and uptime matter.

Composio: The Multi-Toolkit Alternative

Composio lets you securely connect AI agents and chatbots (Claude, ChatGPT, Cursor, etc.) with Fathom MCP or direct API to record meetings, transcribe calls, generate summaries, and retrieve meeting notes through natural language.

Composio's approach is different from Truto's. Rather than generating MCP tools from Fathom's API definition, it routes through a "Tool Router" that dynamically loads tools at inference time. With a standalone Fathom MCP server, agents can only access a fixed set of Fathom tools tied to that server. With the Composio Tool Router, agents can dynamically load tools from Fathom and many other apps based on the task at hand.

The trade-off is architectural coupling. You're adopting Composio's SDK, their authentication layer, and their execution model. That might be fine if Composio is already your agent infrastructure. If you're building on a different stack, adding Composio just for Fathom creates a dependency you'll feel later.

Truto: The Best Managed Fathom MCP Server for Production

A managed MCP server shifts the execution burden from your local machine or custom backend to hosted infrastructure. Truto acts as this layer, dynamically generating MCP tools from Fathom's API definition — the same pattern it uses across 250+ other integrations.

Dynamic tool generation. When you create a Fathom MCP server through Truto (in the UI or via the API), it doesn't ship a static list of tools. It reads Fathom's resource definitions and documentation at request time, then generates tool schemas on the fly with descriptive snake_case names like list_all_fathom_meetings and strict JSON schemas for both query parameters and request bodies. If Truto adds support for a new Fathom endpoint tomorrow, your MCP server picks it up automatically — no code changes, no redeployments.

Built-in rate limiting and pagination. Truto's proxy API layer handles Fathom's 60 req/min limit natively. When a list call returns a cursor, the response includes next_cursor with instructions that tell the LLM exactly how to paginate without mangling the token. This is a detail that sounds trivial until you've watched an LLM try to base64-decode a cursor and send back garbage.
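The pagination contract described above reduces to a simple loop: request a page, append results, and pass the cursor back verbatim. The `{ results, next_cursor }` shape follows the description above, and `getPage` stands in for whatever client call you use (Truto proxy, direct API, etc.).

```javascript
// Drain a cursor-paginated list endpoint. `getPage(cursor)` is a stand-in
// for your actual client call and is assumed to resolve to
// { results: [...], next_cursor: string | null }.
async function listAll(getPage) {
  const all = [];
  let cursor;
  do {
    const page = await getPage(cursor);
    all.push(...page.results);
    // Pass the cursor back exactly as received — never decode or modify it.
    cursor = page.next_cursor;
  } while (cursor);
  return all;
}
```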

Method and tag filtering for access control. In the Truto UI or via the API, you can create a read-only MCP server that only exposes list and get methods, or scope it to specific resource groups using tags. Give a sales agent access to meetings and summaries without exposing webhook management or other sensitive operations.

Native Fathom API shape. Truto's Proxy API is 1-to-1 with the underlying Fathom API, so you keep native fields instead of flattening everything into a lowest-common-denominator abstraction. There's no unified data model normalizing Fathom meetings into the same schema as Zoom or Fireflies transcripts. If you need cross-provider meeting normalization, you'd handle that in your agent logic.

sequenceDiagram
    participant Client as Claude / ChatGPT / Cursor
    participant Truto as Truto MCP Server
    participant Fathom as Fathom API

    Client->>Truto: tools/list (JSON-RPC)
    Truto->>Truto: Generate tool schemas<br>from Fathom resource definitions
    Truto-->>Client: Available tools<br>(list_meetings, get_transcript, etc.)

    Client->>Truto: tools/call<br>(list_all_fathom_meetings)
    Truto->>Fathom: GET /external/v1/meetings<br>(with rate limit handling)
    Fathom-->>Truto: Meeting data + cursor
    Truto-->>Client: Results + next_cursor

Brutal honesty: Truto does not remove Fathom's underlying constraints. The 60 RPM limit still exists. OAuth still has user-consent and token-lifecycle semantics. Fathom still works post-meeting rather than live. What Truto does is absorb the repetitive integration work around those constraints so your team can spend time on retrieval strategy, prompt design, and product behavior instead of auth plumbing.

Connecting Fathom via OAuth or API Key in Truto

Use API key when: you are building an internal, single-user, or operator-run workflow.

Use OAuth when: you are building a multi-user product, a customer-facing integration, or anything where per-user visibility matters.

Fathom supports both authentication methods. Here's how to wire up the API key path:

  1. Generate your Fathom API key. In Fathom, go to Settings > API Access and click Generate API Key.
  2. Connect in Truto. Create a new integrated account for the Fathom integration in your Truto environment. Paste the API key. Truto stores it encrypted — not in a plaintext JSON file on someone's laptop.
  3. Verify the connection. Truto validates the credentials against Fathom's API immediately. If the key is invalid, you'll know before you build anything on top of it.

For partner integrations where end users connect their own Fathom accounts, Truto also supports OAuth flows with its embeddable Link SDK — no custom redirect URI handling required on your end. Truto manages the token exchanges, refresh cycles, and secure storage. If Fathom revokes an API key, the Truto proxy catches the 401 Unauthorized response and immediately updates the integrated account status to needs_reauth. If you've configured webhooks in Truto, your application receives an immediate notification, allowing you to prompt the user to reconnect before the agent fails silently.
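A minimal sketch of reacting to that needs_reauth signal in your application. The event shape here is illustrative, not Truto's exact webhook payload; map the fields to whatever your webhook actually delivers.

```javascript
// React to an integrated-account status change event. The `event` shape
// (status, user_id) is a hypothetical example, not Truto's actual payload.
function handleAccountStatusEvent(event, notify) {
  if (event?.status === 'needs_reauth') {
    // Prompt the user to reconnect before the agent starts failing silently.
    notify({
      userId: event.user_id,
      message: 'Your Fathom connection expired. Please reconnect.',
    });
    return 'reauth_prompted';
  }
  return 'ignored';
}
```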

Warning

If you choose OAuth, design your tools around the dedicated transcript and summary endpoints from day one. Fathom's OAuth apps cannot use include_transcript or include_summary on the meetings list endpoint. You need to fetch that data through the recordings endpoints instead.

Tip

If your app needs fresh meeting data without wasteful polling, use webhooks. Fathom webhooks can include transcript, summary, and action-item data — usually a better ingestion pattern than having an agent ask "anything new?" every minute.
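A hedged sketch of what a webhook ingestion handler might look like. The payload field names (`recording`, `summary`, `action_items`) are illustrative assumptions; match them to the webhook you actually configure in Fathom.

```javascript
// Normalize an incoming meeting webhook payload for downstream processing.
// Field names are hypothetical — align them with your configured webhook.
function ingestFathomWebhook(payload, store) {
  const record = {
    recordingId: payload.recording?.id ?? null,
    summary: payload.summary ?? null,
    actionItems: payload.action_items ?? [],
  };
  store(record); // push into your queue, database, or vector store
  return record;
}
```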

Using the Fathom MCP Server with Claude, ChatGPT, and Cursor

Once your Fathom account is connected in Truto, you can create an MCP server in two ways: in the Truto UI (dashboard) or via the API. In the UI, open the connected Fathom account and use the option to create an MCP server, then copy the generated URL. For automation or backend provisioning, use a single API call:

curl -X POST https://api.truto.one/integrated-account/<FATHOM_INTEGRATED_ACCOUNT_ID>/mcp \
  -H "Authorization: Bearer <TRUTO_API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Fathom Read-Only",
    "config": {
      "methods": ["read"]
    },
    "expires_at": "2026-12-31T23:59:59Z"
  }'

The response (or the UI) gives you a URL like https://api.truto.one/mcp/<token>. That URL is the MCP server. No hosting, no Docker, no npm install. Truto generates a secure, randomized token, hashes it via HMAC, and stores it in edge KV storage. Start with methods: ['read'] and only widen permissions if you actually need mutating operations. Most meeting-data agents do not.

Adding to Claude

  1. Copy the MCP server URL from the Truto dashboard or API response.
  2. In Claude: Settings > Connectors > Add custom connector.
  3. Paste the URL. Done.

Claude discovers available tools via the MCP tools/list method and can immediately start querying your Fathom meetings.

Watch the full flow. The video below walks through creating Fathom and Attio MCP servers in the Truto UI, adding them to Claude, and syncing meeting outcomes into Attio as CRM updates — no API calls or config files required.

Adding to ChatGPT

  1. In ChatGPT: Settings > Apps > Advanced settings.
  2. Enable Developer mode.
  3. Add a new MCP server with your Truto URL.

ChatGPT's custom MCP connectors require a remote server — another reason the managed route beats a laptop-bound Fathom script. For a walkthrough of bringing 100+ connectors to ChatGPT, including Fathom, see our ChatGPT integration guide.

Adding to Cursor

Add this to your ~/.cursor/mcp.json:

{
  "mcpServers": {
    "fathom-truto": {
      "url": "https://api.truto.one/mcp/<token>"
    }
  }
}

No local Node.js process. No API key in the config. The token in the URL handles authentication, and it can be set to expire automatically for temporary access.

Info

If your connector needs to work across Claude, ChatGPT, and Cursor, design for remote MCP first. Treat local stdio servers as a dev convenience, not the deployment model.

Security Controls That Matter in Production

Truto's MCP servers support features the open-source alternatives don't:

  • Token expiration: Set expires_at to auto-revoke access after a week, a sprint, or a demo. Truto schedules a distributed cleanup alarm that automatically deletes the server configuration and invalidates the edge tokens the second the timestamp passes.
  • Dual authentication: Enable require_api_token_auth to require both the MCP URL token and a Truto API key in the Authorization header. Even if an MCP URL leaks in a log file, your Fathom data remains protected.
  • Method restrictions: Lock the server to read-only (["read"]) or specific methods (["list"]). Your agent can list meetings but can't touch webhooks.
  • Tag-based filtering: Endpoints are tagged by functional area. When creating the MCP server, pass a tags array and Truto filters tools during generation. If a resource doesn't match, it's excluded entirely. The LLM never even knows the endpoint exists.

For a deeper look at managed MCP security patterns, see our guide on managed MCP for Claude.

Handling Fathom's Asynchronous Transcript Processing

A common edge case when building Fathom agents: transcripts are not available the instant a video call ends. Fathom processes recordings asynchronously, and attempting to fetch a transcript too soon returns incomplete data.

Fathom's API provides two options for the /recordings/{recording_id}/transcript endpoint. If you omit the destination_url parameter, the API attempts to return the data directly. If you include it, Fathom treats the request asynchronously and posts the transcript to your webhook once processing completes.

Truto's dynamically generated tools expose both options to the LLM. You can explicitly prompt your agent: "If the meeting ended less than 10 minutes ago, do not fetch the transcript immediately. Wait or check the status later." By relying on Truto's strict schema definitions, the agent understands exactly which parameters to pass to handle these asynchronous realities safely.
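That guard can live in code as well as in the prompt. Here is a sketch using the same 10-minute buffer as an assumption; tune it to what you observe from Fathom's processing times.

```javascript
// Decide whether a transcript is likely ready to fetch, by requiring a
// processing buffer after the meeting's end. The 10-minute default is an
// assumption, not a documented Fathom guarantee.
function isTranscriptLikelyReady(meetingEndedAt, now = new Date(), bufferMinutes = 10) {
  const elapsedMs = now.getTime() - new Date(meetingEndedAt).getTime();
  return elapsedMs >= bufferMinutes * 60_000;
}
```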

Tip

For most meeting-data agents, the tool order should be: search or list meetings → fetch summary → fetch transcript only if needed → write follow-up or update another system. Full transcripts are expensive context. Do not dump them into the model until you know the meeting is relevant.

Beyond MCP: The /tools Endpoint for LangChain and Custom Agents

MCP is great for out-of-the-box clients like Claude and ChatGPT. But if you're building a custom agent with LangChain, Vercel AI SDK, or any other framework, you probably don't want to implement a full MCP client just to call Fathom's API.

Truto exposes the same dynamically-generated tool definitions through a REST endpoint:

GET https://api.truto.one/integrated-account/<FATHOM_INTEGRATED_ACCOUNT_ID>/tools?methods[0]=read

This returns a JSON array of tool objects, each with a descriptive name, a human-readable description for the LLM, a query_schema and body_schema in JSON Schema format, and a list of required fields. You can filter by method type (read, write, custom) or by specific tags.

These schemas plug directly into LangChain's tool definitions. Truto's LangChain.js SDK handles the registration automatically:

import { TrutoToolset } from 'truto-langchainjs-toolset';
 
const toolset = new TrutoToolset({
  apiKey: process.env.TRUTO_API_KEY,
  integratedAccountId: '<FATHOM_INTEGRATED_ACCOUNT_ID>',
});
 
const tools = await toolset.getTools();
// Pass tools to your LangChain agent

If you need more control, here's a clean pattern for turning tool definitions into function-calling schemas for any framework:

const TRUTO_BASE = process.env.TRUTO_BASE ?? 'https://api.truto.one';
 
function toParameters(tool) {
  return {
    type: 'object',
    properties: {
      ...(tool.query_schema?.properties ?? {}),
      ...(tool.body_schema?.properties ?? {}),
    },
    required: tool.required ?? [],
  };
}
 
const res = await fetch(
  `${TRUTO_BASE}/integrated-account/${integratedAccountId}/tools?methods=read`,
  { headers: { Authorization: `Bearer ${trutoApiKey}` } }
);
 
const { results } = await res.json();
 
const llmTools = results.map((tool) => ({
  type: 'function',
  function: {
    name: tool.name,
    description: tool.description,
    parameters: toParameters(tool),
  },
}));

When an MCP client calls a tool, all arguments arrive as a single flat object. Truto intelligently splits these arguments into query parameters and request body payloads by parsing the generated JSON schemas. This flat input namespace significantly reduces the cognitive load on the LLM, which no longer has to guess whether recording_id belongs in the query string or the body.
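The splitting logic is straightforward to replicate client-side if you ever need it. This is an illustrative version based on the schema shapes described above, not Truto's actual implementation:

```javascript
// Split a flat MCP argument object into query params and request body,
// using the tool's generated JSON schemas. Illustrative sketch only.
function splitArguments(args, tool) {
  const queryKeys = Object.keys(tool.query_schema?.properties ?? {});
  const bodyKeys = Object.keys(tool.body_schema?.properties ?? {});
  const query = {};
  const body = {};
  for (const [key, value] of Object.entries(args)) {
    if (queryKeys.includes(key)) query[key] = value;
    else if (bodyKeys.includes(key)) body[key] = value;
    // Unknown keys are dropped; you may prefer to reject them instead.
  }
  return { query, body };
}
```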

The /tools endpoint and the MCP server generate tools from the same source of truth — the integration's resource definitions and documentation. Update a tool description in Truto and it's reflected in both channels instantly.

For a deeper look at how this fits into agent orchestration patterns, see our post on architecting AI agents with LangGraph and LangChain.

Head-to-Head Comparison

| Criteria | Open Source (self-hosted) | Composio | Truto |
|---|---|---|---|
| Setup time | 10-15 min (clone, build, configure) | 5 min (SDK install + API key) | 3 min (create in UI or API + paste URL) |
| Fathom auth | Raw API key in config file | Managed OAuth / API key | Managed OAuth / API key, encrypted |
| Rate limit handling | Manual / none | Managed | Managed via Proxy API |
| Works with Claude web | No (stdio only) | Yes | Yes |
| Works with ChatGPT | No | Yes | Yes |
| Works with Cursor | Yes | Yes | Yes |
| Token expiration | No | Platform-dependent | Yes (auto-cleanup) |
| LangChain / custom agents | Need MCP client | SDK integration | /tools REST endpoint + SDK |
| Multi-tenant support | No | Yes | Yes |
| Cost | Free (+ your time) | Free tier + paid plans | Paid plans |

The Honest Call: When to Use What

Pick open source if you're a solo developer in Cursor who wants to query your own meetings and doesn't mind managing a local Node.js process. The blast radius is low and the setup is quick.

Pick Composio if you're already invested in their ecosystem and need Fathom as one of many tools in a multi-app agent. Their Tool Router model works well when you need dynamic tool discovery across dozens of services.

Pick Truto if you're building a production agent workflow, need to support multiple users connecting their own Fathom accounts, care about token security and expiration, or want the same infrastructure to scale across your other SaaS integrations. The /tools endpoint for LangChain and custom frameworks is a meaningful differentiator if you're not building exclusively for Claude or ChatGPT.

The meeting intelligence space is moving fast. The AI Meeting Assistants market was valued at USD 2.44 billion in 2024 and is expected to reach USD 15.16 billion by 2032. Fathom is a key player in that market, and programmatic access to its data — through MCP or direct API — is going to be table stakes for any serious agentic workflow that touches sales, customer success, or internal ops.

The question isn't whether to connect Fathom to your agents. It's whether you want to own the plumbing or let someone else handle it.

Recommended next steps:

  • Connect Fathom in Truto with OAuth if the workflow is multi-user.
  • Create a read-only MCP server first (in the Truto UI or via the API).
  • Add that remote URL to Claude or Cursor.
  • If you own the agent runtime, wire /tools instead of MCP.
  • Move from polling to webhooks once the workflow is stable.

FAQ

How do I connect Fathom to Claude using MCP?
Create an MCP server in the Truto UI or via Truto's API for your connected Fathom account, then paste the returned URL into Claude's Settings > Connectors > Add custom connector. Claude auto-discovers Fathom tools like list meetings and get transcript via the MCP tools/list method.
How do I connect Fathom to ChatGPT using MCP?
Create an MCP server for your Fathom account in the Truto UI or via Truto's API, then in ChatGPT go to Settings → Apps → Advanced settings, enable Developer mode, and add a custom MCP server with the Fathom MCP URL. ChatGPT will discover tools like list meetings and get transcript for use in supported flows.
What is Fathom's API rate limit?
Fathom enforces a rate limit of 60 requests per minute per user, applied across all API keys created by that user. AI agents making concurrent tool calls can easily exhaust this limit without proper queuing. Truto's managed MCP handles this automatically via built-in request pacing.
Can I use Fathom with LangChain or custom AI agents?
Yes. Truto exposes a /tools REST endpoint that returns JSON Schema definitions for all Fathom operations, which plug directly into LangChain, Vercel AI SDK, or any framework supporting function calling. A LangChain.js SDK is also available for automatic tool registration.
Is it safe to use open-source Fathom MCP servers?
Open-source servers require hardcoding your Fathom API key in local config files, which creates security risks. They also lack token expiration, rate limit handling, and multi-tenant support needed for production use. They're fine for personal experiments but not team deployments.
Does Fathom support OAuth for MCP integrations?
Fathom supports OAuth for partner integrations, but OAuth apps require HTTPS redirect URIs, making local development difficult. Managed platforms like Truto handle the full OAuth lifecycle automatically. Note that OAuth apps cannot use include_transcript on the meetings list endpoint — you must use the dedicated recordings endpoints instead.
