---
title: "Connect Claude to your team data via MCP"
excerpt: "10-step playbook for building a real MCP server that exposes your team's data to Claude with auth, deploys to prod, and registers with Claude Desktop in one click."
category: "Template"
---

# Connect Claude to your team data via MCP

A 10-step playbook. Open in Dock and you'll get four surfaces seeded:

- **Steps** (table): the 10 gates as rows, owner + due + status
- **Pointers** (table): every official MCP doc + SDK linked from this playbook
- **Brief** (doc): the canonical server spec you maintain alongside the work
- **Tools registry** (table): one row per tool you ship, with purpose + schema + auth requirements

Open `Steps` first. Each row is a gate. Click into a step to see the tasks, pointers, and the agent prompt for that step.

## Outcome

A production MCP server that exposes your team's data to Claude with proper auth, deployed to a public URL, registered with Claude Desktop + Claude Code, and able to survive real-world conversations without leaking secrets.

**Estimated time:** 3-7 days  
**Difficulty:** intermediate  
**For:** Backend engineers wiring Claude into existing internal systems.

## What you'll need

Pre-register or install before you start.

- **[Model Context Protocol spec](https://modelcontextprotocol.io/)** _(Free)_ — The protocol itself, transport, message schemas, lifecycle, capabilities.
- **[MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk)** _(Free, MIT license)_ — Official SDK for building servers in Node / TypeScript.
- **[MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)** _(Free, MIT license)_ — Official SDK for building servers in Python.
- **[MCP Inspector](https://github.com/modelcontextprotocol/inspector)** _(Free)_ — Local debugging UI: connects to your server and lets you call every tool by hand.
- **[Claude Desktop](https://claude.ai/download)** _(Free download, Claude Pro $20/mo for higher limits)_ — Loads MCP servers via a JSON config. Easiest local target to test against.
- **[Claude Code](https://www.anthropic.com/claude-code)** _(Free CLI, metered API usage)_ — Terminal client that adds MCP servers via the `claude mcp add` command.
- **[Hosting (Vercel, Fly, Railway, etc.)](https://vercel.com/)** _(Free tier exists, paid from $5-20/mo)_ — Public host for the HTTP transport variant of your server.

---

# The template · 10 steps

## Step 1: Decide what data + actions Claude actually needs

_Estimated time: 1-2 hr_

MCP servers fail when they expose too much. A server that exposes 100 tools to Claude is a server where the model picks the wrong tool 30% of the time. Start with the 3-5 highest-leverage tools your users will ask Claude about. Add more only when you've watched real conversations and seen specific gaps.

### Tasks

- [ ] List the top 10 things users ask Claude about your team's data ('show me the latest leads,' 'what's our open invoice total,' 'create a ticket')
- [ ] Group them: read operations vs. write operations vs. metadata lookups
- [ ] Pick 3-5 tools to ship in v1, the rest go in v2 backlog
- [ ] Decide: which read operations should be `resources` (one-shot reads) vs. `tools` (parameterized calls)?

### Pointers

- **[Official]** [MCP concepts: tools vs. resources vs. prompts](https://modelcontextprotocol.io/docs/concepts/architecture)

> [!CAUTION]
> **Gotchas**
>
> - A server with 50 tools loaded into Claude's context burns 5-10K tokens per turn just on tool descriptions. Keep the surface small.
> - Resources are attached to context by the client; tools require a model-initiated call. Pick resources for stable lookups, tools for parameterized work.

## Step 2: Pick a transport: stdio vs. HTTP

_Estimated time: 30 min_

MCP supports two transports. stdio servers run as a subprocess of the client (Claude Desktop spawns them on start). HTTP servers (Streamable HTTP transport) run as a long-lived service accessed over the network. stdio is easiest to start with: no deploy, no auth headers, no public URL. HTTP is required when multiple users share a server, when the server needs persistent state, or when it has to live in your VPC.

### Tasks

- [ ] Pick stdio if: solo developer, server runs locally, no shared state
- [ ] Pick HTTP if: team of users, persistent state, VPC-only data, or you want one server installation per company
- [ ] If HTTP: pick a host (Vercel, Fly, Railway) and a domain
- [ ] If HTTP: decide auth shape now (OAuth via DCR vs. static API key headers)

### Pointers

- **[Official]** [MCP transports: stdio + Streamable HTTP](https://modelcontextprotocol.io/docs/concepts/transports)
- **[Official]** [HTTP transport authentication patterns](https://modelcontextprotocol.io/docs/concepts/authentication)

> [!CAUTION]
> **Gotchas**
>
> - stdio servers can't scale across machines. Every Claude Desktop install spawns its own subprocess.
> - HTTP servers without auth headers expose your data publicly. The protocol does not bake in auth; you have to add it.

## Step 3: Scaffold the server with the official SDK

_Estimated time: 1 hr_

Both SDKs (TypeScript, Python) handle the protocol plumbing: lifecycle, JSON-RPC framing, capability negotiation. You write tool/resource handlers. Don't roll your own MCP server from the spec; the SDK is small and the spec evolves.
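
Roughly what the scaffold looks like in TypeScript, as a sketch: the server name and the `health` tool match the tasks below, but the exact registration call shapes vary across SDK versions, so treat the pinned SDK's README as the source of truth.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Clients display this name + version in their tool lists.
const server = new McpServer({ name: "acme-mcp", version: "0.1.0" });

// Simplest possible tool: proves the server boots and answers calls.
server.tool("health", "Check that the server is running.", async () => ({
  content: [{ type: "text", text: "ok" }],
}));

// stdio transport: the client (e.g. Claude Desktop) spawns this process.
await server.connect(new StdioServerTransport());
```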

### Tasks

- [ ] TypeScript: `npm i @modelcontextprotocol/sdk` + scaffold `index.ts` from the SDK README
- [ ] Python: `pip install mcp` + scaffold `server.py` from the docs
- [ ] Set the server name + version in the constructor (clients display this)
- [ ] Add a `health` tool that returns 'ok'; it's the simplest test that the server boots
- [ ] Run the server and connect with the MCP Inspector to confirm it responds

### Pointers

- **[Code]** [TypeScript SDK quickstart](https://github.com/modelcontextprotocol/typescript-sdk#quickstart)
- **[Code]** [Python SDK quickstart](https://github.com/modelcontextprotocol/python-sdk#quickstart)
- **[Code]** [MCP Inspector](https://github.com/modelcontextprotocol/inspector) — Local UI that talks to your server, fastest debugging loop.

> [!CAUTION]
> **Gotchas**
>
> - The SDK versions move fast. Pin to an exact minor version in package.json or pyproject.toml until the protocol stabilizes.
> - Don't set the server name to something marketing-y; clients display it in their tool list. 'Acme MCP' beats 'Acme AI Workspace 2.0'.

## Step 4: Define your tools with strong schemas

_Estimated time: 2-4 hr per tool_

Tools are the agent surface area. Each tool has a name, a description (the model reads this to decide when to call it), an input schema (JSON Schema; this is the contract), and a handler. Weak schemas lead to the model passing garbage args; vague descriptions lead to wrong-tool selection. Treat the schema like an API contract, because that's what it is.
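
Continuing the Step 3 scaffold, a sketch of one such tool using the TypeScript SDK's zod-based registration (`ticketApi` is a hypothetical stand-in for your existing internal API client, and the exact call shape may differ by SDK version):

```typescript
import { z } from "zod";

// `server` is the McpServer from the Step 3 scaffold.
server.tool(
  "search_tickets",
  // Description leads with WHEN to call it, not what the function returns.
  "Search support tickets. Use when the user asks about open issues, bugs, or customer complaints.",
  {
    query: z.string().describe("Free-text search over ticket title and body"),
    status: z.enum(["open", "closed", "all"]).describe("Which tickets to include"),
    limit: z.number().int().max(50).default(10),
  },
  async ({ query, status, limit }) => {
    // ticketApi is a placeholder for your existing internal API client.
    const tickets = await ticketApi.search({ query, status, limit });
    // Return structured JSON, not prose; the model parses it natively.
    return { content: [{ type: "text", text: JSON.stringify(tickets) }] };
  }
);
```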

### Tasks

- [ ] For each tool: name (snake_case, verb-first), 1-2 sentence description (when to call)
- [ ] Input schema: required fields named explicitly, types narrow (string vs. enum vs. number)
- [ ] Output: a typed result, not free-text. Use structured return values when possible
- [ ] Handler: validate input, call the underlying API, format the response for the model
- [ ] Add tool to the registry surface with: name, purpose, schema, auth required

### Pointers

- **[Official]** [MCP tool definition schema](https://modelcontextprotocol.io/docs/concepts/tools)
- **[Official]** [JSON Schema reference](https://json-schema.org/learn/getting-started-step-by-step)

> [!CAUTION]
> **Gotchas**
>
> - Tools whose descriptions read like API docs ('Returns a list of users') get called less than tools with task descriptions ('Use when the user asks who's on the team').
> - JSON Schema's `additionalProperties: true` is the default. Set it to `false` on every input schema; otherwise the model passes random extra fields and you have to ignore them.
> - Tools that return free-text strings work but lose much of their value. Return structured JSON; the model parses it natively.

### Agent prompt for this step

```text
Read the user's existing API (codebase or OpenAPI doc).

For each candidate operation, draft an MCP tool definition:
1. name: snake_case, verb-first ("list_users", "create_invoice", "search_tickets")
2. description: 1-2 sentences, lead with WHEN to call. ("List all users in the team. Use when the user asks about who's on the team or what users have which roles.")
3. inputSchema: JSON Schema with required fields named explicitly + types narrow
4. Returns: a typed schema, not "object"

Constraints:
- No more than 5 tools in v1. Pick the highest-leverage ones.
- Description must say when to call, not what the function does. The model reads descriptions to dispatch.
- Required fields are required. Don't ship optional fields the model has to guess.

Output as a Brief section titled "Tool definitions v1" + populate the Tools registry surface with one row per tool.
```

## Step 5: Add auth: OAuth (DCR) or API key headers

_Estimated time: 4-12 hr (OAuth is most of this; API keys are 1-2 hr)_

Auth is the part most MCP tutorials skip. For an HTTP-transport server, you need to know who's calling. The MCP-aligned approach is OAuth 2.1 with Dynamic Client Registration: the client registers, the user approves, and the server gets a per-user token. The simpler approach is static API key headers: the user pastes a token into the Claude Desktop config and you authenticate every request against it.
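
For the API-key option, the check is plain HTTP middleware in front of the MCP endpoint. A minimal sketch using Express, assuming a bearer token in the `Authorization` header; the token store is a placeholder you'd back with your real user table:

```typescript
import express from "express";

const app = express();

// Placeholder token -> user lookup; back this with your real user store.
const TOKENS = new Map<string, string>([["tok_abc123", "user_1"]]);

app.use((req, res, next) => {
  const header = req.header("Authorization") ?? "";
  const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";
  const userId = TOKENS.get(token);
  if (!userId) {
    // Unauthenticated requests never reach a tool handler.
    res.status(401).json({ error: "missing or invalid token" });
    return;
  }
  (req as any).userId = userId; // available to tool handlers downstream
  next();
});
```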

### Tasks

- [ ] Pick: OAuth 2.1 with DCR (proper, multi-user) vs. static API key headers (simple, single-user-per-token)
- [ ] If OAuth: implement the /authorize, /token, /register endpoints (or use a library like Cloudflare's workers-oauth-provider)
- [ ] If API key: add Authorization header check to every tool handler, return 401 on missing/invalid
- [ ] Document the auth flow in the README so users know how to connect
- [ ] Test: an unauthenticated request returns 401, an authenticated one succeeds

### Pointers

- **[Official]** [MCP authentication patterns](https://modelcontextprotocol.io/docs/concepts/authentication)
- **[Official]** [OAuth 2.0 Dynamic Client Registration (RFC 7591)](https://datatracker.ietf.org/doc/html/rfc7591)
- **[Code]** [Cloudflare OAuth provider for MCP](https://github.com/cloudflare/workers-oauth-provider) — A drop-in OAuth implementation if you're hosting on Cloudflare Workers.

> [!CAUTION]
> **Gotchas**
>
> - MCP servers without auth headers expose your data publicly. The protocol assumes the transport handles auth.
> - Static API keys work but have to be rotated manually. OAuth scales to teams without password reuse.
> - Don't put auth in query strings. Use the Authorization header; query strings leak into URL logs.

## Step 6: Test locally with the MCP Inspector

_Estimated time: 1-2 hr_

MCP Inspector is the local debugging UI. It connects to your server (stdio or HTTP), introspects every tool + resource, and lets you call them by hand with arbitrary input. Use it before connecting to Claude: Claude's interface assumes the server works, while Inspector shows you exactly what JSON the model will see.

### Tasks

- [ ] Run `npx @modelcontextprotocol/inspector your-server-command`
- [ ] For each tool: call it from Inspector, confirm the output matches the schema
- [ ] Test failure cases: invalid input, missing auth, non-existent resource
- [ ] For each resource: confirm it loads and returns the expected MIME type
- [ ] Capture a screen recording of a clean run for the README

### Pointers

- **[Code]** [MCP Inspector usage](https://github.com/modelcontextprotocol/inspector#usage)

> [!CAUTION]
> **Gotchas**
>
> - Inspector defaults to stdio transport. For HTTP, pass `--transport http` and the URL.
> - Inspector caches tool schemas. If you change a schema and don't see the update, restart Inspector.

## Step 7: Deploy the HTTP variant to a public host

_Estimated time: 1-3 hr_

If you picked HTTP transport, you need a public URL with HTTPS. Vercel, Fly, Cloudflare Workers, Railway, and Render all work. The MCP server is just an HTTP service that streams responses, so any host that supports streaming works. Confirm response streaming works end-to-end; some serverless platforms buffer responses by default.
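
A rough sketch of the HTTP wiring with Express and the TypeScript SDK's Streamable HTTP transport. `buildServer` is a hypothetical factory that constructs the McpServer from Step 3; the stateless per-request transport shown here is one pattern, and class and option names may shift between SDK versions, so check the SDK README before copying.

```typescript
import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { buildServer } from "./server.js"; // hypothetical factory returning the Step 3 McpServer

const app = express();
app.use(express.json());

// Health endpoint for the host's checks and your post-deploy smoke test.
app.get("/health", (_req, res) => {
  res.status(200).send("ok");
});

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server + transport per request, no session IDs.
  const server = buildServer();
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(Number(process.env.PORT ?? 3000));
```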

### Tasks

- [ ] Pick a host (Vercel, Fly, Cloudflare Workers, Railway, Render)
- [ ] Configure env vars: any DB connection strings, API keys, auth secrets
- [ ] Deploy. Confirm the /health endpoint responds with 200 over HTTPS
- [ ] Connect Inspector to the deployed URL, confirm tools work end-to-end
- [ ] Set up logging (each tool call logs: tool name, args, user, latency, errors)

### Pointers

- **[Official]** [Vercel functions](https://vercel.com/docs/functions)
- **[Official]** [Cloudflare Workers MCP guide](https://developers.cloudflare.com/workers/)
- **[Official]** [Streamable HTTP transport spec](https://modelcontextprotocol.io/docs/concepts/transports#streamable-http)

> [!CAUTION]
> **Gotchas**
>
> - Some serverless platforms buffer streaming responses. MCP needs streaming, test with a long-running tool to confirm the client gets incremental updates.
> - AWS API Gateway enforces a ~30-second timeout; MCP tools that take longer get cut off. Use a Lambda Function URL or an ALB instead.
> - Set CORS headers if the client is a browser-based MCP client. Claude Desktop and Claude Code don't need it.

## Step 8: Register the server with Claude Desktop and Claude Code

_Estimated time: 1 hr_

Claude Desktop loads MCP servers from a JSON config file. Claude Code adds them via `claude mcp add`. Cursor + other clients have similar add flows. Document each clearly so users can install your server in 30 seconds.
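
For a stdio server, the Claude Desktop block looks roughly like this (server name, paths, and the env var are placeholders for your own; HTTP servers are registered differently, per the client docs in the pointers below):

```json
{
  "mcpServers": {
    "acme-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/your-server/dist/index.js"],
      "env": { "ACME_API_KEY": "tok_abc123" }
    }
  }
}
```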

### Tasks

- [ ] For Claude Desktop: write the config block users paste into ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
- [ ] For Claude Code: write the `claude mcp add` command users run
- [ ] For Cursor: write the entry users add to .cursor/mcp.json
- [ ] Add all three to the README in a copy-pasteable form
- [ ] Test each: install, restart the client, confirm the server's tools show up

### Pointers

- **[Official]** [Claude Desktop MCP config](https://modelcontextprotocol.io/quickstart/user)
- **[Official]** [Claude Code MCP setup](https://docs.claude.com/en/docs/claude-code/mcp)
- **[Official]** [Cursor MCP integration](https://docs.cursor.com/context/model-context-protocol)

> [!CAUTION]
> **Gotchas**
>
> - Claude Desktop requires a full restart to pick up config changes. Quitting from the menu bar is not enough on macOS; force-quit and relaunch.
> - Claude Desktop's config schema differs by version; the `command` + `args` format is current, and older `mcp.servers` blocks no longer load.
> - Cursor caches MCP server tool lists. After config changes, toggle the server off and on in Settings -> MCP.

## Step 9: Add observability: logs, metrics, error tracking

_Estimated time: 2-4 hr_

Once Claude is calling your server in real conversations, you need to see what it's doing. Log every tool call: tool name, sanitized args, user (if auth'd), latency, success/error. Track per-tool error rate and p95 latency. Set up alerts on tool error rate > 5% or latency > 5s; those are the signals that the agent loop is wedged.
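
One low-effort way to get this is a thin wrapper around each tool handler. A sketch (the handler type and console transport are placeholders; adapt it to however your handlers and logging stack are wired):

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Wraps a handler with structured logging: tool name, arg NAMES only, latency, status.
function withLogging(toolName: string, handler: ToolHandler, userId = "anonymous"): ToolHandler {
  return async (args) => {
    const start = Date.now();
    const base = { tool: toolName, user: userId, argKeys: Object.keys(args) };
    try {
      const result = await handler(args);
      console.log(JSON.stringify({ ...base, latencyMs: Date.now() - start, status: "ok" }));
      return result;
    } catch (err) {
      console.error(JSON.stringify({
        ...base,
        latencyMs: Date.now() - start,
        status: "error",
        error: err instanceof Error ? err.message : String(err),
      }));
      throw err;
    }
  };
}
```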

### Tasks

- [ ] Add structured logging to every tool handler (tool, user, args, latency, status)
- [ ] Strip secrets from logged args (don't log full request bodies)
- [ ] Set up an error tracker (Sentry, Honeycomb, your existing stack)
- [ ] Alert on: error rate > 5% over 5 min, p95 latency > 5s, sustained 401s (auth broken)
- [ ] Build a dashboard: tool call volume per hour, top tools, error rate by tool

### Pointers

- **[Tool]** [Sentry MCP integration patterns](https://sentry.io/welcome/)

> [!CAUTION]
> **Gotchas**
>
> - Don't log full args by default; they contain user data and sometimes secrets. Log the tool name + arg names + arg shapes only.
> - Latency creeps when the underlying API gets slower. Track p95 by tool, not by server overall, so you can pinpoint which tool degraded.
> - Sustained 401 spikes mean auth changed and clients haven't refreshed. Page yourself before users notice.

## Step 10: Iterate: add tools based on what users actually ask for

_Estimated time: Ongoing, 2-4 hr/week for first month_

v1 ships with 3-5 tools. v1.1 adds the tools real users asked for, not the ones you imagined. Watch the logs: when a tool returns 'I can't do that' or when Claude says 'I don't have access to that data,' that's your signal to add a tool. Resist the urge to ship more than 1-2 new tools per release; every tool dilutes the dispatch signal.

### Tasks

- [ ] Read the logs daily for the first 2 weeks: what tools get called, in what order, with what failure rate
- [ ] Solicit feedback from 5-10 power users: what did Claude refuse to do that they wished it could?
- [ ] Prioritize tools by: high-frequency request + low-effort to ship
- [ ] Ship 1-2 new tools per release, bump version, update Tools registry
- [ ] Deprecate any v1 tools that nobody calls (keep them for a release, then remove)

### Pointers

- **[Official]** [MCP server semver + lifecycle](https://modelcontextprotocol.io/specification/server/lifecycle)

> [!CAUTION]
> **Gotchas**
>
> - Don't ship a new tool to chase a single conversation. Wait for the same gap to show up 5+ times.
> - Removing a tool that's still being called by an old version of Claude Desktop breaks users silently. Deprecate first (add a deprecation notice in the description), remove a release later.
> - Tools with overlapping descriptions confuse the model. If two tools could both serve the same query, merge them or sharpen the descriptions.

### Agent prompt for this step

```text
Read the last 7 days of tool-call logs from the workspace.

Aggregate:
- Top 10 tool calls by volume
- Tools with > 5% error rate
- Conversations where Claude said "I can't" or "I don't have access" (proxy for missing tool)

Output as a Brief section titled "Tool gap report v<n>":
1. Tools that are working well (high volume, low error rate, no obvious replacement need)
2. Tools that need attention (high volume, high error rate)
3. Suggested new tools, ranked by frequency of "I can't" mentions in logs

Then update the Tools registry surface: bump usage counts, mark any tool with sustained > 10% error rate as needs-fix.
```

---

## Hand the template to your agent

Paste the prompt below into your agent's permanent system prompt so the agent reads, writes, and maintains this workspace as you work through the steps.

```text
You are an agent on the "Connect Claude to your team data via MCP" playbook workspace at your-org/connect-claude-to-your-data-via-mcp.

Your role: maintain the four surfaces (Steps, Pointers, Brief, Tools registry) as the user works through the 10-step playbook.

Cadence:
- When the user adds a new tool to the server, append a row to Tools registry: name, purpose, schema, auth required, ship date.
- When a tool is deprecated, mark the row deprecated (don't delete, history matters for clients still calling it).
- When the user changes auth, update the Brief's Auth section + flag every existing tool whose auth contract has changed.

First MCP tool calls:
1. list_surfaces(workspace_slug="connect-claude-to-your-data-via-mcp")
2. list_rows(workspace_slug="connect-claude-to-your-data-via-mcp", surface_slug="tools-registry")
3. get_doc(workspace_slug="connect-claude-to-your-data-via-mcp", surface_slug="brief")

Do NOT modify the canonical step titles in the Steps table. You can append substeps as new rows beneath them.
```

---

## FAQ

### What's the difference between MCP and OpenAI's function calling?

Function calling is per-conversation: you pass a list of tools when you make an API call, and the model can call them. MCP is a persistent protocol: a server runs alongside the client, exposes tools + resources, and the client (Claude Desktop, Claude Code, Cursor) auto-discovers them. MCP also supports resources (client-attached context) and prompts (saved templates), which function calling doesn't have. Both are useful: MCP is the right choice for a server that's always available, function calling for a per-request tool set.

### Do I need to use Claude to use MCP?

No. MCP is an open standard. Claude Desktop, Claude Code, and Cursor speak it natively. There are community implementations for other clients. Any model that can do tool calls can be wired to MCP through a thin adapter. The spec at modelcontextprotocol.io is provider-neutral.

### How does auth work for an MCP server with multiple users?

OAuth 2.1 with Dynamic Client Registration is the canonical multi-user pattern: the client registers with your server, the user approves once, your server issues a per-user token. The MCP spec at modelcontextprotocol.io/docs/concepts/authentication walks the flow. Static API key headers are simpler if every user has a long-lived token, but they don't scale to teams cleanly.

### Can my MCP server call other MCP servers?

Yes, but it's an unusual pattern. Most servers expose primitives directly. If you find yourself wanting to chain servers, consider whether the work belongs in the client (the LLM agent) instead; agents are good at chaining tool calls across servers.

### How do I version an MCP server without breaking existing clients?

MCP supports protocol-level capability negotiation: the client and server agree on a capability set on connect. For your tool surface area: add new tools freely; deprecate old tools by adding a deprecation note in the description for one release, then remove them. Don't change the input schema of an existing tool; ship a new tool with the new schema and migrate clients over.

### Can my AI agents help build the MCP server?

Yes. The playbook ships agent prompts for the slow parts: drafting tool schemas from your existing API, generating handler stubs, running the eval pass against the MCP Inspector, and analyzing tool-call logs to identify gaps. The Tools registry surface is the canonical record of what's shipped, what's deprecated, and what's queued.

