- What's the difference between MCP and OpenAI's function calling?
- Function calling is per-conversation: you pass a list of tools with each API call, and the model can call them. MCP is a persistent protocol: a server runs alongside the client, exposes its capabilities, and the client (Claude Desktop, Claude Code, Cursor) auto-discovers them. MCP also supports resources (auto-loaded context) and prompts (saved templates), which function calling doesn't have. Both are useful: MCP is the right choice for a server that's always available, function calling for a per-request tool set.
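  The contrast is easiest to see in the wire formats. A sketch of the two shapes, using a hypothetical `get_weather` tool (field names follow the public OpenAI Chat Completions `tools` parameter and the MCP `tools/list` result; everything else here is illustrative):

  ```python
  # Function calling: the tool list travels inside every API request.
  openai_request_tools = [{
      "type": "function",
      "function": {
          "name": "get_weather",  # hypothetical example tool
          "description": "Look up current weather for a city.",
          "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"],
          },
      },
  }]

  # MCP: the server advertises the same tool once, and connected clients
  # discover it via a JSON-RPC tools/list call over the persistent session.
  mcp_tools_list_result = {
      "tools": [{
          "name": "get_weather",
          "description": "Look up current weather for a city.",
          "inputSchema": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"],
          },
      }]
  }
  ```

  Same JSON Schema either way; the difference is who holds it (the caller per request vs. the server for the lifetime of the connection).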
- Do I need to use Claude to use MCP?
- No. MCP is an open standard. Claude Desktop, Claude Code, and Cursor speak it natively. There are community implementations for other clients. Any model that can do tool calls can be wired to MCP through a thin adapter. The spec at modelcontextprotocol.io is provider-neutral.
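  A minimal sketch of what such an adapter does, assuming the model speaks OpenAI-style function specs (the tool dict shapes follow the two public formats; the function name is my own):

  ```python
  def mcp_tool_to_function_schema(tool: dict) -> dict:
      """Convert one MCP tools/list entry into an OpenAI-style function spec.

      MCP's inputSchema and function calling's parameters are both plain
      JSON Schema, so the adapter is mostly a field rename.
      """
      return {
          "type": "function",
          "function": {
              "name": tool["name"],
              "description": tool.get("description", ""),
              "parameters": tool.get("inputSchema", {"type": "object"}),
          },
      }
  ```

  The reverse direction (taking the model's tool-call output and issuing an MCP `tools/call` request) is a similarly thin translation layer.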
- How does auth work for an MCP server with multiple users?
- OAuth 2.1 with Dynamic Client Registration is the canonical multi-user pattern: the client registers with your server, the user approves once, and your server issues a per-user token. The MCP spec at modelcontextprotocol.io/docs/concepts/authentication walks through the flow. Static API key headers are simpler if every user has a long-lived token, but they don't scale to teams cleanly.
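  Whichever flow issues the token, the per-request check on the server ends up looking roughly like this sketch (the token store and user ids are hypothetical; in the OAuth case the lookup would validate the token against your issuer instead of a dict):

  ```python
  def authenticate(headers: dict, tokens: dict) -> str | None:
      """Resolve a Bearer token from request headers to a user id.

      Returns the user id on success, None to reject the request.
      `tokens` stands in for whatever store maps token -> user.
      """
      auth = headers.get("Authorization", "")
      if not auth.startswith("Bearer "):
          return None
      return tokens.get(auth.removeprefix("Bearer "))
  ```

  The key property for multi-user servers is that every tool handler sees a resolved user identity, not a shared credential.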
- Can my MCP server call other MCP servers?
- Yes, but it's an unusual pattern. Most servers expose primitives directly. If you find yourself wanting to chain servers, consider whether the work belongs in the client (the LLM agent) instead; agents are good at chaining tool calls across servers.
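  A toy sketch of the client-side pattern, with two stand-in "servers" as plain callables (all names here are hypothetical): the agent routes each tool call to whichever server owns the tool and feeds one call's output into the next.

  ```python
  def route_tool_call(call: dict, servers: dict) -> dict:
      """Dispatch one tool call to the server that registered that tool.

      `servers` maps tool name -> a callable client stub standing in for
      a real MCP client connection.
      """
      return servers[call["name"]](call["arguments"])

  # Two stand-in servers: a search server and a summarizer server.
  servers = {
      "search": lambda args: {"hits": [f"doc about {args['q']}"]},
      "summarize": lambda args: {"summary": f"{len(args['texts'])} docs summarized"},
  }

  # The agent chains them; neither server knows the other exists.
  hits = route_tool_call({"name": "search", "arguments": {"q": "mcp"}}, servers)
  result = route_tool_call(
      {"name": "summarize", "arguments": {"texts": hits["hits"]}}, servers
  )
  ```

  Each server stays a simple primitive; the orchestration lives where the reasoning is.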
- How do I version an MCP server without breaking existing clients?
- MCP supports protocol-level capability negotiation: the client and server agree on a capability set on connect. For your tool surface area: add new tools freely; deprecate old tools by adding a deprecation note in the description for one release, then remove them. Don't change the input schema of an existing tool; instead, ship a new tool with the new schema and migrate clients over.
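  A sketch of what that looks like in a tool registry during the migration window (tool names and schemas are made up; the `inputSchema` shape follows the MCP `tools/list` format):

  ```python
  tools = [
      {   # old tool kept for one release, flagged in its description
          "name": "search_docs",
          "description": "[DEPRECATED: use search_docs_v2] Full-text search.",
          "inputSchema": {
              "type": "object",
              "properties": {"q": {"type": "string"}},
              "required": ["q"],
          },
      },
      {   # new tool carries the new schema; the old schema is never mutated
          "name": "search_docs_v2",
          "description": "Full-text search with optional tag filters.",
          "inputSchema": {
              "type": "object",
              "properties": {
                  "q": {"type": "string"},
                  "filters": {"type": "array", "items": {"type": "string"}},
              },
              "required": ["q"],
          },
      },
  ]
  ```

  Clients still calling `search_docs` keep working for the deprecation release; the description string is also visible to the model, which nudges agents toward the replacement on their own.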
- Can my AI agents help build the MCP server?
- Yes. The playbook ships agent prompts for the slow parts: drafting tool schemas from your existing API, generating handler stubs, running the eval pass against the MCP Inspector, and analyzing tool-call logs to identify gaps. The Tools registry surface is the canonical record of what's shipped, what's deprecated, and what's queued.