
MCP Servers Explained: What They Are, How to Configure Them, and Best Practices

D. Rout
April 5, 2026
If you've been following AI tooling over the past year, you've probably heard the term MCP thrown around — especially in the context of Claude, Cursor, and other AI-powered developer tools. But what exactly is an MCP server, how does it compare to familiar tools like a CLI, and how do you actually configure and use one?
This post walks through all of that in practical terms.
What Is MCP?
MCP stands for Model Context Protocol. It is an open standard introduced by Anthropic that defines how AI models communicate with external tools, data sources, and services in a structured, consistent way.
Think of MCP as a universal plug standard for AI. Instead of every AI assistant inventing its own way to call a web search API, read a file, or query a database, MCP provides a shared protocol that both the AI (the client) and the tool (the server) agree on.
An MCP server is a process that exposes tools, resources, or prompts to an MCP-compatible AI client. When an AI model needs to perform an action — search the web, read a Google Doc, create a calendar event — it sends a request to the MCP server, which handles the actual operation and returns the result.
Official MCP documentation: modelcontextprotocol.io
Is MCP Similar to a CLI?
This is a fair comparison to make, and understanding the difference is important.
A CLI (Command Line Interface) is a tool designed for humans. You type a command, a program executes it, and output is printed to your terminal. The interface is text-based and intended to be driven by a person.
An MCP server is designed for AI models, not humans directly. Instead of a human typing git status, an AI model sends a structured JSON-RPC request to an MCP server that exposes a git_status tool. The server executes the operation and returns a machine-readable response the AI can reason over.
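To make that concrete, here is what such a request looks like on the wire, following the MCP tools/call shape from the protocol spec (the git_status tool name and its repo_path argument are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "git_status",
    "arguments": { "repo_path": "/path/to/repo" }
  }
}
```

The server replies with a result object the AI can parse, rather than a wall of terminal text.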
Here's a quick comparison:
| Feature | CLI | MCP Server |
|---|---|---|
| Primary user | Human | AI model |
| Interface | Text / shell | JSON-RPC over stdio or HTTP |
| Discovery | Man pages, --help | Tool schemas exposed at runtime |
| Composability | Shell pipes | AI reasons over multiple tools |
| State | Stateless per command | Can maintain session context |
That said, many MCP servers are thin wrappers over CLI tools. For example, an MCP server for git might internally call git commands under the hood — it just exposes them through the MCP protocol so an AI can invoke them contextually.
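As a sketch of that wrapping pattern (all names here are hypothetical, not taken from any real git server), a handler can shell out to the CLI and convert its text output into structured data the model can reason over:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Parse `git status --porcelain` output into structured entries.
export function parsePorcelain(output: string): { status: string; path: string }[] {
  return output
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => ({ status: line.slice(0, 2).trim(), path: line.slice(3) }));
}

// A tool handler: run the CLI under the hood, but return
// machine-readable JSON instead of raw terminal text.
export async function gitStatusTool(repoPath: string) {
  const { stdout } = await run("git", ["status", "--porcelain"], { cwd: repoPath });
  return { content: [{ type: "text", text: JSON.stringify(parsePorcelain(stdout)) }] };
}
```

The CLI does the real work; the MCP layer's job is translating between the protocol and the tool.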
Core Concepts
Before diving into configuration, it helps to understand the three things an MCP server can expose:
Tools — Callable functions the AI can invoke. Examples: search_web, create_event, run_query. Tools have defined input schemas and return results.
Resources — Read-only data the AI can access. Examples: a file, a database record, a web page. Resources have URIs and can be read on demand.
Prompts — Pre-built prompt templates the server exposes for common workflows. These allow servers to provide reusable instructions to the AI.
Most servers you'll encounter in the wild primarily expose tools.
How to Configure an MCP Server
MCP servers connect to AI clients via two transports:
- stdio — The server runs as a subprocess. The client communicates with it over standard input/output. This is the most common approach for local development tools.
- HTTP + SSE — The server runs as a persistent HTTP service. The client connects via Server-Sent Events. Used for remote or cloud-hosted servers.
Configuration in Claude Desktop
Claude Desktop uses a JSON config file to define which MCP servers to connect to on startup.
macOS config location:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows config location:
%APPDATA%\Claude\claude_desktop_config.json
A basic config looks like this:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/Documents"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
      }
    }
  }
}
Each entry under mcpServers defines:
- A key — the name you'll see in the client UI
- command — the executable to run (node, npx, python, etc.)
- args — arguments passed to the command
- env — environment variables, typically for API keys and secrets
After saving the config, restart Claude Desktop. Connected servers will show up as available tools in your session.
Full Claude Desktop MCP setup guide: MCP Quickstart for Users
Configuration in Claude Code (CLI)
Claude Code, Anthropic's agentic coding tool, also supports MCP servers. You configure them using the claude mcp add command:
# Add a stdio server
claude mcp add my-server -e API_KEY=your_key -- npx -y @your-org/mcp-server
# Add an HTTP server
claude mcp add remote-server --transport http --url https://mcp.example.com/sse
# List configured servers
claude mcp list
# Remove a server
claude mcp remove my-server
Servers can be scoped to the current project (stored in .mcp.json) or globally for all sessions. Project-level config is useful for team environments where you want every contributor to have the same tools available automatically.
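A project-level .mcp.json uses the same mcpServers shape as the Claude Desktop config; a hypothetical example (the server name and package are placeholders):

```json
{
  "mcpServers": {
    "project-tools": {
      "command": "npx",
      "args": ["-y", "@your-org/mcp-server"],
      "env": {}
    }
  }
}
```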
Claude Code MCP docs: MCP in Claude Code
Configuration via the Anthropic API
If you're building your own AI application using the Claude API, you pass MCP servers directly in your API request body:
const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": process.env.ANTHROPIC_API_KEY,
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "mcp-client-2025-04-04"
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    tools: [],
    mcp_servers: [
      {
        type: "url",
        url: "https://mcp.example.com/sse",
        name: "my-remote-server"
      }
    ],
    messages: [
      { role: "user", content: "Search for recent news about TypeScript 5.8" }
    ]
  })
});
This lets you integrate MCP servers directly into your own AI-powered products without requiring end users to configure anything on their machines.
Anthropic API MCP reference: MCP in the Claude API
Building Your Own MCP Server
You don't have to rely on community servers. If you have an internal API or service, you can expose it as an MCP server in a few dozen lines of code using the official SDKs.
TypeScript/Node.js:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-api-server", version: "1.0.0" });

server.tool(
  "get_user",
  "Fetch a user record by ID",
  { userId: z.string().describe("The user's UUID") },
  async ({ userId }) => {
    const user = await myDb.users.findById(userId);
    return {
      content: [{ type: "text", text: JSON.stringify(user) }]
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
Python:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-api-server")

@mcp.tool()
def get_user(user_id: str) -> dict:
    """Fetch a user record by ID."""
    return db.users.find(user_id)

if __name__ == "__main__":
    mcp.run()
MCP TypeScript SDK: github.com/modelcontextprotocol/typescript-sdk
MCP Python SDK: github.com/modelcontextprotocol/python-sdk
Popular Community MCP Servers
The ecosystem has grown quickly. Some well-maintained servers worth knowing about:
- Filesystem — Read and write local files
- GitHub — Manage repos, issues, PRs, and code
- PostgreSQL — Query your database in natural language
- Brave Search — Real-time web search
- Google Maps — Geocoding, directions, place search
Browse the full registry: MCP Servers GitHub Repo
Best Practices
Scope your permissions tightly. When configuring a filesystem server, pass only the directories the AI actually needs access to. Avoid pointing it at / or your entire home directory. Treat it like you would any OAuth scope — least privilege.
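The same least-privilege idea applies inside a custom server: if a tool accepts file paths, resolve them and reject anything outside an explicit allowlist. A minimal sketch (the root directory is a placeholder):

```typescript
import * as path from "node:path";

// Only paths under these roots may be touched by file tools.
const ALLOWED_ROOTS = ["/Users/yourname/Documents/project"];

export function isPathAllowed(requested: string, roots: string[] = ALLOWED_ROOTS): boolean {
  const resolved = path.resolve(requested); // collapses "../" escape attempts
  return roots.some((root) => {
    const base = path.resolve(root);
    return resolved === base || resolved.startsWith(base + path.sep);
  });
}
```

Resolving before comparing matters: a naive prefix check on the raw string would accept "/Users/yourname/Documents/project/../../secrets".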
Store secrets in environment variables, never in args. Config files can end up in version control. Keep API keys in env blocks, and consider pulling them from a secrets manager rather than hardcoding them.
Use project-level config for team consistency. Committing a .mcp.json to your repo ensures all collaborators automatically get the right tools when working on the project. Pair this with .gitignore entries for any local override files containing personal keys.
Name your tools descriptively. If you're building a custom server, tool names and descriptions are what the AI uses to decide when and how to invoke them. Be explicit — get_active_subscriptions_by_user_id beats fetch_data.
Validate inputs with schemas. Use Zod (TypeScript) or Pydantic (Python) to define strict input schemas for your tools. This prevents the AI from passing malformed arguments and gives it better context about what each parameter expects.
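The SDK example earlier uses Zod for this; the underlying idea, rejecting malformed arguments before they reach your logic, can be sketched without any dependency (tool and field names are hypothetical):

```typescript
// A minimal runtime check standing in for a real schema library like Zod.
type GetUserArgs = { userId: string };

export function parseGetUserArgs(raw: unknown): GetUserArgs {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("arguments must be an object");
  }
  const { userId } = raw as Record<string, unknown>;
  if (typeof userId !== "string" || userId.length === 0) {
    throw new Error("userId must be a non-empty string");
  }
  return { userId };
}
```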
Log tool invocations. In production, emit structured logs every time a tool is called. This gives you an audit trail and makes debugging AI behavior much easier.
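One lightweight way to get that audit trail is to wrap every handler in a logging decorator; a sketch (field names are arbitrary):

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Wrap a tool handler so every call emits one structured log line.
export function withLogging(toolName: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const started = Date.now();
    try {
      const result = await handler(args);
      console.error(JSON.stringify({ tool: toolName, args, ms: Date.now() - started, ok: true }));
      return result;
    } catch (err) {
      console.error(JSON.stringify({ tool: toolName, args, ms: Date.now() - started, ok: false, error: String(err) }));
      throw err;
    }
  };
}
```

Note that the logs go to stderr: on the stdio transport, stdout carries the JSON-RPC stream, so anything else printed there corrupts the protocol.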
Test your server independently. Use the MCP Inspector to test your server's tools directly before connecting it to an AI client. It lets you send raw tool calls and inspect responses without involving the model at all.
MCP Inspector: modelcontextprotocol.io/docs/tools/inspector
Further Learning
- MCP Official Documentation — The authoritative reference for the protocol spec
- MCP Server Quickstart — Build your first server in under 10 minutes
- Anthropic MCP Guide — How to use MCP with the Claude API
- Community MCP Servers — Official and community-contributed servers
- MCP Protocol Specification — For those who want the full technical spec
MCP is one of the more meaningful developer primitives to emerge from the AI tooling space in a while. It shifts the conversation from "how do I prompt the AI to use this API" to "how do I build a clean tool the AI can reliably reach for." Once you've configured a few servers and seen the AI seamlessly chain tool calls together, it's hard to go back to doing it any other way.