FastMCP: the boring way to build fast MCP servers in Python
FastMCP removes protocol friction so you can ship MCP tools quickly, safely, and without inventing infrastructure.
MCP (Model Context Protocol) servers sit at a critical boundary: they connect AI models to real tools, real data, and real systems.
Most failures here aren’t about AI quality — they’re about over-engineered servers.
FastMCP gets this right by doing something radical: it gets out of your way.
What FastMCP Actually Solves
Building MCP servers manually usually means:
- Learning protocol details
- Handling transports
- Managing tool schemas
- Wiring inspection and installation flows
FastMCP abstracts only the protocol plumbing, so you can focus on:
- Tools
- Resources
- Business logic
Not frameworks. Not ceremony.
A Minimal MCP Server (Calculator)
Here’s a complete MCP server using FastMCP — no boilerplate, no hidden magic:
```python
from fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo 🚀")

# Add a calculator tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```
That’s it.
You’ve defined:
- A tool
- A resource
- A valid MCP server
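Under the hood, FastMCP derives each tool's MCP input schema from the function signature, which is why the type hints above matter. Here's a rough, stdlib-only sketch of the idea — this is an illustration of the concept, not FastMCP's actual implementation:

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema types
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON-Schema-style tool descriptor from a function's type hints."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "inputSchema": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

print(tool_schema(add)["inputSchema"]["properties"])
# {'a': {'type': 'integer'}, 'b': {'type': 'integer'}}
```

This is what "explicit tools, typed inputs" buys you: the schema the model sees is generated, not hand-maintained.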
Local Testing with MCP Inspector
FastMCP ships with first-class tooling.
Run:
```shell
fastmcp dev demo.py
```
This:
- Launches the MCP Inspector
- Connects over STDIO transport
- Proxies requests so you can watch them live
- Lets you test tools instantly
No custom clients. No hacks.
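Over STDIO, the Inspector speaks plain JSON-RPC 2.0 to your server. A `tools/call` request for the calculator looks roughly like this (values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add",
    "arguments": { "a": 2, "b": 3 }
  }
}
```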
Installing into Claude Desktop
FastMCP supports native MCP installation:
```shell
fastmcp install demo.py
```
This automatically:
- Registers the MCP server
- Updates claude_desktop_config.json
- Makes your tools available inside Claude
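The entry written to claude_desktop_config.json looks roughly like this — the server name and path are illustrative, and the exact command depends on your FastMCP version and environment (FastMCP typically launches servers via uv):

```json
{
  "mcpServers": {
    "Demo 🚀": {
      "command": "uv",
      "args": ["run", "--with", "fastmcp", "fastmcp", "run", "/path/to/demo.py"]
    }
  }
}
```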
From here, Claude can call:
- add(2, 3)
- greeting://alex
Like a real production integration.
Real API Integration: Web Search MCP
FastMCP isn’t just for demos.
Here’s a real MCP server using the OpenAI Web Search API:
```python
from openai import OpenAI
from fastmcp import FastMCP

# In real deployments, load the key from an environment variable instead
client = OpenAI(api_key="YOUR_API_KEY")

mcp = FastMCP(
    "Web News Search 🚀",
    dependencies=["openai"],
)

@mcp.tool()
def websearch_newssearch(prompt: str) -> str:
    """Fetch news articles based on a query"""
    completion = client.chat.completions.create(
        model="gpt-4o-search-preview",
        web_search_options={},
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

@mcp.tool()
def fetch_weather(city: str) -> str:
    """Fetch the weekly weather forecast for a city"""
    # Defined as a plain sync function: the OpenAI client call here is
    # blocking, so declaring it async would only stall the event loop.
    completion = client.chat.completions.create(
        model="gpt-4o-search-preview",
        web_search_options={},
        messages=[{
            "role": "user",
            "content": f"What is the weather forecast for this week in {city}?"
        }],
    )
    return completion.choices[0].message.content
```
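Because FastMCP tools are plain functions, ordinary Python tooling composes with them. For example, repeated identical searches can be memoized with stdlib caching — a minimal sketch, with a stub standing in for the OpenAI call above:

```python
from functools import lru_cache

call_count = 0  # tracks how many times the "API" is actually hit

def run_search(prompt: str) -> str:
    """Stub standing in for the OpenAI web-search call."""
    global call_count
    call_count += 1
    return f"results for: {prompt}"

@lru_cache(maxsize=128)
def cached_search(prompt: str) -> str:
    """Memoized wrapper: identical prompts hit the API only once."""
    return run_search(prompt)

cached_search("fastmcp news")
cached_search("fastmcp news")  # served from cache; no second API call
print(call_count)  # 1
```

In the real server you'd apply the cache beneath the @mcp.tool() decorator, and only to queries where staleness is acceptable — a weather forecast probably shouldn't be cached for long.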
Install it:
```shell
fastmcp install websearch-server.py
```
Restart Claude — your MCP is live.
Why FastMCP Works in Production
FastMCP succeeds because it enforces good constraints:
✅ Explicit tools
✅ Typed inputs
✅ Stateless execution
✅ Simple deployment
✅ Inspectable behavior
What it doesn’t do:
- No orchestration logic
- No hidden state
- No agent brains
- No opinionated workflows
That belongs elsewhere.
When FastMCP Is the Right Choice
Choose FastMCP when:
- You’re building tool-heavy agents
- You want fast iteration
- You deploy locally or on servers
- You need Claude / MCP compatibility
- You value debuggability over cleverness
When It’s Not
FastMCP is not ideal for:
- Long-running workflows
- Stateful agents
- Complex async pipelines
- Multi-step orchestration
Use it as infrastructure, not intelligence.
Final Thoughts
FastMCP proves an important lesson:
MCP servers should be boring. Tools should be explicit. Intelligence belongs in the model — not the server.
FastMCP doesn’t try to be clever. That’s exactly why it works.
Happy coding 🚀