What is MCP — and Why Does It Matter?
Model Context Protocol (MCP) is an open standard created by Anthropic in November 2024. It defines a universal way for AI models to connect to external tools, data sources, and services. Think of it as USB-C for AI — one standard connector for everything.
Before MCP, every AI integration was custom. Want Claude to read files? Write custom code. Connect to a database? More custom code. Search the web? Even more. MCP replaces all of that with a single standard: you build an MCP server once, and any MCP-compatible client — Claude Desktop, IDEs, and other hosts that have adopted the protocol — can use it immediately.
Function Calling (OpenAI/Claude's tool_use): Tools are defined per-request, in-process. You define them in your API call. Works great for simple use cases but doesn't scale — you have to maintain tool definitions in every application.
MCP: Tools live in a separate server process. Any application can discover and use them automatically. Build the server once, use it everywhere. Anthropic calls it "moving from N×M integrations to N+M."
| Aspect | Traditional Integration | MCP |
|---|---|---|
| Setup per app | Custom code each time | One server, reuse everywhere |
| Discovery | Hardcoded tool definitions | Auto-discovery at runtime |
| Portability | Tied to one AI provider | Works with any MCP client |
| State | Stateless per request | Server maintains state |
| Security | Credentials in app code | Credentials in isolated server |
Architecture Deep Dive
Understanding how MCP works under the hood is essential — not just for debugging, but because it helps you design better servers. Every interaction follows the same lifecycle.
```
Claude Desktop / Your App (MCP Host)
┌──────────────────────────────────────────────┐
│ MCP Client  ←── manages connection lifecycle │
│     │                                        │
│ Claude API ←── sends tool results back to LLM│
└─────┼────────────────────────────────────────┘
      │ JSON-RPC 2.0
      │ (stdio OR HTTP/SSE)
──────┼────────────────────────────────────────
      │
┌─────▼────────────────────────────────────────┐
│ YOUR MCP SERVER (Python / TypeScript)        │
│                                              │
│ ┌─────────────┐ ┌──────────┐ ┌──────────┐    │
│ │ 🔧 Tools    │ │📄 Resrcs │ │💬 Prompts│    │
│ │ (functions) │ │(data src)│ │(templ.)  │    │
│ └──────┬──────┘ └────┬─────┘ └────┬─────┘    │
│        └─────────────┴────────────┘          │
│                      │                       │
│           ┌──────────▼──────────┐            │
│           │  External Services  │            │
│           │  DB / Files / APIs  │            │
│           └─────────────────────┘            │
└──────────────────────────────────────────────┘
```
All MCP communication uses the JSON-RPC 2.0 format. Every request looks like `{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{...}}`. You don't write this manually — the Python SDK handles it completely — but knowing it exists helps when you're debugging with logs.
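To make the wire format concrete, here is a sketch of a `tools/call` round trip as plain Python dicts. The tool name `greet` and the payload shapes are illustrative — in practice the SDK builds and parses these messages for you:

```python
import json

# Hypothetical request the host sends to your server (sketch)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "greet", "arguments": {"name": "Ada"}},
}

# The server's response must carry the same id so the host can
# match it to the pending request
response = {
    "jsonrpc": "2.0",
    "id": request["id"],
    "result": {"content": [{"type": "text", "text": "Hello, Ada!"}]},
}

# Over stdio, each message travels as one line of serialized JSON
wire = json.dumps(request)
print(wire)
```

The `id` field is the key detail: requests and responses are matched by it, which is what lets a host keep several tool calls in flight at once.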
Setup & Your First MCP Server
Let's build a real server from scratch. First, the project structure. Then the minimal server. Then we'll make Claude use it.
```bash
# Option A: using uv (recommended — much faster than pip)
pip install uv
uv init mcp-server
cd mcp-server
uv add mcp httpx python-dotenv

# Option B: using pip (familiar)
mkdir mcp-server && cd mcp-server
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install mcp httpx python-dotenv
```
```python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import asyncio

"""
mcp.Server is the core class — it's the thing that:
1. Receives JSON-RPC messages from the host
2. Routes them to your registered handlers
3. Sends responses back
Think of it like FastAPI's app = FastAPI() — it's the container.
"""

# Create the server with a name
app = Server("my-first-server")

# ─── TOOL REGISTRATION ─────────────────────────────────────
# @app.list_tools() is called when the host asks "what can you do?"
# Returns a list of Tool objects with JSON Schema for inputs
@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="greet",
            description="Say hello to someone by name. Use when user wants a greeting.",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The person's name to greet"
                    }
                },
                "required": ["name"]
            }
        )
    ]

# ─── TOOL EXECUTION ────────────────────────────────────────
# @app.call_tool() is called when the AI actually wants to USE a tool
# name = which tool, arguments = the JSON args the AI provided
@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "greet":
        person = arguments.get("name", "friend")
        return [TextContent(
            type="text",
            text=f"Hello, {person}! This response came from your MCP server. 🎉"
        )]
    raise ValueError(f"Unknown tool: {name}")

# ─── ENTRY POINT ───────────────────────────────────────────
# stdio_server uses stdin/stdout for communication —
# this is how Claude Desktop talks to local MCP servers
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())
```
The description field is NOT documentation for humans — it's how Claude decides whether to call your tool. Write it like you're telling the AI when to use it: "Use this tool when the user asks about files, documents, or wants to read/write to the filesystem." Bad descriptions = Claude never calls your tool or calls it at wrong times.
Tools — Functions the AI Can Call
Tools are the most powerful MCP primitive. They let Claude execute code, query databases, call APIs, manipulate files — anything. A Tool is a function with a JSON Schema for its inputs. Let's build real, useful tools.
```python
from pathlib import Path
from typing import Sequence

from mcp.types import Tool, TextContent

# Define allowed base directory — security: never allow paths outside this!
ALLOWED_DIR = (Path.home() / "Documents").resolve()

def get_file_tools() -> Sequence[Tool]:
    return [
        Tool(
            name="read_file",
            description="Read the contents of a file. Use when user asks to open, read, or view a file.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "File path to read"}
                },
                "required": ["path"]
            }
        ),
        Tool(
            name="write_file",
            description="Write or append content to a file. Use when user asks to save, create, or update a file.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string", "description": "Content to write"},
                    "append": {"type": "boolean", "default": False}
                },
                "required": ["path", "content"]
            }
        ),
        Tool(
            name="list_directory",
            description="List files in a directory. Use when user asks what files exist or wants to browse folder contents.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Directory path to list"}
                },
                "required": ["path"]
            }
        )
    ]

async def execute_file_tool(name: str, arguments: dict):
    # ─── Security: validate path is inside allowed directory ──
    raw_path = arguments.get("path", "")
    target = (ALLOWED_DIR / raw_path).resolve()
    # is_relative_to (Python 3.9+) avoids the prefix-match pitfall where
    # a sibling like ~/Documents-evil would pass a startswith() check
    if not target.is_relative_to(ALLOWED_DIR):
        return [TextContent(type="text",
                            text="Error: Access denied. Path outside allowed directory.")]

    if name == "read_file":
        if not target.exists():
            return [TextContent(type="text", text=f"Error: File not found: {target}")]
        content = target.read_text(encoding="utf-8")
        return [TextContent(type="text", text=f"File: {target}\n\n{content}")]

    elif name == "write_file":
        target.parent.mkdir(parents=True, exist_ok=True)
        # write_text() always truncates, so open explicitly to honor append mode
        mode = "a" if arguments.get("append") else "w"
        with target.open(mode, encoding="utf-8") as f:
            f.write(arguments["content"])
        return [TextContent(type="text", text=f"✅ Written to {target}")]

    elif name == "list_directory":
        items = [f"{'📁' if p.is_dir() else '📄'} {p.name}"
                 for p in sorted(target.iterdir())]
        return [TextContent(type="text", text="\n".join(items) or "Empty directory")]

    raise ValueError(f"Unknown tool: {name}")
```
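The containment check deserves its own spotlight: a naive `startswith` string comparison can be fooled by a sibling directory such as `Documents-evil`. Here is a minimal standalone sketch of the safe pattern, using `Path.is_relative_to` (Python 3.9+); the sandbox directory name is hypothetical:

```python
from pathlib import Path

ALLOWED = Path("/srv/mcp-sandbox/files")  # hypothetical sandbox root

def is_safe(raw: str) -> bool:
    # resolve() collapses ".." segments *before* the containment check,
    # so traversal attempts can't sneak past it
    target = (ALLOWED / raw).resolve()
    return target.is_relative_to(ALLOWED)

print(is_safe("notes/todo.txt"))    # inside the sandbox
print(is_safe("../../etc/passwd"))  # traversal attempt
```

The order matters: resolve first, then compare. Checking the raw string and resolving afterwards reopens the hole.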
Resources — Exposing Data Sources
Resources are like read-only data endpoints. Think of them as URLs that the AI can fetch. A file, a database record, a configuration value, a real-time metric. Unlike Tools (which DO things), Resources just expose data.
Tool: Has side effects, takes parameters, performs actions. Example: search_database(query="..."), send_email(to="..."), create_file(path="...").
Resource: Static or semi-static data that can be fetched by URI. Example: the current config file, a static lookup table, the contents of a specific known document. Resources have URIs like file:///config.json or db://users/schema.
```python
import json
import os

from mcp.types import Resource, TextResourceContents

@app.list_resources()
async def list_resources():
    """Returns all resources available to the AI."""
    return [
        Resource(
            uri="config://app/settings",
            name="Application Settings",
            description="Current app configuration. Read this when user asks about settings.",
            mimeType="application/json"
        ),
        Resource(
            uri="db://schema/tables",
            name="Database Schema",
            description="List of all database tables and columns. Read before writing SQL queries.",
            mimeType="text/plain"
        )
    ]

@app.read_resource()
async def read_resource(uri: str):
    """Called when the AI wants to read a specific resource by URI."""
    if uri == "config://app/settings":
        settings = {
            "environment": os.getenv("APP_ENV", "development"),
            "version": "1.0.0",
            "features": ["rag", "mcp", "gemini"]
        }
        return [TextResourceContents(
            uri=uri,
            mimeType="application/json",
            text=json.dumps(settings, indent=2)
        )]
    elif uri == "db://schema/tables":
        schema = """Tables:
- users (id, email, name, created_at)
- documents (id, filename, content, chunks_count, uploaded_at)
- queries (id, user_id, query_text, response, timestamp)"""
        return [TextResourceContents(uri=uri, mimeType="text/plain", text=schema)]
    raise ValueError(f"Unknown resource: {uri}")
```
Prompts — Reusable Prompt Templates
Prompts are server-defined templates that the host can invoke. Think of them as slash commands (/summarize, /analyze, /debug) that appear in the UI. They combine static instructions with dynamic arguments.
```python
from mcp.types import (Prompt, PromptArgument, GetPromptResult,
                       PromptMessage, TextContent)

@app.list_prompts()
async def list_prompts():
    return [
        Prompt(
            name="analyze_document",
            description="Deep analysis of a document: summary, key insights, action items",
            arguments=[
                PromptArgument(name="document_name", description="Name of the document", required=True),
                PromptArgument(name="focus", description="Analysis focus: 'technical', 'business', 'legal'", required=False)
            ]
        ),
        Prompt(
            name="debug_code",
            description="Analyze code for bugs, performance issues, and improvements",
            arguments=[
                PromptArgument(name="language", description="Programming language", required=True)
            ]
        )
    ]

@app.get_prompt()
async def get_prompt(name: str, arguments: dict):
    if name == "analyze_document":
        doc = arguments.get("document_name")
        focus = arguments.get("focus", "general")
        return GetPromptResult(
            description=f"Analyze {doc}",
            messages=[
                PromptMessage(
                    role="user",
                    content=TextContent(type="text", text=f"""
Please analyze '{doc}' with a {focus} focus. Provide:
1. Executive Summary (3-4 sentences)
2. Key Insights (5 bullet points)
3. Action Items (prioritized list)
4. Risks or Concerns
Use the read_file tool to access the document first.
""")
                )
            ]
        )
    # debug_code is left as an exercise — fail loudly for anything unhandled
    raise ValueError(f"Unknown prompt: {name}")
```
Real MCP Server — Files + SQLite Database
Now let's put it all together into a production-grade server. This one gives Claude access to your file system AND a SQLite database. This is the kind of server you'd put in a portfolio.
```python
from pathlib import Path
import asyncio
import json
import os
import sqlite3

from dotenv import load_dotenv
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

load_dotenv()

app = Server("personal-assistant-mcp")

DB_PATH = Path(os.getenv("DB_PATH", "data/assistant.db"))
FILES_DIR = Path(os.getenv("FILES_DIR", str(Path.home() / "Documents"))).resolve()

# ─── Database helper ─────────────────────────────────────
def run_query(sql: str, params=()) -> str:
    """Run a SQL query. Returns results as a formatted string."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        cursor = conn.execute(sql, params)
        if sql.strip().upper().startswith("SELECT"):
            rows = [dict(r) for r in cursor.fetchall()]
            return json.dumps(rows, indent=2, default=str)
        conn.commit()
        return f"Query OK. Rows affected: {cursor.rowcount}"

# ─── TOOLS ───────────────────────────────────────────────
@app.list_tools()
async def list_tools():
    return [
        Tool(name="read_file",
             description="Read a file's contents. Use when user wants to open or view a document.",
             inputSchema={"type": "object",
                          "properties": {"path": {"type": "string"}},
                          "required": ["path"]}),
        Tool(name="query_database",
             description="Run a SQL SELECT query on the local database. Use for data retrieval questions.",
             inputSchema={"type": "object",
                          "properties": {"sql": {"type": "string", "description": "SQL SELECT query"}},
                          "required": ["sql"]}),
        Tool(name="search_files",
             description="Search for files by name or extension. Use when user asks to find files.",
             inputSchema={"type": "object",
                          "properties": {"pattern": {"type": "string", "description": "Search pattern like *.pdf or report*"}},
                          "required": ["pattern"]}),
        Tool(name="get_current_time",
             description="Get current date and time. Use for any time-sensitive queries.",
             inputSchema={"type": "object", "properties": {}}),
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "read_file":
        path = (FILES_DIR / arguments["path"]).resolve()
        if not path.is_relative_to(FILES_DIR):
            return [TextContent(type="text", text="Access denied")]
        content = path.read_text(encoding="utf-8", errors="replace")
        return [TextContent(type="text", text=content[:10000])]  # limit size

    elif name == "query_database":
        sql = arguments["sql"]
        if not sql.strip().upper().startswith("SELECT"):
            return [TextContent(type="text", text="Only SELECT queries allowed for safety")]
        return [TextContent(type="text", text=run_query(sql))]

    elif name == "search_files":
        matches = list(FILES_DIR.rglob(arguments["pattern"]))
        result = "\n".join(str(p.relative_to(FILES_DIR)) for p in matches[:50])
        return [TextContent(type="text", text=result or "No files found")]

    elif name == "get_current_time":
        from datetime import datetime
        return [TextContent(type="text", text=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))]

    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (r, w):
        await app.run(r, w, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
Connecting to Claude Desktop
Claude Desktop has built-in MCP support. You just need to edit one JSON config file and restart the app. This is the fastest way to see your server working with a real AI.
macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
Windows: `%APPDATA%\Claude\claude_desktop_config.json`

You can register multiple servers — each runs as a separate process. Note that this file is strict JSON, so comments are not allowed:

```json
{
  "mcpServers": {
    "personal-assistant": {
      "command": "python",
      "args": ["/absolute/path/to/mcp-server/src/server.py"],
      "env": {
        "DB_PATH": "/absolute/path/to/data/assistant.db",
        "FILES_DIR": "/Users/yourname/Documents"
      }
    },
    "rag-system": {
      "command": "python",
      "args": ["/path/to/rag-mcp-server/server.py"]
    }
  }
}
```
Claude Desktop spawns your server as a subprocess. Relative paths will fail because the working directory is unknown. Always use full absolute paths in the config. Test by running python /absolute/path/to/server.py from a fresh terminal first.
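A quick way to catch the relative-path mistake before restarting Claude Desktop is to lint the config yourself. A hedged sketch — the config is inlined here for illustration; in practice you would `json.load` the real file:

```python
import json
from pathlib import Path

# Inline sample standing in for claude_desktop_config.json
config = json.loads("""
{
  "mcpServers": {
    "personal-assistant": {
      "command": "python",
      "args": ["/absolute/path/to/mcp-server/src/server.py"]
    }
  }
}
""")

problems = []
for name, spec in config["mcpServers"].items():
    for arg in spec.get("args", []):
        # Claude Desktop spawns servers with an unknown working directory,
        # so every script path must be absolute
        if not Path(arg).is_absolute():
            problems.append(f"{name}: relative path {arg!r}")

print(problems or "config looks OK")
```

The same loop is a natural place to add other checks, such as verifying that each script path actually exists on disk.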
Production, Security & Remote Servers
Stdio works great locally. But for remote deployment — like giving Claude access to your cloud database, or sharing your server with a team — you need HTTP transport.
```python
import os

import uvicorn
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.responses import Response
from starlette.routing import Route

# Same app = Server("...") as before — just swap the transport layer
sse = SseServerTransport("/messages/")

async def handle_sse(request):
    # Optional: check API key for authentication
    api_key = request.headers.get("x-api-key")
    if api_key != os.getenv("MCP_API_KEY"):
        return Response("Unauthorized", status_code=401)
    # request._send is a private Starlette attribute, but this is the
    # pattern the MCP SDK's SSE examples use
    async with sse.connect_sse(request.scope, request.receive, request._send) as streams:
        await app.run(streams[0], streams[1], app.create_initialization_options())

starlette_app = Starlette(
    routes=[
        Route("/sse", endpoint=handle_sse),
        Route("/messages/", endpoint=sse.handle_post_message, methods=["POST"])
    ]
)

if __name__ == "__main__":
    uvicorn.run(starlette_app, host="0.0.0.0", port=8080)
```
Before sharing your MCP server with anyone:
- All file paths validated against allowed directory (path traversal protection)
- Database: only SELECT queries allowed through tool (no DELETE/DROP)
- API keys loaded from environment variables, not hardcoded
- Tool descriptions are clear and specific (tested with Claude)
- Error handling: all tool functions handle exceptions gracefully
- Logging added so you can debug what Claude is calling and why
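The last checklist item has a gotcha worth spelling out: for stdio-based servers, never log to stdout — that stream carries the JSON-RPC messages, and stray prints will corrupt it. A minimal sketch that routes logs to stderr instead (the logger name is a placeholder):

```python
import logging
import sys

# stdout belongs to the JSON-RPC transport; send all logging to stderr
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)

log = logging.getLogger("mcp-server")  # hypothetical logger name
log.addHandler(handler)
log.setLevel(logging.INFO)

# Log every tool invocation so you can see what Claude is calling and why
log.info("tool called: %s args=%s", "read_file", {"path": "notes.txt"})
```

With this in place, `tail -f` on Claude Desktop's MCP log files shows exactly which tools were invoked and with what arguments.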
MCP + RAG — The Ultimate Portfolio Project
You've built a RAG system. You've built an MCP server. Now combine them. Give Claude an MCP tool that queries your RAG backend. This is genuinely impressive — it means Claude can upload documents AND answer questions about them, all through natural conversation.
Without MCP: User opens your Angular app → uploads doc → types question → gets answer.
With MCP + RAG: User tells Claude Desktop "Please read this PDF and answer questions about it" → Claude uses your MCP upload_document tool → then uses query_rag tool to answer questions → all in a single conversation, no UI needed.
```python
import os
from pathlib import Path

import httpx
from mcp.types import Tool, TextContent

RAG_BASE_URL = os.getenv("RAG_API_URL", "http://localhost:8000/api")

def get_rag_tools():
    return [
        Tool(
            name="upload_to_rag",
            description="Upload a document to the RAG knowledge base. Use when user wants to index a file for Q&A.",
            inputSchema={
                "type": "object",
                "properties": {
                    "file_path": {"type": "string", "description": "Absolute path to file to upload"}
                },
                "required": ["file_path"]
            }
        ),
        Tool(
            name="query_rag",
            description="Ask a question against the indexed documents in the RAG system. Use when user asks about document contents.",
            inputSchema={
                "type": "object",
                "properties": {
                    "question": {"type": "string"},
                    "provider": {"type": "string", "enum": ["openai", "gemini", "puter"], "default": "openai"}
                },
                "required": ["question"]
            }
        ),
        Tool(
            name="list_rag_documents",
            description="List all documents currently indexed in the RAG system.",
            inputSchema={"type": "object", "properties": {}}
        )
    ]

async def execute_rag_tool(name: str, arguments: dict):
    async with httpx.AsyncClient(timeout=60.0) as client:
        if name == "upload_to_rag":
            path = Path(arguments["file_path"])
            with open(path, "rb") as f:
                response = await client.post(
                    f"{RAG_BASE_URL}/upload",
                    files={"file": (path.name, f, "application/octet-stream")}
                )
            data = response.json()
            return [TextContent(type="text", text=
                f"✅ Uploaded '{data['filename']}' — {data['chunks_created']} chunks indexed")]

        elif name == "query_rag":
            response = await client.post(f"{RAG_BASE_URL}/query", json={
                "query": arguments["question"],
                "provider": arguments.get("provider", "openai"),
                "top_k": 5
            })
            data = response.json()
            sources = ", ".join(data.get("sources", []))
            return [TextContent(type="text", text=
                f"{data['answer']}\n\n📚 Sources: {sources}\n⏱ {data['processing_time_ms']:.0f}ms")]

        elif name == "list_rag_documents":
            response = await client.get(f"{RAG_BASE_URL}/stats")
            data = response.json()
            return [TextContent(type="text", text=
                f"📄 {data['total_documents']} documents, {data['total_chunks']} chunks indexed")]

        raise ValueError(f"Unknown tool: {name}")
```
You're an MCP developer when you can:
- Explain MCP's three primitives (Tools, Resources, Prompts) and when to use each
- Build a working server from scratch with Python SDK in under 30 minutes
- Connect it to Claude Desktop and have it work correctly
- Describe the JSON-RPC lifecycle from user message to tool result
- Implement path security and SQL injection prevention in your tools
- Deploy a remote MCP server with HTTP+SSE transport and authentication
- Combine MCP with your RAG system to create a genuinely useful portfolio piece