Your MCP Curriculum

Build an MCP Server
From Scratch

Model Context Protocol — the open standard that lets AI models like Claude talk to your tools, files, databases, and APIs. You already built a RAG system. Now give it superpowers.

Python SDK
Tools / Resources / Prompts
Claude Desktop
File System Server
Database Server
MCP + RAG
10 Chapters · 3 Primitives · 100% From Scratch
Chapter 01

What is MCP — and Why Does It Matter?

Model Context Protocol (MCP) is an open standard created by Anthropic in November 2024. It defines a universal way for AI models to connect to external tools, data sources, and services. Think of it as USB-C for AI — one standard connector for everything.

💡 The Big Insight

Before MCP, every AI integration was custom. Want Claude to read files? Write custom code. Connect to a database? More custom code. Search the web? Even more. MCP is the single standard that replaces all of that. You build an MCP Server once, and any MCP-compatible AI (Claude, GPT-4, etc.) can use it immediately.

🤖
MCP Host
The AI application that USES your tools. Examples: Claude Desktop, Claude.ai, Cursor IDE, your custom chatbot built with the API.
🔌
MCP Client
Sits inside the host. Manages the connection to MCP servers, sends requests, receives results. Usually invisible — the SDK handles it.
⚙️
MCP Server
The program YOU build. Exposes tools (functions), resources (data), and prompts (templates) that the AI can use. Can be local or remote.
📡
Transport Layer
How host ↔ server communicate. Stdio: local process via stdin/stdout. HTTP+SSE: remote server over the network. Both use JSON-RPC 2.0.
🆚 MCP vs Function Calling — What's the Difference?

Function Calling (OpenAI's function calling, Anthropic's tool_use): Tools are defined per request, in process. You declare them in every API call. It works well for simple use cases but doesn't scale: every application has to maintain its own copy of the tool definitions.

MCP: Tools live in a separate server process. Any application can discover and use them automatically. Build the server once, use it everywhere. Anthropic calls it "moving from N×M integrations to N+M."
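The N×M claim is just counting: with M apps and N tools, bespoke integration needs one connector per (app, tool) pair, while MCP needs one client per app plus one server per tool. A tiny illustration with made-up numbers:

```python
apps, tools = 5, 4

custom_connectors = apps * tools   # every app wires up every tool itself: 20
mcp_components = apps + tools      # 5 MCP-aware apps + 4 MCP servers:     9

print(custom_connectors, mcp_components)  # → 20 9
```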

Aspect         │ Traditional Integration    │ MCP
───────────────┼────────────────────────────┼───────────────────────────────
Setup per app  │ Custom code each time      │ One server, reuse everywhere
Discovery      │ Hardcoded tool definitions │ Auto-discovery at runtime
Portability    │ Tied to one AI provider    │ Works with any MCP client
State          │ Stateless per request      │ Server maintains state
Security       │ Credentials in app code    │ Credentials in isolated server
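The first two table rows are easiest to see in code: with function calling, the tool schema is resent inside every API request; with MCP, the same schema is declared once server-side and fetched by hosts via tools/list. A sketch (the request shape follows Anthropic's Messages API; get_weather is a hypothetical tool):

```python
# Function calling: the schema travels with EVERY request, in every app
api_request = {
    "model": "claude-...",          # placeholder model id
    "max_tokens": 1024,
    "tools": [{
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "input_schema": {           # snake_case in the Messages API
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
}

# MCP: the same schema lives once, server-side, discovered at runtime
mcp_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "inputSchema": {                # camelCase in MCP
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Identical schema; the difference is where it lives and who maintains it
assert api_request["tools"][0]["input_schema"] == mcp_tool["inputSchema"]
```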
Chapter 02

Architecture Deep Dive

Understanding how MCP works under the hood is essential — not just for debugging, but because it helps you design better servers. Every interaction follows the same lifecycle.

MCP Architecture — Full Flow
  Claude Desktop / Your App (MCP Host)
  ┌───────────────────────────────────────────────┐
  │  MCP Client  ←── manages connection lifecycle │
  │      │                                        │
  │  Claude API  ←── sends tool results to LLM    │
  └──────┼────────────────────────────────────────┘
         │  JSON-RPC 2.0
         │  (stdio OR HTTP+SSE)
  ───────┼─────────────────────────────────────────
         │
  ┌──────▼────────────────────────────────────────┐
  │    YOUR MCP SERVER (Python / TypeScript)      │
  │                                               │
  │  ┌───────────┐  ┌───────────┐  ┌───────────┐  │
  │  │   Tools   │  │ Resources │  │  Prompts  │  │
  │  │(functions)│  │(data srcs)│  │(templates)│  │
  │  └─────┬─────┘  └─────┬─────┘  └─────┬─────┘  │
  │        └──────────────┼──────────────┘        │
  │                       │                       │
  │           ┌───────────▼───────────┐           │
  │           │   External Services   │           │
  │           │   DB / Files / APIs   │           │
  │           └───────────────────────┘           │
  └───────────────────────────────────────────────┘
1
Initialize
Host starts your MCP server process. Server responds with its capabilities: list of tools, resources, and prompts it provides. This is the "handshake."
2
Discover
Host calls tools/list, resources/list, prompts/list. Gets back JSON descriptions including name, description, and input schema. This is how Claude knows what tools exist.
3
User Sends Message
User types a query. Claude sees the message AND the tool definitions injected into its context. Claude decides whether to call a tool.
4
Tool Call
Claude decides to call your_tool with specific arguments. The host sends tools/call to your server via JSON-RPC.
5
Execute & Return
Your server runs the function, queries the database, reads the file, calls the API — whatever it does. Returns the result as text (or image, or structured data).
6
Generate Answer
Claude receives the tool result. Now it has the real data it needed. It generates a natural language answer based on that data and returns it to the user.
💡 JSON-RPC 2.0 — What You Need to Know

All MCP communication uses JSON-RPC 2.0 format. Every request has {"jsonrpc":"2.0","id":1,"method":"tools/call","params":{...}}. You don't write this manually — the Python SDK handles it completely. But knowing it exists helps when you're debugging with logs.
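For orientation, here is what one round trip looks like on the wire: a hypothetical tools/call request and its matching response, written as Python dicts so the shapes are concrete (field names per JSON-RPC 2.0; the params/result layout follows MCP's tools/call method):

```python
import json

# A tools/call request as the host would send it over stdio or HTTP
request = {
    "jsonrpc": "2.0",
    "id": 1,                      # lets the client match the response
    "method": "tools/call",
    "params": {"name": "greet", "arguments": {"name": "Ada"}},
}

# The server's success response echoes the id and carries the tool output
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Hello, Ada!"}]},
}

# One serialized JSON object per message is what crosses the transport
print(json.dumps(request))
```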

Chapter 03

Setup & Your First MCP Server

Let's build a real server from scratch. First, the project structure. Then the minimal server. Then we'll make Claude use it.

# Your MCP project structure
mcp-server/
├── src/
│   ├── server.py                  # ← Main server entry point
│   ├── tools/
│   │   ├── file_tools.py          # ← File system operations
│   │   ├── db_tools.py            # ← Database queries
│   │   └── web_tools.py           # ← HTTP / API calls
│   ├── resources/
│   │   └── data_resources.py
│   └── prompts/
│       └── templates.py
├── pyproject.toml                 # ← Dependencies (uv)
├── claude_desktop_config.json     # ← Config snippet
└── .env                           # ← Secrets (never commit)
Terminal — Install MCP SDK
bash
# Option A: using uv (recommended — much faster than pip)
pip install uv
uv init mcp-server
cd mcp-server
uv add mcp httpx python-dotenv

# Option B: using pip (familiar)
mkdir mcp-server && cd mcp-server
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install mcp httpx python-dotenv
src/server.py — The Minimal MCP Server
python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import asyncio

"""
mcp.Server is the core class — it's the thing that:
  1. Receives JSON-RPC messages from the host
  2. Routes them to your registered handlers
  3. Sends responses back

Think of it like FastAPI's app = FastAPI() — it's the container.
"""

# Create the server with a name and version
app = Server("my-first-server")

# ─── TOOL REGISTRATION ─────────────────────────────────────
# @app.list_tools() is called when the host asks "what can you do?"
# Returns a list of Tool objects with JSON Schema for inputs
@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="greet",
            description="Say hello to someone by name. Use when user wants a greeting.",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The person's name to greet"
                    }
                },
                "required": ["name"]
            }
        )
    ]

# ─── TOOL EXECUTION ─────────────────────────────────────────
# @app.call_tool() is called when the AI actually wants to USE a tool
# name = which tool, arguments = the JSON args the AI provided
@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "greet":
        person = arguments.get("name", "friend")
        return [TextContent(
            type="text",
            text=f"Hello, {person}! This response came from your MCP server. 🎉"
        )]
    raise ValueError(f"Unknown tool: {name}")

# ─── ENTRY POINT ────────────────────────────────────────────
# stdio_server uses stdin/stdout for communication
# This is how Claude Desktop talks to local MCP servers
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream, write_stream,
            app.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())
⚠️ Critical — Write Good Tool Descriptions

The description field is NOT documentation for humans — it's how Claude decides whether to call your tool. Write it like you're telling the AI when to use it: "Use this tool when the user asks about files, documents, or wants to read/write to the filesystem." Bad descriptions = Claude never calls your tool or calls it at wrong times.
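A before/after pair makes the point (illustrative strings; the tool itself is hypothetical):

```python
# Bad: describes the implementation; gives Claude no signal for WHEN to call it
bad_description = "Reads a file using pathlib and returns its text."

# Good: leads with user intent, so the model can match it to queries
good_description = (
    "Read the contents of a text file from the user's Documents folder. "
    "Use when the user asks to open, read, view, or summarize a file."
)

assert "Use when" in good_description  # the intent cue is the key difference
```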

Chapter 04

Tools — Functions the AI Can Call

Tools are the most powerful MCP primitive. They let Claude execute code, query databases, call APIs, manipulate files — anything. A Tool is a function with a JSON Schema for its inputs. Let's build real, useful tools.

src/tools/file_tools.py — Real File System Tools
python
import os
from pathlib import Path
from mcp.types import Tool, TextContent
from typing import Sequence

# Define allowed base directory — security: never allow paths outside this!
ALLOWED_DIR = Path.home() / "Documents"

def get_file_tools() -> Sequence[Tool]:
    return [
        Tool(
            name="read_file",
            description="Read the contents of a file. Use when user asks to open, read, or view a file.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "File path to read"}
                },
                "required": ["path"]
            }
        ),
        Tool(
            name="write_file",
            description="Write or append content to a file. Use when user asks to save, create, or update a file.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "content": {"type": "string", "description": "Content to write"},
                    "append": {"type": "boolean", "default": False}
                },
                "required": ["path", "content"]
            }
        ),
        Tool(
            name="list_directory",
            description="List files in a directory. Use when user asks what files exist, to browse folder contents.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Directory path to list"}
                },
                "required": ["path"]
            }
        )
    ]

async def execute_file_tool(name: str, arguments: dict):
    # ─── Security: validate path is inside allowed directory ──
    raw_path = arguments.get("path", "")
    target = (ALLOWED_DIR / raw_path).resolve()

    # is_relative_to (Python 3.9+) avoids the prefix-match trap where a sibling
    # directory such as ~/DocumentsBackup would slip past a startswith() check
    if not target.is_relative_to(ALLOWED_DIR):
        return [TextContent(type="text", text="Error: Access denied. Path outside allowed directory.")]

    if name == "read_file":
        if not target.exists():
            return [TextContent(type="text", text=f"Error: File not found: {target}")]
        content = target.read_text(encoding="utf-8")
        return [TextContent(type="text", text=f"File: {target}\n\n{content}")]

    elif name == "write_file":
        mode = "a" if arguments.get("append") else "w"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(arguments["content"], encoding="utf-8")
        return [TextContent(type="text", text=f"✅ Written to {target}")]

    elif name == "list_directory":
        if not target.is_dir():
            return [TextContent(type="text", text=f"Error: Not a directory: {target}")]
        items = [f"{'📁' if p.is_dir() else '📄'} {p.name}" for p in sorted(target.iterdir())]
        return [TextContent(type="text", text="\n".join(items) or "Empty directory")]

    raise ValueError(f"Unknown tool: {name}")
Interview Q
"How does Claude decide which tool to call in MCP?"
Claude sees tool descriptions injected into its system context at the start of the conversation. When you send a message, Claude uses its reasoning to decide: (1) does this query need external data? (2) which tool description best matches the intent? (3) what arguments to pass? The AI never calls a tool randomly — it constructs arguments from the user's message and tool schema. This is why description quality is critical: poor descriptions lead to wrong tool selection or no tool use.
Chapter 05

Resources — Exposing Data Sources

Resources are like read-only data endpoints. Think of them as URLs that the AI can fetch. A file, a database record, a configuration value, a real-time metric. Unlike Tools (which DO things), Resources just expose data.

📌 Tools vs Resources — When to Use Each

Tool: Has side effects, takes parameters, performs actions. Example: search_database(query="..."), send_email(to="..."), create_file(path="...").

Resource: Static or semi-static data that can be fetched by URI. Example: the current config file, a static lookup table, the contents of a specific known document. Resources have URIs like file:///config.json or db://users/schema.

src/resources/data_resources.py — Resources
python
from mcp.types import Resource, TextResourceContents
import json, os

# `app` is the Server instance created in src/server.py; in a multi-file
# layout, import it here or register these handlers in that file

@app.list_resources()
async def list_resources():
    """Returns all resources available to the AI."""
    return [
        Resource(
            uri="config://app/settings",
            name="Application Settings",
            description="Current app configuration. Read this when user asks about settings.",
            mimeType="application/json"
        ),
        Resource(
            uri="db://schema/tables",
            name="Database Schema",
            description="List of all database tables and columns. Read before writing SQL queries.",
            mimeType="text/plain"
        )
    ]

@app.read_resource()
async def read_resource(uri: str):
    """Called when the AI wants to read a specific resource by URI."""

    if uri == "config://app/settings":
        settings = {
            "environment": os.getenv("APP_ENV", "development"),
            "version": "1.0.0",
            "features": ["rag", "mcp", "gemini"]
        }
        return [TextResourceContents(
            uri=uri,
            mimeType="application/json",
            text=json.dumps(settings, indent=2)
        )]

    elif uri == "db://schema/tables":
        schema = """Tables:
  - users (id, email, name, created_at)
  - documents (id, filename, content, chunks_count, uploaded_at)
  - queries (id, user_id, query_text, response, timestamp)"""
        return [TextResourceContents(uri=uri, mimeType="text/plain", text=schema)]

    raise ValueError(f"Unknown resource: {uri}")
Chapter 06

Prompts — Reusable Prompt Templates

Prompts are server-defined templates that the host can invoke. Think of them as slash commands (/summarize, /analyze, /debug) that appear in the UI. They combine static instructions with dynamic arguments.

src/prompts/templates.py — Prompt Templates
python
from mcp.types import Prompt, PromptArgument, GetPromptResult, PromptMessage, TextContent

# `app` is the Server instance from src/server.py, as in the previous chapters

@app.list_prompts()
async def list_prompts():
    return [
        Prompt(
            name="analyze_document",
            description="Deep analysis of a document: summary, key insights, action items",
            arguments=[
                PromptArgument(name="document_name", description="Name of the document", required=True),
                PromptArgument(name="focus", description="Analysis focus: 'technical', 'business', 'legal'", required=False)
            ]
        ),
        Prompt(
            name="debug_code",
            description="Analyze code for bugs, performance issues, and improvements",
            arguments=[
                PromptArgument(name="language", description="Programming language", required=True)
            ]
        )
    ]

@app.get_prompt()
async def get_prompt(name: str, arguments: dict):
    if name == "analyze_document":
        doc = arguments.get("document_name")
        focus = arguments.get("focus", "general")
        return GetPromptResult(
            description=f"Analyze {doc}",
            messages=[
                PromptMessage(
                    role="user",
                    content=TextContent(type="text", text=f"""
Please analyze '{doc}' with a {focus} focus. Provide:
1. Executive Summary (3-4 sentences)
2. Key Insights (5 bullet points)
3. Action Items (prioritized list)
4. Risks or Concerns
Use the read_file tool to access the document first.
""")
                )
            ]
        )
    # (the debug_code handler follows the same pattern and is omitted here)
    raise ValueError(f"Unknown prompt: {name}")
Chapter 07

Real MCP Server — Files + SQLite Database

Now let's put it all together into a production-grade server. This one gives Claude access to your file system AND a SQLite database. This is the kind of server you'd put in a portfolio.

src/server.py — Complete Production Server
python
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent, Resource, TextResourceContents
from pathlib import Path
import sqlite3, asyncio, json, os
from dotenv import load_dotenv

load_dotenv()

app = Server("personal-assistant-mcp")

DB_PATH = Path(os.getenv("DB_PATH", "data/assistant.db"))
FILES_DIR = Path(os.getenv("FILES_DIR", str(Path.home() / "Documents")))

# ─── Database helper ─────────────────────────────────────
def run_query(sql: str, params=()) -> str:
    """Run a SQL query safely. Returns results as formatted string."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        cursor = conn.execute(sql, params)
        if sql.strip().upper().startswith('SELECT'):
            rows = [dict(r) for r in cursor.fetchall()]
            return json.dumps(rows, indent=2, default=str)
        conn.commit()
        return f"Query OK. Rows affected: {cursor.rowcount}"

# ─── TOOLS ───────────────────────────────────────────────
@app.list_tools()
async def list_tools():
    return [
        Tool(name="read_file",
             description="Read a file's contents. Use when user wants to open or view a document.",
             inputSchema={"type":"object","properties":{"path":{"type":"string"}},"required":["path"]}),

        Tool(name="query_database",
             description="Run a SQL SELECT query on the local database. Use for data retrieval questions.",
             inputSchema={"type":"object","properties":{"sql":{"type":"string","description":"SQL SELECT query"}},"required":["sql"]}),

        Tool(name="search_files",
             description="Search for files by name or extension. Use when user asks to find files.",
             inputSchema={"type":"object","properties":{"pattern":{"type":"string","description":"Search pattern like *.pdf or report*"}},"required":["pattern"]}),

        Tool(name="get_current_time",
             description="Get current date and time. Use for any time-sensitive queries.",
             inputSchema={"type":"object","properties":{}}),
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "read_file":
        path = (FILES_DIR / arguments["path"]).resolve()
        # is_relative_to (Python 3.9+) blocks ../ traversal and sibling-name
        # tricks that a plain startswith() prefix check would miss
        if not path.is_relative_to(FILES_DIR):
            return [TextContent(type="text", text="Access denied")]
        if not path.is_file():
            return [TextContent(type="text", text=f"File not found: {path}")]
        content = path.read_text(encoding="utf-8", errors="replace")
        return [TextContent(type="text", text=content[:10000])]  # limit size

    elif name == "query_database":
        sql = arguments["sql"]
        if not sql.strip().upper().startswith("SELECT"):
            return [TextContent(type="text", text="Only SELECT queries allowed for safety")]
        result = run_query(sql)
        return [TextContent(type="text", text=result)]

    elif name == "search_files":
        pattern = arguments["pattern"]
        matches = list(FILES_DIR.rglob(pattern))
        result = "\n".join(str(p.relative_to(FILES_DIR)) for p in matches[:50])
        return [TextContent(type="text", text=result or "No files found")]

    elif name == "get_current_time":
        from datetime import datetime
        return [TextContent(type="text", text=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))]

    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (r, w):
        await app.run(r, w, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
Chapter 08

Connecting to Claude Desktop

Claude Desktop has built-in MCP support. You just need to edit one JSON config file and restart the app. This is the fastest way to see your server working with a real AI.

1
Find the Config File
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
2
Add Your Server
Edit the config file to add your server (shown below). You can register multiple servers; each runs as a separate process. The file must be strict JSON: no comments, no trailing commas.
3
Restart Claude Desktop
Fully quit and reopen. Look for the 🔌 icon in the bottom-left of the input box — that shows active MCP servers.
claude_desktop_config.json
json
{
  "mcpServers": {
    "personal-assistant": {
      "command": "python",
      "args": ["/absolute/path/to/mcp-server/src/server.py"],
      "env": {
        "DB_PATH": "/absolute/path/to/data/assistant.db",
        "FILES_DIR": "/Users/yourname/Documents"
      }
    },

    "rag-system": {
      "command": "python",
      "args": ["/path/to/rag-mcp-server/server.py"]
    }
  }
}
⚠️ Always Use Absolute Paths

Claude Desktop spawns your server as a subprocess. Relative paths will fail because the working directory is unknown. Always use full absolute paths in the config. Test by running python /absolute/path/to/server.py from a fresh terminal first.
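Since the file must be strict JSON, a quick sanity check before restarting Claude Desktop is to round-trip it through Python's json module, which rejects comments and trailing commas. A minimal sketch (the sample config and paths are illustrative):

```python
import json
from pathlib import Path

def check_config(text: str) -> list[str]:
    """Parse the config (strict JSON, no comments!) and return server names."""
    config = json.loads(text)   # raises ValueError on comments / trailing commas
    for name, spec in config["mcpServers"].items():
        # Claude Desktop spawns each server with an unknown working directory,
        # so the script path must be absolute
        assert Path(spec["args"][0]).is_absolute(), f"{name}: use an absolute path"
    return list(config["mcpServers"])

sample = '{"mcpServers": {"personal-assistant": {"command": "python", "args": ["/abs/path/server.py"]}}}'
print(check_config(sample))  # → ['personal-assistant']
```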

Chapter 09

Production, Security & Remote Servers

Stdio works great locally. But for remote deployment — like giving Claude access to your cloud database, or sharing your server with a team — you need HTTP transport.

🏠
Stdio Transport
Local only. Fast, simple, zero network config. Perfect for personal tools and local files. What we've been building.
🌐
HTTP + SSE Transport
Network transport using Server-Sent Events. Can be deployed to cloud. Multiple clients can connect. Requires authentication.
🔐
Authentication
For remote servers: use API key in headers, or OAuth 2.0. Never expose an unauthenticated MCP server to the internet.
🐳
Docker Deploy
Package server in a Docker container. Deploy to any cloud. Same Dockerfile pattern as your RAG system backend.
src/server_http.py — Remote HTTP Server
python
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Route
import uvicorn
import os  # needed for os.getenv below

# Same app = Server("...") as before
# Just swap the transport layer

sse = SseServerTransport("/messages/")

async def handle_sse(request):
    # Optional: check API key for authentication
    api_key = request.headers.get("x-api-key")
    if api_key != os.getenv("MCP_API_KEY"):
        from starlette.responses import Response
        return Response("Unauthorized", status_code=401)

    async with sse.connect_sse(request.scope, request.receive, request._send) as streams:
        await app.run(streams[0], streams[1], app.create_initialization_options())

starlette_app = Starlette(
    routes=[
        Route("/sse", endpoint=handle_sse),
        Route("/messages/", endpoint=sse.handle_post_message, methods=["POST"])
    ]
)

if __name__ == "__main__":
    uvicorn.run(starlette_app, host="0.0.0.0", port=8080)
✅ Production Checklist

Before sharing your MCP server with anyone:

  • All file paths validated against allowed directory (path traversal protection)
  • Database: only SELECT queries allowed through tool (no DELETE/DROP)
  • API keys loaded from environment variables, not hardcoded
  • Tool descriptions are clear and specific (tested with Claude)
  • Error handling: all tool functions handle exceptions gracefully
  • Logging added so you can debug what Claude is calling and why
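The first checklist item is cheap to unit-test. A minimal sketch of a containment check and the traversal cases it must reject (the allowed directory is illustrative; requires Python 3.9+ for is_relative_to):

```python
from pathlib import Path

ALLOWED_DIR = Path("/srv/mcp-files")   # hypothetical allowed root

def is_safe(raw: str) -> bool:
    """True only if raw resolves to a location inside ALLOWED_DIR."""
    target = (ALLOWED_DIR / raw).resolve()
    return target.is_relative_to(ALLOWED_DIR)

assert is_safe("notes/todo.txt")
assert not is_safe("../../etc/passwd")   # classic ../ traversal
assert not is_safe("/etc/passwd")        # absolute path overrides the join
print("path checks pass")
```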
Chapter 10

MCP + RAG — The Ultimate Portfolio Project

You've built a RAG system. You've built an MCP server. Now combine them. Give Claude an MCP tool that queries your RAG backend. This is genuinely impressive — it means Claude can upload documents AND answer questions about them, all through natural conversation.

🚀 What This Unlocks

Without MCP: User opens your Angular app → uploads doc → types question → gets answer.

With MCP + RAG: User tells Claude Desktop "Please read this PDF and answer questions about it" → Claude uses your MCP upload_document tool → then uses query_rag tool to answer questions → all in a single conversation, no UI needed.

src/tools/rag_tools.py — MCP Tools for Your RAG System
python
import httpx
from mcp.types import Tool, TextContent
from pathlib import Path
import os

RAG_BASE_URL = os.getenv("RAG_API_URL", "http://localhost:8000/api")

def get_rag_tools():
    return [
        Tool(
            name="upload_to_rag",
            description="Upload a document to the RAG knowledge base. Use when user wants to index a file for Q&A.",
            inputSchema={
                "type": "object",
                "properties": {
                    "file_path": {"type": "string", "description": "Absolute path to file to upload"}
                },
                "required": ["file_path"]
            }
        ),
        Tool(
            name="query_rag",
            description="Ask a question against the indexed documents in the RAG system. Use when user asks about document contents.",
            inputSchema={
                "type": "object",
                "properties": {
                    "question": {"type": "string"},
                    "provider": {"type": "string", "enum": ["openai", "gemini", "puter"], "default": "openai"}
                },
                "required": ["question"]
            }
        ),
        Tool(
            name="list_rag_documents",
            description="List all documents currently indexed in the RAG system.",
            inputSchema={"type": "object", "properties": {}}
        )
    ]

async def execute_rag_tool(name: str, arguments: dict):
    async with httpx.AsyncClient(timeout=60.0) as client:

        if name == "upload_to_rag":
            path = Path(arguments["file_path"])
            with open(path, "rb") as f:
                response = await client.post(
                    f"{RAG_BASE_URL}/upload",
                    files={"file": (path.name, f, "application/octet-stream")}
                )
            data = response.json()
            return [TextContent(type="text", text=
                f"✅ Uploaded '{data['filename']}' — {data['chunks_created']} chunks indexed")]

        elif name == "query_rag":
            response = await client.post(f"{RAG_BASE_URL}/query", json={
                "query": arguments["question"],
                "provider": arguments.get("provider", "openai"),
                "top_k": 5
            })
            data = response.json()
            sources = ", ".join(data.get("sources", []))
            return [TextContent(type="text", text=
                f"{data['answer']}\n\n📚 Sources: {sources}\n⏱ {data['processing_time_ms']:.0f}ms")]

        elif name == "list_rag_documents":
            response = await client.get(f"{RAG_BASE_URL}/stats")
            data = response.json()
            return [TextContent(type="text", text=
                f"📄 {data['total_documents']} documents, {data['total_chunks']} chunks indexed")]

    raise ValueError(f"Unknown tool: {name}")
Interview Q
"Walk me through an architecture where you combined MCP with a RAG system."
I built an MCP server that acts as a bridge to a RAG backend. The server exposes three tools: upload_to_rag (sends documents to a FastAPI endpoint that chunks, embeds, and stores them in ChromaDB), query_rag (sends a question to the retrieval pipeline which finds relevant chunks and passes them to an LLM), and list_documents. Claude Desktop was configured to use this server via stdio transport. The result: a user can have a natural conversation with Claude — "Please read this report and summarize the financial risks" — and Claude automatically uploads the file and queries it without any explicit user interface. The MCP layer handled tool routing; the RAG layer handled knowledge retrieval; the LLM handled synthesis.
🎓 Full MCP Curriculum Complete

You're an MCP developer when you can:

  • Explain MCP's three primitives (Tools, Resources, Prompts) and when to use each
  • Build a working server from scratch with Python SDK in under 30 minutes
  • Connect it to Claude Desktop and have it work correctly
  • Describe the JSON-RPC lifecycle from user message to tool result
  • Implement path security and SQL injection prevention in your tools
  • Deploy a remote MCP server with HTTP+SSE transport and authentication
  • Combine MCP with your RAG system to create a genuinely useful portfolio piece