Context-Sharing MCP Server Implementation

A sophisticated persistent memory system for AI development workflows

This implementation combines Cloudflare Workers, a D1 database, and Workers AI content processing to provide context sharing between different AI tools. It enables handoffs between AI agents while maintaining project continuity and building a searchable knowledge base over time.

Overview

A Cloudflare Workers-based MCP (Model Context Protocol) server that provides intelligent context sharing and persistence across AI development sessions. Built to bridge the gap between different AI tools and maintain project continuity.

Database Schema

```sql
CREATE TABLE IF NOT EXISTS context_snapshots (
  id TEXT PRIMARY KEY,
  project TEXT NOT NULL,
  summary TEXT NOT NULL,
  source TEXT DEFAULT 'unknown',
  metadata TEXT,
  tags TEXT DEFAULT '',
  timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
);
```

Core Architecture

TypeScript Interface

```typescript
interface Context {
  id: string;
  project: string;
  summary: string;
  source: string;
  metadata: string | null;
  tags: string;
  timestamp: string;
}
```
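
The tool handlers shown later reference the Worker's environment bindings (env.DB for D1 and env.AI for Workers AI). A minimal binding interface for those handlers might look like the sketch below, assuming the binding names DB and AI and the types from @cloudflare/workers-types:

```typescript
// Sketch of the environment bindings assumed by the handlers in this document.
// Binding names (DB, AI) must match the Worker's configured bindings;
// D1Database and Ai come from @cloudflare/workers-types.
interface Env {
  DB: D1Database; // D1 database holding context_snapshots
  AI: Ai;         // Workers AI binding used for summaries and tags
}
```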

AI Enhancement Functions

  • Auto-summarization: Uses Cloudflare Workers AI (@cf/meta/llama-3.1-8b-instruct) to generate 2-3 sentence summaries
  • Auto-tagging: Generates 3-5 relevant tags for each context entry
  • Fallback handling: Graceful degradation when AI services are unavailable
```typescript
async function generateSummary(content: string, env: any): Promise<string> {
  try {
    if (env.AI) {
      const response = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
        messages: [{ 
          role: 'user', 
          content: `Summarize in 2-3 sentences: ${content}` 
        }]
      });
      return response.response || content.slice(0, 200) + '...';
    }
  } catch (error) {
    console.error('AI summary failed:', error);
  }
  return content.slice(0, 200) + '...';
}

async function generateTags(content: string, env: any): Promise<string> {
  try {
    if (env.AI) {
      const response = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
        messages: [{ 
          role: 'user', 
          content: `Generate 3-5 relevant tags (comma-separated): ${content}` 
        }]
      });
      return response.response || 'auto-generated';
    }
  } catch (error) {
    console.error('AI tags failed:', error);
  }
  return 'auto-generated';
}
```

Available Tools

save_context

Saves conversation context with AI enhancement

  • Required: project, content
  • Optional: source, metadata
  • Features: Auto-generates summary and tags using AI
  • Output: Confirmation with generated summary and tags
```typescript
case 'save_context':
  const summary = await generateSummary(args.content, env);
  const autoTags = await generateTags(summary, env);

  const id = crypto.randomUUID();
  await env.DB.prepare(
    `INSERT INTO context_snapshots (id, project, summary, source, metadata, tags)
     VALUES (?, ?, ?, ?, ?, ?)`
  ).bind(
    id,
    args.project,
    summary,
    args.source || "mcp",
    args.metadata ? JSON.stringify(args.metadata) : null, // store SQL NULL rather than the string "null"
    autoTags
  ).run();

  result = {
    content: [{
      type: "text",
      text: `Context saved!\nID: ${id}\nSummary: ${summary}\nTags: ${autoTags}`
    }]
  };
  break;
```

load_context

Retrieves recent context for a project

  • Required: project
  • Optional: limit (default: 1, max: 10)
  • Output: Formatted list of contexts with timestamps and tags
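
The load_context handler is not shown above; a minimal sketch that follows the same pattern as the save_context and search_context cases (same switch statement, env.DB binding, and result shape) could look like this:

```typescript
// Sketch of a load_context handler, mirroring the other cases in this document.
// Assumes the same `args`, `env`, and `result` variables as save_context above.
case 'load_context': {
  // Clamp the optional limit to the documented range (default 1, max 10)
  const limit = Math.min(Math.max(Number(args.limit) || 1, 1), 10);

  const rows = await env.DB.prepare(
    `SELECT * FROM context_snapshots WHERE project = ?
     ORDER BY timestamp DESC LIMIT ?`
  ).bind(args.project, limit).all();

  if (rows.results.length === 0) {
    result = {
      content: [{ type: "text", text: `No contexts found for project: "${args.project}"` }]
    };
  } else {
    const contextList = rows.results.map((ctx: any) =>
      `**${ctx.project}** (${ctx.timestamp})\n${ctx.summary}\nTags: ${ctx.tags}`
    ).join('\n\n');

    result = {
      content: [{
        type: "text",
        text: `Loaded ${rows.results.length} context(s) for "${args.project}":\n\n${contextList}`
      }]
    };
  }
  break;
}
```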

search_context

Searches contexts using keyword matching

  • Required: query
  • Optional: project (for scoped search)
  • Features: Searches both summaries and tags
  • Output: Up to 10 matching contexts, most recent first
```typescript
case 'search_context':
  let dbQuery = `SELECT * FROM context_snapshots WHERE (summary LIKE ? OR tags LIKE ?)`;
  let params = [`%${args.query}%`, `%${args.query}%`];

  if (args.project) {
    dbQuery += ` AND project = ?`;
    params.push(args.project);
  }

  dbQuery += ` ORDER BY timestamp DESC LIMIT 10`;
  const searchResults = await env.DB.prepare(dbQuery).bind(...params).all();

  if (searchResults.results.length === 0) {
    result = {
      content: [{ type: "text", text: `No contexts found matching: "${args.query}"` }]
    };
  } else {
    const searchList = searchResults.results.map((ctx: any) => 
      `**${ctx.project}** (${ctx.timestamp})\n${ctx.summary}\nTags: ${ctx.tags}`
    ).join('\n\n');

    result = {
      content: [{
        type: "text",
        text: `Found ${searchResults.results.length} context(s) for "${args.query}":\n\n${searchList}`
      }]
    };
  }
  break;
```

Protocol Implementation

MCP Standard Compliance

  • Full JSON-RPC 2.0 implementation
  • Protocol version: 2025-06-18
  • Handles initialization, tool listing, and execution
  • Proper error handling and CORS support
```typescript
// MCP Protocol Initialization
if (body.method === 'initialize') {
  const response = {
    jsonrpc: "2.0",
    id: body.id,
    result: {
      protocolVersion: "2025-06-18",
      capabilities: {
        tools: {}
      },
      serverInfo: {
        name: "Enhanced Context Manager",
        version: "1.0.0"
      }
    }
  };
  return new Response(JSON.stringify(response), {
    headers: { 
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type'
    }
  });
}
```
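
Tool listing follows the same JSON-RPC response pattern. Below is a sketch of a tools/list handler, with tool names and input schemas taken from the tool descriptions above (the exact descriptions and schema wording are assumptions):

```typescript
// Sketch of tools/list handling in the same style as the initialize response.
// Tool names and input schemas mirror the tool descriptions in this document.
if (body.method === 'tools/list') {
  const response = {
    jsonrpc: "2.0",
    id: body.id,
    result: {
      tools: [
        {
          name: "save_context",
          description: "Save conversation context with an AI-generated summary and tags",
          inputSchema: {
            type: "object",
            properties: {
              project: { type: "string" },
              content: { type: "string" },
              source: { type: "string" },
              metadata: { type: "object" }
            },
            required: ["project", "content"]
          }
        },
        {
          name: "load_context",
          description: "Load recent context for a project",
          inputSchema: {
            type: "object",
            properties: {
              project: { type: "string" },
              limit: { type: "number" }
            },
            required: ["project"]
          }
        },
        {
          name: "search_context",
          description: "Search saved contexts by keyword",
          inputSchema: {
            type: "object",
            properties: {
              query: { type: "string" },
              project: { type: "string" }
            },
            required: ["query"]
          }
        }
      ]
    }
  };
  return new Response(JSON.stringify(response), {
    headers: { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' }
  });
}
```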

Transport Methods

  • Primary endpoint: /mcp for POST requests
  • Alternative: /sse endpoint support
  • CORS-enabled for cross-origin requests
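
The routing these endpoints imply can be expressed in the Worker's fetch handler. The sketch below assumes a handleMcpRequest helper (a hypothetical name; its error handling is sketched under Error Handling below) that wraps the JSON-RPC logic shown earlier:

```typescript
// Sketch of endpoint routing in the Worker fetch handler.
// handleMcpRequest is a hypothetical helper wrapping the JSON-RPC handling above.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // CORS preflight for cross-origin clients
    if (request.method === 'OPTIONS') {
      return new Response(null, {
        headers: {
          'Access-Control-Allow-Origin': '*',
          'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
          'Access-Control-Allow-Headers': 'Content-Type'
        }
      });
    }

    // Primary /mcp endpoint and the alternative /sse path both accept POSTed JSON-RPC
    if ((url.pathname === '/mcp' || url.pathname === '/sse') && request.method === 'POST') {
      return handleMcpRequest(request, env);
    }

    return new Response('Not found', { status: 404 });
  }
};
```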

Error Handling

  • Graceful AI service failures
  • JSON parsing error recovery
  • Unknown method responses
  • Database operation error handling
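
A sketch of the parsing recovery and unknown-method fallback, using standard JSON-RPC 2.0 error codes (the handleMcpRequest name and the exact error messages are assumptions):

```typescript
// Sketch of JSON parsing recovery and an unknown-method fallback using
// standard JSON-RPC 2.0 error codes (-32700 parse error, -32601 method not found).
const jsonHeaders = {
  'Content-Type': 'application/json',
  'Access-Control-Allow-Origin': '*'
};

async function handleMcpRequest(request: Request, env: Env): Promise<Response> {
  let body: any;
  try {
    body = await request.json();
  } catch {
    // JSON parsing error recovery
    return new Response(JSON.stringify({
      jsonrpc: "2.0",
      id: null,
      error: { code: -32700, message: "Parse error: invalid JSON" }
    }), { status: 400, headers: jsonHeaders });
  }

  // ... initialize, tools/list, and tools/call handling as shown above ...

  // Unknown method response
  return new Response(JSON.stringify({
    jsonrpc: "2.0",
    id: body.id ?? null,
    error: { code: -32601, message: `Method not found: ${body.method}` }
  }), { headers: jsonHeaders });
}
```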

Key Features

Intelligent Context Processing

  • Automatic content summarization reduces verbose logs to actionable insights
  • Smart tagging builds a discoverable knowledge base over time
  • Metadata flexibility allows structured data storage

Cross-Session Continuity

  • Project-based organization enables focused context retrieval
  • Source tracking identifies which "agent" created each context
  • Timestamp ordering maintains chronological context flow

Search & Discovery

  • Keyword search across summaries and tags
  • Project-scoped or global search capabilities
  • Configurable result limits for performance

Scalability Considerations

  • Cloudflare Workers serverless architecture
  • D1 database for reliable persistence
  • Request limits and resource management handled by the Workers platform

Use Cases

AI Agent Handoffs

  • Save context when switching between different AI tools
  • Load relevant context to maintain conversation continuity
  • Bridge sessions between Claude Desktop and Claude Code
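
As a concrete example of a handoff, a client can save context through the /mcp endpoint with a standard tools/call request before switching tools (the worker URL below is a placeholder):

```typescript
// Sketch of a client-side handoff: save context before switching AI tools.
// MCP_URL is a placeholder; tools/call uses the standard MCP JSON-RPC shape.
const MCP_URL = "https://<your-worker>.workers.dev/mcp";

async function saveHandoffContext(project: string, content: string): Promise<void> {
  const response = await fetch(MCP_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: {
        name: "save_context",
        arguments: { project, content, source: "claude-desktop" }
      }
    })
  });

  const reply = await response.json();
  console.log(reply); // Confirmation text with the generated summary and tags
}
```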

Project Knowledge Base

  • Persistent storage of architectural decisions
  • Searchable history of implementation choices
  • Team knowledge sharing and onboarding

Development Workflow Integration

  • Context-aware deployments and infrastructure changes
  • Integration with existing MCP ecosystem
  • Support for multi-modal AI development workflows

Implementation Notes

  • Built on Cloudflare Workers platform for global distribution
  • Uses Workers AI for intelligent content processing
  • Follows MCP protocol standards for compatibility
  • Designed for integration with Claude Desktop's MCP framework
