# 🐝 WHOOSH + Claude Integration Guide

A complete guide to integrating your WHOOSH Distributed AI Orchestration Platform with Claude via the Model Context Protocol (MCP).

## 🎯 What This Enables

With the WHOOSH MCP integration, Claude can:

- **🤖 Orchestrate Your AI Cluster** - Assign development tasks across specialized agents
- **📊 Monitor Real-time Progress** - Track task execution and agent utilization
- **🔄 Coordinate Complex Workflows** - Plan and execute multi-step distributed projects
- **📈 Access Live Metrics** - Get cluster status, performance data, and health checks
- **🧠 Make Intelligent Decisions** - Optimize task distribution based on agent capabilities

## 🚀 Quick Setup

### 1. Ensure WHOOSH is Running

```bash
cd /home/tony/AI/projects/whoosh
docker compose ps
```

You should see all services running:

- ✅ `whoosh-backend` on port 8087
- ✅ `whoosh-frontend` on port 3001
- ✅ `prometheus`, `grafana`, `redis`

### 2. Run the Integration Setup

```bash
./scripts/setup_claude_integration.sh
```

This will:

- ✅ Build the MCP server if needed
- ✅ Detect your Claude Desktop configuration location
- ✅ Create the proper MCP configuration
- ✅ Back up any existing config

### 3. Restart Claude Desktop

After running the setup script, restart Claude Desktop to load the WHOOSH MCP server.
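To confirm what the script wrote, you can open `claude_desktop_config.json` and look for the WHOOSH entry. The sketch below shows the rough shape it should take for the default local setup described above; the server key name, file path, and URLs are illustrative and will vary with your installation (see the Advanced Configuration section for the full multi-cluster form).

```json
{
  "mcpServers": {
    "whoosh": {
      "command": "node",
      "args": ["/home/tony/AI/projects/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "http://localhost:8087/api",
        "WHOOSH_WS_URL": "ws://localhost:8087/socket.io"
      }
    }
  }
}
```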
## 🎮 Using Claude with WHOOSH

Once integrated, you can use natural language to control your distributed AI cluster:

### Agent Management

```
"Show me all my registered agents and their current status"

"Register a new agent:
- ID: walnut-kernel-dev
- Endpoint: http://walnut.local:11434
- Model: codellama:34b
- Specialization: kernel development"
```

### Task Creation & Monitoring

```
"Create a high-priority kernel development task to optimize FlashAttention for RDNA3 GPUs. Include constraints for backward compatibility and focus on memory coalescing."

"What's the status of task kernel_dev_1704671234?"

"Show me all pending tasks grouped by specialization"
```

### Complex Project Coordination

```
"Help me coordinate development of a new PyTorch operator:
1. CUDA/HIP kernel implementation (high priority)
2. PyTorch integration layer (medium priority)
3. Performance benchmarks (medium priority)
4. Documentation and examples (low priority)
5. Unit and integration tests (high priority)

Use parallel coordination where dependencies allow."
```

### Cluster Monitoring

```
"What's my cluster status? Show agent utilization and recent performance metrics."

"Give me a summary of completed tasks from the last hour"

"What are the current capabilities of my distributed AI cluster?"
```

### Workflow Management

```
"Create a workflow for distributed model training that includes data preprocessing, training coordination, and result validation across my agents"

"Execute workflow 'distributed-training' with input parameters for ResNet-50"

"Show me the execution history for all workflows"
```

## 🔧 Available MCP Tools

### Agent Management
- **`whoosh_get_agents`** - List all registered agents with status
- **`whoosh_register_agent`** - Register new agents in the cluster

### Task Management
- **`whoosh_create_task`** - Create development tasks for specialized agents
- **`whoosh_get_task`** - Get details of specific tasks
- **`whoosh_get_tasks`** - List tasks with filtering options

### Workflow Management
- **`whoosh_get_workflows`** - List available workflows
- **`whoosh_create_workflow`** - Create new distributed workflows
- **`whoosh_execute_workflow`** - Execute workflows with inputs

### Monitoring & Status
- **`whoosh_get_cluster_status`** - Get comprehensive cluster status
- **`whoosh_get_metrics`** - Retrieve Prometheus metrics
- **`whoosh_get_executions`** - View workflow execution history

### Advanced Coordination
- **`whoosh_coordinate_development`** - Orchestrate complex multi-agent projects
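As a concrete illustration of how these tools are invoked, the arguments Claude passes to `whoosh_create_task` might look roughly like the sketch below. The field names are assumptions inferred from the task details shown in the example session later in this guide; the authoritative argument schema is whatever the MCP server declares.

```json
{
  "type": "kernel_dev",
  "priority": 5,
  "objective": "Optimize FlashAttention kernel for RDNA3 GPUs",
  "constraints": [
    "Maintain backward compatibility",
    "Focus on memory coalescing"
  ]
}
```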
## 📊 Available MCP Resources

Claude can access real-time cluster data through these resources:

- **`whoosh://cluster/status`** - Live cluster health and status
- **`whoosh://agents/list`** - Agent registry with capabilities
- **`whoosh://tasks/active`** - Currently running and pending tasks
- **`whoosh://tasks/completed`** - Recent task results and metrics
- **`whoosh://workflows/available`** - All configured workflows
- **`whoosh://executions/recent`** - Recent workflow executions
- **`whoosh://metrics/prometheus`** - Raw Prometheus metrics
- **`whoosh://capabilities/overview`** - Cluster capabilities summary

## 🏗️ Architecture Overview

```
┌─────────────────┐     MCP Protocol    ┌─────────────────┐
│                 │ ◄─────────────────► │                 │
│  Claude Desktop │                     │   WHOOSH MCP    │
│                 │                     │     Server      │
└─────────────────┘                     └─────────────────┘
                                                 │
                                                 │ HTTP/WebSocket
                                                 ▼
                                        ┌─────────────────┐
                                        │                 │
                                        │  WHOOSH Backend │
                                        │    (FastAPI)    │
                                        └─────────────────┘
                                                 │
                         ┌───────────────────────┼───────────────────────┐
                         ▼                       ▼                       ▼
                  ┌─────────────┐         ┌─────────────┐         ┌─────────────┐
                  │   Agent 1   │         │   Agent 2   │         │   Agent N   │
                  │ (Kernel Dev)│         │(PyTorch Dev)│         │  (Tester)   │
                  └─────────────┘         └─────────────┘         └─────────────┘
```

## 🔍 Example Integration Session

Here's what a complete interaction might look like:

```
You: "What's the current status of my WHOOSH cluster?"

Claude: I'll check your WHOOSH cluster status for you.

[Uses whoosh_get_cluster_status tool]

🐝 **WHOOSH Cluster Status**

🟢 **System**: operational (v1.0.0)
⏱️ **Uptime**: 2h 15m

🤖 **Agents**: 6 total
  • Available: 4
  • Busy: 2

🎯 **Tasks**: 15 total
  • Pending: 3
  • Running: 2
  • Completed: 8
  • Failed: 2

Your cluster is healthy and processing tasks efficiently!

---

You: "Create a kernel optimization task for FlashAttention on RDNA3"

Claude: I'll create a high-priority kernel development task for FlashAttention optimization.

[Uses whoosh_create_task tool]

🎯 Created development task **kernel_dev_1704671234**

📋 Task Details:
  • Type: kernel_dev
  • Priority: 5/5
  • Status: pending
  • Objective: Optimize FlashAttention kernel for RDNA3 GPUs
  • Created: 2024-01-07T14:12:34

The task has been queued and will be assigned to an available kernel_dev agent.

---

You: "Show me all my agents and coordinate a full PyTorch operator development project"

Claude: Let me first check your available agents, then coordinate the development project.

[Uses whoosh_get_agents and whoosh_coordinate_development tools]

📋 Your agent capabilities:
🤖 **walnut-kernel-dev** (kernel_dev) - Available
🤖 **oak-pytorch-dev** (pytorch_dev) - Available
🤖 **ironwood-profiler** (profiler) - Busy (1/2 tasks)
🤖 **acacia-docs** (docs_writer) - Available
🤖 **rosewood-tester** (tester) - Available

🎯 **Development Coordination Initiated**

📋 **Project**: Complete PyTorch operator development
🔄 **Strategy**: mixed (parallel where possible)
🎯 **Tasks Created**: 5

**Task Breakdown:**
  • **kernel_dev_1704671235** (kernel_dev) - Priority 5/5
  • **pytorch_dev_1704671236** (pytorch_dev) - Priority 4/5
  • **profiler_1704671237** (profiler) - Priority 4/5
  • **docs_writer_1704671238** (docs_writer) - Priority 2/5
  • **tester_1704671239** (tester) - Priority 5/5

All tasks have been distributed to specialized agents. Kernel development and testing will run in parallel, followed by PyTorch integration and documentation.
```

## 🛠️ Advanced Configuration

### Custom WHOOSH Backend URL

If your WHOOSH backend is running on a different host or port:

```bash
# Edit the Claude config to point to your WHOOSH instance
vim ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Update the env section:
"env": {
  "WHOOSH_API_URL": "https://your-whoosh-host/api",
  "WHOOSH_WS_URL": "wss://your-whoosh-host/socket.io"
}
```

### Multiple WHOOSH Clusters

You can configure multiple WHOOSH clusters:

```json
{
  "mcpServers": {
    "whoosh-production": {
      "command": "node",
      "args": ["/path/to/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "https://prod-whoosh/api"
      }
    },
    "whoosh-development": {
      "command": "node",
      "args": ["/path/to/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "https://dev-whoosh/api"
      }
    }
  }
}
```

## 🔐 Security Considerations

- 🔒 The MCP server only connects to your local WHOOSH cluster
- 🌐 No external network access is required for the integration
- 🏠 All communication stays within your development environment
- 🔑 Agent endpoints should be on trusted networks only
- 📝 Consider adding authentication if deploying WHOOSH on public networks

## 🐛 Troubleshooting

### MCP Server Won't Start

```bash
# Check that the WHOOSH backend is accessible
curl http://localhost:8087/health

# Test the MCP server manually
cd /home/tony/AI/projects/whoosh/mcp-server
npm run dev
```

### Claude Can't See WHOOSH Tools

1. Verify the Claude Desktop configuration path
2. Check the config file syntax with `json_pp < claude_desktop_config.json`
3. Restart Claude Desktop completely
4. Check the Claude Desktop logs (location varies by OS)

### Agent Connection Issues

```bash
# Verify your agent endpoints are accessible
curl http://your-agent-host:11434/api/tags

# Check the WHOOSH backend logs
docker compose logs whoosh-backend
```

## 🎉 What's Next?

With Claude integrated into your WHOOSH cluster, you can:

1. **🧠 Intelligent Task Planning** - Let Claude analyze requirements and create optimal task breakdowns
2. **🔄 Adaptive Coordination** - Claude can monitor progress and adjust task priorities dynamically
3. **📈 Performance Optimization** - Use Claude to analyze metrics and optimize agent utilization
4. **🚀 Automated Workflows** - Create complex workflows through natural conversation
5. **🐛 Proactive Issue Resolution** - Claude can detect and resolve common cluster issues

**🐝 Welcome to the future of distributed AI development orchestration!**