🐝 WHOOSH + Claude Integration Guide
A complete guide to integrating your WHOOSH Distributed AI Orchestration Platform with Claude via the Model Context Protocol (MCP).
🎯 What This Enables
With WHOOSH MCP integration, Claude can:
- 🤖 Orchestrate Your AI Cluster - Assign development tasks across specialized agents
- 📊 Monitor Real-time Progress - Track task execution and agent utilization
- 🔄 Coordinate Complex Workflows - Plan and execute multi-step distributed projects
- 📈 Access Live Metrics - Get cluster status, performance data, and health checks
- 🧠 Make Intelligent Decisions - Optimize task distribution based on agent capabilities
🚀 Quick Setup
1. Ensure WHOOSH is Running
cd /home/tony/AI/projects/whoosh
docker compose ps
You should see all services running:
- ✅ `whoosh-backend` on port 8087
- ✅ `whoosh-frontend` on port 3001
- ✅ `prometheus`, `grafana`, `redis`
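If the containers are up and you want to confirm the backend is actually serving requests, a quick health check (the same endpoint used later in Troubleshooting) should respond:
```bash
# Sanity check against the backend API (port 8087, as listed above)
curl http://localhost:8087/health
```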
2. Run the Integration Setup
./scripts/setup_claude_integration.sh
This will:
- ✅ Build the MCP server if needed
- ✅ Detect your Claude Desktop configuration location
- ✅ Create the proper MCP configuration
- ✅ Backup any existing config
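For reference, the resulting entry in `claude_desktop_config.json` should look roughly like the following sketch. The server key name and exact URLs are illustrative defaults based on the local setup above; treat the setup script's output as authoritative:
```json
{
  "mcpServers": {
    "whoosh": {
      "command": "node",
      "args": ["/home/tony/AI/projects/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "http://localhost:8087/api",
        "WHOOSH_WS_URL": "ws://localhost:8087/socket.io"
      }
    }
  }
}
```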
3. Restart Claude Desktop
After running the setup script, restart Claude Desktop to load the WHOOSH MCP server.
🎮 Using Claude with WHOOSH
Once integrated, you can use natural language to control your distributed AI cluster:
Agent Management
"Show me all my registered agents and their current status"
"Register a new agent:
- ID: walnut-kernel-dev
- Endpoint: http://walnut.local:11434
- Model: codellama:34b
- Specialization: kernel development"
Task Creation & Monitoring
"Create a high-priority kernel development task to optimize FlashAttention for RDNA3 GPUs.
Include constraints for backward compatibility and focus on memory coalescing."
"What's the status of task kernel_dev_1704671234?"
"Show me all pending tasks grouped by specialization"
Complex Project Coordination
"Help me coordinate development of a new PyTorch operator:
1. CUDA/HIP kernel implementation (high priority)
2. PyTorch integration layer (medium priority)
3. Performance benchmarks (medium priority)
4. Documentation and examples (low priority)
5. Unit and integration tests (high priority)
Use parallel coordination where dependencies allow."
Cluster Monitoring
"What's my cluster status? Show agent utilization and recent performance metrics."
"Give me a summary of completed tasks from the last hour"
"What are the current capabilities of my distributed AI cluster?"
Workflow Management
"Create a workflow for distributed model training that includes data preprocessing,
training coordination, and result validation across my agents"
"Execute workflow 'distributed-training' with input parameters for ResNet-50"
"Show me the execution history for all workflows"
🔧 Available MCP Tools
Agent Management
- `whoosh_get_agents` - List all registered agents with status
- `whoosh_register_agent` - Register new agents in the cluster
Task Management
- `whoosh_create_task` - Create development tasks for specialized agents
- `whoosh_get_task` - Get details of specific tasks
- `whoosh_get_tasks` - List tasks with filtering options
Workflow Management
- `whoosh_get_workflows` - List available workflows
- `whoosh_create_workflow` - Create new distributed workflows
- `whoosh_execute_workflow` - Execute workflows with inputs
Monitoring & Status
- `whoosh_get_cluster_status` - Get comprehensive cluster status
- `whoosh_get_metrics` - Retrieve Prometheus metrics
- `whoosh_get_executions` - View workflow execution history
Advanced Coordination
- `whoosh_coordinate_development` - Orchestrate complex multi-agent projects
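Under the hood, Claude invokes these through standard MCP `tools/call` requests. As a minimal sketch, a `whoosh_create_task` call might look like this on the wire; the argument names here are illustrative, and the actual input schema is defined by the MCP server:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "whoosh_create_task",
    "arguments": {
      "type": "kernel_dev",
      "priority": 5,
      "objective": "Optimize FlashAttention kernel for RDNA3 GPUs"
    }
  }
}
```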
📊 Available MCP Resources
Claude can access real-time cluster data through these resources:
- `whoosh://cluster/status` - Live cluster health and status
- `whoosh://agents/list` - Agent registry with capabilities
- `whoosh://tasks/active` - Currently running and pending tasks
- `whoosh://tasks/completed` - Recent task results and metrics
- `whoosh://workflows/available` - All configured workflows
- `whoosh://executions/recent` - Recent workflow executions
- `whoosh://metrics/prometheus` - Raw Prometheus metrics
- `whoosh://capabilities/overview` - Cluster capabilities summary
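Resources are fetched the same way, via the MCP `resources/read` method keyed by URI. For example, reading the live cluster status resource (the response shape is determined by the WHOOSH backend and is omitted here):
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": {
    "uri": "whoosh://cluster/status"
  }
}
```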
🏗️ Architecture Overview
┌─────────────────┐ MCP Protocol ┌─────────────────┐
│ │ ◄─────────────────► │ │
│ Claude Desktop │ │ WHOOSH MCP │
│ │ │ Server │
└─────────────────┘ └─────────────────┘
│
│ HTTP/WebSocket
▼
┌─────────────────┐
│ │
│ WHOOSH Backend │
│ (FastAPI) │
└─────────────────┘
│
│
┌───────────────────────┼───────────────────────┐
▼ ▼ ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Agent 1 │ │ Agent 2 │ │ Agent N │
│ (Kernel Dev)│ │(PyTorch Dev)│ │ (Tester) │
└─────────────┘ └─────────────┘ └─────────────┘
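To verify each hop in this diagram manually, you can probe the layers directly using the same endpoints referenced in the Troubleshooting section below (adjust hostnames to match your cluster):
```bash
# WHOOSH backend (FastAPI) reachable?
curl http://localhost:8087/health

# Individual agent (Ollama endpoint) reachable?
curl http://walnut.local:11434/api/tags
```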
🔍 Example Integration Session
Here's what a complete interaction might look like:
You: "What's the current status of my WHOOSH cluster?"
Claude: I'll check your WHOOSH cluster status for you.
[Uses whoosh_get_cluster_status tool]
🐝 **WHOOSH Cluster Status**
🟢 **System**: operational (v1.0.0)
⏱️ **Uptime**: 2h 15m
🤖 **Agents**: 6 total
• Available: 4
• Busy: 2
🎯 **Tasks**: 15 total
• Pending: 3
• Running: 2
• Completed: 8
• Failed: 2
Your cluster is healthy and processing tasks efficiently!
---
You: "Create a kernel optimization task for FlashAttention on RDNA3"
Claude: I'll create a high-priority kernel development task for FlashAttention optimization.
[Uses whoosh_create_task tool]
🎯 Created development task **kernel_dev_1704671234**
📋 Task Details:
• Type: kernel_dev
• Priority: 5/5
• Status: pending
• Objective: Optimize FlashAttention kernel for RDNA3 GPUs
• Created: 2024-01-07T14:12:34
The task has been queued and will be assigned to an available kernel_dev agent.
---
You: "Show me all my agents and coordinate a full PyTorch operator development project"
Claude: Let me first check your available agents, then coordinate the development project.
[Uses whoosh_get_agents and whoosh_coordinate_development tools]
📋 Your agent capabilities:
🤖 **walnut-kernel-dev** (kernel_dev) - Available
🤖 **oak-pytorch-dev** (pytorch_dev) - Available
🤖 **ironwood-profiler** (profiler) - Busy (1/2 tasks)
🤖 **acacia-docs** (docs_writer) - Available
🤖 **rosewood-tester** (tester) - Available
🎯 **Development Coordination Initiated**
📋 **Project**: Complete PyTorch operator development
🔄 **Strategy**: mixed (parallel where possible)
🎯 **Tasks Created**: 5
**Task Breakdown:**
• **kernel_dev_1704671235** (kernel_dev) - Priority 5/5
• **pytorch_dev_1704671236** (pytorch_dev) - Priority 4/5
• **profiler_1704671237** (profiler) - Priority 4/5
• **docs_writer_1704671238** (docs_writer) - Priority 2/5
• **tester_1704671239** (tester) - Priority 5/5
All tasks have been distributed to specialized agents. Kernel development and testing will run in parallel, followed by PyTorch integration and documentation.
🛠️ Advanced Configuration
Custom WHOOSH Backend URL
If your WHOOSH backend is running on a different host/port:
# Edit the Claude config to point to your WHOOSH instance
vim ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Update the env section:
"env": {
  "WHOOSH_API_URL": "https://your-whoosh-host/api",
  "WHOOSH_WS_URL": "wss://your-whoosh-host/socket.io"
}
Multiple WHOOSH Clusters
You can configure multiple WHOOSH clusters:
{
  "mcpServers": {
    "whoosh-production": {
      "command": "node",
      "args": ["/path/to/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "https://prod-whoosh/api"
      }
    },
    "whoosh-development": {
      "command": "node",
      "args": ["/path/to/whoosh/mcp-server/dist/index.js"],
      "env": {
        "WHOOSH_API_URL": "https://dev-whoosh/api"
      }
    }
  }
}
🔐 Security Considerations
- 🔒 The MCP server only connects to your local WHOOSH cluster
- 🌐 No external network access required for the integration
- 🏠 All communication stays within your development environment
- 🔑 Agent endpoints should be on trusted networks only
- 📝 Consider authentication if deploying WHOOSH on public networks
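If you do expose the backend beyond your local network, one option is to front it with an authenticating proxy and pass a credential through the MCP server's environment block. The `WHOOSH_API_TOKEN` variable below is hypothetical; confirm that your WHOOSH deployment actually reads such a variable before relying on it:
```json
"env": {
  "WHOOSH_API_URL": "https://your-whoosh-host/api",
  "WHOOSH_API_TOKEN": "<token issued by your proxy or backend>"
}
```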
🐛 Troubleshooting
MCP Server Won't Start
# Check if WHOOSH backend is accessible
curl http://localhost:8087/health
# Test MCP server manually
cd /home/tony/AI/projects/whoosh/mcp-server
npm run dev
Claude Can't See WHOOSH Tools
- Verify Claude Desktop configuration path
- Check the config file syntax with `json_pp < claude_desktop_config.json`
- Restart Claude Desktop completely
- Check Claude Desktop logs (varies by OS)
Agent Connection Issues
# Verify your agent endpoints are accessible
curl http://your-agent-host:11434/api/tags
# Check WHOOSH backend logs
docker compose logs whoosh-backend
🎉 What's Next?
With Claude integrated into your WHOOSH cluster, you can:
- 🧠 Intelligent Task Planning - Let Claude analyze requirements and create optimal task breakdowns
- 🔄 Adaptive Coordination - Claude can monitor progress and adjust task priorities dynamically
- 📈 Performance Optimization - Use Claude to analyze metrics and optimize agent utilization
- 🚀 Automated Workflows - Create complex workflows through natural conversation
- 🐛 Proactive Issue Resolution - Claude can detect and resolve common cluster issues
🐝 Welcome to the future of distributed AI development orchestration!