🐝 Hive MCP Server
A Model Context Protocol (MCP) server that exposes the Hive Distributed AI Orchestration Platform to AI assistants like Claude.
Overview
This MCP server allows AI assistants to:
- 🤖 Orchestrate Agent Tasks - Assign development work across your distributed cluster
- 📊 Monitor Executions - Track task progress and results in real-time
- 🔄 Manage Workflows - Create and execute complex distributed pipelines
- 📈 Access Cluster Resources - Get status, metrics, and performance data
Quick Start
1. Install Dependencies
```bash
cd mcp-server
npm install
```
2. Build the Server
```bash
npm run build
```
3. Configure Claude Desktop
Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json`):
Production (Swarm Deployment):
```json
{
  "mcpServers": {
    "hive": {
      "command": "node",
      "args": ["/path/to/hive/mcp-server/dist/index.js"],
      "env": {
        "HIVE_API_URL": "https://hive.home.deepblack.cloud",
        "HIVE_WS_URL": "wss://hive.home.deepblack.cloud"
      }
    }
  }
}
```
Development/Local Testing:
```json
{
  "mcpServers": {
    "hive": {
      "command": "node",
      "args": ["/path/to/hive/mcp-server/dist/index.js"],
      "env": {
        "HIVE_API_URL": "http://localhost:8087",
        "HIVE_WS_URL": "ws://localhost:8087"
      }
    }
  }
}
```
4. Restart Claude Desktop
The Hive MCP server will automatically connect to your running Hive cluster.
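Before pointing Claude Desktop at the server, it can help to confirm the backend URL is reachable from your machine. A minimal sketch, assuming Node 18+ (global `fetch`) and an ES module context for top-level `await`; it only probes the base URL, not any specific Hive endpoint:

```typescript
// check-backend.mts - probe the configured Hive backend URL.
// HIVE_API_URL falls back to the production default documented below.
const apiUrl = process.env.HIVE_API_URL ?? "https://hive.home.deepblack.cloud";

try {
  const response = await fetch(apiUrl);
  console.log(`${apiUrl} responded with HTTP ${response.status}`);
} catch (error) {
  console.error(`Could not reach ${apiUrl}:`, error);
}
```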
Available Tools
Agent Management
- `hive_get_agents` - List all registered agents with status
- `hive_register_agent` - Register new agents in the cluster
Task Management
- `hive_create_task` - Create development tasks for specialized agents
- `hive_get_task` - Get details of specific tasks
- `hive_get_tasks` - List tasks with filtering options
Workflow Management
- `hive_get_workflows` - List available workflows
- `hive_create_workflow` - Create new distributed workflows
- `hive_execute_workflow` - Execute workflows with inputs
Monitoring
- `hive_get_cluster_status` - Get comprehensive cluster status
- `hive_get_metrics` - Retrieve Prometheus metrics
- `hive_get_executions` - View workflow execution history
Coordination
- `hive_coordinate_development` - Orchestrate complex multi-agent development projects
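To see exactly which tools (and input schemas) the built server exposes, you can drive it directly with the MCP TypeScript client instead of Claude Desktop. A minimal sketch, assuming the `@modelcontextprotocol/sdk` client API and the local-testing environment variables from the Quick Start; run it as an ES module:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the built server over stdio, the same way Claude Desktop does.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/hive/mcp-server/dist/index.js"],
  env: {
    ...(process.env as Record<string, string>),
    HIVE_API_URL: "http://localhost:8087",
    HIVE_WS_URL: "ws://localhost:8087",
  },
});

const client = new Client({ name: "hive-tool-inspector", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Print the names the server advertises; they should match the hive_* tools above.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

await client.close();
```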
Available Resources
Real-time Cluster Data
- `hive://cluster/status` - Live cluster status and health
- `hive://agents/list` - Agent registry with capabilities
- `hive://tasks/active` - Currently running and pending tasks
- `hive://tasks/completed` - Recent task results and metrics
Workflow Data
- `hive://workflows/available` - All configured workflows
- `hive://executions/recent` - Recent workflow executions
Monitoring Data
- `hive://metrics/prometheus` - Raw Prometheus metrics
- `hive://capabilities/overview` - Cluster capabilities summary
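Resources can be read the same way as tools are called. A minimal sketch, again assuming the `@modelcontextprotocol/sdk` client API, that pulls the live cluster status resource:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "hive-resource-reader", version: "0.1.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "node",
  args: ["/path/to/hive/mcp-server/dist/index.js"],
}));

// Fetch the live cluster status resource and print its contents.
const status = await client.readResource({ uri: "hive://cluster/status" });
console.log(status.contents);

await client.close();
```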
Example Usage with Claude
Register an Agent
Please register a new agent in my Hive cluster:
- ID: walnut-kernel-dev
- Endpoint: http://walnut.local:11434
- Model: codellama:34b
- Specialization: kernel_dev
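Under the hood, a prompt like this becomes a `hive_register_agent` tool call. A rough equivalent using the MCP TypeScript client is sketched below; the argument keys (`id`, `endpoint`, `model`, `specialization`) are illustrative guesses, so check the tool's input schema via `listTools()` for the real field names:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "hive-example", version: "0.1.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "node",
  args: ["/path/to/hive/mcp-server/dist/index.js"],
}));

// Argument keys below are assumptions for illustration, not the server's
// confirmed schema.
const result = await client.callTool({
  name: "hive_register_agent",
  arguments: {
    id: "walnut-kernel-dev",
    endpoint: "http://walnut.local:11434",
    model: "codellama:34b",
    specialization: "kernel_dev",
  },
});
console.log(result);

await client.close();
```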
Create a Development Task
Create a high-priority kernel development task to optimize FlashAttention for RDNA3 GPUs.
The task should focus on memory coalescing and include constraints for backward compatibility.
Coordinate Complex Development
Help me coordinate development of a new PyTorch operator that includes:
1. CUDA/HIP kernel implementation (high priority)
2. PyTorch integration layer (medium priority)
3. Performance benchmarks (medium priority)
4. Documentation and examples (low priority)
5. Unit and integration tests (high priority)
Use parallel coordination where possible.
Monitor Cluster Status
What's the current status of my Hive cluster? Show me agent utilization and recent task performance.
Configuration
The MCP server connects to the Hive backend using domain endpoints by default. You can customize this by setting environment variables:
Production (Default):
- `HIVE_API_URL` - `https://hive.home.deepblack.cloud`
- `HIVE_WS_URL` - `wss://hive.home.deepblack.cloud`
Development/Local Testing:
- `HIVE_API_URL` - `http://localhost:8087`
- `HIVE_WS_URL` - `ws://localhost:8087`
Additional Options:
- `HIVE_TIMEOUT` - Request timeout in milliseconds (default: `30000`)
Copy `.env.example` to `.env` and modify as needed for your deployment.
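For reference, this is roughly how the server might resolve those variables with the documented defaults; the `HiveConfig` shape and `loadConfig` helper are illustrative, not the server's actual code:

```typescript
// Illustrative config loader: the names and shape are assumptions; only the
// environment variables and defaults come from the documentation above.
interface HiveConfig {
  apiUrl: string;
  wsUrl: string;
  timeoutMs: number;
}

export function loadConfig(env: NodeJS.ProcessEnv = process.env): HiveConfig {
  return {
    apiUrl: env.HIVE_API_URL ?? "https://hive.home.deepblack.cloud",
    wsUrl: env.HIVE_WS_URL ?? "wss://hive.home.deepblack.cloud",
    timeoutMs: Number(env.HIVE_TIMEOUT ?? "30000"),
  };
}
```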
Development
Watch Mode
```bash
npm run watch
```
Direct Run
```bash
npm run dev
```
Integration with Hive
This MCP server connects to your running Hive platform and provides a standardized interface for AI assistants to:
- Understand your cluster capabilities and current state
- Plan complex development tasks across multiple agents
- Execute coordinated workflows with real-time monitoring
- Optimize task distribution based on agent specializations
The server automatically handles task queuing, agent assignment, and result aggregation - allowing AI assistants to focus on high-level orchestration and decision-making.
Security Notes
- The MCP server connects to your local Hive cluster
- No external network access required
- All communication stays within your development environment
- Agent endpoints should be on trusted networks only
🐝 Ready to let Claude orchestrate your distributed AI development cluster!