# 🐝 Hive MCP Server

Model Context Protocol (MCP) server that exposes the Hive Distributed AI Orchestration Platform to AI assistants like Claude.

## Overview

This MCP server allows AI assistants to:

- 🤖 **Orchestrate Agent Tasks** - Assign development work across your distributed cluster
- 📊 **Monitor Executions** - Track task progress and results in real-time
- 🔄 **Manage Workflows** - Create and execute complex distributed pipelines
- 📈 **Access Cluster Resources** - Get status, metrics, and performance data

## Quick Start

### 1. Install Dependencies

```bash
cd mcp-server
npm install
```

### 2. Build the Server

```bash
npm run build
```

### 3. Configure Claude Desktop

Add to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json`):

**Production (Swarm Deployment):**

```json
{
  "mcpServers": {
    "hive": {
      "command": "node",
      "args": ["/path/to/hive/mcp-server/dist/index.js"],
      "env": {
        "HIVE_API_URL": "https://hive.home.deepblack.cloud",
        "HIVE_WS_URL": "wss://hive.home.deepblack.cloud"
      }
    }
  }
}
```

**Development/Local Testing:**

```json
{
  "mcpServers": {
    "hive": {
      "command": "node",
      "args": ["/path/to/hive/mcp-server/dist/index.js"],
      "env": {
        "HIVE_API_URL": "http://localhost:8087",
        "HIVE_WS_URL": "ws://localhost:8087"
      }
    }
  }
}
```

### 4. Restart Claude Desktop

The Hive MCP server will automatically connect to your running Hive cluster.
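If you want to confirm the backend is reachable before starting Claude, a quick probe of the configured endpoint is enough. This is only a sketch: swap in the localhost URL for a development setup, and note that the exact routes your Hive backend exposes may differ.

```bash
# Check that the Hive API endpoint answers at all (prints the HTTP status code)
curl -sS -o /dev/null -w "%{http_code}\n" https://hive.home.deepblack.cloud
```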
## Available Tools

### Agent Management

- **`hive_get_agents`** - List all registered agents with status
- **`hive_register_agent`** - Register new agents in the cluster

### Task Management

- **`hive_create_task`** - Create development tasks for specialized agents
- **`hive_get_task`** - Get details of specific tasks
- **`hive_get_tasks`** - List tasks with filtering options

### Workflow Management

- **`hive_get_workflows`** - List available workflows
- **`hive_create_workflow`** - Create new distributed workflows
- **`hive_execute_workflow`** - Execute workflows with inputs

### Monitoring

- **`hive_get_cluster_status`** - Get comprehensive cluster status
- **`hive_get_metrics`** - Retrieve Prometheus metrics
- **`hive_get_executions`** - View workflow execution history

### Coordination

- **`hive_coordinate_development`** - Orchestrate complex multi-agent development projects

## Available Resources

### Real-time Cluster Data

- **`hive://cluster/status`** - Live cluster status and health
- **`hive://agents/list`** - Agent registry with capabilities
- **`hive://tasks/active`** - Currently running and pending tasks
- **`hive://tasks/completed`** - Recent task results and metrics

### Workflow Data

- **`hive://workflows/available`** - All configured workflows
- **`hive://executions/recent`** - Recent workflow executions

### Monitoring Data

- **`hive://metrics/prometheus`** - Raw Prometheus metrics
- **`hive://capabilities/overview`** - Cluster capabilities summary
## Example Usage with Claude

### Register an Agent

```
Please register a new agent in my Hive cluster:
- ID: walnut-kernel-dev
- Endpoint: http://walnut.local:11434
- Model: codellama:34b
- Specialization: kernel_dev
```

### Create a Development Task

```
Create a high-priority kernel development task to optimize FlashAttention for RDNA3 GPUs.
The task should focus on memory coalescing and include constraints for backward compatibility.
```

### Coordinate Complex Development

```
Help me coordinate development of a new PyTorch operator that includes:
1. CUDA/HIP kernel implementation (high priority)
2. PyTorch integration layer (medium priority)
3. Performance benchmarks (medium priority)
4. Documentation and examples (low priority)
5. Unit and integration tests (high priority)

Use parallel coordination where possible.
```

### Monitor Cluster Status

```
What's the current status of my Hive cluster? Show me agent utilization and recent task performance.
```
## Configuration

The MCP server connects to the Hive backend using the production domain endpoints by default. You can customize this by setting environment variables:

**Production (Default):**

- **`HIVE_API_URL`** - `https://hive.home.deepblack.cloud`
- **`HIVE_WS_URL`** - `wss://hive.home.deepblack.cloud`

**Development/Local Testing:**

- **`HIVE_API_URL`** - `http://localhost:8087`
- **`HIVE_WS_URL`** - `ws://localhost:8087`

**Additional Options:**

- **`HIVE_TIMEOUT`** - Request timeout in milliseconds (default: `30000`)

Copy `.env.example` to `.env` and modify as needed for your deployment.
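For reference, a minimal `.env` built from the variables above might look like the following sketch; the values are just the defaults listed in this section, and the actual `.env.example` shipped with the repo may differ.

```bash
# Hive MCP server configuration (sketch based on the variables documented above)

# Production (default)
HIVE_API_URL=https://hive.home.deepblack.cloud
HIVE_WS_URL=wss://hive.home.deepblack.cloud

# Development / local testing - uncomment to override the production defaults
# HIVE_API_URL=http://localhost:8087
# HIVE_WS_URL=ws://localhost:8087

# Request timeout in milliseconds
HIVE_TIMEOUT=30000
```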
## Development

### Watch Mode

```bash
npm run watch
```

### Direct Run

```bash
npm run dev
```
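When testing against a local backend, the environment overrides described in the Configuration section can also be passed inline instead of via `.env`. A sketch, assuming the development defaults above; adjust the port to match your local Hive deployment:

```bash
# Run the MCP server in dev mode against a local Hive backend
HIVE_API_URL=http://localhost:8087 HIVE_WS_URL=ws://localhost:8087 npm run dev
```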
## Integration with Hive

This MCP server connects to your running Hive platform and provides a standardized interface for AI assistants to:

1. **Understand** your cluster capabilities and current state
2. **Plan** complex development tasks across multiple agents
3. **Execute** coordinated workflows with real-time monitoring
4. **Optimize** task distribution based on agent specializations

The server automatically handles task queuing, agent assignment, and result aggregation, allowing AI assistants to focus on high-level orchestration and decision-making.

## Security Notes

- The MCP server connects to your local Hive cluster
- No external network access is required
- All communication stays within your development environment
- Agent endpoints should be on trusted networks only

---

🐝 **Ready to let Claude orchestrate your distributed AI development cluster!**