# BZZZ MCP Integration Deployment Guide
This guide provides step-by-step instructions for deploying the BZZZ MCP integration with GPT-4 agents across the CHORUS cluster.
## Prerequisites
### Infrastructure Requirements
- **Cluster Nodes**: Minimum 3 nodes (WALNUT, IRONWOOD, ACACIA)
- **RAM**: 32GB+ per node for optimal performance
- **Storage**: 1TB+ SSD per node for conversation history and logs
- **Network**: High-speed connection between nodes for P2P communication
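Before building anything, it is worth confirming that each node actually meets these requirements. A minimal check sketch, assuming the node IPs used later in this guide (192.168.1.27, 192.168.1.72, 192.168.1.113) and that the data volume is mounted at `/` (adjust the path if it is not):
```bash
# Quick resource and connectivity check (run on each node)
free -g | awk '/^Mem:/ {print "RAM (GB):", $2}'                     # expect 32+
df -h / | awk 'NR==2 {print "Disk size:", $2, "available:", $4}'    # expect 1TB+ on the data volume
for peer in 192.168.1.27 192.168.1.72 192.168.1.113; do
    ping -c 1 -W 1 "$peer" >/dev/null && echo "reachable: $peer" || echo "UNREACHABLE: $peer"
done
```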
### Software Prerequisites
```bash
# On each node, ensure these are installed:
docker --version # Docker 24.0+
docker-compose --version # Docker Compose 2.20+
go version # Go 1.21+
node --version # Node.js 18+
```
### API Keys and Secrets
Ensure the OpenAI API key is properly stored:
```bash
# Verify the OpenAI API key exists
cat ~/chorus/business/secrets/openai-api-key-for-bzzz.txt
```
## Deployment Steps
### 1. Pre-Deployment Setup
#### Clone and Build
```bash
cd /home/tony/chorus/project-queues/active/BZZZ
# Build Go components
go mod download
go build -o bzzz main.go
# Build MCP server
cd mcp-server
npm install
npm run build
cd ..
# Build Docker images
docker build -t bzzz/mcp-node:latest .
docker build -t bzzz/mcp-server:latest mcp-server/
```
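Before continuing, it can save time to confirm the build artifacts exist. A quick sanity check sketch, assuming the image tags used above (the `mcp-server/dist` path is an assumption about the npm build output):
```bash
# Verify build artifacts
test -x ./bzzz && echo "bzzz binary built"
test -d mcp-server/dist && echo "MCP server built"   # output path is an assumption
docker images | grep -E 'bzzz/(mcp-node|mcp-server)'
```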
#### Environment Configuration
```bash
# Create environment file
cat > .env << EOF
# BZZZ Network Configuration
BZZZ_NODE_ID=bzzz-mcp-walnut
BZZZ_NETWORK_ID=bzzz-chorus-cluster
BZZZ_P2P_PORT=4001
BZZZ_HTTP_PORT=8080
# OpenAI Configuration
OPENAI_MODEL=gpt-4
OPENAI_MAX_TOKENS=4000
OPENAI_TEMPERATURE=0.7
# Cost Management
DAILY_COST_LIMIT=100.0
MONTHLY_COST_LIMIT=1000.0
COST_WARNING_THRESHOLD=0.8
# Agent Configuration
MAX_AGENTS=5
MAX_ACTIVE_THREADS=10
THREAD_TIMEOUT=3600
# Database Configuration
POSTGRES_PASSWORD=$(openssl rand -base64 32)
# Monitoring
GRAFANA_PASSWORD=$(openssl rand -base64 16)
# Integration URLs
WHOOSH_API_URL=http://192.168.1.72:8001
SLURP_API_URL=http://192.168.1.113:8002
EOF
# Source the environment
source .env
```
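A missing variable here usually surfaces later as an opaque runtime error, so a small guard before deploying is cheap insurance. A sketch covering the variables defined above:
```bash
# Report any required variable that is still empty after sourcing .env
missing=0
for var in BZZZ_NODE_ID BZZZ_NETWORK_ID BZZZ_P2P_PORT BZZZ_HTTP_PORT \
           OPENAI_MODEL DAILY_COST_LIMIT POSTGRES_PASSWORD GRAFANA_PASSWORD \
           WHOOSH_API_URL SLURP_API_URL; do
    if [ -z "${!var}" ]; then
        echo "Missing required variable: $var"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "Environment looks complete"
```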
### 2. Database Initialization
Write the PostgreSQL schema to `deploy/init-db.sql`:
```bash
cat > deploy/init-db.sql << EOF
-- BZZZ MCP Database Schema
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Agents table
CREATE TABLE agents (
    id VARCHAR(255) PRIMARY KEY,
    role VARCHAR(100) NOT NULL,
    model VARCHAR(100) NOT NULL,
    capabilities TEXT[],
    specialization VARCHAR(255),
    max_tasks INTEGER DEFAULT 3,
    status VARCHAR(50) DEFAULT 'idle',
    created_at TIMESTAMP DEFAULT NOW(),
    last_active TIMESTAMP DEFAULT NOW(),
    node_id VARCHAR(255),
    system_prompt TEXT
);

-- Conversations table
CREATE TABLE conversations (
    id VARCHAR(255) PRIMARY KEY,
    topic TEXT NOT NULL,
    state VARCHAR(50) DEFAULT 'active',
    created_at TIMESTAMP DEFAULT NOW(),
    last_activity TIMESTAMP DEFAULT NOW(),
    creator_id VARCHAR(255),
    shared_context JSONB DEFAULT '{}'::jsonb
);

-- Conversation participants
CREATE TABLE conversation_participants (
    conversation_id VARCHAR(255) REFERENCES conversations(id),
    agent_id VARCHAR(255) REFERENCES agents(id),
    role VARCHAR(100),
    status VARCHAR(50) DEFAULT 'active',
    joined_at TIMESTAMP DEFAULT NOW(),
    PRIMARY KEY (conversation_id, agent_id)
);

-- Messages table
CREATE TABLE messages (
    id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
    conversation_id VARCHAR(255) REFERENCES conversations(id),
    from_agent VARCHAR(255) REFERENCES agents(id),
    content TEXT NOT NULL,
    message_type VARCHAR(100),
    timestamp TIMESTAMP DEFAULT NOW(),
    reply_to UUID REFERENCES messages(id),
    token_count INTEGER DEFAULT 0,
    model VARCHAR(100)
);

-- Agent tasks
CREATE TABLE agent_tasks (
    id VARCHAR(255) PRIMARY KEY,
    agent_id VARCHAR(255) REFERENCES agents(id),
    repository VARCHAR(255),
    task_number INTEGER,
    title TEXT,
    status VARCHAR(50) DEFAULT 'active',
    start_time TIMESTAMP DEFAULT NOW(),
    context JSONB DEFAULT '{}'::jsonb,
    thread_id VARCHAR(255)
);

-- Token usage tracking
CREATE TABLE token_usage (
    id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
    agent_id VARCHAR(255) REFERENCES agents(id),
    conversation_id VARCHAR(255),
    timestamp TIMESTAMP DEFAULT NOW(),
    model VARCHAR(100),
    prompt_tokens INTEGER,
    completion_tokens INTEGER,
    total_tokens INTEGER,
    cost_usd DECIMAL(10,6)
);

-- Agent memory
CREATE TABLE agent_memory (
    agent_id VARCHAR(255) REFERENCES agents(id),
    memory_type VARCHAR(50), -- 'working', 'episodic', 'semantic'
    key VARCHAR(255),
    value JSONB,
    timestamp TIMESTAMP DEFAULT NOW(),
    expires_at TIMESTAMP,
    PRIMARY KEY (agent_id, memory_type, key)
);

-- Escalations
CREATE TABLE escalations (
    id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
    conversation_id VARCHAR(255) REFERENCES conversations(id),
    reason VARCHAR(255),
    escalated_at TIMESTAMP DEFAULT NOW(),
    escalated_by VARCHAR(255),
    status VARCHAR(50) DEFAULT 'pending',
    resolved_at TIMESTAMP,
    resolution TEXT
);

-- Indexes for performance
CREATE INDEX idx_agents_role ON agents(role);
CREATE INDEX idx_agents_status ON agents(status);
CREATE INDEX idx_conversations_state ON conversations(state);
CREATE INDEX idx_messages_conversation_timestamp ON messages(conversation_id, timestamp);
CREATE INDEX idx_token_usage_agent_timestamp ON token_usage(agent_id, timestamp);
CREATE INDEX idx_agent_memory_agent_type ON agent_memory(agent_id, memory_type);
EOF
```
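The snippet above only writes the schema file; it still has to be applied. If `deploy/init-db.sql` is not mounted into the Postgres container's `/docker-entrypoint-initdb.d/` directory (an assumption about the compose file), it can be applied manually once the database container is up. A sketch, using the container, user, and database names from the backup section later in this guide:
```bash
# Apply the schema to the running Postgres container
docker exec -i bzzz-mcp-postgres psql -U bzzz -d bzzz_mcp < deploy/init-db.sql
```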
### 3. Deploy to Cluster
#### Node-Specific Deployment
**On WALNUT (192.168.1.27):**
```bash
# Set node-specific configuration
export BZZZ_NODE_ID=bzzz-mcp-walnut
export NODE_ROLE=primary
# Deploy with primary node configuration
docker-compose -f deploy/docker-compose.mcp.yml up -d
```
**On IRONWOOD (192.168.1.72):**
```bash
# Set node-specific configuration
export BZZZ_NODE_ID=bzzz-mcp-ironwood
export NODE_ROLE=secondary
# Deploy as secondary node
docker-compose -f deploy/docker-compose.mcp.yml up -d
```
**On ACACIA (192.168.1.113):**
```bash
# Set node-specific configuration
export BZZZ_NODE_ID=bzzz-mcp-acacia
export NODE_ROLE=secondary
# Deploy as secondary node
docker-compose -f deploy/docker-compose.mcp.yml up -d
```
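With all three nodes deployed, a cluster-wide spot check can be run from any machine on the LAN. This is a sketch that assumes each node exposes the BZZZ health endpoint on port 8080, as configured in the `.env` file above:
```bash
# Check the BZZZ health endpoint on every node
for node in 192.168.1.27 192.168.1.72 192.168.1.113; do
    if curl -sf "http://${node}:8080/health" >/dev/null; then
        echo "healthy: $node"
    else
        echo "UNHEALTHY: $node"
    fi
done
```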
### 4. Service Health Verification
#### Check Service Status
```bash
# Check all services are running
docker-compose -f deploy/docker-compose.mcp.yml ps
# Check BZZZ node connectivity
curl http://localhost:8080/health
# Check MCP server status
curl http://localhost:8081/health
# Check P2P network connectivity
curl http://localhost:8080/api/peers
```
#### Verify Agent Registration
```bash
# List registered agents
curl http://localhost:8081/api/agents
# Check agent capabilities
curl http://localhost:8081/api/agents/review_agent_architect
```
#### Test MCP Integration
```bash
# Test MCP server connection
cd examples
python3 test-mcp-connection.py
# Run collaborative review example
python3 collaborative-review-example.py
```
### 5. Integration with CHORUS Systems
#### WHOOSH Integration
```bash
# Verify WHOOSH connectivity
curl -X POST http://192.168.1.72:8001/api/agents \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "bzzz-mcp-agent-1",
    "type": "gpt_agent",
    "role": "architect",
    "endpoint": "http://192.168.1.27:8081"
  }'
```
#### SLURP Integration
```bash
# Test SLURP context event submission
curl -X POST http://192.168.1.113:8002/api/events \
  -H "Content-Type: application/json" \
  -d '{
    "type": "agent_consensus",
    "source": "bzzz_mcp_integration",
    "context": {
      "conversation_id": "test-thread-1",
      "participants": ["architect", "reviewer"],
      "consensus_reached": true
    }
  }'
```
### 6. Monitoring Setup
#### Access Monitoring Dashboards
- **Grafana**: http://localhost:3000 (user `admin`, password from `GRAFANA_PASSWORD` in `.env`)
- **Prometheus**: http://localhost:9090
- **Logs**: Access via Grafana Loki integration
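To confirm the dashboards are reachable before digging into metrics, the standard Grafana and Prometheus health endpoints can be polled (a sketch, assuming the default ports listed above):
```bash
# Grafana and Prometheus expose lightweight health endpoints
curl -sf http://localhost:3000/api/health && echo "Grafana OK"
curl -sf http://localhost:9090/-/healthy && echo "Prometheus OK"
```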
#### Key Metrics to Monitor
```bash
# Agent performance metrics
curl http://localhost:8081/api/stats
# Token usage and costs
curl http://localhost:8081/api/costs/daily
# Conversation thread health
curl http://localhost:8081/api/conversations?status=active
```
## Configuration Management
### Agent Role Configuration
Create custom agent roles:
```bash
# Create custom agent configuration
cat > config/custom-agent-roles.json << EOF
{
  "roles": [
    {
      "name": "security_architect",
      "specialization": "security_design",
      "capabilities": [
        "threat_modeling",
        "security_architecture",
        "compliance_review",
        "risk_assessment"
      ],
      "system_prompt": "You are a security architect specializing in distributed systems security...",
      "interaction_patterns": {
        "architects": "security_consultation",
        "developers": "security_guidance",
        "reviewers": "security_validation"
      }
    }
  ]
}
EOF
```
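A malformed role file tends to fail quietly at load time, so validate the JSON before restarting anything. A minimal check, assuming `jq` is installed on the node:
```bash
# Validate the role configuration before it is picked up
jq empty config/custom-agent-roles.json && echo "custom-agent-roles.json is valid JSON"
```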
### Cost Management Configuration
```bash
# Configure cost alerts
cat > config/cost-limits.json << EOF
{
  "global_limits": {
    "daily_limit": 100.0,
    "monthly_limit": 1000.0,
    "per_agent_daily": 20.0
  },
  "alert_thresholds": {
    "warning": 0.8,
    "critical": 0.95
  },
  "alert_channels": {
    "slack_webhook": "${SLACK_WEBHOOK_URL}",
    "email": "admin@deepblack.cloud"
  }
}
EOF
```
### Escalation Rules Configuration
```bash
# Configure escalation rules
cat > config/escalation-rules.json << EOF
{
  "rules": [
    {
      "name": "Long Running Thread",
      "conditions": [
        {"type": "thread_duration", "threshold": 7200},
        {"type": "no_progress", "threshold": true, "timeframe": 1800}
      ],
      "actions": [
        {"type": "notify_human", "target": "project_manager"},
        {"type": "escalate_to_senior", "role": "senior_architect"}
      ]
    },
    {
      "name": "High Cost Alert",
      "conditions": [
        {"type": "token_cost", "threshold": 50.0, "timeframe": 3600}
      ],
      "actions": [
        {"type": "throttle_agents", "reduction": 0.5},
        {"type": "notify_admin", "urgency": "high"}
      ]
    }
  ]
}
EOF
```
## Troubleshooting
### Common Issues
#### MCP Server Connection Issues
```bash
# Check MCP server logs
docker logs bzzz-mcp-server
# Verify OpenAI API key
docker exec bzzz-mcp-server cat /secrets/openai-api-key-for-bzzz.txt
# Test API key validity
curl -H "Authorization: Bearer $(cat ~/chorus/business/secrets/openai-api-key-for-bzzz.txt)" \
https://api.openai.com/v1/models
```
#### P2P Network Issues
```bash
# Check P2P connectivity
docker exec bzzz-mcp-node ./bzzz status
# View P2P logs
docker logs bzzz-mcp-node | grep p2p
# Check firewall settings
sudo ufw status | grep 4001
```
#### Agent Performance Issues
```bash
# Check agent memory usage
curl http://localhost:8081/api/agents/memory-stats
# Review token usage
curl http://localhost:8081/api/costs/breakdown
# Check conversation thread status
curl http://localhost:8081/api/conversations?status=active
```
### Performance Optimization
#### Database Tuning
```sql
-- Optimize PostgreSQL for BZZZ MCP workload
ALTER SYSTEM SET shared_buffers = '256MB';
ALTER SYSTEM SET work_mem = '16MB';
ALTER SYSTEM SET maintenance_work_mem = '128MB';
ALTER SYSTEM SET max_connections = 100;
SELECT pg_reload_conf();
```
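These statements can be applied against the running container without opening an interactive psql session. A sketch using the container, user, and database names from the backup section; note that `shared_buffers` and `max_connections` only take effect after a restart, while the other settings are picked up by `pg_reload_conf()`:
```bash
# Apply the tuning statements to the running Postgres container
docker exec -i bzzz-mcp-postgres psql -U bzzz -d bzzz_mcp <<'SQL'
ALTER SYSTEM SET shared_buffers = '256MB';
ALTER SYSTEM SET work_mem = '16MB';
ALTER SYSTEM SET maintenance_work_mem = '128MB';
ALTER SYSTEM SET max_connections = 100;
SELECT pg_reload_conf();
SQL
# shared_buffers and max_connections require a restart to take effect
docker restart bzzz-mcp-postgres
```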
#### Agent Optimization
```bash
# Optimize agent memory usage
curl -X POST http://localhost:8081/api/agents/cleanup-memory
# Adjust token limits based on usage patterns
curl -X PUT http://localhost:8081/api/config/token-limits \
  -H "Content-Type: application/json" \
  -d '{"max_tokens": 2000, "context_window": 16000}'
```
## Backup and Recovery
### Database Backup
```bash
# Create database backup
docker exec bzzz-mcp-postgres pg_dump -U bzzz bzzz_mcp | gzip > backup/bzzz-mcp-$(date +%Y%m%d).sql.gz
# Restore from backup
gunzip -c backup/bzzz-mcp-20250107.sql.gz | docker exec -i bzzz-mcp-postgres psql -U bzzz -d bzzz_mcp
```
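Before relying on a backup, verify the archive is intact; `gunzip -t` reads the whole file and reports corruption without extracting anything:
```bash
# Verify today's backup archive is readable end to end
gunzip -t backup/bzzz-mcp-$(date +%Y%m%d).sql.gz && echo "backup archive OK"
```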
### Configuration Backup
```bash
# Backup agent configurations
docker exec bzzz-mcp-server tar czf - /var/lib/mcp/config > backup/mcp-config-$(date +%Y%m%d).tar.gz
# Backup conversation data
docker exec bzzz-conversation-manager tar czf - /var/lib/conversations > backup/conversations-$(date +%Y%m%d).tar.gz
```
## Security Considerations
### API Key Security
```bash
# Rotate OpenAI API key monthly
echo "new-api-key" > ~/chorus/business/secrets/openai-api-key-for-bzzz.txt
docker-compose -f deploy/docker-compose.mcp.yml restart mcp-server
# Monitor API key usage
curl -H "Authorization: Bearer $(cat ~/chorus/business/secrets/openai-api-key-for-bzzz.txt)" \
https://api.openai.com/v1/usage
```
### Network Security
```bash
# Configure firewall rules
sudo ufw allow from 192.168.1.0/24 to any port 4001 # P2P port
sudo ufw allow from 192.168.1.0/24 to any port 8080 # BZZZ API
sudo ufw allow from 192.168.1.0/24 to any port 8081 # MCP API
# Enable audit logging
docker-compose -f deploy/docker-compose.mcp.yml \
  -f deploy/docker-compose.audit.yml up -d
```
## Maintenance
### Regular Maintenance Tasks
```bash
#!/bin/bash
# Weekly maintenance script
set -e
echo "Starting BZZZ MCP maintenance..."
# Clean up old conversation threads
curl -X POST http://localhost:8081/api/maintenance/cleanup-threads
# Optimize database
docker exec bzzz-mcp-postgres psql -U bzzz -d bzzz_mcp -c "VACUUM ANALYZE;"
# Update cost tracking
curl -X POST http://localhost:8081/api/maintenance/update-costs
# Rotate logs
docker exec bzzz-mcp-server logrotate /etc/logrotate.d/mcp
echo "Maintenance completed successfully"
```
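If the script above is saved to disk, it can be scheduled with cron. A sketch assuming it lives at `/usr/local/bin/bzzz-mcp-maintenance.sh` (a hypothetical path):
```bash
# Run weekly maintenance every Sunday at 03:00
( crontab -l 2>/dev/null; echo "0 3 * * 0 /usr/local/bin/bzzz-mcp-maintenance.sh >> /var/log/bzzz-mcp-maintenance.log 2>&1" ) | crontab -
```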
### Performance Monitoring
```bash
# Monitor key performance indicators
curl http://localhost:8081/api/metrics | jq '{
  active_agents: .active_agents,
  active_threads: .active_threads,
  avg_response_time: .avg_response_time,
  token_efficiency: .token_efficiency,
  cost_per_task: .cost_per_task
}'
```
This deployment guide provides a comprehensive approach to deploying and maintaining the BZZZ MCP integration with GPT-4 agents across the CHORUS cluster. Follow the steps carefully and refer to the troubleshooting section for common issues.