Add comprehensive documentation for BZZZ MCP Server

- Complete API reference with all interfaces and examples
- Detailed deployment guide for development and production
- Main README with architecture overview and usage instructions

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
# BZZZ MCP Server
A sophisticated Model Context Protocol (MCP) server that enables GPT-5 agents to participate in the BZZZ P2P network for distributed AI coordination and collaboration.
## Overview
The BZZZ MCP Server bridges the gap between OpenAI's GPT-5 and the BZZZ distributed coordination system, allowing AI agents to:
- **Announce capabilities** and join the P2P network
- **Discover and communicate** with other agents using semantic addressing
- **Coordinate complex tasks** through threaded conversations
- **Escalate decisions** to human operators when needed
- **Track costs** and manage OpenAI API usage
- **Maintain performance metrics** and agent health monitoring
## Architecture
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   GPT-5 Agent   │◄──►│ BZZZ MCP Server  │◄──►│ BZZZ Go Service │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │                       │
                                ▼                       ▼
                         ┌──────────────┐        ┌──────────────┐
                         │ Cost Tracker │        │ P2P Network  │
                         │  & Logging   │        │   (libp2p)   │
                         └──────────────┘        └──────────────┘
```
### Core Components
| Component | Purpose | Features |
|-----------|---------|----------|
| **Agent Manager** | Agent lifecycle & task coordination | Performance tracking, task queuing, capability matching |
| **Conversation Manager** | Multi-threaded discussions | Auto-escalation, thread summarization, participant management |
| **P2P Connector** | BZZZ network integration | HTTP/WebSocket client, semantic addressing, network discovery |
| **OpenAI Integration** | GPT-5 API wrapper | Streaming, cost tracking, model management, prompt engineering |
| **Cost Tracker** | Usage monitoring | Daily/monthly limits, model pricing, usage analytics |
| **Logger** | Structured logging | Winston-based, multi-transport, component-specific |
## Quick Start
### Prerequisites
- Node.js 18+
- OpenAI API key with GPT-5 access
- BZZZ Go service running on `localhost:8080`
### Installation
```bash
cd /path/to/BZZZ/mcp-server
npm install
npm run build
```
### Configuration
Create your OpenAI API key file:
```bash
echo "your-openai-api-key-here" > ~/chorus/business/secrets/openai-api-key-for-bzzz.txt
```
### Running the Server
```bash
# Development mode
npm run dev
# Production mode
npm start
# Run integration test
node test-integration.js
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `OPENAI_MODEL` | `gpt-5` | OpenAI model to use |
| `OPENAI_MAX_TOKENS` | `4000` | Maximum tokens per request |
| `OPENAI_TEMPERATURE` | `0.7` | Model temperature |
| `BZZZ_NODE_URL` | `http://localhost:8080` | BZZZ Go service URL |
| `BZZZ_NETWORK_ID` | `bzzz-local` | Network identifier |
| `DAILY_COST_LIMIT` | `100.0` | Daily spending limit (USD) |
| `MONTHLY_COST_LIMIT` | `1000.0` | Monthly spending limit (USD) |
| `MAX_ACTIVE_THREADS` | `10` | Maximum concurrent threads |
| `LOG_LEVEL` | `info` | Logging level |
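As a rough illustration, the server resolves these variables with defaults along the lines of the sketch below; the variable names match the table, but the loading code shown here is illustrative rather than the actual implementation in `src/config/config.ts`:
```typescript
// Illustrative environment-variable resolution with defaults
// (see src/config/config.ts for the actual configuration logic).
const config = {
  openai: {
    model: process.env.OPENAI_MODEL ?? 'gpt-5',
    maxTokens: Number(process.env.OPENAI_MAX_TOKENS ?? 4000),
    temperature: Number(process.env.OPENAI_TEMPERATURE ?? 0.7),
  },
  bzzz: {
    nodeUrl: process.env.BZZZ_NODE_URL ?? 'http://localhost:8080',
    networkId: process.env.BZZZ_NETWORK_ID ?? 'bzzz-local',
  },
  costs: {
    dailyLimitUsd: Number(process.env.DAILY_COST_LIMIT ?? 100.0),
    monthlyLimitUsd: Number(process.env.MONTHLY_COST_LIMIT ?? 1000.0),
  },
};
```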
### Advanced Configuration
The server automatically configures escalation rules and agent role templates. See `src/config/config.ts` for detailed options.
## MCP Tools Reference
The BZZZ MCP Server provides 6 core tools for agent interaction:
### 1. bzzz_announce
Announce agent presence and capabilities on the BZZZ network.
**Input Schema:**
```json
{
"agent_id": "string (required)",
"role": "string (required)",
"capabilities": ["string"],
"specialization": "string",
"max_tasks": "number (default: 3)"
}
```
**Example:**
```json
{
"agent_id": "architect-001",
"role": "architect",
"capabilities": ["system_design", "code_review", "performance_analysis"],
"specialization": "distributed_systems",
"max_tasks": 5
}
```
### 2. bzzz_lookup
Discover agents and resources using semantic addressing.
**Input Schema:**
```json
{
"semantic_address": "string (required)",
"filter_criteria": {
"expertise": ["string"],
"availability": "boolean",
"performance_threshold": "number"
}
}
```
**Address Format:** `bzzz://agent:role@project:task/path`
**Example:**
```json
{
"semantic_address": "bzzz://*:architect@myproject:api_design",
"filter_criteria": {
"expertise": ["REST", "GraphQL"],
"availability": true,
"performance_threshold": 0.8
}
}
```
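To make the address format concrete, the sketch below shows one way such an address could be decomposed; the regular expression and function are illustrative, not the server's actual resolver:
```typescript
// Illustrative parser for bzzz://agent:role@project:task/path addresses.
// This is a sketch of the address shape, not the server's resolution logic.
function parseSemanticAddress(address: string) {
  const match = address.match(/^bzzz:\/\/([^:]+):([^@]+)@([^:]+):([^/]+)(\/.*)?$/);
  if (!match) throw new Error(`Invalid semantic address: ${address}`);
  const [, agent, role, project, task, path] = match;
  return { agent, role, project, task, path: path ?? '' };
}

// parseSemanticAddress('bzzz://*:architect@myproject:api_design')
// => { agent: '*', role: 'architect', project: 'myproject', task: 'api_design', path: '' }
```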
### 3. bzzz_get
Retrieve content from BZZZ semantic addresses.
**Input Schema:**
```json
{
"address": "string (required)",
"include_metadata": "boolean (default: true)",
"max_history": "number (default: 10)"
}
```
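For instance, fetching the recent history of a task address might use arguments along these lines (the address and values are illustrative, not real network entries):
```typescript
// Illustrative bzzz_get arguments; the address and limits are hypothetical.
const getArgs = {
  address: 'bzzz://architect-001:architect@myproject:api_design',
  include_metadata: true,
  max_history: 5,
};
```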
### 4. bzzz_post
Post events or messages to BZZZ addresses.
**Input Schema:**
```json
{
"target_address": "string (required)",
"message_type": "string (required)",
"content": "object (required)",
"priority": "string (low|medium|high|urgent, default: medium)",
"thread_id": "string (optional)"
}
```
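As an illustration, posting a review request to a reviewer address might look roughly like this (the address, content, and thread ID are made up for the example):
```typescript
// Illustrative bzzz_post arguments; all values are hypothetical.
const postArgs = {
  target_address: 'bzzz://*:code_reviewer@myproject:auth_api',
  message_type: 'review_request',
  content: { summary: 'Please review the new authentication middleware.' },
  priority: 'high',
  thread_id: 'thread-auth-api-001',
};
```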
### 5. bzzz_thread
Manage threaded conversations between agents.
**Input Schema:**
```json
{
"action": "string (create|join|leave|list|summarize, required)",
"thread_id": "string (required for most actions)",
"participants": ["string"] (required for create),
"topic": "string (required for create)"
}
```
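For example, creating a design-review thread might pass arguments like these (participants and topic are illustrative):
```typescript
// Illustrative bzzz_thread "create" arguments; participants and topic are
// hypothetical. No thread_id is passed here, since the schema only marks it
// as required "for most actions" and create supplies participants and topic.
const createThreadArgs = {
  action: 'create',
  participants: ['architect-001', 'code-reviewer-001'],
  topic: 'API design review for the authentication service',
};
```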
**Thread Management:**
- **Create**: Start new discussion thread
- **Join**: Add agent to existing thread
- **Leave**: Remove agent from thread
- **List**: Get threads for current agent
- **Summarize**: Generate thread summary
### 6. bzzz_subscribe
Subscribe to real-time events from the BZZZ network.
**Input Schema:**
```json
{
"event_types": ["string"] (required),
"filter_address": "string (optional)",
"callback_webhook": "string (optional)"
}
```
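For example, subscribing to escalation and task events for a single project might look roughly like this (the event type names, filter address, and webhook URL are illustrative):
```typescript
// Illustrative bzzz_subscribe arguments; event names and URLs are hypothetical.
const subscribeArgs = {
  event_types: ['task_assigned', 'thread_escalated'],
  filter_address: 'bzzz://*:*@myproject:*',
  callback_webhook: 'https://example.com/hooks/bzzz-events',
};
```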
## Agent Roles & Capabilities
The MCP server comes with predefined agent role templates:
### Architect Agent
- **Specialization**: System design and architecture
- **Capabilities**: `system_design`, `architecture_review`, `technology_selection`, `scalability_analysis`
- **Use Cases**: Technical guidance, design validation, technology decisions
### Code Reviewer Agent
- **Specialization**: Code quality and security
- **Capabilities**: `code_review`, `security_analysis`, `performance_optimization`, `best_practices_enforcement`
- **Use Cases**: Pull request reviews, security audits, code quality checks
### Documentation Agent
- **Specialization**: Technical writing
- **Capabilities**: `technical_writing`, `api_documentation`, `user_guides`, `knowledge_synthesis`
- **Use Cases**: API docs, user manuals, knowledge base creation
## Conversation Management
### Thread Lifecycle
```mermaid
graph TD
A[Create Thread] --> B[Active]
B --> C[Add Participants]
B --> D[Exchange Messages]
D --> E{Escalation Triggered?}
E -->|Yes| F[Escalated]
E -->|No| D
F --> G[Human Intervention]
G --> H[Resolved]
B --> I[Paused]
I --> B
B --> J[Completed]
```
### Escalation Rules
The system automatically escalates threads based on:
1. **Long Running Threads**: > 2 hours with no progress
2. **Consensus Failure**: > 3 disagreements in discussions
3. **Error Rate**: High failure rate in thread messages
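Conceptually, each rule pairs a trigger condition with an escalation action, roughly as in the sketch below; the field names are illustrative and the real rule definitions live in `src/config/config.ts`:
```typescript
// Sketch of an escalation rule; field names are illustrative and do not
// reflect the server's actual rule schema (see src/config/config.ts).
interface EscalationRule {
  name: string;
  // Returns true when a thread should be escalated.
  trigger: (thread: { ageHours: number; disagreements: number; errorRate: number }) => boolean;
  action: 'notify_human' | 'request_expert' | 'escalate_to_architect' | 'create_decision_thread';
}

const longRunningThreadRule: EscalationRule = {
  name: 'long_running_thread',
  trigger: (thread) => thread.ageHours > 2,      // rule 1 above: > 2 hours with no progress
  action: 'notify_human',
};

const consensusFailureRule: EscalationRule = {
  name: 'consensus_failure',
  trigger: (thread) => thread.disagreements > 3, // rule 2 above: > 3 disagreements
  action: 'create_decision_thread',
};
```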
### Escalation Actions
- **Notify Human**: Alert project managers or stakeholders
- **Request Expert**: Bring in specialized agents
- **Escalate to Architect**: Involve senior technical decision makers
- **Create Decision Thread**: Start focused decision-making process
## Cost Management
### Pricing (GPT-5 Estimates)
- **Prompt Tokens**: $0.05 per 1K tokens
- **Completion Tokens**: $0.15 per 1K tokens
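Given those per-token estimates, the cost of a single request can be computed as in this small sketch (the function is illustrative; the Cost Tracker component performs a similar calculation internally):
```typescript
// Estimate the cost of one request from its token counts, using the
// GPT-5 pricing estimates listed above.
function estimateRequestCostUsd(promptTokens: number, completionTokens: number): number {
  const PROMPT_COST_PER_1K = 0.05;      // USD per 1K prompt tokens
  const COMPLETION_COST_PER_1K = 0.15;  // USD per 1K completion tokens
  return (promptTokens / 1000) * PROMPT_COST_PER_1K +
         (completionTokens / 1000) * COMPLETION_COST_PER_1K;
}

// Example: 2,000 prompt tokens + 1,000 completion tokens ≈ $0.25
```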
### Cost Tracking Features
- Real-time usage monitoring
- Daily and monthly spending limits
- Automatic warnings at 80% threshold
- Per-model cost breakdown
- Usage analytics and reporting
### Cost Optimization Tips
1. Use appropriate temperature settings (0.3 for consistent tasks, 0.7 for creative work)
2. Set reasonable token limits for different task types
3. Monitor high-usage agents and optimize prompts
4. Use streaming for real-time applications
## Integration with BZZZ Go Service
### Required BZZZ API Endpoints
The MCP server expects these endpoints from the BZZZ Go service:
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/api/v1/health` | GET | Health check |
| `/api/v1/pubsub/publish` | POST | Publish messages |
| `/api/v1/p2p/send` | POST | Direct messaging |
| `/api/v1/network/query` | POST | Network queries |
| `/api/v1/network/status` | GET | Network status |
| `/api/v1/projects/{id}/data` | GET | Project data |
| `/api/v1/ws` | WebSocket | Real-time events |
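For a quick connectivity check, the health endpoint can be probed directly; the snippet below is a minimal sketch using the default `BZZZ_NODE_URL`:
```typescript
// Minimal health probe against the BZZZ Go service (Node.js 18+ global fetch).
// The base URL matches the default BZZZ_NODE_URL; adjust for your deployment.
async function checkBzzzHealth(baseUrl = 'http://localhost:8080'): Promise<boolean> {
  try {
    const response = await fetch(`${baseUrl}/api/v1/health`);
    return response.ok;
  } catch {
    return false;
  }
}
```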
### Message Format
```json
{
"type": "message_type",
"content": {...},
"sender": "node_id",
"timestamp": "2025-08-09T16:22:20Z",
"messageId": "msg-unique-id",
"networkId": "bzzz-local"
}
```
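Expressed as a TypeScript type, the envelope above corresponds roughly to the following sketch (field names mirror the JSON; the exact types used internally may differ):
```typescript
// Sketch of the message envelope shown above; types are illustrative.
interface BzzzMessage {
  type: string;                      // message type, e.g. "task_update"
  content: Record<string, unknown>;  // message payload
  sender: string;                    // originating node ID
  timestamp: string;                 // ISO 8601, e.g. "2025-08-09T16:22:20Z"
  messageId: string;                 // unique message identifier
  networkId: string;                 // e.g. "bzzz-local"
}
```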
## Development
### Project Structure
```
mcp-server/
├── src/
│   ├── agents/            # Agent management
│   ├── ai/                # OpenAI integration
│   ├── config/            # Configuration
│   ├── conversations/     # Thread management
│   ├── p2p/               # BZZZ network client
│   ├── tools/             # MCP protocol tools
│   ├── utils/             # Utilities (logging, cost tracking)
│   └── index.ts           # Main server
├── dist/                  # Compiled JavaScript
├── test-integration.js    # Integration tests
├── package.json
├── tsconfig.json
└── README.md
```
### Building and Testing
```bash
# Install dependencies
npm install
# Build TypeScript
npm run build
# Run development server
npm run dev
# Run linting
npm run lint
# Format code
npm run format
# Run integration test
node test-integration.js
```
### Adding New Agent Types
1. **Define Role Configuration** in `src/config/config.ts`:
```typescript
{
  role: 'new_role',
  specialization: 'domain_expertise',
  capabilities: ['capability1', 'capability2'],
  systemPrompt: 'Your role-specific prompt...',
  interactionPatterns: {
    'other_role': 'interaction_pattern'
  }
}
```
2. **Add Task Types** in `src/agents/agent-manager.ts`:
```typescript
case 'new_task_type':
  result = await this.executeNewTaskType(agent, task, taskData);
  break;
```
3. **Test Integration** with existing agents and workflows.
## Monitoring and Observability
### Logging
The server provides structured logging with multiple levels:
```typescript
// Component-specific logging
const logger = new Logger('ComponentName');
logger.info('Operation completed', { metadata });
logger.error('Operation failed', { error: error.message });
```
### Metrics and Health
- **Agent Performance**: Success rates, response times, task completion
- **Thread Health**: Active threads, escalation rates, resolution times
- **Network Status**: Connection health, message throughput, peer count
- **Cost Analytics**: Spending trends, model usage, token consumption
### Debugging
Enable debug logging:
```bash
export LOG_LEVEL=debug
npm run dev
```
View detailed component interactions, P2P network events, and OpenAI API calls.
## Troubleshooting
### Common Issues
**1. "OpenAI API key not found"**
- Ensure API key file exists: `~/chorus/business/secrets/openai-api-key-for-bzzz.txt`
- Check file permissions and content
**2. "Failed to connect to BZZZ service"**
- Verify BZZZ Go service is running on `localhost:8080`
- Check network connectivity and firewall settings
- Verify API endpoint availability
**3. "Thread escalation not working"**
- Check escalation rule configuration
- Verify human notification endpoints
- Review escalation logs for rule triggers
**4. "High API costs"**
- Review daily/monthly limits in configuration
- Monitor token usage per agent type
- Optimize system prompts and temperature settings
- Use streaming for long-running conversations
### Performance Optimization
1. **Agent Management**
- Limit concurrent tasks per agent
- Use performance thresholds for agent selection
- Implement agent health monitoring
2. **Conversation Threading**
- Set appropriate thread timeouts
- Use thread summarization for long discussions
- Implement thread archival policies
3. **Network Efficiency**
- Use WebSocket connections for real-time events
- Implement message batching for bulk operations
- Cache frequently accessed network data
## Security Considerations
### API Key Management
- Store OpenAI keys securely outside of code repository
- Use environment-specific key files
- Implement key rotation procedures
### Network Security
- Use HTTPS/WSS for all external connections
- Validate all incoming P2P messages
- Implement rate limiting for API calls
### Agent Isolation
- Sandbox agent executions where possible
- Validate agent capabilities and permissions
- Monitor for unusual agent behavior patterns
## Deployment
### Production Checklist
- [ ] OpenAI API key configured and tested
- [ ] BZZZ Go service running and accessible
- [ ] Cost limits set appropriately for environment
- [ ] Logging configured for production monitoring
- [ ] WebSocket connections tested for stability
- [ ] Escalation rules configured for team workflow
- [ ] Performance metrics and alerting set up
### Docker Deployment
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY dist/ ./dist/
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
### Systemd Service
```ini
[Unit]
Description=BZZZ MCP Server
After=network.target
[Service]
Type=simple
User=bzzz
WorkingDirectory=/opt/bzzz-mcp-server
ExecStart=/usr/bin/node dist/index.js
Restart=always
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
```
## Contributing
### Development Workflow
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/new-feature`
3. Make changes following TypeScript and ESLint rules
4. Add tests for new functionality
5. Update documentation as needed
6. Submit a pull request
### Code Style
- Use TypeScript strict mode
- Follow existing naming conventions
- Add JSDoc comments for public APIs
- Include comprehensive error handling
- Write meaningful commit messages
## License
This project follows the same license as the BZZZ project.
## Support
For issues and questions:
- Review this documentation and troubleshooting section
- Check the integration test for basic connectivity
- Examine logs for detailed error information
- Consult the BZZZ project documentation for P2P network issues
---
**BZZZ MCP Server v1.0.0** - Enabling GPT-5 agents to collaborate in distributed P2P networks.