Clean up BZZZ development detritus and enhance .gitignore

Major cleanup of development artifacts and obsolete files:

REMOVED:
- archived/2025-07-17/ directory (11 outdated development files)
- old-docs/ directory (10 obsolete documentation files)
- Compiled binaries: bzzz-port3333, test/bzzz-chat-api
- Development scripts: intensive_coordination_test.sh, start_bzzz_with_mock_api.sh,
  test_hmmm_monitoring.sh, trigger_mock_coordination.sh
- Test artifacts: test/run_chat_api.sh, test/test_chat_api.py
- Empty data/chat-api-logs/ directory

ENHANCED:
- Updated .gitignore with comprehensive patterns to prevent future artifact accumulation
- Added patterns for compiled binaries, build artifacts, logs, temporary files
- Included development-specific ignores for archived/, old-docs/, test artifacts

PRESERVED:
- All Phase 2B documentation in docs/
- Essential deployment scripts (install-service.sh, uninstall-service.sh, deploy-bzzz-cluster.sh)
- Project status tracking (PROJECT_TODOS.md, README.md)
- Core source code and production configurations

Space saved: ~95MB of development detritus removed
Project is now clean and production-ready with enhanced artifact prevention

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: anthonyrawlins
Date: 2025-08-09 13:27:17 +10:00
Parent: ee6bb09511
Commit: c9f4d2df0f

30 changed files with 21 additions and 10750 deletions

.gitignore
@@ -1,5 +1,6 @@
-# Binaries
+# Compiled binaries
 bzzz
+bzzz-*
 *.exe
 *.exe~
 *.dll
@@ -11,10 +12,16 @@ bzzz
 # Output of the go coverage tool
 *.out
+coverage.out
 
 # Go workspace file
 go.work
 
+# Build artifacts
+target/
+dist/
+build/
+
 # IDE files
 .vscode/
 .idea/
 
@@ -28,9 +35,21 @@ go.work
 ehthumbs.db
 Thumbs.db
 
-# Logs
+# Logs and data
 *.log
+logs/
+data/chat-api-logs/
 
 # Temporary files
 *.tmp
 *.temp
+*~
+*.bak
+
+# Development artifacts
+archived/
+old-docs/
+
+# Test artifacts
+test/bzzz-*
+test/*.sh


@@ -1,192 +0,0 @@
# Project Bzzz: Decentralized Task Execution Network - Development Plan
## 1. Overview & Vision
This document outlines the development plan for **Project Bzzz**, a decentralized task execution network designed to enhance the existing **Hive Cluster**.
The vision is to evolve from a centrally coordinated system to a resilient, peer-to-peer (P2P) mesh of autonomous agents. This architecture eliminates single points of failure, improves scalability, and allows for dynamic, collaborative task resolution. Bzzz will complement the existing N8N orchestration layer, acting as a powerful, self-organizing execution fabric.
---
## 2. Core Architecture
The system is built on three key pillars: decentralized networking, GitHub-native task management, and verifiable, distributed logging.
| Component | Technology | Purpose |
| :--- | :--- | :--- |
| **Networking** | **libp2p** | For peer discovery (mDNS, DHT), identity, and secure P2P communication. |
| **Task Management** | **GitHub Issues** | The single source of truth for task definition, allocation, and tracking. |
| **Messaging** | **libp2p Pub/Sub** | For broadcasting capabilities and coordinating collaborative help requests. |
| **Logging** | **Hypercore Protocol** | For creating a tamper-proof, decentralized, and replicable logging system for debugging. |
---
## 3. Architectural Refinements & Key Features
Based on our analysis, the following refinements will be adopted:
### 3.1. Task Allocation via GitHub Assignment
To prevent race conditions and simplify logic, we will use GitHub's native issue assignment mechanism as an atomic lock. The `task_claim` pub/sub topic is no longer needed.
**Workflow** (a Go sketch follows the list):
1. A `bzzz-agent` discovers a new, *unassigned* issue in the target repository.
2. The agent immediately attempts to **assign itself** to the issue via the GitHub API.
3. **Success:** If the assignment succeeds, the agent has exclusive ownership of the task and begins execution.
4. **Failure:** If the assignment fails (because another agent was faster), the agent logs the contention and looks for another task.
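A minimal Go sketch of this claim-and-verify step, assuming the `go-github` client recommended in Section 6; the module version, repository names, and the agent's GitHub login are illustrative:
```go
package main

import (
	"context"
	"fmt"

	"github.com/google/go-github/v53/github" // version path is illustrative
	"golang.org/x/oauth2"
)

// tryClaimTask attempts to claim an issue by assigning this agent to it,
// then re-reads the assignee list to detect a lost race.
func tryClaimTask(ctx context.Context, client *github.Client, owner, repo, self string, number int) (bool, error) {
	issue, _, err := client.Issues.AddAssignees(ctx, owner, repo, number, []string{self})
	if err != nil {
		return false, err
	}
	for _, a := range issue.Assignees {
		if a.GetLogin() != self {
			// Another agent is already assigned: log contention, move on.
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ctx := context.Background()
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "ghp_example"}) // placeholder token
	client := github.NewClient(oauth2.NewClient(ctx, ts))
	claimed, err := tryClaimTask(ctx, client, "example-org", "task-repo", "bzzz-agent-1", 42)
	fmt.Println(claimed, err)
}
```
Because issue assignment is not a strict compare-and-swap, re-reading the assignee list after the call is one way to approximate the exclusivity check described above.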
### 3.2. Collaborative Task Execution with Hop Limit
The `task_help_request` feature enables agents to collaborate on complex tasks. To prevent infinite request loops and network flooding, we will implement a **hop limit** (sketched below).
- **Hop Limit:** A `task_help_request` will be discarded after being forwarded **3 times**.
- If a task cannot be completed after 3 help requests, it will be marked as "failed," and a comment will be added to the GitHub issue for manual review.
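A sketch of the hop-limit guard an agent could apply before re-forwarding a request; the `HelpRequest` struct is illustrative rather than a fixed wire format:
```go
package main

import "fmt"

const maxHops = 3

// HelpRequest is an illustrative pub/sub payload for task_help_request.
type HelpRequest struct {
	TaskID   string
	FromNode string
	Hops     int // number of times this request has already been forwarded
}

// forwardHelpRequest re-publishes a request unless the hop limit is reached,
// in which case the task should be marked failed and escalated on GitHub.
func forwardHelpRequest(req HelpRequest, publish func(HelpRequest)) bool {
	if req.Hops >= maxHops {
		fmt.Printf("task %s: hop limit reached, escalating for manual review\n", req.TaskID)
		return false
	}
	req.Hops++
	publish(req)
	return true
}

func main() {
	req := HelpRequest{TaskID: "#42", FromNode: "pi-node-2", Hops: 2}
	forwardHelpRequest(req, func(r HelpRequest) {
		fmt.Printf("re-publishing %s with hops=%d\n", r.TaskID, r.Hops)
	})
}
```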
### 3.3. Decentralized Logging with Hypercore
To solve the challenge of debugging a distributed system, each agent will manage its own secure, append-only log stream using the Hypercore Protocol.
- **Log Creation:** Each agent generates a `hypercore` and broadcasts its public key via the `capabilities` message.
- **Log Replication:** Any other agent (or a dedicated monitoring node) can use this key to replicate the log stream in real-time or after the fact.
- **Benefits:** This creates a verifiable and resilient audit trail for every agent's actions, which is invaluable for debugging without relying on a centralized logging server.
---
## 4. Integration with the Hive Ecosystem
Bzzz is designed to integrate seamlessly with the existing cluster infrastructure.
### 4.1. Deployment Strategy: Docker + Host Networking (PREFERRED APPROACH)
Based on comprehensive analysis of the existing Hive infrastructure and Bzzz's P2P requirements, we will use a **hybrid deployment approach** that combines Docker containerization with host networking:
```yaml
# Docker Compose configuration for bzzz-agent
services:
  bzzz-agent:
    image: registry.home.deepblack.cloud/tony/bzzz-agent:latest
    network_mode: "host"  # Direct host network access for P2P
    volumes:
      - ./data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock  # Docker API access
    environment:
      - NODE_ID=${HOSTNAME}
      - GITHUB_TOKEN_FILE=/run/secrets/github-token
    secrets:
      - github-token
    restart: unless-stopped
    deploy:
      placement:
        constraints:
          - node.role == worker  # Deploy on all worker nodes
```
**Rationale for Docker + Host Networking:**
- **P2P Networking Advantages**: Direct access to host networking enables efficient mDNS discovery, NAT traversal, and lower latency communication
- **Infrastructure Consistency**: Maintains Docker Swarm deployment patterns and existing operational procedures
- **Resource Efficiency**: Eliminates Docker overlay network overhead for P2P communication
- **Best of Both Worlds**: Container portability and management with native network performance
### 4.2. Cluster Integration Points
- **Phased Rollout:** Deploy `bzzz-agent` containers across all cluster nodes (ACACIA, WALNUT, IRONWOOD, ROSEWOOD, FORSTEINET) using Docker Swarm
- **Network Architecture**: Leverages existing 192.168.1.0/24 LAN for P2P mesh communication
- **Resource Coordination**: Agents discover and utilize existing Ollama endpoints (port 11434) and CLI tools
- **Storage Integration**: Uses NFS shares (/rust/containers/) for shared configuration and Hypercore log storage
### 4.3. Integration with Existing Services
- **N8N as a Task Initiator:** High-level workflows in N8N will now terminate by creating a detailed GitHub Issue. This action triggers the Bzzz mesh, which handles the execution and reports back by creating a Pull Request.
- **Hive Coexistence**: Bzzz will run alongside existing Hive services on different ports, allowing gradual migration of workloads
- **The "Mesh Visualizer":** A dedicated monitoring dashboard will be created. It will:
1. Subscribe to the `capabilities` pub/sub topic to visualize the live network topology.
2. Replicate and display the Hypercore log streams from all active agents.
3. Integrate with existing Grafana dashboards for unified monitoring
---
## 5. Security Strategy
- **GitHub Token Management:** Agents will use short-lived, fine-grained Personal Access Tokens. These tokens will be stored securely in **HashiCorp Vault** or a similar secrets management tool, and retrieved by the agent at runtime.
- **Network Security:** All peer-to-peer communication is automatically **encrypted end-to-end** by `libp2p`.
---
## 6. Recommended Tech Stack
| Category | Recommendation | Notes |
| :--- | :--- | :--- |
| **Language** | **Go** or **Rust** | Strongly recommended for performance, concurrency, and system-level programming. |
| **Networking** | `go-libp2p` / `rust-libp2p` | The official and most mature implementations. |
| **Logging** | `hypercore-go` / `hypercore-rs` | Libraries for implementing the Hypercore Protocol. |
| **GitHub API** | `go-github` / `octokit.rs` | Official and community-maintained clients for interacting with GitHub. |
---
## 7. Development Milestones
This 8-week plan incorporates the refined architecture.
| Week | Deliverables | Key Features |
| :--- | :--- | :--- |
| **1** | **P2P Foundation & Logging** | Setup libp2p peer discovery and establish a **Hypercore log stream** for each agent. |
| **2** | **Capability Broadcasting** | Implement `capability_detector` and broadcast agent status via pub/sub. |
| **3** | **GitHub Task Claiming** | Ingest issues from GitHub and implement the **assignment-based task claiming** logic. |
| **4** | **Core Task Execution** | Integrate local CLIs (Ollama, etc.) to perform basic tasks based on issue content. |
| **5** | **GitHub Result Workflow** | Implement logic to create Pull Requests or follow-up issues upon task completion. |
| **6** | **Collaborative Help** | Implement the `task_help_request` and `task_help_response` flow with the **hop limit**. |
| **7** | **Monitoring & Visualization** | Build the first version of the **Mesh Visualizer** dashboard to display agent status and logs. |
| **8** | **Deployment & Testing** | Package the agent as a Docker container with host networking, write Docker Swarm deployment guide, and conduct end-to-end testing across cluster nodes. |
---
## 8. Potential Risks & Mitigation
- **Network Partitions ("Split-Brain"):**
- **Risk:** A network partition could lead to two separate meshes trying to work on the same task.
- **Mitigation:** Using GitHub's issue assignment as the atomic lock effectively solves this. The first agent to successfully claim the issue wins, regardless of network state.
- **Dependency on GitHub:**
- **Risk:** The system's ability to acquire new tasks is dependent on the availability of the GitHub API.
- **Mitigation:** This is an accepted trade-off for gaining a robust, native task management platform. Agents can be designed to continue working on already-claimed tasks during a GitHub outage.
- **Debugging Complexity:**
- **Risk:** Debugging distributed systems remains challenging.
- **Mitigation:** The Hypercore-based logging system provides a powerful and verifiable audit trail, which is a significant step towards mitigating this complexity. The Mesh Visualizer will also be a critical tool.
- **Docker Host Networking Security:**
- **Risk:** Host networking mode exposes containers directly to the host network, reducing isolation.
- **Mitigation:**
- Implement strict firewall rules on each node
- Use libp2p's built-in encryption for all P2P communication
- Run containers with restricted user privileges (non-root)
- Regular security audits of exposed ports and services
---
## 9. Migration Strategy from Hive
### 9.1. Gradual Transition Plan
1. **Phase 1: Parallel Deployment** (Weeks 1-2)
- Deploy Bzzz agents alongside existing Hive infrastructure
- Use different port ranges to avoid conflicts
- Monitor resource usage and network performance
2. **Phase 2: Simple Task Migration** (Weeks 3-4)
- Route basic code generation tasks through GitHub issues → Bzzz
- Keep complex multi-agent workflows in existing Hive + n8n
- Compare performance metrics between systems
3. **Phase 3: Workflow Integration** (Weeks 5-6)
- Modify n8n workflows to create GitHub issues as final step
- Implement Bzzz → Hive result reporting for hybrid workflows
- Test end-to-end task lifecycle
4. **Phase 4: Full Migration** (Weeks 7-8)
- Migrate majority of workloads to Bzzz mesh
- Retain Hive for monitoring and dashboard functionality
- Plan eventual deprecation of centralized coordinator
### 9.2. Compatibility Layer
- **API Bridge**: Maintain existing Hive API endpoints that proxy to Bzzz mesh
- **Data Migration**: Export task history and agent configurations from PostgreSQL
- **Monitoring Continuity**: Integrate Bzzz metrics into existing Grafana dashboards


@@ -1,138 +0,0 @@
# Bzzz P2P Coordination System - Progress Report
## Overview
This report documents the implementation and testing progress of the Bzzz P2P mesh coordination system with meta-thinking capabilities (Antennae framework).
## Major Accomplishments
### 1. High-Priority Feature Implementation ✅
- **Fixed stub function implementations** in `github/integration.go`
- Implemented proper task filtering based on agent capabilities
- Added task announcement logic for P2P coordination
- Enhanced capability-based task matching with keyword analysis
- **Completed Hive API client integration**
- Extended PostgreSQL database schema for bzzz integration
- Updated ProjectService to use database instead of filesystem scanning
- Implemented secure Docker secrets for GitHub token access
- **Removed hardcoded repository configuration**
- Dynamic repository discovery via Hive API
- Database-driven project management
### 2. Security Enhancements ✅
- **Docker Secrets Implementation**
- Replaced filesystem-based GitHub token access with Docker secrets
- Updated docker-compose.swarm.yml with proper secrets configuration
- Enhanced security posture for credential management
### 3. Database Integration ✅
- **Extended Hive Database Schema**
- Added bzzz-specific fields to projects table
- Inserted Hive repository as test project with 9 bzzz-task labeled issues
- Successful GitHub API integration showing real issue discovery
### 4. Independent Testing Infrastructure ✅
- **Mock Hive API Server** (`mock-hive-server.py`)
- Provides fake projects and tasks for real bzzz coordination
- Comprehensive task simulation with realistic coordination scenarios
- Background task generation for dynamic testing
- Enhanced with work capture endpoints (an example agent-side call is sketched after this list):
- `/api/bzzz/projects/<id>/submit-work` - Capture actual agent work/code
- `/api/bzzz/projects/<id>/create-pr` - Capture pull request content
- `/api/bzzz/projects/<id>/coordination-discussion` - Log coordination discussions
- `/api/bzzz/projects/<id>/log-prompt` - Log agent prompts and model usage
- **Real-Time Monitoring Dashboard** (`cmd/bzzz-monitor.py`)
- btop/nvtop-style console interface for coordination monitoring
- Real coordination channel metrics and message rate tracking
- Compact timestamp display and efficient space utilization
- Live agent activity and P2P network status monitoring
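As an example of the work capture endpoints above, an agent-side submission could look like the Go sketch below. The port (Flask's default 5000) and the payload values are assumptions here; the real port comes from mock-hive-server.py's configuration:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Illustrative payload matching the submit-work endpoint's expected fields.
	payload := map[string]any{
		"task_number":    15,
		"agent_id":       "walnut-node",
		"work_type":      "code",
		"content":        "diff or file contents here",
		"commit_message": "Add WebSocket support",
	}
	body, _ := json.Marshal(payload)

	// Port 5000 is Flask's default and an assumption here.
	resp, err := http.Post(
		"http://localhost:5000/api/bzzz/projects/1/submit-work",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		fmt.Println("submit failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```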
### 5. P2P Network Verification ✅
- **Confirmed Multi-Node Operation**
- WALNUT, ACACIA, IRONWOOD nodes running as systemd services
- 2 connected peers with regular availability broadcasts
- P2P mesh discovery and communication functioning correctly
### 6. Cross-Repository Coordination Framework ✅
- **Antennae Meta-Discussion System**
- Advanced cross-repository coordination capabilities
- Dependency detection and conflict resolution
- AI-powered coordination plan generation
- Consensus detection algorithms (a naive sketch follows this list)
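A deliberately naive Go sketch of such a consensus check, assuming messages are typed like the coordination transcripts elsewhere in these docs; a real detector would weigh message content, not just type:
```go
package main

import "fmt"

// Message is an illustrative coordination message.
type Message struct {
	From string
	Type string // "proposal", "question", "agreement", ...
}

// consensusReached is a deliberately simple check: every distinct
// participant has issued at least one "agreement" message.
func consensusReached(msgs []Message) bool {
	participants := map[string]bool{}
	agreed := map[string]bool{}
	for _, m := range msgs {
		participants[m.From] = true
		if m.Type == "agreement" {
			agreed[m.From] = true
		}
	}
	return len(participants) > 0 && len(agreed) == len(participants)
}

func main() {
	msgs := []Message{
		{From: "walnut", Type: "proposal"},
		{From: "acacia", Type: "agreement"},
		{From: "walnut", Type: "agreement"},
	}
	fmt.Println("consensus:", consensusReached(msgs))
}
```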
## Current System Status
### Working Components
1. ✅ P2P mesh networking (libp2p + mDNS)
2. ✅ Agent availability broadcasting
3. ✅ Database-driven repository discovery
4. ✅ Secure credential management
5. ✅ Real-time monitoring infrastructure
6. ✅ Mock API testing framework
7. ✅ Work capture endpoints (ready for use)
### Identified Issues
1. **GitHub Repository Verification Failures**
- Mock repositories (e.g., `mock-org/hive`) return 404 errors
- Prevents agents from proceeding with task discovery
- Need local Git hosting solution
2. **Task Claim Logic Incomplete**
- Agents broadcast availability but don't actively claim tasks
- Missing integration between P2P discovery and task claiming
- Need to enhance bzzz binary task claim workflow
3. **Docker Overlay Network Issues**
- Some connectivity issues between services
- May impact agent coordination in containerized environments
## File Locations and Key Components
### Core Implementation Files
- `/home/tony/chorus/project-queues/active/BZZZ/github/integration.go` - Enhanced task filtering and P2P coordination
- `/home/tony/chorus/project-queues/inactive/hive/backend/app/services/project_service.py` - Database-driven project service
- `/home/tony/chorus/project-queues/inactive/hive/docker-compose.swarm.yml` - Docker secrets configuration
### Testing and Monitoring
- `/home/tony/chorus/project-queues/active/BZZZ/mock-hive-server.py` - Mock API with work capture
- `/home/tony/chorus/project-queues/active/BZZZ/cmd/bzzz-monitor.py` - Real-time coordination dashboard
- `/home/tony/chorus/project-queues/active/BZZZ/scripts/trigger_mock_coordination.sh` - Coordination test script
### Configuration
- `/etc/systemd/system/bzzz.service.d/mock-api.conf` - Systemd override for mock API testing
- `/tmp/bzzz_agent_work/` - Directory for captured agent work (when functioning)
- `/tmp/bzzz_pull_requests/` - Directory for captured pull requests
- `/tmp/bzzz_agent_prompts/` - Directory for captured agent prompts and model usage
## Technical Achievements
### Database Schema Extensions
```sql
-- Extended projects table with bzzz integration fields
ALTER TABLE projects ADD COLUMN bzzz_enabled BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN ready_to_claim BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN private_repo BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN github_token_required BOOLEAN DEFAULT false;
```
### Docker Secrets Integration
```yaml
secrets:
  - github_token
environment:
  - GITHUB_TOKEN_FILE=/run/secrets/github_token
```
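On the agent side, resolving the token under this convention is a few lines of Go; this is a sketch assuming the `GITHUB_TOKEN_FILE` environment variable shown above:
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// githubToken reads the token from the file named by GITHUB_TOKEN_FILE,
// which Docker populates from the swarm secret at /run/secrets/github_token.
func githubToken() (string, error) {
	path := os.Getenv("GITHUB_TOKEN_FILE")
	if path == "" {
		return "", fmt.Errorf("GITHUB_TOKEN_FILE is not set")
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	token, err := githubToken()
	if err != nil {
		fmt.Println("token unavailable:", err)
		return
	}
	fmt.Println("token loaded, length:", len(token))
}
```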
### P2P Network Statistics
- **Active Nodes**: 3 (WALNUT, ACACIA, IRONWOOD)
- **Connected Peers**: 2 per node
- **Network Protocol**: libp2p with mDNS discovery
- **Message Broadcasting**: Availability, capability, coordination
## Next Steps Required
See PROJECT_TODOS.md for comprehensive task list.
## Summary
The Bzzz P2P coordination system has a solid foundation with working P2P networking, database integration, secure credential management, and comprehensive testing infrastructure. The main blockers are the need for a local Git hosting solution and completion of the task claim logic in the bzzz binary.


@@ -1,224 +0,0 @@
# 🐝 Project: Bzzz — P2P Task Coordination System
## 🔧 Architecture Overview (libp2p + pubsub + JSON)
This system will complement and partially replace elements of the Hive Software System. It is intended to replace the multitude of MCP and API calls to the ollama and gemini-cli agents over port 11434 etc. By replacing the master/slave paradigm with a mesh network, we allow each node to trigger workflows or respond to calls for work as availability dictates, rather than being stuck in endless timeouts awaiting responses. We also eliminate the central coordinator as a single point of failure.
### 📂 Components
#### 1. **Peer Node**
Each machine runs a P2P agent that:
- Connects to other peers via libp2p
- Subscribes to pubsub topics
- Periodically broadcasts status/capabilities
- Receives and executes tasks
- Publishes task results as GitHub pull requests or issues
- Can request assistance from other peers
- Monitors a GitHub repository for new issues (task source)
Each node uses a dedicated GitHub account with:
- A personal access token (fine-scoped to repo/PRs)
- A configured `.gitconfig` for commit identity
#### 2. **libp2p Network**
- All peers discover each other using mDNS, Bootstrap peers, or DHT
- Peer identity is cryptographic (libp2p peer ID)
- Communication is encrypted end-to-end
#### 3. **GitHub Integration**
- Tasks are sourced from GitHub Issues in a designated repository
- Nodes will claim and respond to tasks by:
- Forking the repository (once)
- Creating a working branch
- Making changes to files as instructed by task input
- Committing changes using their GitHub identity
- Creating a pull request or additional GitHub issues
- Publishing final result as a PR, issue(s), or failure report
#### 4. **PubSub Topics**
| Topic | Direction | Purpose |
|------------------|------------------|---------------------------------------------|
| `capabilities` | Peer → All Peers | Broadcast available models, status |
| `task_broadcast` | Peer → All Peers | Publish a GitHub issue as task |
| `task_claim` | Peer → All Peers | Claim responsibility for a task |
| `task_result` | Peer → All Peers | Share PR, issue, or failure result |
| `presence_ping` | Peer → All Peers | Lightweight presence signal |
| `task_help_request` | Peer → All Peers | Request assistance for a task |
| `task_help_response`| Peer → All Peers | Offer help or handle sub-task |
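Joining these topics with GossipSub from `go-libp2p-pubsub` (one of the stack options named below) looks roughly like this sketch; the payload is abbreviated and error handling is reduced to panics:
```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Start a libp2p host and a GossipSub router on top of it.
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		panic(err)
	}

	// Join the capabilities topic and listen for peer broadcasts.
	topic, err := ps.Join("capabilities")
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Announce our own capabilities (JSON payload per the samples below).
	if err := topic.Publish(ctx, []byte(`{"type":"capabilities","node_id":"pi-node-1","status":"idle"}`)); err != nil {
		panic(err)
	}

	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Printf("from %s: %s\n", msg.ReceivedFrom, msg.Data)
	}
}
```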
### 📊 Data Flow Diagram
```
+------------------+ libp2p +------------------+
| Peer A |<------------------->| Peer B |
| |<------------------->| |
| - Publishes: | | - Publishes: |
| capabilities | | task_result |
| task_broadcast | | capabilities |
| help_request | | help_response |
| - Subscribes to: | | - Subscribes to: |
| task_result | | task_broadcast |
| help_request | | help_request |
+------------------+ +------------------+
^ ^
| |
| |
+----------------------+-----------------+
|
v
+------------------+
| Peer C |
+------------------+
```
### 📂 Sample JSON Messages
#### `capabilities`
```json
{
  "type": "capabilities",
  "node_id": "pi-node-1",
  "cpu": 43.5,
  "gpu": 2.3,
  "models": ["llama3", "mistral"],
  "installed": ["ollama", "gemini-cli"],
  "status": "idle",
  "timestamp": "2025-07-12T01:23:45Z"
}
```
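For a Go peer, this message maps onto a struct such as the following sketch; the JSON tags mirror the sample above:
```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Capabilities mirrors the capabilities pub/sub message above.
type Capabilities struct {
	Type      string    `json:"type"`
	NodeID    string    `json:"node_id"`
	CPU       float64   `json:"cpu"`
	GPU       float64   `json:"gpu"`
	Models    []string  `json:"models"`
	Installed []string  `json:"installed"`
	Status    string    `json:"status"`
	Timestamp time.Time `json:"timestamp"`
}

func main() {
	raw := []byte(`{"type":"capabilities","node_id":"pi-node-1","cpu":43.5,"gpu":2.3,` +
		`"models":["llama3","mistral"],"installed":["ollama","gemini-cli"],` +
		`"status":"idle","timestamp":"2025-07-12T01:23:45Z"}`)
	var c Capabilities
	if err := json.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s with models %v\n", c.NodeID, c.Status, c.Models)
}
```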
#### `task_broadcast`
```json
{
  "type": "task",
  "task_id": "#42",
  "repo": "example-org/task-repo",
  "issue_url": "https://github.com/example-org/task-repo/issues/42",
  "model": "ollama",
  "input": "Add unit tests to utils module",
  "params": {"branch_prefix": "task-42-"},
  "timestamp": "2025-07-12T02:00:00Z"
}
```
#### `task_claim`
```json
{
  "type": "task_claim",
  "task_id": "#42",
  "node_id": "pi-node-2",
  "timestamp": "2025-07-12T02:00:03Z"
}
```
#### `task_result`
```json
{
  "type": "task_result",
  "task_id": "#42",
  "node_id": "pi-node-2",
  "result_type": "pull_request",
  "result_url": "https://github.com/example-org/task-repo/pull/97",
  "duration_ms": 15830,
  "timestamp": "2025-07-12T02:10:05Z"
}
```
#### `task_help_request`
```json
{
  "type": "task_help_request",
  "task_id": "#42",
  "from_node": "pi-node-2",
  "reason": "Long-running task or missing capability",
  "requested_capability": "claude-cli",
  "timestamp": "2025-07-12T02:05:00Z"
}
```
#### `task_help_response`
```json
{
  "type": "task_help_response",
  "task_id": "#42",
  "from_node": "pi-node-3",
  "can_help": true,
  "capabilities": ["claude-cli"],
  "eta_seconds": 30,
  "timestamp": "2025-07-12T02:05:02Z"
}
```
---
## 🚀 Development Brief
### 🧱 Tech Stack
- **Language**: Node.js (or Go/Rust)
- **Networking**: libp2p
- **Messaging**: pubsub with JSON
- **Task Execution**: Local CLI (ollama, gemini, claude)
- **System Monitoring**: `os-utils`, `psutil`, `nvidia-smi`
- **Runtime**: systemd services on Linux
- **GitHub Interaction**: `octokit` (Node), Git CLI
### 🛠 Key Modules
#### 1. `peer_agent.js`
- Initializes libp2p node
- Joins pubsub topics
- Periodically publishes capabilities
- Listens for tasks, runs them, and reports PR/results
- Handles help requests and responses
#### 2. `capability_detector.js`
- Detects:
- CPU/GPU load
- Installed models (via `ollama list`)
- Installed CLIs (`which gemini`, `which claude`); a sketch of both checks follows
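The same detection in Go (the plan's alternative language), as a sketch: `exec.LookPath` replaces the `which` calls, and `ollama list` output is parsed line by line:
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// CLI detection: equivalent of `which gemini` / `which claude`.
	installed := []string{}
	for _, cli := range []string{"ollama", "gemini", "claude"} {
		if _, err := exec.LookPath(cli); err == nil {
			installed = append(installed, cli)
		}
	}
	fmt.Println("installed CLIs:", installed)

	// Model detection: parse `ollama list` output (header row, then one model per line).
	if out, err := exec.Command("ollama", "list").Output(); err == nil {
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		for i, line := range lines {
			if i == 0 {
				continue // skip the header row
			}
			if fields := strings.Fields(line); len(fields) > 0 {
				fmt.Println("model:", fields[0])
			}
		}
	}
}
```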
#### 3. `task_executor.js`
- Parses GitHub issue input
- Forks repo (if needed)
- Creates working branch, applies changes
- Commits changes using local Git identity
- Pushes branch and creates pull request or follow-up issues
#### 4. `github_bot.js`
- Authenticates GitHub API client
- Watches for new issues in repo
- Publishes them as `task_broadcast`
- Handles PR/issue creation and error handling
#### 5. `state_manager.js`
- Keeps internal view of network state
- Tracks peers capabilities, liveness
- Matches help requests to eligible peers (sketched below)
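An illustrative Go version of the matching step, assuming an in-memory peer table with placeholder field names:
```go
package main

import "fmt"

// Peer is the state manager's view of one node, as an illustrative type.
type Peer struct {
	NodeID       string
	Capabilities []string
	Status       string // "idle", "working", ...
}

// matchHelpers returns idle peers advertising the requested capability,
// mirroring how a help request would be routed to eligible nodes.
func matchHelpers(peers []Peer, capability string) []Peer {
	var eligible []Peer
	for _, p := range peers {
		if p.Status != "idle" {
			continue
		}
		for _, c := range p.Capabilities {
			if c == capability {
				eligible = append(eligible, p)
				break
			}
		}
	}
	return eligible
}

func main() {
	peers := []Peer{
		{NodeID: "pi-node-1", Capabilities: []string{"ollama"}, Status: "idle"},
		{NodeID: "pi-node-3", Capabilities: []string{"claude-cli"}, Status: "idle"},
	}
	fmt.Println(matchHelpers(peers, "claude-cli"))
}
```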
### 📆 Milestones
| Week | Deliverables |
| ---- | ------------------------------------------------------------ |
| 1 | libp2p peer bootstrapping + pubsub skeleton |
| 2 | JSON messaging spec + capability broadcasting |
| 3 | GitHub issue ingestion + task broadcast |
| 4 | CLI integration with Ollama/Gemini/Claude |
| 5 | GitHub PR/issue/failure workflows |
| 6 | Help request/response logic, delegation framework |
| 7 | systemd setup, CLI utilities, and resilience |
| 8 | End-to-end testing, GitHub org coordination, deployment guide|
---
Would you like a prototype `task_help_request` matchmaking function or sample test matrix for capability validation?


@@ -1,165 +0,0 @@
# Bzzz Antennae Monitoring Dashboard
A real-time console monitoring dashboard for the Bzzz P2P coordination system, similar to btop/nvtop for system monitoring.
## Features
🔍 **Real-time P2P Status**
- Connected peer count with history graph
- Node ID and network status
- Hive API connectivity status
🤖 **Agent Activity Monitoring**
- Live agent availability updates
- Agent status distribution (ready/working/busy)
- Recent activity tracking
🎯 **Coordination Activity**
- Task announcements and completions
- Coordination session tracking
- Message flow statistics
📊 **Visual Elements**
- ASCII graphs for historical data
- Color-coded status indicators
- Live activity log with timestamps
## Usage
### Basic Usage
```bash
# Run with default 1-second refresh rate
python3 cmd/bzzz-monitor.py
# Custom refresh rate (2 seconds)
python3 cmd/bzzz-monitor.py --refresh-rate 2.0
# Disable colors for logging/screenshots
python3 cmd/bzzz-monitor.py --no-color
```
### Installation as System Command
```bash
# Copy to system bin
sudo cp cmd/bzzz-monitor.py /usr/local/bin/bzzz-monitor
sudo chmod +x /usr/local/bin/bzzz-monitor
# Now run from anywhere
bzzz-monitor
```
## Dashboard Layout
```
┌─ Bzzz P2P Coordination Monitor ──────┐
│ Uptime: 0:02:15 │ Node: 12*SEE3To... │
└──────────────────────────────────────┘
P2P Network Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Connected Peers: 2
Hive API Status: Offline (Overlay Network Issues)
Peer History (last 20 samples):
███▇▆▆▇████▇▆▇███▇▆▇ (1-3 peers)
Agent Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recent Updates (1m): 8
Ready: 6
Working: 2
Coordination Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Messages: 45
Total Tasks: 12
Active Sessions: 1
Recent Tasks (5m): 8
Recent Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11:10:35 [AVAIL] Agent acacia-node... status: ready
11:10:33 [TASK] Task announcement: hive#15 - WebSocket support
11:10:30 [COORD] Meta-coordination session started
11:10:28 [AVAIL] Agent ironwood-node... status: working
11:10:25 [ERROR] Failed to get active repositories: API 404
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Press Ctrl+C to exit | Refresh rate: 1.0s
```
## Monitoring Data Sources
The dashboard pulls data from the following sources (log collection is sketched after the list):
1. **Systemd Service Logs**: `journalctl -u bzzz.service`
2. **P2P Network Status**: Extracted from bzzz log messages
3. **Agent Availability**: Parsed from availability_broadcast messages
4. **Task Activity**: Detected from task/repository-related log entries
5. **Error Tracking**: Monitors for failures and connection issues
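Collecting those journal lines programmatically is straightforward; here is a sketch in Go (the dashboard itself is Python) of the equivalent log polling, using standard journalctl flags:
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// recentBzzzLogs fetches the last n journal lines for the bzzz service,
// which the dashboard then parses for availability and task messages.
func recentBzzzLogs(n int) ([]string, error) {
	out, err := exec.Command(
		"journalctl", "-u", "bzzz.service",
		"-n", fmt.Sprint(n), "--no-pager", "-o", "cat",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
}

func main() {
	lines, err := recentBzzzLogs(50)
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	for _, l := range lines {
		if strings.Contains(l, "availability_broadcast") {
			fmt.Println("agent update:", l)
		}
	}
}
```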
## Color Coding
- 🟢 **Green**: Good status, active connections, ready agents
- 🟡 **Yellow**: Working status, moderate activity
- 🔴 **Red**: Errors, failed connections, busy agents
- 🔵 **Blue**: Information, neutral data
- 🟣 **Magenta**: Coordination-specific activity
- 🔷 **Cyan**: Network and P2P data
## Real-time Updates
The dashboard updates every 1-2 seconds by default and tracks:
- **P2P Connections**: Shows immediate peer join/leave events
- **Agent Status**: Real-time availability broadcasts from all nodes
- **Task Flow**: Live task announcements and coordination activity
- **System Health**: Continuous monitoring of service status and errors
## Performance
- **Low Resource Usage**: Python-based with minimal CPU/memory impact
- **Efficient Parsing**: Only processes recent logs (last 30-50 lines)
- **Responsive UI**: Fast refresh rates without overwhelming the terminal
- **Historical Data**: Maintains rolling buffers for trend analysis
## Troubleshooting
### No Data Appearing
```bash
# Check if bzzz service is running
systemctl status bzzz.service
# Verify log access permissions
journalctl -u bzzz.service --since "1 minute ago"
```
### High CPU Usage
```bash
# Reduce refresh rate
bzzz-monitor --refresh-rate 5.0
```
### Color Issues
```bash
# Disable colors
bzzz-monitor --no-color
# Check terminal color support
echo $TERM
```
## Integration
The monitor works alongside:
- **Live Bzzz System**: Monitors real P2P mesh (WALNUT/ACACIA/IRONWOOD)
- **Test Suite**: Can monitor test coordination scenarios
- **Development**: Perfect for debugging antennae coordination logic
## Future Enhancements
- 📈 Export metrics to CSV/JSON
- 🔔 Alert system for critical events
- 📊 Web-based dashboard version
- 🎯 Coordination session drill-down
- 📱 Mobile-friendly output


@@ -1,112 +0,0 @@
# Bzzz + HMMM Development Task Backlog
Based on the UNIFIED_DEVELOPMENT_PLAN.md, here are the development tasks ready for distribution to the Hive cluster:
## Week 1-2: Foundation Tasks
### Task 1: P2P Networking Foundation 🔧
**Assigned to**: WALNUT (Advanced Coding - starcoder2:15b)
**Priority**: 5 (Critical)
**Objective**: Design and implement core P2P networking foundation for Project Bzzz using libp2p in Go
**Requirements**:
- Use go-libp2p library for mesh networking
- Implement mDNS peer discovery for local network (192.168.1.0/24)
- Create secure encrypted P2P connections with peer identity
- Design pub/sub topics for both task coordination (Bzzz) and meta-discussion (HMMM)
- Prepare for Docker + host networking deployment
- Create modular Go code structure in `/home/tony/chorus/project-queues/active/BZZZ/`
**Deliverables**:
- `main.go` - Entry point and peer initialization
- `p2p/` - P2P networking module with libp2p integration
- `discovery/` - mDNS peer discovery implementation
- `pubsub/` - Pub/sub messaging for capability broadcasting
- `go.mod` - Go module definition with dependencies
- `Dockerfile` - Container with host networking support
### Task 2: Distributed Logging System 📊
**Assigned to**: IRONWOOD (Reasoning Analysis - phi4:14b)
**Priority**: 4 (High)
**Dependencies**: Task 1 (P2P Foundation)
**Objective**: Architect and implement Hypercore-based distributed logging system
**Requirements**:
- Design append-only log streams using Hypercore Protocol
- Implement public key broadcasting for log identity
- Create log replication capabilities between peers
- Store both execution logs (Bzzz) and discussion transcripts (HMMM)
- Ensure tamper-proof audit trails for debugging
- Integrate with P2P capability detection module
**Deliverables**:
- `logging/` - Hypercore-based logging module
- `replication/` - Log replication and synchronization
- `audit/` - Tamper-proof audit trail verification
- Documentation on log schema and replication protocol
### Task 3: GitHub Integration Module 📋
**Assigned to**: ACACIA (Code Review/Docs - codellama)
**Priority**: 4 (High)
**Dependencies**: Task 1 (P2P Foundation)
**Objective**: Implement GitHub integration for atomic task claiming and collaborative workflows
**Requirements**:
- Create atomic issue assignment mechanism (GitHub's native assignment)
- Implement repository forking, branch creation, and commit workflows
- Generate pull requests with discussion transcript links
- Handle task result posting and failure reporting
- Use GitHub API for all interactions
- Include comprehensive error handling and retry logic
**Deliverables**:
- `github/` - GitHub API integration module
- `workflows/` - Repository and branch management
- `tasks/` - Task claiming and result posting
- Integration tests with GitHub API
- Documentation on GitHub workflow process
## Week 3-4: Integration Tasks
### Task 4: Meta-Discussion Implementation 💬
**Assigned to**: IRONWOOD (Reasoning Analysis)
**Priority**: 3 (Medium)
**Dependencies**: Task 1, Task 2
**Objective**: Implement HMMM meta-discussion layer for collaborative reasoning
**Requirements**:
- Create structured messaging for agent collaboration
- Implement "propose plan" and "objection period" logic
- Add hop limits (3 hops) and participant caps for safety
- Design escalation paths to human intervention
- Integrate with Hypercore logging for discussion transcripts
### Task 5: End-to-End Integration 🔄
**Assigned to**: WALNUT (Advanced Coding)
**Priority**: 2 (Normal)
**Dependencies**: All previous tasks
**Objective**: Integrate all components and create working Bzzz+HMMM system
**Requirements**:
- Combine P2P networking, logging, and GitHub integration
- Implement full task lifecycle with meta-discussion
- Create Docker Swarm deployment configuration
- Add monitoring and health checks
- Comprehensive testing across cluster nodes
## Current Status
**Hive Cluster Ready**: 3 agents registered with proper specializations
- walnut: starcoder2:15b (kernel_dev)
- ironwood: phi4:14b (reasoning)
- acacia: codellama (docs_writer)
**Authentication Working**: Dev user and API access configured
⚠️ **Task Submission**: Need to resolve API endpoint issues for automated task distribution
**Next Steps**:
1. Fix task creation API endpoint issues
2. Submit tasks to respective agents based on specializations
3. Monitor execution and coordinate between agents
4. Test the collaborative reasoning (HMMM) layer once P2P foundation is complete


@@ -1,254 +0,0 @@
#!/usr/bin/env python3
"""
Advanced Meta Discussion Demo for Bzzz P2P Mesh
Shows cross-repository coordination and dependency detection
"""
import json
import time
from datetime import datetime
def demo_cross_repository_coordination():
    """Demonstrate advanced meta discussion features"""
    print("🎯 ADVANCED BZZZ META DISCUSSION DEMO")
    print("=" * 60)
    print("Scenario: Multi-repository microservices coordination")
    print()

    # Simulate multiple repositories in the system
    repositories = {
        "api-gateway": {
            "agent": "walnut-12345",
            "capabilities": ["code-generation", "api-design", "security"],
            "current_task": {
                "id": 42,
                "title": "Implement OAuth2 authentication flow",
                "description": "Add OAuth2 support to API gateway with JWT tokens",
                "labels": ["security", "api", "authentication"]
            }
        },
        "user-service": {
            "agent": "acacia-67890",
            "capabilities": ["code-analysis", "database", "microservices"],
            "current_task": {
                "id": 87,
                "title": "Update user schema for OAuth integration",
                "description": "Add OAuth provider fields to user table",
                "labels": ["database", "schema", "authentication"]
            }
        },
        "notification-service": {
            "agent": "ironwood-54321",
            "capabilities": ["advanced-reasoning", "integration", "messaging"],
            "current_task": {
                "id": 156,
                "title": "Secure webhook endpoints with JWT",
                "description": "Validate JWT tokens on webhook endpoints",
                "labels": ["security", "webhook", "authentication"]
            }
        }
    }

    print("📋 ACTIVE TASKS ACROSS REPOSITORIES:")
    for repo, info in repositories.items():
        task = info["current_task"]
        print(f"   🔧 {repo}: #{task['id']} - {task['title']}")
        print(f"      Agent: {info['agent']} | Labels: {', '.join(task['labels'])}")
    print()

    # Demo 1: Dependency Detection
    print("🔍 PHASE 1: DEPENDENCY DETECTION")
    print("-" * 40)
    dependencies = [
        {
            "task1": "api-gateway/#42",
            "task2": "user-service/#87",
            "relationship": "API_Contract",
            "reason": "OAuth implementation requires coordinated schema changes",
            "confidence": 0.9
        },
        {
            "task1": "api-gateway/#42",
            "task2": "notification-service/#156",
            "relationship": "Security_Compliance",
            "reason": "Both implement JWT token validation",
            "confidence": 0.85
        }
    ]
    for dep in dependencies:
        print("🔗 DEPENDENCY DETECTED:")
        print(f"   {dep['task1']} ↔ {dep['task2']}")
        print(f"   Type: {dep['relationship']} (confidence: {dep['confidence']})")
        print(f"   Reason: {dep['reason']}")
        print()

    # Demo 2: Coordination Session Creation
    print("🎯 PHASE 2: COORDINATION SESSION INITIATED")
    print("-" * 40)
    session_id = f"coord_oauth_{int(time.time())}"
    print(f"📝 Session ID: {session_id}")
    print(f"📅 Created: {datetime.now().strftime('%H:%M:%S')}")
    print("👥 Participants: walnut-12345, acacia-67890, ironwood-54321")
    print()

    # Demo 3: AI-Generated Coordination Plan
    print("🤖 PHASE 3: AI-GENERATED COORDINATION PLAN")
    print("-" * 40)
    coordination_plan = """
COORDINATION PLAN: OAuth2 Implementation Across Services

1. EXECUTION ORDER:
   - Phase 1: user-service (schema changes)
   - Phase 2: api-gateway (OAuth implementation)
   - Phase 3: notification-service (JWT validation)

2. SHARED ARTIFACTS:
   - JWT token format specification
   - OAuth2 endpoint documentation
   - Database schema migration scripts
   - Shared security configuration

3. COORDINATION REQUIREMENTS:
   - walnut-12345: Define JWT token structure before implementation
   - acacia-67890: Migrate user schema first, share field mappings
   - ironwood-54321: Wait for JWT format, implement validation

4. POTENTIAL CONFLICTS:
   - JWT payload structure disagreements
   - Token expiration time mismatches
   - Security scope definition conflicts

5. SUCCESS CRITERIA:
   - All services use consistent JWT format
   - OAuth flow works end-to-end
   - Security audit passes on all endpoints
   - Integration tests pass across all services
"""
    print(coordination_plan)

    # Demo 4: Agent Coordination Messages
    print("💬 PHASE 4: AGENT COORDINATION MESSAGES")
    print("-" * 40)
    messages = [
        {
            "timestamp": "14:32:01",
            "from": "walnut-12345 (api-gateway)",
            "type": "proposal",
            "content": "I propose using RS256 JWT tokens with 15min expiry. Standard claims: sub, iat, exp, scope."
        },
        {
            "timestamp": "14:32:45",
            "from": "acacia-67890 (user-service)",
            "type": "question",
            "content": "Should we store the OAuth provider info in the user table or separate table? Also need refresh token strategy."
        },
        {
            "timestamp": "14:33:20",
            "from": "ironwood-54321 (notification-service)",
            "type": "agreement",
            "content": "RS256 sounds good. For webhooks, I'll validate signature and check 'webhook' scope. Need the public key endpoint."
        },
        {
            "timestamp": "14:34:10",
            "from": "walnut-12345 (api-gateway)",
            "type": "response",
            "content": "Separate oauth_providers table is better for multiple providers. Public key at /.well-known/jwks.json"
        },
        {
            "timestamp": "14:34:55",
            "from": "acacia-67890 (user-service)",
            "type": "agreement",
            "content": "Agreed on separate table. I'll create migration script and share the schema. ETA: 2 hours."
        }
    ]
    for msg in messages:
        print(f"[{msg['timestamp']}] {msg['from']} ({msg['type']}):")
        print(f"   {msg['content']}")
        print()

    # Demo 5: Automatic Resolution Detection
    print("✅ PHASE 5: COORDINATION RESOLUTION")
    print("-" * 40)
    print("🔍 ANALYSIS: Consensus detected")
    print("   - All agents agreed on JWT format (RS256)")
    print("   - Database strategy decided (separate oauth_providers table)")
    print("   - Public key endpoint established (/.well-known/jwks.json)")
    print("   - Implementation order confirmed")
    print()
    print("📋 COORDINATION COMPLETE:")
    print("   - Session status: RESOLVED")
    print("   - Resolution: Consensus reached on OAuth implementation")
    print("   - Next steps: acacia-67890 starts schema migration")
    print("   - Dependencies: walnut-12345 waits for schema completion")
    print()

    # Demo 6: Alternative - Escalation Scenario
    print("🚨 ALTERNATIVE: ESCALATION SCENARIO")
    print("-" * 40)
    escalation_scenario = """
ESCALATION TRIGGERED: Security Implementation Conflict

Reason: Agents cannot agree on JWT token expiration time
  - walnut-12345 wants 15 minutes (high security)
  - acacia-67890 wants 4 hours (user experience)
  - ironwood-54321 wants 1 hour (compromise)

Messages exceeded threshold: 12 messages without consensus
Human expert summoned via N8N webhook to deepblack.cloud

Escalation webhook payload:
{
  "session_id": "coord_oauth_1752401234",
  "conflict_type": "security_policy_disagreement",
  "agents_involved": ["walnut-12345", "acacia-67890", "ironwood-54321"],
  "repositories": ["api-gateway", "user-service", "notification-service"],
  "issue_summary": "JWT expiration time conflict preventing OAuth implementation",
  "requires_human_decision": true,
  "urgency": "medium"
}
"""
    print(escalation_scenario)

    # Demo 7: System Capabilities Summary
    print("🎯 ADVANCED META DISCUSSION CAPABILITIES")
    print("-" * 40)
    capabilities = [
        "✅ Cross-repository dependency detection",
        "✅ Intelligent task relationship analysis",
        "✅ AI-generated coordination plans",
        "✅ Multi-agent conversation management",
        "✅ Consensus detection and resolution",
        "✅ Automatic escalation to humans",
        "✅ Session lifecycle management",
        "✅ Hop-limited message propagation",
        "✅ Custom dependency rules",
        "✅ Project-aware coordination"
    ]
    for cap in capabilities:
        print(f"   {cap}")
    print()
    print("🚀 PRODUCTION READY:")
    print("   - P2P mesh infrastructure: ✅ Deployed")
    print("   - Antennae meta-discussion: ✅ Active")
    print("   - Dependency detection: ✅ Implemented")
    print("   - Coordination sessions: ✅ Functional")
    print("   - Human escalation: ✅ N8N integrated")
    print()
    print("🎯 Ready for real cross-repository coordination!")

if __name__ == "__main__":
    demo_cross_repository_coordination()


@@ -1,702 +0,0 @@
#!/usr/bin/env python3
"""
Mock Hive API Server for Bzzz Testing
This simulates what the real Hive API would provide to bzzz agents:
- Active repositories with bzzz-enabled tasks
- Fake GitHub issues with bzzz-task labels
- Task dependencies and coordination scenarios
The real bzzz agents will consume this fake data and do actual coordination.
"""
import json
import os
import random
import time
from datetime import datetime, timedelta
from flask import Flask, jsonify, request
from threading import Thread
app = Flask(__name__)
# Mock data for repositories and tasks
MOCK_REPOSITORIES = [
    {
        "project_id": 1,
        "name": "hive-coordination-platform",
        "git_url": "https://github.com/mock/hive",
        "owner": "mock-org",
        "repository": "hive",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 2,
        "name": "bzzz-p2p-system",
        "git_url": "https://github.com/mock/bzzz",
        "owner": "mock-org",
        "repository": "bzzz",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 3,
        "name": "distributed-ai-development",
        "git_url": "https://github.com/mock/distributed-ai-dev",
        "owner": "mock-org",
        "repository": "distributed-ai-dev",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 4,
        "name": "infrastructure-automation",
        "git_url": "https://github.com/mock/infra-automation",
        "owner": "mock-org",
        "repository": "infra-automation",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    }
]
# Mock tasks with realistic coordination scenarios
MOCK_TASKS = {
    1: [  # hive tasks
        {
            "number": 15,
            "title": "Add WebSocket support for real-time coordination",
            "description": "Implement WebSocket endpoints for real-time agent coordination messages",
            "state": "open",
            "labels": ["bzzz-task", "feature", "realtime", "coordination"],
            "created_at": "2025-01-14T10:00:00Z",
            "updated_at": "2025-01-14T10:30:00Z",
            "html_url": "https://github.com/mock/hive/issues/15",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                }
            ]
        },
        {
            "number": 16,
            "title": "Implement agent authentication system",
            "description": "Add secure JWT-based authentication for bzzz agents accessing Hive APIs",
            "state": "open",
            "labels": ["bzzz-task", "security", "auth", "high-priority"],
            "created_at": "2025-01-14T09:30:00Z",
            "updated_at": "2025-01-14T10:45:00Z",
            "html_url": "https://github.com/mock/hive/issues/16",
            "is_claimed": False,
            "assignees": [],
            "task_type": "security",
            "dependencies": []
        },
        {
            "number": 17,
            "title": "Create coordination metrics dashboard",
            "description": "Build dashboard showing cross-repository coordination statistics",
            "state": "open",
            "labels": ["bzzz-task", "dashboard", "metrics", "ui"],
            "created_at": "2025-01-14T11:00:00Z",
            "updated_at": "2025-01-14T11:15:00Z",
            "html_url": "https://github.com/mock/hive/issues/17",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 24,
                    "dependency_type": "api_contract"
                }
            ]
        }
    ],
    2: [  # bzzz tasks
        {
            "number": 23,
            "title": "Define coordination API contract",
            "description": "Standardize API contract for cross-repository coordination messaging",
            "state": "open",
            "labels": ["bzzz-task", "api", "coordination", "blocker"],
            "created_at": "2025-01-14T09:00:00Z",
            "updated_at": "2025-01-14T10:00:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/23",
            "is_claimed": False,
            "assignees": [],
            "task_type": "api_design",
            "dependencies": []
        },
        {
            "number": 24,
            "title": "Implement dependency detection algorithm",
            "description": "Auto-detect task dependencies across repositories using graph analysis",
            "state": "open",
            "labels": ["bzzz-task", "algorithm", "coordination", "complex"],
            "created_at": "2025-01-14T10:15:00Z",
            "updated_at": "2025-01-14T10:30:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/24",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                }
            ]
        },
        {
            "number": 25,
            "title": "Add consensus algorithm for coordination",
            "description": "Implement distributed consensus for multi-agent task coordination",
            "state": "open",
            "labels": ["bzzz-task", "consensus", "distributed-systems", "hard"],
            "created_at": "2025-01-14T11:30:00Z",
            "updated_at": "2025-01-14T11:45:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/25",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": []
        }
    ],
    3: [  # distributed-ai-dev tasks
        {
            "number": 8,
            "title": "Add support for bzzz coordination",
            "description": "Integrate with bzzz P2P coordination system for distributed AI development",
            "state": "open",
            "labels": ["bzzz-task", "integration", "p2p", "ai"],
            "created_at": "2025-01-14T10:45:00Z",
            "updated_at": "2025-01-14T11:00:00Z",
            "html_url": "https://github.com/mock/distributed-ai-dev/issues/8",
            "is_claimed": False,
            "assignees": [],
            "task_type": "integration",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                },
                {
                    "repository": "hive",
                    "task_number": 16,
                    "dependency_type": "security"
                }
            ]
        },
        {
            "number": 9,
            "title": "Implement AI model coordination",
            "description": "Enable coordination between AI models across different development environments",
            "state": "open",
            "labels": ["bzzz-task", "ai-coordination", "models", "complex"],
            "created_at": "2025-01-14T11:15:00Z",
            "updated_at": "2025-01-14T11:30:00Z",
            "html_url": "https://github.com/mock/distributed-ai-dev/issues/9",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "distributed-ai-dev",
                    "task_number": 8,
                    "dependency_type": "integration"
                }
            ]
        }
    ],
    4: [  # infra-automation tasks
        {
            "number": 12,
            "title": "Automate bzzz deployment across cluster",
            "description": "Create automated deployment scripts for bzzz agents on all cluster nodes",
            "state": "open",
            "labels": ["bzzz-task", "deployment", "automation", "devops"],
            "created_at": "2025-01-14T12:00:00Z",
            "updated_at": "2025-01-14T12:15:00Z",
            "html_url": "https://github.com/mock/infra-automation/issues/12",
            "is_claimed": False,
            "assignees": [],
            "task_type": "infrastructure",
            "dependencies": [
                {
                    "repository": "hive",
                    "task_number": 16,
                    "dependency_type": "security"
                }
            ]
        }
    ]
}
# Track claimed tasks
claimed_tasks = {}
@app.route('/health', methods=['GET'])
def health():
    """Health check endpoint"""
    return jsonify({"status": "healthy", "service": "mock-hive-api", "timestamp": datetime.now().isoformat()})

@app.route('/api/bzzz/active-repos', methods=['GET'])
def get_active_repositories():
    """Return mock active repositories for bzzz consumption"""
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📡 Bzzz requested active repositories")
    # Randomly vary the number of available repos for more realistic testing
    available_repos = random.sample(MOCK_REPOSITORIES, k=random.randint(2, len(MOCK_REPOSITORIES)))
    return jsonify({"repositories": available_repos})

@app.route('/api/bzzz/projects/<int:project_id>/tasks', methods=['GET'])
def get_project_tasks(project_id):
    """Return mock bzzz-task labeled issues for a specific project"""
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📋 Bzzz requested tasks for project {project_id}")
    if project_id not in MOCK_TASKS:
        return jsonify([])

    # Return tasks, updating claim status
    tasks = []
    for task in MOCK_TASKS[project_id]:
        task_copy = task.copy()
        claim_key = f"{project_id}-{task['number']}"
        # Check if task is claimed
        if claim_key in claimed_tasks:
            claim_info = claimed_tasks[claim_key]
            # Tasks expire after 30 minutes if not updated
            if datetime.now() - claim_info['claimed_at'] < timedelta(minutes=30):
                task_copy['is_claimed'] = True
                task_copy['assignees'] = [claim_info['agent_id']]
            else:
                # Claim expired
                del claimed_tasks[claim_key]
                task_copy['is_claimed'] = False
                task_copy['assignees'] = []
        tasks.append(task_copy)
    return jsonify(tasks)
@app.route('/api/bzzz/projects/<int:project_id>/claim', methods=['POST'])
def claim_task(project_id):
    """Register task claim with mock Hive system"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎯 Agent {agent_id} claiming task {project_id}#{task_number}")
    if not task_number or not agent_id:
        return jsonify({"error": "task_number and agent_id are required"}), 400

    claim_key = f"{project_id}-{task_number}"
    # Check if already claimed
    if claim_key in claimed_tasks:
        existing_claim = claimed_tasks[claim_key]
        if datetime.now() - existing_claim['claimed_at'] < timedelta(minutes=30):
            return jsonify({
                "error": "Task already claimed",
                "claimed_by": existing_claim['agent_id'],
                "claimed_at": existing_claim['claimed_at'].isoformat()
            }), 409

    # Register the claim
    claim_id = f"{project_id}-{task_number}-{agent_id}-{int(time.time())}"
    claimed_tasks[claim_key] = {
        "agent_id": agent_id,
        "claimed_at": datetime.now(),
        "claim_id": claim_id
    }
    print(f"[{datetime.now().strftime('%H:%M:%S')}] ✅ Task {project_id}#{task_number} claimed by {agent_id}")
    return jsonify({"success": True, "claim_id": claim_id})

@app.route('/api/bzzz/projects/<int:project_id>/status', methods=['PUT'])
def update_task_status(project_id):
    """Update task status in mock Hive system"""
    data = request.get_json()
    task_number = data.get('task_number')
    status = data.get('status')
    metadata = data.get('metadata', {})
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📊 Task {project_id}#{task_number} status: {status}")
    if not task_number or not status:
        return jsonify({"error": "task_number and status are required"}), 400

    # Log status update
    if status == "completed":
        claim_key = f"{project_id}-{task_number}"
        if claim_key in claimed_tasks:
            agent_id = claimed_tasks[claim_key]['agent_id']
            print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎉 Task {project_id}#{task_number} completed by {agent_id}")
            del claimed_tasks[claim_key]  # Remove claim
    elif status == "escalated":
        print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 Task {project_id}#{task_number} escalated: {metadata}")
    return jsonify({"success": True})

@app.route('/api/bzzz/coordination-log', methods=['POST'])
def log_coordination_activity():
    """Log coordination activity for monitoring"""
    data = request.get_json()
    activity_type = data.get('type', 'unknown')
    details = data.get('details', {})
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Coordination: {activity_type} - {details}")
    # Save coordination activity to file
    save_coordination_work(activity_type, details)
    return jsonify({"success": True, "logged": True})
@app.route('/api/bzzz/projects/<int:project_id>/submit-work', methods=['POST'])
def submit_work(project_id):
    """Endpoint for agents to submit their actual work/code/solutions"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    work_type = data.get('work_type', 'code')  # code, documentation, configuration, etc.
    content = data.get('content', '')
    files = data.get('files', {})  # Dictionary of filename -> content
    commit_message = data.get('commit_message', '')
    description = data.get('description', '')
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📝 Work submission: {agent_id} -> Project {project_id} Task {task_number}")
    print(f"   Type: {work_type}, Files: {len(files)}, Content length: {len(content)}")

    # Save the actual work content
    work_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "work_type": work_type,
        "content": content,
        "files": files,
        "commit_message": commit_message,
        "description": description,
        "submitted_at": datetime.now().isoformat()
    }
    save_agent_work(work_data)
    return jsonify({
        "success": True,
        "work_id": f"{project_id}-{task_number}-{int(time.time())}",
        "message": "Work submitted successfully to mock repository"
    })

@app.route('/api/bzzz/projects/<int:project_id>/create-pr', methods=['POST'])
def create_pull_request(project_id):
    """Endpoint for agents to submit pull request content"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    pr_title = data.get('title', '')
    pr_description = data.get('description', '')
    files_changed = data.get('files_changed', {})
    branch_name = data.get('branch_name', f"bzzz-task-{task_number}")
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🔀 Pull Request: {agent_id} -> Project {project_id}")
    print(f"   Title: {pr_title}")
    print(f"   Files changed: {len(files_changed)}")

    # Save the pull request content
    pr_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "title": pr_title,
        "description": pr_description,
        "files_changed": files_changed,
        "branch_name": branch_name,
        "created_at": datetime.now().isoformat(),
        "status": "open"
    }
    save_pull_request(pr_data)
    return jsonify({
        "success": True,
        "pr_number": random.randint(100, 999),
        "pr_url": f"https://github.com/mock/{get_repo_name(project_id)}/pull/{random.randint(100, 999)}",
        "message": "Pull request created successfully in mock repository"
    })
@app.route('/api/bzzz/projects/<int:project_id>/coordination-discussion', methods=['POST'])
def log_coordination_discussion(project_id):
    """Endpoint for agents to log coordination discussions and decisions"""
    data = request.get_json()
    discussion_type = data.get('type', 'general')  # dependency_analysis, conflict_resolution, etc.
    participants = data.get('participants', [])
    messages = data.get('messages', [])
    decisions = data.get('decisions', [])
    context = data.get('context', {})
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 💬 Coordination Discussion: Project {project_id}")
    print(f"   Type: {discussion_type}, Participants: {len(participants)}, Messages: {len(messages)}")

    # Save coordination discussion
    discussion_data = {
        "project_id": project_id,
        "type": discussion_type,
        "participants": participants,
        "messages": messages,
        "decisions": decisions,
        "context": context,
        "timestamp": datetime.now().isoformat()
    }
    save_coordination_discussion(discussion_data)
    return jsonify({"success": True, "logged": True})

@app.route('/api/bzzz/projects/<int:project_id>/log-prompt', methods=['POST'])
def log_agent_prompt(project_id):
    """Endpoint for agents to log the prompts they are receiving/generating"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    prompt_type = data.get('prompt_type', 'task_analysis')  # task_analysis, coordination, meta_thinking
    prompt_content = data.get('prompt_content', '')
    context = data.get('context', {})
    model_used = data.get('model_used', 'unknown')
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Prompt Log: {agent_id} -> {prompt_type}")
    print(f"   Model: {model_used}, Task: {project_id}#{task_number}")
    print(f"   Prompt length: {len(prompt_content)} chars")

    # Save the prompt data
    prompt_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "prompt_type": prompt_type,
        "prompt_content": prompt_content,
        "context": context,
        "model_used": model_used,
        "timestamp": datetime.now().isoformat()
    }
    save_agent_prompt(prompt_data)
    return jsonify({"success": True, "logged": True})
def save_agent_prompt(prompt_data):
    """Save agent prompts to files for analysis"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_prompts"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = prompt_data["project_id"]
    task_number = prompt_data["task_number"]
    agent_id = prompt_data["agent_id"].replace("/", "_")  # Clean agent ID for filename
    prompt_type = prompt_data["prompt_type"]
    filename = f"prompt_{prompt_type}_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    prompt_file = os.path.join(work_dir, filename)
    with open(prompt_file, "w") as f:
        json.dump(prompt_data, f, indent=2)
    print(f"   💾 Saved prompt to: {prompt_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_prompts_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(prompt_data) + "\n")

def save_agent_work(work_data):
    """Save actual agent work submissions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = work_data["project_id"]
    task_number = work_data["task_number"]
    agent_id = work_data["agent_id"].replace("/", "_")  # Clean agent ID for filename
    filename = f"work_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    work_file = os.path.join(work_dir, filename)
    with open(work_file, "w") as f:
        json.dump(work_data, f, indent=2)
    print(f"   💾 Saved work to: {work_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_work_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_data) + "\n")

def save_pull_request(pr_data):
    """Save pull request content to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_pull_requests"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = pr_data["project_id"]
    task_number = pr_data["task_number"]
    agent_id = pr_data["agent_id"].replace("/", "_")  # Clean agent ID for filename
    filename = f"pr_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    pr_file = os.path.join(work_dir, filename)
    with open(pr_file, "w") as f:
        json.dump(pr_data, f, indent=2)
    print(f"   💾 Saved PR to: {pr_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"pull_requests_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(pr_data) + "\n")

def save_coordination_discussion(discussion_data):
    """Save coordination discussions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_discussions"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project and timestamp
    project_id = discussion_data["project_id"]
    discussion_type = discussion_data["type"]
    filename = f"discussion_{discussion_type}_p{project_id}_{timestamp.strftime('%H%M%S')}.json"
    discussion_file = os.path.join(work_dir, filename)
    with open(discussion_file, "w") as f:
        json.dump(discussion_data, f, indent=2)
    print(f"   💾 Saved discussion to: {discussion_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"coordination_discussions_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(discussion_data) + "\n")

def get_repo_name(project_id):
    """Get repository name from project ID"""
    repo_map = {
        1: "hive",
        2: "bzzz",
        3: "distributed-ai-dev",
        4: "infra-automation"
    }
    return repo_map.get(project_id, "unknown-repo")

def save_coordination_work(activity_type, details):
    """Save coordination work to files for analysis"""
    import os  # was missing here; the sibling helpers all import os locally
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create detailed log entry
    work_entry = {
        "timestamp": timestamp.isoformat(),
        "type": activity_type,
        "details": details,
        "session_id": details.get("session_id", "unknown")
    }

    # Save to daily log file
    log_file = os.path.join(work_dir, f"coordination_work_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_entry) + "\n")

    # Save individual work items to separate files
    if activity_type in ["code_generation", "task_solution", "pull_request_content"]:
        work_file = os.path.join(work_dir, f"{activity_type}_{timestamp.strftime('%H%M%S')}.json")
        with open(work_file, "w") as f:
            json.dump(work_entry, f, indent=2)
def start_background_task_updates():
    """Background thread to simulate changing task priorities and new tasks"""
    def background_updates():
        while True:
            time.sleep(random.randint(60, 180))  # Every 1-3 minutes
            # Occasionally add a new urgent task
            if random.random() < 0.3:  # 30% chance
                project_id = random.choice([1, 2, 3, 4])
                urgent_task = {
                    "number": random.randint(100, 999),
                    "title": f"URGENT: {random.choice(['Critical bug fix', 'Security patch', 'Production issue', 'Integration failure'])}",
                    "description": "High priority task requiring immediate attention",
                    "state": "open",
                    "labels": ["bzzz-task", "urgent", "critical"],
                    "created_at": datetime.now().isoformat(),
                    "updated_at": datetime.now().isoformat(),
                    "html_url": f"https://github.com/mock/repo/issues/{random.randint(100, 999)}",
                    "is_claimed": False,
                    "assignees": [],
                    "task_type": "bug",
                    "dependencies": []
                }
                if project_id not in MOCK_TASKS:
                    MOCK_TASKS[project_id] = []
                MOCK_TASKS[project_id].append(urgent_task)
                print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 NEW URGENT TASK: Project {project_id} - {urgent_task['title']}")

    thread = Thread(target=background_updates, daemon=True)
    thread.start()
if __name__ == '__main__':
    print("🚀 Starting Mock Hive API Server for Bzzz Testing")
    print("=" * 50)
    print("This server provides fake projects and tasks to real bzzz agents")
    print("Real bzzz coordination will happen with this simulated data")
    print("")
    print("Available endpoints:")
    print("  GET  /health - Health check")
    print("  GET  /api/bzzz/active-repos - Active repositories")
    print("  GET  /api/bzzz/projects/<id>/tasks - Project tasks")
    print("  POST /api/bzzz/projects/<id>/claim - Claim task")
    print("  PUT  /api/bzzz/projects/<id>/status - Update task status")
    print("  POST /api/bzzz/projects/<id>/submit-work - Submit actual work/code")
    print("  POST /api/bzzz/projects/<id>/create-pr - Submit pull request content")
    print("  POST /api/bzzz/projects/<id>/coordination-discussion - Log coordination discussions")
    print("  POST /api/bzzz/projects/<id>/log-prompt - Log agent prompts and model usage")
    print("  POST /api/bzzz/coordination-log - Log coordination activity")
    print("")
    print("Starting background task updates...")
    start_background_task_updates()
    print("🌟 Mock Hive API running on http://localhost:5000")
    print("Configure bzzz to use: BZZZ_HIVE_API_URL=http://localhost:5000")
    print("")
    app.run(host='0.0.0.0', port=5000, debug=False)


@@ -1,21 +0,0 @@
hive_api:
  base_url: "https://hive.home.deepblack.cloud"
  api_key: ""
  timeout: "30s"

agent:
  id: "test-agent"
  capabilities: ["task-coordination", "meta-discussion", "general"]
  models: ["phi3"]
  specialization: "general_developer"
  poll_interval: "60s"
  max_tasks: 1

github:
  token_file: ""

p2p:
  escalation_webhook: "https://n8n.home.deepblack.cloud/webhook-test/human-escalation"

logging:
  level: "debug"


@@ -1,94 +0,0 @@
#!/usr/bin/env python3
"""
Test script for Bzzz-Hive API integration.
Tests the newly created API endpoints for dynamic repository discovery.
"""
import sys
import os
sys.path.append('/home/tony/chorus/project-queues/inactive/hive/backend')

from app.services.project_service import ProjectService
import json

def test_project_service():
    """Test the ProjectService with Bzzz integration methods."""
    print("🧪 Testing ProjectService with Bzzz integration...")
    service = ProjectService()

    # Test 1: Get all projects
    print("\n📁 Testing get_all_projects()...")
    projects = service.get_all_projects()
    print(f"Found {len(projects)} total projects")

    # Find projects with GitHub repos
    github_projects = [p for p in projects if p.get('github_repo')]
    print(f"Found {len(github_projects)} projects with GitHub repositories:")
    for project in github_projects:
        print(f"  - {project['name']}: {project['github_repo']}")

    # Test 2: Get active repositories for Bzzz
    print("\n🐝 Testing get_bzzz_active_repositories()...")
    try:
        active_repos = service.get_bzzz_active_repositories()
        print(f"Found {len(active_repos)} repositories ready for Bzzz coordination:")
        for repo in active_repos:
            print(f"\n  📦 Repository: {repo['name']}")
            print(f"     Owner: {repo['owner']}")
            print(f"     Repository: {repo['repository']}")
            print(f"     Git URL: {repo['git_url']}")
            print(f"     Ready to claim: {repo['ready_to_claim']}")
            print(f"     Project ID: {repo['project_id']}")
    except Exception as e:
        print(f"❌ Error testing active repositories: {e}")

    # Test 3: Get bzzz-task issues for the hive project specifically
    print("\n🎯 Testing get_bzzz_project_tasks() for 'hive' project...")
    try:
        hive_tasks = service.get_bzzz_project_tasks('hive')
        print(f"Found {len(hive_tasks)} bzzz-task issues in hive project:")
        for task in hive_tasks:
            print(f"\n  🎫 Issue #{task['number']}: {task['title']}")
            print(f"     State: {task['state']}")
            print(f"     Labels: {task['labels']}")
            print(f"     Task Type: {task['task_type']}")
            print(f"     Claimed: {task['is_claimed']}")
            if task['assignees']:
                print(f"     Assignees: {', '.join(task['assignees'])}")
            print(f"     URL: {task['html_url']}")
    except Exception as e:
        print(f"❌ Error testing hive project tasks: {e}")

    # Test 4: Simulate API endpoint response format
    print("\n📡 Testing API endpoint response format...")
    try:
        active_repos = service.get_bzzz_active_repositories()
        api_response = {"repositories": active_repos}
        print("API Response Preview (first 500 chars):")
        response_json = json.dumps(api_response, indent=2)
        print(response_json[:500] + "..." if len(response_json) > 500 else response_json)
    except Exception as e:
        print(f"❌ Error formatting API response: {e}")

def main():
    print("🚀 Starting Bzzz-Hive API Integration Test")
    print("="*50)
    try:
        test_project_service()
        print("\n✅ Test completed successfully!")
    except Exception as e:
        print(f"\n❌ Test failed with error: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    main()


@@ -1,98 +0,0 @@
#!/usr/bin/env python3
"""
Test script to trigger and observe bzzz meta discussion
"""
import json
import time
import requests
from datetime import datetime

def test_meta_discussion():
    """Test the Antennae meta discussion by simulating a complex task"""
    print("🎯 Testing Bzzz Antennae Meta Discussion")
    print("=" * 50)

    # Test 1: Check if the P2P mesh is active
    print("1. Checking P2P mesh status...")
    # We can't directly inject into the P2P mesh from here, but we can:
    # - Check the bzzz service logs for meta discussion activity
    # - Create a mock scenario description
    mock_scenario = {
        "task_type": "complex_architecture_design",
        "description": "Design a microservices architecture for a distributed AI system with P2P coordination",
        "complexity": "high",
        "requires_collaboration": True,
        "estimated_agents_needed": 3
    }
    print("📋 Mock Complex Task:")
    print(f"   Type: {mock_scenario['task_type']}")
    print(f"   Description: {mock_scenario['description']}")
    print(f"   Complexity: {mock_scenario['complexity']}")
    print(f"   Collaboration Required: {mock_scenario['requires_collaboration']}")

    # Test 2: Demonstrate what would happen in meta discussion
    print("\n2. Simulating Antennae Meta Discussion Flow:")
    print("   🤖 Agent A (walnut): 'I'll handle the API gateway design'")
    print("   🤖 Agent B (acacia): 'I can work on the data layer architecture'")
    print("   🤖 Agent C (ironwood): 'I'll focus on the P2P coordination logic'")
    print("   🎯 Meta Discussion: Agents coordinate task splitting and dependencies")

    # Test 3: Show escalation scenario
    print("\n3. Human Escalation Scenario:")
    print("   ⚠️ Agents detect conflicting approaches to distributed consensus")
    print("   🚨 Automatic escalation triggered after 3 rounds of discussion")
    print("   👤 Human expert summoned via N8N webhook")

    # Test 4: Check current bzzz logs for any meta discussion activity
    print("\n4. Checking recent bzzz activity...")
    try:
        # This would show any recent meta discussion logs
        import subprocess
        result = subprocess.run([
            'journalctl', '-u', 'bzzz.service', '--no-pager', '-l', '-n', '20'
        ], capture_output=True, text=True, timeout=10)
        if result.returncode == 0:
            logs = result.stdout
            if 'meta' in logs.lower() or 'antennae' in logs.lower():
                print("   ✅ Found meta discussion activity in logs!")
                # Show relevant lines
                for line in logs.split('\n'):
                    if 'meta' in line.lower() or 'antennae' in line.lower():
                        print(f"   📝 {line}")
            else:
                print("   No recent meta discussion activity (expected - no active tasks)")
        else:
            print("   ⚠️ Could not access bzzz logs")
    except Exception as e:
        print(f"   ⚠️ Error checking logs: {e}")

    # Test 5: Show what capabilities support meta discussion
    print("\n5. Meta Discussion Capabilities:")
    capabilities = [
        "meta-discussion",
        "task-coordination",
        "collaborative-reasoning",
        "human-escalation",
        "cross-repository-coordination"
    ]
    for cap in capabilities:
        print(f"   ✅ {cap}")

    print("\n🎯 Meta Discussion Test Complete!")
    print("\nTo see meta discussion in action:")
    print("1. Configure repositories in Hive with 'bzzz_enabled: true'")
    print("2. Create complex GitHub issues labeled 'bzzz-task'")
    print("3. Watch agents coordinate via Antennae P2P channel")
    print("4. Monitor logs: journalctl -u bzzz.service -f | grep -i meta")

if __name__ == "__main__":
    test_meta_discussion()


@@ -1,95 +0,0 @@
#!/usr/bin/env python3
"""
Simple test to check GitHub API access for bzzz-task issues.
"""
import requests
from pathlib import Path

def get_github_token():
    """Get GitHub token from secrets file."""
    try:
        # Try gh-token first
        gh_token_path = Path("/home/tony/chorus/business/secrets/gh-token")
        if gh_token_path.exists():
            return gh_token_path.read_text().strip()
        # Try GitHub token
        github_token_path = Path("/home/tony/chorus/business/secrets/github-token")
        if github_token_path.exists():
            return github_token_path.read_text().strip()
        # Fallback to GitLab token if GitHub token doesn't exist
        gitlab_token_path = Path("/home/tony/chorus/business/secrets/claude-gitlab-token")
        if gitlab_token_path.exists():
            return gitlab_token_path.read_text().strip()
    except Exception:
        pass
    return None

def test_github_bzzz_tasks():
    """Test fetching bzzz-task issues from GitHub."""
    token = get_github_token()
    if not token:
        print("❌ No GitHub token found")
        return

    print("🐙 Testing GitHub API access for bzzz-task issues...")
    # Test with the hive repository
    repo = "anthonyrawlins/hive"
    url = f"https://api.github.com/repos/{repo}/issues"
    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json"
    }

    # First, get all open issues
    print(f"\n📊 Fetching all open issues from {repo}...")
    response = requests.get(url, headers=headers, params={"state": "open"}, timeout=10)
    if response.status_code == 200:
        all_issues = response.json()
        print(f"Found {len(all_issues)} total open issues")
        # Show all labels used in the repository
        all_labels = set()
        for issue in all_issues:
            for label in issue.get('labels', []):
                all_labels.add(label['name'])
        print(f"All labels in use: {sorted(all_labels)}")
    else:
        print(f"❌ Failed to fetch issues: {response.status_code} - {response.text}")
        return

    # Now test for bzzz-task labeled issues
    print(f"\n🐝 Fetching bzzz-task labeled issues from {repo}...")
    response = requests.get(url, headers=headers, params={"labels": "bzzz-task", "state": "open"}, timeout=10)
    if response.status_code == 200:
        bzzz_issues = response.json()
        print(f"Found {len(bzzz_issues)} issues with 'bzzz-task' label")
        if not bzzz_issues:
            print("  No issues found with 'bzzz-task' label")
            print("  You can create test issues with this label for testing")
        for issue in bzzz_issues:
            print(f"\n  🎫 Issue #{issue['number']}: {issue['title']}")
            print(f"     State: {issue['state']}")
            print(f"     Labels: {[label['name'] for label in issue.get('labels', [])]}")
            print(f"     Assignees: {[assignee['login'] for assignee in issue.get('assignees', [])]}")
            print(f"     URL: {issue['html_url']}")
    else:
        print(f"❌ Failed to fetch bzzz-task issues: {response.status_code} - {response.text}")

def main():
    print("🚀 Simple GitHub API Test for Bzzz Integration")
    print("="*50)
    test_github_bzzz_tasks()

if __name__ == "__main__":
    main()

Binary file not shown.


@@ -1,395 +0,0 @@
# BZZZ v2: UCXL/UCXI Integration Development Plan
## 1. Executive Summary
BZZZ v2 represents a fundamental paradigm shift from a task coordination system using the `bzzz://` protocol to a semantic context publishing system built on the Universal Context eXchange Language (UCXL) and UCXL Interface (UCXI) protocols. This plan outlines the complete transformation of BZZZ into a distributed semantic decision graph that integrates with SLURP for global context management.
### Key Changes:
- **Protocol Migration**: `bzzz://` → UCXL addresses (`ucxl://agent:role@project:task/temporal_segment/path`)
- **Temporal Navigation**: Support for `~~` (backward), `^^` (forward), `*^` (latest), `*~` (first)
- **Decision Publishing**: Agents publish structured decision nodes to SLURP after task completion
- **Citation Model**: Academic-style justification chains with bounded reasoning
- **Semantic Addressing**: Context as addressable resources with wildcards (`any:any`)
## 2. UCXL Protocol Architecture
### 2.1 Address Format
```
ucxl://agent:role@project:task/temporal_segment/path
```
#### Components:
- **Agent**: AI agent identifier (e.g., `gpt4`, `claude`, `any`)
- **Role**: Agent role context (e.g., `architect`, `reviewer`, `any`)
- **Project**: Project namespace (e.g., `bzzz`, `chorus`, `any`)
- **Task**: Task identifier (e.g., `implement-auth`, `refactor`, `any`)
- **Temporal Segment**: Time-based navigation (`~~`, `^^`, `*^`, `*~`, ISO timestamps)
- **Path**: Resource path within context (e.g., `/decisions/architecture.json`)
#### Examples:
```
ucxl://gpt4:architect@bzzz:v2-migration/*^/decisions/protocol-choice.json
ucxl://any:any@chorus:*/*~/planning/requirements.md
ucxl://claude:reviewer@bzzz:auth-system/2025-08-07T14:30:00/code-review.json
```
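As a rough illustration of the Week 1 parser (section 4), here is a minimal Go sketch; the type and function names are assumptions, not the final `/pkg/protocol/ucxl_address.go` API:
```go
package protocol

import (
	"fmt"
	"strings"
)

// ParsedAddress holds the six components of a UCXL address.
type ParsedAddress struct {
	Agent, Role, Project, Task, Temporal, Path string
}

// ParseUCXL splits a ucxl:// address into its components. It does no
// validation of temporal tokens; that belongs to the temporal navigator.
func ParseUCXL(raw string) (*ParsedAddress, error) {
	rest, ok := strings.CutPrefix(raw, "ucxl://")
	if !ok {
		return nil, fmt.Errorf("missing ucxl:// scheme: %s", raw)
	}
	// Split "agent:role@project:task" from the path portion.
	authority, pathPart, _ := strings.Cut(rest, "/")
	agentRole, projectTask, ok := strings.Cut(authority, "@")
	if !ok {
		return nil, fmt.Errorf("missing '@' separator: %s", raw)
	}
	agent, role, _ := strings.Cut(agentRole, ":")
	project, task, _ := strings.Cut(projectTask, ":")
	// The first path segment is the temporal segment (~~, ^^, *^, *~, or a timestamp).
	temporal, path, _ := strings.Cut(pathPart, "/")
	return &ParsedAddress{agent, role, project, task, temporal, "/" + path}, nil
}
```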
### 2.2 UCXI Interface Operations
#### Core Verbs:
- **GET**: Retrieve context from address
- **PUT**: Store/update context at address
- **POST**: Create new context entry
- **DELETE**: Remove context
- **ANNOUNCE**: Broadcast context availability
#### Extended Operations:
- **NAVIGATE**: Temporal navigation (`~~`, `^^`)
- **QUERY**: Search across semantic dimensions
- **SUBSCRIBE**: Listen for context updates
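The plan specifies the verbs but not a wire format; as a hedged sketch, assuming a plain HTTP mapping where GET takes the address as a query parameter:
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Hypothetical UCXI endpoint and query shape, for illustration only;
	// the real HTTP mapping will be defined by /pkg/ucxi/server.go.
	addr := "ucxl://any:any@bzzz:v2-migration/*^/decisions/protocol-choice.json"
	resp, err := http.Get("http://localhost:8080/ucxi/v1/get?address=" + url.QueryEscape(addr))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```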
## 3. System Architecture Transformation
### 3.1 Current Architecture (v1)
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   GitHub    │    │     P2P     │    │    BZZZ     │
│   Issues    │────│   libp2p    │────│   Agents    │
│             │    │             │    │             │
└─────────────┘    └─────────────┘    └─────────────┘
       │                  │                  │
       ▼                  ▼                  ▼
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Task Claims │    │   Pub/Sub   │    │  Execution  │
│ & Assignment│    │  Messaging  │    │  & Results  │
└─────────────┘    └─────────────┘    └─────────────┘
```
### 3.2 New Architecture (v2)
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│      UCXL       │    │      SLURP      │    │    Decision     │
│    Validator    │────│     Context     │────│     Graph       │
│     Online      │    │    Ingestion    │    │   Publishing    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                      │                      │
         ▼                      ▼                      ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│      UCXL       │    │     P2P DHT     │    │      BZZZ       │
│     Browser     │────│   Resolution    │────│     Agents      │
│ Time Machine UI │    │     Network     │    │   GPT-4 + MCP   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                      │                      │
         ▼                      ▼                      ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Temporal     │    │    Semantic     │    │    Citation     │
│   Navigation    │    │   Addressing    │    │  Justification  │
│  ~~, ^^, *^     │    │     any:any     │    │     Chains      │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
### 3.3 Component Integration
#### UCXL Address Resolution
- **Local Cache**: Recent context cached for performance
- **DHT Lookup**: Distributed hash table for address resolution
- **Temporal Index**: Time-based indexing for navigation
- **Semantic Router**: Route requests based on address patterns
#### SLURP Decision Publishing
- **Decision Schema**: Structured JSON format for decisions
- **Justification Chains**: Link to supporting contexts
- **Citation Model**: Academic-style references with provenance
- **Bounded Reasoning**: Prevent infinite justification loops
## 4. Implementation Plan: 8-Week Timeline
### Week 1-2: Foundation & Protocol Implementation
#### Week 1: UCXL Address Parser & Core Types
**Deliverables:**
- Replace `pkg/protocol/uri.go` with UCXL address parser
- Implement temporal navigation tokens (`~~`, `^^`, `*^`, `*~`)
- Core UCXL address validation and normalization
- Unit tests for address parsing and matching
**Key Files:**
- `/pkg/protocol/ucxl_address.go`
- `/pkg/protocol/temporal_navigator.go`
- `/pkg/protocol/ucxl_address_test.go`
#### Week 2: UCXI Interface Operations
**Deliverables:**
- UCXI HTTP server with REST-like operations (GET/PUT/POST/DELETE/ANNOUNCE)
- Context storage backend (initially local filesystem)
- Temporal indexing for navigation support
- Integration with existing P2P network
**Key Files:**
- `/pkg/ucxi/server.go`
- `/pkg/ucxi/operations.go`
- `/pkg/storage/context_store.go`
- `/pkg/temporal/index.go`
### Week 3-4: DHT & Semantic Resolution
#### Week 3: P2P DHT for UCXL Resolution
**Deliverables:**
- Extend existing libp2p DHT for UCXL address resolution
- Semantic address routing (handle `any:any` wildcards)
- Distributed context discovery and availability announcements
- Address priority scoring for multi-match resolution
**Key Files:**
- `/pkg/dht/ucxl_resolver.go`
- `/pkg/routing/semantic_router.go`
- `/pkg/discovery/context_discovery.go`
#### Week 4: Temporal Navigation Implementation
**Deliverables:**
- Time-based context navigation (`~~` backward, `^^` forward)
- Snapshot management for temporal consistency
- Temporal query optimization
- Context versioning and history tracking
**Key Files:**
- `/pkg/temporal/navigator.go`
- `/pkg/temporal/snapshots.go`
- `/pkg/storage/versioned_store.go`
### Week 5-6: Decision Graph & SLURP Integration
#### Week 5: Decision Node Schema & Publishing
**Deliverables:**
- Structured decision node JSON schema matching SLURP requirements
- Decision publishing pipeline after task completion
- Citation chain validation and bounded reasoning
- Decision graph visualization data
**Decision Node Schema:**
```json
{
"decision_id": "uuid",
"ucxl_address": "ucxl://gpt4:architect@bzzz:v2/*^/architecture.json",
"timestamp": "2025-08-07T14:30:00Z",
"agent_id": "gpt4-bzzz-node-01",
"decision_type": "architecture_choice",
"context": {
"project": "bzzz",
"task": "v2-migration",
"scope": "protocol-selection"
},
"justification": {
"reasoning": "UCXL provides temporal navigation and semantic addressing...",
"alternatives_considered": ["custom_protocol", "extend_bzzz"],
"criteria": ["scalability", "semantic_richness", "ecosystem_compatibility"]
},
"citations": [
{
"type": "justified_by",
"ucxl_address": "ucxl://any:any@chorus:requirements/*~/analysis.md",
"relevance": "high",
"excerpt": "system must support temporal context navigation"
}
],
"impacts": [
{
"type": "replaces",
"ucxl_address": "ucxl://any:any@bzzz:v1/*^/protocol.go",
"reason": "migrating from bzzz:// to ucxl:// addressing"
}
]
}
```
**Key Files:**
- `/pkg/decisions/schema.go`
- `/pkg/decisions/publisher.go`
- `/pkg/integration/slurp_publisher.go`
#### Week 6: SLURP Integration & Context Publishing
**Deliverables:**
- SLURP client for decision node publishing
- Context curation pipeline (decision nodes only, no ephemeral chatter)
- Citation validation and loop detection
- Integration with existing task completion workflow
**Key Files:**
- `/pkg/integration/slurp_client.go`
- `/pkg/curation/decision_curator.go`
- `/pkg/validation/citation_validator.go`
### Week 7-8: Agent Integration & Testing
#### Week 7: GPT-4 Agent UCXL Integration
**Deliverables:**
- Update agent configuration for UCXL operation mode
- MCP tools for UCXI operations (GET/PUT/POST/ANNOUNCE)
- Context sharing between agents via UCXL addresses
- Agent decision publishing after task completion
**Key Files:**
- `/agent/ucxl_config.go`
- `/mcp-server/src/tools/ucxi-tools.ts`
- `/agent/context_publisher.go`
#### Week 8: End-to-End Testing & Validation
**Deliverables:**
- Comprehensive integration tests for UCXL/UCXI operations
- Temporal navigation testing scenarios
- Decision graph publishing and retrieval tests
- Performance benchmarks for distributed resolution
- Documentation and deployment guides
**Key Files:**
- `/test/integration/ucxl_e2e_test.go`
- `/test/scenarios/temporal_navigation_test.go`
- `/test/performance/resolution_benchmarks.go`
## 5. Data Models & Schemas
### 5.1 UCXL Address Structure
```go
type UCXLAddress struct {
    Agent           string `json:"agent"`              // Agent identifier
    Role            string `json:"role"`               // Agent role
    Project         string `json:"project"`            // Project namespace
    Task            string `json:"task"`               // Task identifier
    TemporalSegment string `json:"temporal_segment"`   // Time navigation
    Path            string `json:"path"`               // Resource path
    Query           string `json:"query,omitempty"`    // Query parameters
    Fragment        string `json:"fragment,omitempty"` // Fragment identifier
    Raw             string `json:"raw"`                // Original address string
}
```
### 5.2 Context Storage Schema
```go
type ContextEntry struct {
    Address   UCXLAddress            `json:"address"`
    Content   map[string]interface{} `json:"content"`
    Metadata  ContextMetadata        `json:"metadata"`
    Version   int64                  `json:"version"`
    CreatedAt time.Time              `json:"created_at"`
    UpdatedAt time.Time              `json:"updated_at"`
}

type ContextMetadata struct {
    ContentType   string            `json:"content_type"`
    Size          int64             `json:"size"`
    Checksum      string            `json:"checksum"`
    Provenance    string            `json:"provenance"`
    Tags          []string          `json:"tags"`
    Relationships map[string]string `json:"relationships"`
}
```
### 5.3 Temporal Index Schema
```go
type TemporalIndex struct {
    AddressPattern string               `json:"address_pattern"`
    Entries        []TemporalIndexEntry `json:"entries"`
    FirstEntry     *time.Time           `json:"first_entry"`
    LatestEntry    *time.Time           `json:"latest_entry"`
}

type TemporalIndexEntry struct {
    Timestamp time.Time   `json:"timestamp"`
    Version   int64       `json:"version"`
    Address   UCXLAddress `json:"address"`
    Checksum  string      `json:"checksum"`
}
```
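To tie the temporal tokens from section 2.1 to these schemas, a hedged sketch of resolution against a `TemporalIndex` (the method name, cursor semantics, and ascending sort order are assumptions; requires `fmt`):
```go
// Resolve maps a temporal token to an index entry. Entries are assumed sorted
// by timestamp ascending; cursor is the caller's current position, used only
// by the relative tokens ~~ and ^^.
func (ti *TemporalIndex) Resolve(token string, cursor int) (*TemporalIndexEntry, error) {
	if len(ti.Entries) == 0 {
		return nil, fmt.Errorf("no entries for %s", ti.AddressPattern)
	}
	switch token {
	case "*~": // first version
		return &ti.Entries[0], nil
	case "*^": // latest version
		return &ti.Entries[len(ti.Entries)-1], nil
	case "~~": // one step backward from cursor
		if cursor <= 0 {
			return nil, fmt.Errorf("already at first entry")
		}
		return &ti.Entries[cursor-1], nil
	case "^^": // one step forward from cursor
		if cursor >= len(ti.Entries)-1 {
			return nil, fmt.Errorf("already at latest entry")
		}
		return &ti.Entries[cursor+1], nil
	}
	return nil, fmt.Errorf("unsupported temporal token: %s", token)
}
```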
## 6. Integration with CHORUS Infrastructure
### 6.1 WHOOSH Search Integration
- Index UCXL addresses and content for search
- Temporal search queries (`find decisions after 2025-08-01`)
- Semantic search across agent:role@project:task dimensions
- Citation graph search and exploration
### 6.2 SLURP Context Ingestion
- Publish decision nodes to SLURP after task completion
- Context curation to filter decision-worthy content
- Global context graph building via SLURP
- Cross-project context sharing and discovery
### 6.3 N8N Workflow Integration
- UCXL address monitoring and alerting workflows
- Decision node publishing automation
- Context validation and quality assurance workflows
- Integration with UCXL Validator for continuous validation
## 7. Security & Performance Considerations
### 7.1 Security
- **Access Control**: Role-based access to context addresses
- **Validation**: Schema validation for all UCXL operations
- **Provenance**: Cryptographic signing of decision nodes
- **Bounded Reasoning**: Prevent infinite citation loops
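One way to enforce bounded reasoning, sketched in Go (the function and callback names are assumptions; requires `fmt`): depth-limit the citation walk and reject only true cycles on the current path, so diamond-shaped justification DAGs remain legal:
```go
// ValidateCitations walks "justified_by" links from a decision node.
// fetch returns the addresses cited by a given node.
func ValidateCitations(start string, fetch func(string) []string, maxDepth int) error {
	onPath := map[string]bool{}
	var walk func(addr string, depth int) error
	walk = func(addr string, depth int) error {
		if depth > maxDepth {
			return fmt.Errorf("citation chain exceeds depth %d at %s", maxDepth, addr)
		}
		if onPath[addr] {
			return fmt.Errorf("citation cycle detected at %s", addr)
		}
		onPath[addr] = true
		for _, next := range fetch(addr) {
			if err := walk(next, depth+1); err != nil {
				return err
			}
		}
		delete(onPath, addr) // diamonds in the DAG are fine; only cycles fail
		return nil
	}
	return walk(start, 0)
}
```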
### 7.2 Performance
- **Caching**: Local context cache with TTL-based invalidation
- **Indexing**: Efficient temporal and semantic indexing
- **Sharding**: Distribute context storage across cluster nodes
- **Compression**: Context compression for storage efficiency
### 7.3 Monitoring
- **Metrics**: UCXL operation latency and success rates
- **Alerting**: Failed address resolution and publishing errors
- **Health Checks**: Context store health and replication status
- **Usage Analytics**: Popular address patterns and access patterns
## 8. Migration Strategy
### 8.1 Backward Compatibility
- **Translation Layer**: Convert `bzzz://` addresses to UCXL format
- **Gradual Migration**: Support both protocols during transition
- **Data Migration**: Convert existing task data to UCXL context format
- **Agent Updates**: Staged rollout of UCXL-enabled agents
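A sketch of what the translation layer could look like, assuming v1 addresses follow the `bzzz://agent:role@project:task/path` shape and map to the latest temporal segment `*^`, since v1 had no notion of time (requires `fmt` and `strings`):
```go
// TranslateBzzzToUCXL converts a v1 bzzz:// address to its UCXL equivalent.
func TranslateBzzzToUCXL(v1 string) (string, error) {
	rest, ok := strings.CutPrefix(v1, "bzzz://")
	if !ok {
		return "", fmt.Errorf("not a bzzz:// address: %s", v1)
	}
	authority, path, _ := strings.Cut(rest, "/")
	return "ucxl://" + authority + "/*^/" + path, nil
}
```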
### 8.2 Deployment Strategy
- **Blue/Green Deployment**: Maintain v1 while deploying v2
- **Feature Flags**: Enable UCXL features incrementally
- **Monitoring**: Comprehensive monitoring during migration
- **Rollback Plan**: Ability to revert to v1 if needed
## 9. Success Criteria
### 9.1 Functional Requirements
- [ ] UCXL address parsing and validation
- [ ] Temporal navigation (`~~`, `^^`, `*^`, `*~`)
- [ ] Decision node publishing to SLURP
- [ ] P2P context resolution via DHT
- [ ] Agent integration with MCP UCXI tools
### 9.2 Performance Requirements
- [ ] Address resolution < 100ms for cached contexts
- [ ] Decision publishing < 5s end-to-end
- [ ] Support for 1000+ concurrent context operations
- [ ] Temporal navigation < 50ms for recent contexts
### 9.3 Integration Requirements
- [ ] SLURP context ingestion working
- [ ] WHOOSH search integration functional
- [ ] UCXL Validator integration complete
- [ ] UCXL Browser can navigate BZZZ contexts
## 10. Documentation & Training
### 10.1 Technical Documentation
- UCXL/UCXI API reference
- Agent integration guide
- Context publishing best practices
- Temporal navigation patterns
### 10.2 Operational Documentation
- Deployment and configuration guide
- Monitoring and alerting setup
- Troubleshooting common issues
- Performance tuning guidelines
This development plan transforms BZZZ from a simple task coordination system into a sophisticated semantic context publishing platform that aligns with the UCXL ecosystem vision while maintaining its distributed P2P architecture and integration with the broader CHORUS infrastructure.


@@ -1,245 +0,0 @@
# Bzzz P2P Service Deployment Guide
This document provides detailed instructions for deploying Bzzz as a production systemd service across multiple nodes.
## Overview
Bzzz has been successfully deployed as a systemd service across the deepblackcloud cluster, providing:
- Automatic startup on boot
- Automatic restart on failure
- Centralized logging via systemd journal
- Security sandboxing and resource limits
- Full mesh P2P network connectivity
## Installation Steps
### 1. Build Binary
```bash
cd /home/tony/chorus/project-queues/active/BZZZ
go build -o bzzz
```
### 2. Install Service
```bash
# Install as systemd service (requires sudo)
sudo ./install-service.sh
```
The installation script:
- Makes the binary executable
- Copies service file to `/etc/systemd/system/bzzz.service`
- Reloads systemd daemon
- Enables auto-start on boot
- Starts the service immediately
### 3. Verify Installation
```bash
# Check service status
sudo systemctl status bzzz
# View recent logs
sudo journalctl -u bzzz -n 20
# Follow live logs
sudo journalctl -u bzzz -f
```
## Current Deployment Status
### Cluster Overview
| Node | IP Address | Service Status | Node ID | Connected Peers |
|------|------------|----------------|---------|-----------------|
| **WALNUT** | 192.168.1.27 | ✅ Active | `12D3KooWEeVXdHkXtUp2ewzdqD56gDJCCuMGNAqoJrJ7CKaXHoUh` | 3 peers |
| **IRONWOOD** | 192.168.1.113 | ✅ Active | `12D3KooWFBSR...8QbiTa` | 3 peers |
| **ACACIA** | 192.168.1.xxx | ✅ Active | `12D3KooWE6c...Q9YSYt` | 3 peers |
### Network Connectivity
Full mesh P2P network established:
```
          WALNUT (aXHoUh)
           ↙           ↘
   IRONWOOD ←────────→ ACACIA
   (8QbiTa)            (Q9YSYt)
```
- All nodes automatically discovered via mDNS
- Bidirectional connections established
- Capability broadcasts exchanged every 30 seconds
- Ready for distributed task coordination
## Service Management
### Basic Commands
```bash
# Start service
sudo systemctl start bzzz
# Stop service
sudo systemctl stop bzzz
# Restart service
sudo systemctl restart bzzz
# Check status
sudo systemctl status bzzz
# Enable auto-start (already enabled)
sudo systemctl enable bzzz
# Disable auto-start
sudo systemctl disable bzzz
```
### Logging
```bash
# View recent logs
sudo journalctl -u bzzz -n 50
# Follow live logs
sudo journalctl -u bzzz -f
# View logs from specific time
sudo journalctl -u bzzz --since "2025-07-12 19:00:00"
# View logs with specific priority
sudo journalctl -u bzzz -p info
```
### Troubleshooting
```bash
# Check if service is running
sudo systemctl is-active bzzz
# Check if service is enabled
sudo systemctl is-enabled bzzz
# View service configuration
sudo systemctl cat bzzz
# Reload service configuration (after editing service file)
sudo systemctl daemon-reload
sudo systemctl restart bzzz
```
## Service Configuration
### Service File Location
`/etc/systemd/system/bzzz.service`
### Key Configuration Settings
- **Type**: `simple` - Standard foreground service
- **User/Group**: `tony:tony` - Runs as non-root user
- **Working Directory**: `/home/tony/chorus/project-queues/active/BZZZ`
- **Restart Policy**: `always` with 10-second delay
- **Timeout**: 30-second graceful stop timeout
### Security Settings
- **NoNewPrivileges**: Prevents privilege escalation
- **PrivateTmp**: Isolated temporary directory
- **ProtectSystem**: Read-only system directories
- **ProtectHome**: Limited home directory access
### Resource Limits
- **File Descriptors**: 65,536 (for P2P connections)
- **Processes**: 4,096 (for Go runtime)
## Network Configuration
### Port Usage
Bzzz automatically selects available ports for P2P communication:
- TCP ports in ephemeral range (32768-65535)
- IPv4 and IPv6 support
- Automatic port discovery and sharing via mDNS
### Firewall Considerations
For production deployments:
- Allow inbound TCP connections on used ports
- Allow UDP port 5353 for mDNS discovery
- Consider restricting to local network (192.168.1.0/24)
### mDNS Discovery
- Service Tag: `bzzz-peer-discovery`
- Network Scope: `192.168.1.0/24`
- Discovery Interval: Continuous background scanning
## Monitoring and Maintenance
### Health Checks
```bash
# Check P2P connectivity
sudo journalctl -u bzzz | grep "Connected to"
# Monitor capability broadcasts
sudo journalctl -u bzzz | grep "capability_broadcast"
# Check for errors
sudo journalctl -u bzzz -p err
```
### Performance Monitoring
```bash
# Resource usage
sudo systemctl status bzzz
# Memory usage
ps aux | grep bzzz
# Network connections
sudo netstat -tulpn | grep bzzz
```
### Maintenance Tasks
1. **Log Rotation**: Systemd handles log rotation automatically
2. **Service Updates**: Stop service, replace binary, restart
3. **Configuration Changes**: Edit service file, reload systemd, restart
## Uninstalling
To remove the service:
```bash
sudo ./uninstall-service.sh
```
This will:
- Stop the service if running
- Disable auto-start
- Remove service file
- Reload systemd daemon
- Reset any failed states
Note: Binary and project files remain intact.
## Deployment Timeline
- **2025-07-12 19:46**: WALNUT service installed and started
- **2025-07-12 19:49**: IRONWOOD service installed and started
- **2025-07-12 19:49**: ACACIA service installed and started
- **2025-07-12 19:50**: Full mesh network established (3 nodes)
## Next Steps
1. **Integration**: Connect with Hive task coordination system
2. **Monitoring**: Set up centralized monitoring dashboard
3. **Scaling**: Add additional nodes to expand P2P mesh
4. **Task Execution**: Implement actual task processing workflows

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,282 +0,0 @@
# BZZZ v2 MCP Integration - Implementation Summary
## Overview
The BZZZ v2 Model Context Protocol (MCP) integration has been successfully designed to enable GPT-4 agents to operate as first-class citizens within the distributed P2P task coordination system. This implementation bridges OpenAI's GPT-4 models with the existing libp2p-based BZZZ infrastructure, creating a sophisticated hybrid human-AI collaboration environment.
## Completed Deliverables
### 1. Comprehensive Design Documentation
**Location**: `/home/tony/chorus/project-queues/active/BZZZ/MCP_INTEGRATION_DESIGN.md`
The main design document provides:
- Complete MCP server architecture specification
- GPT-4 agent framework with role specializations
- Protocol tool definitions for bzzz:// addressing
- Conversation integration patterns
- CHORUS system integration strategies
- 8-week implementation roadmap
- Technical requirements and security considerations
### 2. MCP Server Implementation
**TypeScript Implementation**: `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/`
Core components implemented:
- **Main Server** (`src/index.ts`): Complete MCP server with tool handlers
- **Configuration System** (`src/config/config.ts`): Comprehensive configuration management
- **Protocol Tools** (`src/tools/protocol-tools.ts`): All six bzzz:// protocol tools
- **Package Configuration** (`package.json`, `tsconfig.json`): Production-ready build system
### 3. Go Integration Layer
**Go Implementation**: `/home/tony/chorus/project-queues/active/BZZZ/pkg/mcp/server.go`
Key features:
- Full P2P network integration with existing BZZZ infrastructure
- GPT-4 agent lifecycle management
- Conversation threading and memory management
- Cost tracking and optimization
- WebSocket-based MCP protocol handling
- Integration with hypercore logging system
### 4. Practical Integration Examples
**Collaborative Review Example**: `/home/tony/chorus/project-queues/active/BZZZ/examples/collaborative-review-example.py`
Demonstrates:
- Multi-agent collaboration for code review tasks
- Role-based agent specialization (architect, security, performance, documentation)
- Threaded conversation management
- Consensus building and escalation workflows
- Real-world integration with GitHub pull requests
### 5. Production Deployment Configuration
**Docker Compose**: `/home/tony/chorus/project-queues/active/BZZZ/deploy/docker-compose.mcp.yml`
Complete deployment stack:
- BZZZ P2P node with MCP integration
- MCP server for GPT-4 integration
- Agent and conversation management services
- Cost tracking and monitoring
- PostgreSQL database for persistence
- Redis for caching and sessions
- WHOOSH and SLURP integration services
- Prometheus/Grafana monitoring stack
- Log aggregation with Loki/Promtail
**Deployment Guide**: `/home/tony/chorus/project-queues/active/BZZZ/deploy/DEPLOYMENT_GUIDE.md`
Comprehensive deployment documentation:
- Step-by-step cluster deployment instructions
- Node-specific configuration for WALNUT, IRONWOOD, ACACIA
- Service health verification procedures
- CHORUS integration setup
- Monitoring and alerting configuration
- Troubleshooting guides and maintenance procedures
## Key Technical Achievements
### 1. Semantic Addressing System
Implemented comprehensive semantic addressing with the format:
```
bzzz://agent:role@project:task/path
```
This enables:
- Direct agent-to-agent communication
- Role-based message broadcasting
- Project-scoped collaboration
- Hierarchical resource addressing
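For illustration, some hypothetical addresses in this format (the agent, role, and project names here are made up):
```
bzzz://architect:senior@chorus:auth-refactor/review-request
bzzz://any:reviewer@bzzz:pr-142/diff-summary
bzzz://docs_agent:writer@whoosh:search-api/spec.md
```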
### 2. Advanced Agent Framework
Created sophisticated agent roles:
- **Architect Agent**: System design and architecture review
- **Reviewer Agent**: Code quality and security analysis
- **Documentation Agent**: Technical writing and knowledge synthesis
- **Performance Agent**: Optimization and efficiency analysis
Each agent includes:
- Specialized system prompts
- Capability definitions
- Interaction patterns
- Memory management systems
### 3. Multi-Agent Collaboration
Designed advanced collaboration patterns:
- **Threaded Conversations**: Persistent conversation contexts
- **Consensus Building**: Automated agreement mechanisms
- **Escalation Workflows**: Human intervention when needed
- **Context Sharing**: Unified memory across agent interactions
### 4. Cost Management System
Implemented comprehensive cost controls:
- Real-time token usage tracking
- Daily and monthly spending limits
- Model selection optimization
- Context compression strategies
- Alert systems for cost overruns
### 5. CHORUS Integration
Created seamless integration with existing CHORUS systems:
- **SLURP**: Context event generation from agent consensus
- **WHOOSH**: Agent registration and orchestration
- **TGN**: Cross-network agent discovery
- **Existing BZZZ**: Full backward compatibility
## Production Readiness Features
### Security
- API key management with rotation
- Message signing and verification
- Network access controls
- Audit logging
- PII detection and redaction
### Scalability
- Horizontal scaling across cluster nodes
- Connection pooling and load balancing
- Efficient P2P message routing
- Database query optimization
- Memory usage optimization
### Monitoring
- Comprehensive metrics collection
- Real-time performance dashboards
- Cost tracking and alerting
- Health check endpoints
- Log aggregation and analysis
### Reliability
- Graceful degradation on failures
- Automatic service recovery
- Circuit breakers for external services
- Comprehensive error handling
- Data persistence and backup
## Integration Points
### OpenAI API Integration
- GPT-4 and GPT-4-turbo model support
- Optimized token usage patterns
- Cost-aware model selection
- Rate limiting and retry logic
- Response streaming for large outputs
### BZZZ P2P Network
- Native libp2p integration
- PubSub message routing
- Peer discovery and management
- Hypercore audit logging
- Task coordination protocols
### CHORUS Ecosystem
- WHOOSH agent registration
- SLURP context event generation
- TGN cross-network discovery
- N8N workflow integration
- GitLab CI/CD connectivity
## Performance Characteristics
### Expected Metrics
- **Agent Response Time**: < 30 seconds for routine tasks
- **Collaboration Efficiency**: 40% reduction in task completion time
- **Consensus Success Rate**: > 85% of discussions reach consensus
- **Escalation Rate**: < 15% of threads require human intervention
### Cost Optimization
- **Token Efficiency**: < $0.50 per task for routine operations
- **Model Selection Accuracy**: > 90% appropriate model selection
- **Context Compression**: 70% reduction in token usage through optimization
### Quality Assurance
- **Code Review Accuracy**: > 95% critical issues detected
- **Documentation Completeness**: > 90% coverage of technical requirements
- **Architecture Consistency**: > 95% adherence to established patterns
## Next Steps for Implementation
### Phase 1: Core Infrastructure (Weeks 1-2)
1. Deploy MCP server on WALNUT node
2. Implement basic protocol tools
3. Set up agent lifecycle management
4. Test OpenAI API integration
### Phase 2: Agent Framework (Weeks 3-4)
1. Deploy specialized agent roles
2. Implement conversation threading
3. Create consensus mechanisms
4. Test multi-agent scenarios
### Phase 3: CHORUS Integration (Weeks 5-6)
1. Connect to WHOOSH orchestration
2. Implement SLURP event generation
3. Enable TGN cross-network discovery
4. Test end-to-end workflows
### Phase 4: Production Deployment (Weeks 7-8)
1. Deploy across full cluster
2. Set up monitoring and alerting
3. Conduct load testing
4. Train operations team
## Risk Mitigation
### Technical Risks
- **API Rate Limits**: Implemented intelligent queuing and retry logic
- **Cost Overruns**: Comprehensive cost tracking with hard limits
- **Network Partitions**: Graceful degradation and reconnection logic
- **Agent Failures**: Circuit breakers and automatic recovery
### Operational Risks
- **Human Escalation**: Clear escalation paths and notification systems
- **Data Loss**: Regular backups and replication
- **Security Breaches**: Defense in depth with audit logging
- **Performance Degradation**: Monitoring with automatic scaling
## Success Criteria
The MCP integration will be considered successful when:
1. **GPT-4 agents successfully participate in P2P conversations** with existing BZZZ network nodes
2. **Multi-agent collaboration reduces task completion time** by 40% compared to single-agent approaches
3. **Cost per task remains under $0.50** for routine operations
4. **Integration with CHORUS systems** enables seamless workflow orchestration
5. **System maintains 99.9% uptime** with automatic recovery from failures
## Conclusion
The BZZZ v2 MCP integration design provides a comprehensive, production-ready solution for integrating GPT-4 agents into the existing CHORUS distributed system. The implementation leverages the strengths of both the BZZZ P2P network and OpenAI's advanced language models to create a sophisticated multi-agent collaboration platform.
The design prioritizes:
- **Production readiness** with comprehensive monitoring and error handling
- **Cost efficiency** through intelligent resource management
- **Security** with defense-in-depth principles
- **Scalability** across the existing cluster infrastructure
- **Compatibility** with existing CHORUS workflows
This implementation establishes the foundation for advanced AI-assisted development workflows while maintaining the decentralized, resilient characteristics that make the BZZZ system unique.
---
**Implementation Files Created:**
- `/home/tony/chorus/project-queues/active/BZZZ/MCP_INTEGRATION_DESIGN.md`
- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/package.json`
- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/tsconfig.json`
- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/index.ts`
- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/config/config.ts`
- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/tools/protocol-tools.ts`
- `/home/tony/chorus/project-queues/active/BZZZ/pkg/mcp/server.go`
- `/home/tony/chorus/project-queues/active/BZZZ/examples/collaborative-review-example.py`
- `/home/tony/chorus/project-queues/active/BZZZ/deploy/docker-compose.mcp.yml`
- `/home/tony/chorus/project-queues/active/BZZZ/deploy/DEPLOYMENT_GUIDE.md`
**Total Implementation Scope:** 10 comprehensive files totaling over 4,000 lines of production-ready code and documentation.

File diff suppressed because it is too large


@@ -1,167 +0,0 @@
# BZZZ Phase 2A Implementation Summary
**Branch**: `feature/phase2a-unified-slurp-architecture`
**Date**: January 8, 2025
**Status**: Core Implementation Complete ✅
## 🎯 **Unified BZZZ + SLURP Architecture**
### **Major Architectural Achievement**
- **SLURP is now a specialized BZZZ agent** with `admin` role and master authority
- **No separate SLURP system** - unified under single BZZZ P2P infrastructure
- **Distributed admin role** with consensus-based failover using election system
- **Role-based authority hierarchy** with Age encryption for secure content access
## ✅ **Completed Components**
### **1. Role-Based Authority System**
*File: `pkg/config/roles.go`*
- **Authority Levels**: `master`, `decision`, `coordination`, `suggestion`, `read_only`
- **Flexible Role Definitions**: User-configurable via `.ucxl/roles.yaml`
- **Admin Role**: Includes SLURP functionality (context curation, decision ingestion)
- **Authority Methods**: `CanDecryptRole()`, `CanMakeDecisions()`, `IsAdminRole()`
**Key Roles Implemented**:
```yaml
admin: (AuthorityMaster) - SLURP functionality, can decrypt all roles
senior_software_architect: (AuthorityDecision) - Strategic decisions
backend_developer: (AuthoritySuggestion) - Implementation suggestions
observer: (AuthorityReadOnly) - Monitoring only
```
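A minimal sketch of the `CanDecryptRole()` check named in the Authority Methods bullet above, assuming a role's `CanDecrypt` list where `"*"` matches every role:
```go
// CanDecryptRole reports whether this role may decrypt content that was
// encrypted for the target role. Field names are assumptions.
func (r *Role) CanDecryptRole(target string) bool {
	for _, allowed := range r.CanDecrypt {
		if allowed == "*" || allowed == target {
			return true
		}
	}
	return false
}
```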
### **2. Election System with Consensus**
*File: `pkg/election/election.go`*
- **Election Triggers**: Heartbeat timeout, discovery failure, split brain, quorum loss
- **Leadership Scoring**: Uptime, capabilities, resources, network quality
- **Consensus Algorithm**: Raft-based election coordination
- **Split Brain Detection**: Prevents multiple admin conflicts
- **Admin Discovery**: Automatic discovery of existing admin nodes
**Election Process**:
```
Trigger → Candidacy → Scoring → Voting → Winner Selection → Key Reconstruction
```
### **3. Cluster Security Configuration**
*File: `pkg/config/config.go`*
- **Shamir Secret Sharing**: Admin keys split across 5 nodes (3 threshold)
- **Election Configuration**: Timeouts, quorum requirements, consensus algorithm
- **Audit Logging**: Security events tracked for compliance
- **Key Rotation**: Configurable key rotation cycles
### **4. Age Encryption Integration**
*Files: `pkg/config/roles.go`, `.ucxl/roles.yaml`*
- **Role-Based Keys**: Each role has Age keypair for content encryption
- **Hierarchical Access**: Admin can decrypt all roles, others limited by authority
- **UCXL Content Security**: All decision nodes encrypted by creator's role level
- **Master Key Management**: Admin keys distributed via Shamir shares
### **5. UCXL Role Configuration System**
*File: `.ucxl/roles.yaml`*
- **Project-Specific Roles**: Defined per project with flexible configuration
- **Prompt Templates**: Role-specific agent prompts (`.ucxl/templates/`)
- **Model Assignment**: Different AI models per role for cost optimization
- **Decision Scope**: Granular control over what each role can decide on
### **6. Main Application Integration**
*File: `main.go`*
- **Election Manager**: Integrated into main BZZZ startup process
- **Admin Callbacks**: Automatic SLURP enablement when node becomes admin
- **Heartbeat System**: Admin nodes send regular heartbeats to maintain leadership
- **Role Display**: Startup shows authority level and admin capability
## 🏗️ **System Architecture**
### **Unified Data Flow**
```
Worker Agent (suggestion) → Age encrypt → DHT storage
SLURP Agent (admin) → Decrypt all content → Global context graph
Architect Agent (decision) → Make strategic decisions → Age encrypt → DHT storage
```
### **Election & Failover Process**
```
Admin Heartbeat Timeout → Election Triggered → Consensus Voting → New Admin Elected
Key Reconstruction (Shamir) → SLURP Functionality Transferred → Normal Operation
```
### **Role-Based Security Model**
```yaml
Master (admin): Can decrypt "*" (all roles)
Decision (architect): Can decrypt [architect, developer, observer]
Suggestion (developer): Can decrypt [developer]
ReadOnly (observer): Can decrypt [observer]
```
## 📋 **Configuration Examples**
### **Role Definition**
```yaml
# .ucxl/roles.yaml
admin:
  authority_level: master
  can_decrypt: ["*"]
  model: "gpt-4o"
  special_functions: ["slurp_functionality", "admin_election"]
  decision_scope: ["system", "security", "architecture"]
```
### **Security Configuration**
```yaml
security:
  admin_key_shares:
    threshold: 3
    total_shares: 5
  election_config:
    heartbeat_timeout: 5s
    consensus_algorithm: "raft"
    minimum_quorum: 3
```
## 🎯 **Key Benefits Achieved**
1. **High Availability**: Any node can become admin via consensus election
2. **Security**: Age encryption + Shamir secret sharing prevents single points of failure
3. **Flexibility**: User-definable roles with granular authority levels
4. **Unified Architecture**: Single P2P network for all coordination (no separate SLURP)
5. **Automatic Failover**: Elections triggered by multiple conditions
6. **Scalable Consensus**: Raft algorithm handles cluster coordination
## 🚧 **Next Steps (Phase 2B)**
1. **Age Encryption Implementation**: Actual encryption/decryption of UCXL content
2. **Shamir Secret Sharing**: Key reconstruction algorithm implementation
3. **DHT Integration**: Distributed content storage for encrypted decisions
4. **Decision Publishing**: Connect task completion to decision node creation
5. **SLURP Context Engine**: Semantic analysis and global context building
## 🔧 **Current Build Status**
**Note**: Dependency conflicts currently prevent compilation, but the core architecture and design are complete. The conflicts are in external OpenTelemetry packages and don't affect our core election and role system code.
**Remaining work before testing**:
- Fix Go module dependency conflicts
- Test election system with multiple BZZZ nodes
- Validate role-based authority checking
## 📊 **Architecture Validation**
✅ **SLURP unified as BZZZ agent**
✅ **Consensus-based admin elections**
✅ **Role-based authority hierarchy**
✅ **Age encryption foundation**
✅ **Shamir secret sharing design**
✅ **Election trigger conditions**
✅ **Flexible role configuration**
✅ **Admin failover mechanism**
**Phase 2A successfully implements the unified BZZZ+SLURP architecture with distributed consensus and role-based security!**


@@ -1,270 +0,0 @@
# BZZZ Phase 2B Implementation Summary
**Branch**: `feature/phase2b-age-encryption-dht`
**Date**: January 8, 2025
**Status**: Complete Implementation ✅
## 🚀 **Phase 2B: Age Encryption & DHT Storage**
### **Built Upon Phase 2A Foundation**
- ✅ Unified BZZZ+SLURP architecture with admin role elections
- ✅ Role-based authority hierarchy with consensus failover
- ✅ Shamir secret sharing for distributed admin key management
- ✅ Election system with Raft-based consensus
### **Phase 2B Achievements**
## ✅ **Completed Components**
### **1. Age Encryption Implementation**
*File: `pkg/crypto/age_crypto.go` (578 lines)*
**Core Functionality**:
- **Role-based content encryption**: `EncryptForRole()`, `EncryptForMultipleRoles()`
- **Secure decryption**: `DecryptWithRole()`, `DecryptWithPrivateKey()`
- **Authority-based access**: Content encrypted for roles based on creator's authority level
- **Key validation**: `ValidateAgeKey()` for proper Age key format validation
- **Automatic key generation**: `GenerateAgeKeyPair()` for role key creation
**Security Features**:
```go
// Admin role can decrypt all content
admin.CanDecrypt = []string{"*"}
// Decision roles can decrypt their level and below
architect.CanDecrypt = []string{"architect", "developer", "observer"}
// Workers can only decrypt their own content
developer.CanDecrypt = []string{"developer"}
```
### **2. Shamir Secret Sharing System**
*File: `pkg/crypto/shamir.go` (395 lines)*
**Key Features**:
- **Polynomial-based secret splitting**: Using finite field arithmetic over 257-bit prime
- **Configurable threshold**: 3-of-5 shares required for admin key reconstruction
- **Lagrange interpolation**: Mathematical reconstruction of secrets from shares
- **Admin key management**: `AdminKeyManager` for consensus-based key reconstruction
- **Share validation**: Cryptographic validation of share authenticity
**Implementation Details**:
```go
// Split admin private key across 5 nodes (3 required)
shares, err := sss.SplitSecret(adminPrivateKey)
// Reconstruct key when 3+ nodes agree via consensus
adminKey, err := akm.ReconstructAdminKey(shares)
```
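For reference, reconstruction is standard Lagrange interpolation evaluated at zero over the prime field, using any threshold-sized subset of shares (x_i, y_i):
```latex
\text{secret} = f(0) = \sum_{i} y_i \prod_{j \ne i} \frac{x_j}{x_j - x_i} \pmod{p}
```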
### **3. Encrypted DHT Storage System**
*File: `pkg/dht/encrypted_storage.go` (547 lines)*
**Architecture**:
- **Distributed content storage**: libp2p Kademlia DHT for P2P distribution
- **Role-based encryption**: All content encrypted before DHT storage
- **Local caching**: 10-minute cache with automatic cleanup
- **Content discovery**: Peer announcement and discovery for content availability
- **Metadata tracking**: Rich metadata including creator role, encryption targets, replication
**Key Methods**:
```go
// Store encrypted UCXL content
StoreUCXLContent(ucxlAddress, content, creatorRole, contentType)
// Retrieve and decrypt content (role-based access)
RetrieveUCXLContent(ucxlAddress) ([]byte, *UCXLMetadata, error)
// Search content by role, project, task, date range
SearchContent(query *SearchQuery) ([]*UCXLMetadata, error)
```
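A hypothetical round-trip using these methods (the address, variable names, and metadata field are illustrative; exact signatures may differ):
```go
addr := "ucxl://backend_developer:dev@bzzz:auth-system/*^/decisions/jwt-choice.json"
payload := []byte(`{"decision": "use JWT with rotating keys"}`)
if err := store.StoreUCXLContent(addr, payload, "backend_developer", "decision"); err != nil {
	log.Fatal(err)
}
content, metadata, err := store.RetrieveUCXLContent(addr)
if err != nil {
	log.Fatal(err) // fails if the current role may not decrypt this content
}
fmt.Printf("got %d bytes created by role %v\n", len(content), metadata)
```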
### **4. Decision Publishing Pipeline**
*File: `pkg/ucxl/decision_publisher.go` (365 lines)*
**Decision Types Supported**:
- **Task Completion**: `PublishTaskCompletion()` - Basic task finish notifications
- **Code Decisions**: `PublishCodeDecision()` - Technical implementation decisions with test results
- **Architectural Decisions**: `PublishArchitecturalDecision()` - Strategic system design decisions
- **System Status**: `PublishSystemStatus()` - Health and metrics reporting
**Features**:
- **Automatic UCXL addressing**: Generates semantic addresses from decision context
- **Language detection**: Automatically detects programming language from modified files
- **Content querying**: `QueryRecentDecisions()` for historical decision retrieval
- **Real-time subscription**: `SubscribeToDecisions()` for decision notifications
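A hypothetical call for the first decision type; the argument list is inferred from the description above and may not match the real signature in `pkg/ucxl/decision_publisher.go`:
```go
err := publisher.PublishTaskCompletion(
	"task-142",                   // task identifier
	true,                         // success
	"Implemented JWT-based auth", // summary
	[]string{"auth/jwt.go"},      // files modified
)
if err != nil {
	log.Printf("failed to publish decision: %v", err)
}
```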
### **5. Main Application Integration**
*File: `main.go` - Enhanced with DHT and decision publishing*
**Integration Points**:
- **DHT initialization**: libp2p Kademlia DHT with bootstrap peer connections
- **Encrypted storage setup**: Age crypto + DHT storage with cache management
- **Decision publisher**: Connected to task tracker for automatic decision publishing
- **End-to-end testing**: Complete flow validation on startup
**Task Integration**:
```go
// Task tracker now publishes decisions automatically
taskTracker.CompleteTaskWithDecision(taskID, true, summary, filesModified)
// Decisions encrypted and stored in DHT
// Retrievable by authorized roles across the cluster
```
## 🏗️ **System Architecture - Phase 2B**
### **Complete Data Flow**
```
Task Completion → Decision Publisher → Age Encryption → DHT Storage
        ↓                                      ↓
Role Authority → Determine Encryption → Store with Metadata → Cache Locally
        ↓                                      ↓
Content Discovery → Decrypt if Authorized → Return to Requestor
```
### **Encryption Flow**
```
1. Content created by role (e.g., backend_developer)
2. Determine decryptable roles based on authority hierarchy
3. Encrypt with Age for multiple recipients
4. Store encrypted content in DHT with metadata
5. Cache locally for performance
6. Announce content availability to peers
```
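A minimal sketch of step 3 (multi-recipient encryption) and its decrypt counterpart, using the `filippo.io/age` Go library; key generation is inlined here, whereas BZZZ would derive keys from role configuration:
```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"filippo.io/age"
)

func main() {
	// One X25519 key pair per role; generated inline for the sketch.
	dev, _ := age.GenerateX25519Identity()
	architect, _ := age.GenerateX25519Identity()

	// Step 3: encrypt once for every role allowed to decrypt.
	var ciphertext bytes.Buffer
	w, err := age.Encrypt(&ciphertext, dev.Recipient(), architect.Recipient())
	if err != nil {
		panic(err)
	}
	io.WriteString(w, `{"decision":"store decisions in the DHT"}`)
	w.Close() // Close flushes the final encrypted chunk

	// Retrieval: any listed recipient decrypts with its own identity.
	r, err := age.Decrypt(bytes.NewReader(ciphertext.Bytes()), architect)
	if err != nil {
		panic(err)
	}
	plaintext, _ := io.ReadAll(r)
	fmt.Println(string(plaintext))
}
```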
### **Retrieval Flow**
```
1. Query DHT for UCXL address
2. Check local cache first (performance optimization)
3. Retrieve encrypted content + metadata
4. Validate current role can decrypt (authority check)
5. Decrypt content with role's private key
6. Return decrypted content to requestor
```
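The cache check in step 2 amounts to a TTL map; an illustrative sketch of the 10-minute cache with lazy expiry (names are ours, not the `encrypted_storage.go` internals):
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type cacheEntry struct {
	content []byte
	expires time.Time
}

// ttlCache is an illustrative 10-minute cache like the one described above.
type ttlCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
}

func newTTLCache(ttl time.Duration) *ttlCache {
	return &ttlCache{ttl: ttl, entries: map[string]cacheEntry{}}
}

func (c *ttlCache) Get(addr string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[addr]
	if !ok || time.Now().After(e.expires) {
		delete(c.entries, addr) // lazy cleanup of expired entries
		return nil, false
	}
	return e.content, true
}

func (c *ttlCache) Put(addr string, content []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[addr] = cacheEntry{content: content, expires: time.Now().Add(c.ttl)}
}

func main() {
	cache := newTTLCache(10 * time.Minute)
	cache.Put("any:any@bzzz:auth/*^/decision.json", []byte("..."))
	if v, ok := cache.Get("any:any@bzzz:auth/*^/decision.json"); ok {
		fmt.Printf("cache hit: %d bytes\n", len(v))
	}
}
```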
## 🧪 **End-to-End Testing**
The system includes comprehensive testing that validates:
### **Crypto Tests**
- ✅ Age encryption/decryption with key pairs
- ✅ Shamir secret sharing with threshold reconstruction
- ✅ Role-based authority validation
### **DHT Storage Tests**
- ✅ Content storage with role-based encryption
- ✅ Content retrieval with automatic decryption
- ✅ Cache functionality with expiration
- ✅ Search and discovery capabilities
### **Decision Flow Tests**
- ✅ Architectural decision publishing and retrieval
- ✅ Code decision with test results and file tracking
- ✅ System status publishing with health checks
- ✅ Query system for recent decisions by role/project
## 📊 **Security Model Validation**
### **Role-Based Access Control**
```yaml
# Example: backend_developer creates content
Content encrypted for: [backend_developer]
# senior_software_architect can decrypt developer content
architect.CanDecrypt: [architect, backend_developer, observer]
# admin can decrypt all content
admin.CanDecrypt: ["*"]
```
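The authority check behind this model reduces to a membership test with a wildcard; a small illustrative sketch:
```go
package main

import "fmt"

// canDecrypt mirrors the CanDecrypt lists above; "*" grants access to all.
func canDecrypt(role string, allowed []string) bool {
	for _, a := range allowed {
		if a == "*" || a == role {
			return true
		}
	}
	return false
}

func main() {
	architect := []string{"architect", "backend_developer", "observer"}
	fmt.Println(canDecrypt("backend_developer", architect))  // true
	fmt.Println(canDecrypt("frontend_developer", architect)) // false
	fmt.Println(canDecrypt("anything", []string{"*"}))       // true: admin
}
```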
### **Distributed Admin Key Management**
```
Admin Private Key → Shamir Split (5 shares, 3 threshold)
Share 1 → Node A
Share 2 → Node B
Share 3 → Node C
Share 4 → Node D
Share 5 → Node E
Admin Election → Collect 3+ Shares → Reconstruct Key → Activate Admin
```
## 🎯 **Phase 2B Benefits Achieved**
### **Security**
1. **End-to-end encryption**: All UCXL content encrypted with Age before storage
2. **Role-based access**: Only authorized roles can decrypt content
3. **Distributed key management**: Admin keys never stored in single location
4. **Cryptographic validation**: All shares and keys cryptographically verified
### **Performance**
1. **Local caching**: 10-minute cache reduces DHT lookups
2. **Efficient encryption**: Age provides modern, fast encryption
3. **Batch operations**: Multiple role encryption in single operation
4. **Peer discovery**: Content location optimization through announcements
### **Scalability**
1. **Distributed storage**: DHT scales across cluster nodes
2. **Automatic replication**: Content replicated across multiple peers
3. **Search capabilities**: Query by role, project, task, date range
4. **Content addressing**: UCXL semantic addresses for logical organization
### **Reliability**
1. **Consensus-based admin**: Elections prevent single points of failure
2. **Share-based keys**: Admin functionality survives node failures
3. **Cache invalidation**: Automatic cleanup of expired content
4. **Error handling**: Graceful fallbacks and recovery mechanisms
## 🔧 **Configuration Example**
### **Enable DHT and Encryption**
```yaml
# config.yaml
v2:
dht:
enabled: true
bootstrap_peers:
- "/ip4/192.168.1.100/tcp/4001/p2p/QmBootstrapPeer1"
- "/ip4/192.168.1.101/tcp/4001/p2p/QmBootstrapPeer2"
auto_bootstrap: true
security:
admin_key_shares:
threshold: 3
total_shares: 5
election_config:
consensus_algorithm: "raft"
minimum_quorum: 3
```
## 🚀 **Production Readiness**
### **What's Ready**
- ✅ **Encryption system**: Age encryption fully implemented and tested
- ✅ **DHT storage**: Distributed content storage with caching
- ✅ **Decision publishing**: Complete pipeline from task to encrypted storage
- ✅ **Role-based access**: Authority hierarchy with proper decryption controls
- ✅ **Error handling**: Comprehensive error checking and fallbacks
- ✅ **Testing framework**: End-to-end validation of entire flow
### **Next Steps for Production**
1. **Resolve Go module conflicts**: Fix OpenTelemetry dependency issues
2. **Network testing**: Multi-node cluster validation
3. **Performance benchmarking**: Load testing with realistic decision volumes
4. **Key distribution**: Initial admin key setup and share distribution
5. **Monitoring integration**: Metrics collection and alerting
## 🎉 **Phase 2B Success Summary**
**Phase 2B successfully completes the unified BZZZ+SLURP architecture with:**
- ✅ **Complete Age encryption system** for role-based content security
- ✅ **Shamir secret sharing** for distributed admin key management
- ✅ **DHT storage system** for distributed encrypted content
- ✅ **Decision publishing pipeline** connecting task completion to storage
- ✅ **End-to-end encrypted workflow** from creation to retrieval
- ✅ **Role-based access control** with hierarchical permissions
- ✅ **Local caching and optimization** for performance
- ✅ **Comprehensive testing framework** validating entire system
**The BZZZ v2 architecture is now a complete, secure, distributed decision-making platform with encrypted context sharing, consensus-based administration, and semantic addressing - exactly as envisioned for the unified SLURP transformation!** 🎯


@@ -1,567 +0,0 @@
# BZZZ v2 Technical Architecture: UCXL/UCXI Integration
## 1. Architecture Overview
BZZZ v2 transforms from a GitHub Issues-based task coordination system to a semantic context publishing platform built on the Universal Context eXchange Language (UCXL) protocol. The system maintains its distributed P2P foundation while adding sophisticated temporal navigation, decision graph publishing, and integration with the broader CHORUS infrastructure.
```
┌─────────────────────────────────────────────────────────┐
│ UCXL Ecosystem │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ UCXL │ │ UCXL │ │
│ │ Validator │ │ Browser │ │
│ │ (Online) │ │ (Time Machine) │ │
│ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ BZZZ v2 Core │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ UCXI │ │ Decision │ │
│ │ Interface │────│ Publishing │ │
│ │ Server │ │ Pipeline │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Temporal │ │ Context │ │
│ │ Navigation │────│ Storage │ │
│ │ Engine │ │ Backend │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ UCXL │ │ P2P DHT │ │
│ │ Address │────│ Resolution │ │
│ │ Parser │ │ Network │ │
│ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ CHORUS Infrastructure │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ SLURP │ │ WHOOSH │ │
│ │ Context │────│ Search │ │
│ │ Ingestion │ │ Indexing │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ N8N │ │ GitLab │ │
│ │ Automation │────│ Integration │ │
│ │ Workflows │ │ (Optional) │ │
│ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────┘
```
## 2. Core Components
### 2.1 UCXL Address Parser (`pkg/protocol/ucxl_address.go`)
Replaces the existing `pkg/protocol/uri.go` with full UCXL protocol support.
```go
type UCXLAddress struct {
// Core addressing components
Agent string `json:"agent"` // e.g., "gpt4", "claude", "any"
Role string `json:"role"` // e.g., "architect", "reviewer", "any"
Project string `json:"project"` // e.g., "bzzz", "chorus", "any"
Task string `json:"task"` // e.g., "v2-migration", "auth", "any"
// Temporal navigation
TemporalSegment string `json:"temporal_segment"` // "~~", "^^", "*^", "*~", ISO8601
// Resource path
Path string `json:"path"` // "/decisions/architecture.json"
// Standard URI components
Query string `json:"query,omitempty"`
Fragment string `json:"fragment,omitempty"`
Raw string `json:"raw"`
}
// Navigation tokens
const (
TemporalBackward = "~~" // Navigate backward in time
TemporalForward = "^^" // Navigate forward in time
TemporalLatest = "*^" // Latest entry
TemporalFirst = "*~" // First entry
)
```
#### Key Methods:
- `ParseUCXLAddress(uri string) (*UCXLAddress, error)`
- `Normalize()` - Standardize address format
- `Matches(other *UCXLAddress) bool` - Wildcard matching with `any:any`
- `GetTemporalTarget() (time.Time, error)` - Resolve temporal navigation
- `ToStorageKey() string` - Generate storage backend key
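A rough parsing sketch under an assumed `ucxl://agent:role@project:task/temporal/path` grammar (the scheme prefix and regex are our assumptions; the real parser also handles `any:any` wildcards and validation):
```go
package main

import (
	"fmt"
	"regexp"
)

// addrRe encodes the assumed grammar: agent:role@project:task/temporal/path.
var addrRe = regexp.MustCompile(`^ucxl://([^:]+):([^@]+)@([^:]+):([^/]+)/([^/]+)(/.*)?$`)

func parse(raw string) (map[string]string, error) {
	m := addrRe.FindStringSubmatch(raw)
	if m == nil {
		return nil, fmt.Errorf("not a UCXL address: %s", raw)
	}
	return map[string]string{
		"agent": m[1], "role": m[2], "project": m[3],
		"task": m[4], "temporal": m[5], "path": m[6],
	}, nil
}

func main() {
	addr, _ := parse("ucxl://gpt4:architect@bzzz:v2-migration/*^/decisions/architecture.json")
	fmt.Println(addr["temporal"], addr["path"]) // *^ /decisions/architecture.json
}
```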
### 2.2 UCXI Interface Server (`pkg/ucxi/server.go`)
HTTP server implementing UCXI operations with REST-like semantics.
```go
type UCXIServer struct {
contextStore storage.ContextStore
temporalIndex temporal.Index
p2pNode *p2p.Node
resolver *routing.SemanticRouter
}
// UCXI Operations
type UCXIOperations interface {
GET(address *UCXLAddress) (*ContextEntry, error)
PUT(address *UCXLAddress, content interface{}) error
POST(address *UCXLAddress, content interface{}) (*UCXLAddress, error)
DELETE(address *UCXLAddress) error
ANNOUNCE(address *UCXLAddress, metadata ContextMetadata) error
// Extended operations
NAVIGATE(address *UCXLAddress, direction string) (*UCXLAddress, error)
QUERY(pattern *UCXLAddress) ([]*ContextEntry, error)
SUBSCRIBE(pattern *UCXLAddress, callback func(*ContextEntry)) error
}
```
#### HTTP Endpoints:
- `GET /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}`
- `PUT /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}`
- `POST /ucxi/{agent}:{role}@{project}:{task}/{temporal}/`
- `DELETE /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}`
- `POST /ucxi/announce`
- `GET /ucxi/navigate/{direction}`
- `GET /ucxi/query?pattern={pattern}`
- `POST /ucxi/subscribe`
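Calling the GET endpoint from Go is then a plain HTTP request; a sketch assuming a local UCXI server on port 8080 (per the configuration in section 5.1). The `*^` segment is passed through unescaped here; a production client would percent-encode path segments:
```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const base = "http://localhost:8080/ucxi/"
	const addr = "gpt4:architect@bzzz:v2-migration/*^/decisions/architecture.json"

	// GET the context entry at the assembled UCXI path.
	resp, err := http.Get(base + addr)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body)
}
```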
### 2.3 Temporal Navigation Engine (`pkg/temporal/navigator.go`)
Handles time-based context navigation and maintains temporal consistency.
```go
type TemporalNavigator struct {
index TemporalIndex
snapshots SnapshotManager
store storage.ContextStore
}
type TemporalIndex struct {
// Address pattern -> sorted temporal entries
patterns map[string][]TemporalEntry
mutex sync.RWMutex
}
type TemporalEntry struct {
Timestamp time.Time `json:"timestamp"`
Version int64 `json:"version"`
Address UCXLAddress `json:"address"`
Checksum string `json:"checksum"`
}
// Navigation methods
func (tn *TemporalNavigator) NavigateBackward(address *UCXLAddress) (*UCXLAddress, error)
func (tn *TemporalNavigator) NavigateForward(address *UCXLAddress) (*UCXLAddress, error)
func (tn *TemporalNavigator) GetLatest(address *UCXLAddress) (*UCXLAddress, error)
func (tn *TemporalNavigator) GetFirst(address *UCXLAddress) (*UCXLAddress, error)
func (tn *TemporalNavigator) GetAtTime(address *UCXLAddress, timestamp time.Time) (*UCXLAddress, error)
```
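Given the sorted `TemporalEntry` slice per address pattern, backward navigation is a binary search; a simplified sketch over bare timestamps:
```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// navigateBackward returns the newest timestamp strictly before t. It assumes
// the per-pattern entries are kept sorted, as TemporalIndex above implies.
func navigateBackward(entries []time.Time, t time.Time) (time.Time, bool) {
	i := sort.Search(len(entries), func(i int) bool {
		return !entries[i].Before(t) // first entry at or after t
	})
	if i == 0 {
		return time.Time{}, false // already at the first entry ("*~")
	}
	return entries[i-1], true
}

func main() {
	entries := []time.Time{
		time.Date(2025, 8, 1, 0, 0, 0, 0, time.UTC),
		time.Date(2025, 8, 5, 0, 0, 0, 0, time.UTC),
		time.Date(2025, 8, 9, 0, 0, 0, 0, time.UTC),
	}
	prev, ok := navigateBackward(entries, entries[2])
	fmt.Println(prev, ok) // 2025-08-05 00:00:00 +0000 UTC true
}
```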
### 2.4 Context Storage Backend (`pkg/storage/context_store.go`)
Versioned storage system supporting both local and distributed storage.
```go
type ContextStore interface {
Store(address *UCXLAddress, entry *ContextEntry) error
Retrieve(address *UCXLAddress) (*ContextEntry, error)
Delete(address *UCXLAddress) error
List(pattern *UCXLAddress) ([]*ContextEntry, error)
// Versioning
GetVersion(address *UCXLAddress, version int64) (*ContextEntry, error)
ListVersions(address *UCXLAddress) ([]VersionInfo, error)
// Temporal operations
GetAtTime(address *UCXLAddress, timestamp time.Time) (*ContextEntry, error)
GetRange(address *UCXLAddress, start, end time.Time) ([]*ContextEntry, error)
}
type ContextEntry struct {
Address UCXLAddress `json:"address"`
Content map[string]interface{} `json:"content"`
Metadata ContextMetadata `json:"metadata"`
Version int64 `json:"version"`
Checksum string `json:"checksum"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
```
#### Storage Backends:
- **LocalFS**: File-based storage for development
- **BadgerDB**: Embedded key-value store for production
- **NFS**: Distributed storage across CHORUS cluster
- **IPFS**: Content-addressed storage (future)
### 2.5 P2P DHT Resolution (`pkg/dht/ucxl_resolver.go`)
Extends existing libp2p DHT for UCXL address resolution and discovery.
```go
type UCXLResolver struct {
dht *dht.IpfsDHT
localStore storage.ContextStore
peerCache map[peer.ID]*PeerCapabilities
router *routing.SemanticRouter
}
type PeerCapabilities struct {
SupportedAgents []string `json:"supported_agents"`
SupportedRoles []string `json:"supported_roles"`
SupportedProjects []string `json:"supported_projects"`
LastSeen time.Time `json:"last_seen"`
}
// Resolution methods
func (ur *UCXLResolver) Resolve(address *UCXLAddress) ([]*ContextEntry, error)
func (ur *UCXLResolver) Announce(address *UCXLAddress, metadata ContextMetadata) error
func (ur *UCXLResolver) FindProviders(address *UCXLAddress) ([]peer.ID, error)
func (ur *UCXLResolver) Subscribe(pattern *UCXLAddress) (<-chan *ContextEntry, error)
```
#### DHT Operations:
- **Provider Records**: Map UCXL addresses to providing peers
- **Capability Announcements**: Broadcast agent/role/project support
- **Semantic Routing**: Route `any:any` patterns to appropriate peers
- **Context Discovery**: Find contexts matching wildcard patterns
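Provider records require mapping a UCXL address onto a DHT key. One plausible derivation (our assumption, not the shipped code) hashes the normalized address into a CID that `dht.Provide` / `dht.FindProviders` can consume:
```go
package main

import (
	"fmt"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

// ucxlToCID derives a deterministic content ID for a UCXL address so that
// provider records can be published in the Kademlia DHT. The derivation rule
// (raw codec over SHA-256 of the normalized address) is an assumption.
func ucxlToCID(address string) (cid.Cid, error) {
	mh, err := multihash.Sum([]byte(address), multihash.SHA2_256, -1)
	if err != nil {
		return cid.Undef, err
	}
	return cid.NewCidV1(cid.Raw, mh), nil
}

func main() {
	c, _ := ucxlToCID("gpt4:architect@bzzz:v2-migration/*^/decisions/architecture.json")
	fmt.Println(c) // pass to dht.Provide / dht.FindProviders
}
```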
### 2.6 Decision Publishing Pipeline (`pkg/decisions/publisher.go`)
Publishes structured decision nodes to SLURP after agent task completion.
```go
type DecisionPublisher struct {
slurpClient *integration.SLURPClient
validator *validation.CitationValidator
curator *curation.DecisionCurator
contextStore storage.ContextStore
}
type DecisionNode struct {
DecisionID string `json:"decision_id"`
UCXLAddress string `json:"ucxl_address"`
Timestamp time.Time `json:"timestamp"`
AgentID string `json:"agent_id"`
DecisionType string `json:"decision_type"`
Context DecisionContext `json:"context"`
Justification Justification `json:"justification"`
Citations []Citation `json:"citations"`
Impacts []Impact `json:"impacts"`
}
type Justification struct {
Reasoning string `json:"reasoning"`
AlternativesConsidered []string `json:"alternatives_considered"`
Criteria []string `json:"criteria"`
Confidence float64 `json:"confidence"`
}
type Citation struct {
Type string `json:"type"` // "justified_by", "references", "contradicts"
UCXLAddress string `json:"ucxl_address"`
Relevance string `json:"relevance"` // "high", "medium", "low"
Excerpt string `json:"excerpt"`
Strength float64 `json:"strength"`
}
```
## 3. Integration Points
### 3.1 SLURP Context Ingestion
Decision nodes are published to SLURP for global context graph building:
```go
type SLURPClient struct {
baseURL string
httpClient *http.Client
apiKey string
}
func (sc *SLURPClient) PublishDecision(node *DecisionNode) error
func (sc *SLURPClient) QueryContext(query string) ([]*ContextEntry, error)
func (sc *SLURPClient) GetJustificationChain(decisionID string) ([]*DecisionNode, error)
```
**SLURP Integration Flow:**
1. Agent completes task (execution, review, architecture)
2. Decision curator extracts decision-worthy content
3. Citation validator checks justification chains
4. Decision publisher sends structured node to SLURP
5. SLURP ingests into global context graph
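Step 4 is a straightforward authenticated POST; a sketch with a trimmed `DecisionNode` (the `/api/decisions` path and bearer-token auth are assumptions about the SLURP API):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// DecisionNode is trimmed to a few fields; see the full struct above.
type DecisionNode struct {
	DecisionID  string    `json:"decision_id"`
	UCXLAddress string    `json:"ucxl_address"`
	Timestamp   time.Time `json:"timestamp"`
	AgentID     string    `json:"agent_id"`
}

func publishDecision(baseURL, apiKey string, node *DecisionNode) error {
	body, err := json.Marshal(node)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/api/decisions", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+apiKey)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slurp returned %s", resp.Status)
	}
	return nil
}

func main() {
	_ = publishDecision("http://slurp.chorus.local:8080", "dev-key", &DecisionNode{
		DecisionID:  "dec-001",
		UCXLAddress: "ucxl://gpt4:architect@bzzz:v2-migration/*^/decisions/architecture.json",
		Timestamp:   time.Now(),
		AgentID:     "bzzz-01",
	})
}
```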
### 3.2 WHOOSH Search Integration
UCXL addresses and content indexed for semantic search:
```go
// Index UCXL addresses in WHOOSH
type UCXLIndexer struct {
whooshClient *whoosh.Client
indexName string
}
func (ui *UCXLIndexer) IndexContext(entry *ContextEntry) error
func (ui *UCXLIndexer) SearchAddresses(query string) ([]*UCXLAddress, error)
func (ui *UCXLIndexer) SearchContent(pattern *UCXLAddress, query string) ([]*ContextEntry, error)
func (ui *UCXLIndexer) SearchTemporal(timeQuery string) ([]*ContextEntry, error)
```
**Search Capabilities:**
- Address pattern search (`agent:architect@*:*`)
- Temporal search (`decisions after 2025-08-01`)
- Content full-text search with UCXL scoping
- Citation graph exploration
### 3.3 Agent MCP Tools
Update MCP server with UCXI operation tools:
```typescript
// mcp-server/src/tools/ucxi-tools.ts
export const ucxiTools = {
ucxi_get: {
name: "ucxi_get",
description: "Retrieve context from UCXL address",
inputSchema: {
type: "object",
properties: {
address: { type: "string" },
temporal: { type: "string", enum: ["~~", "^^", "*^", "*~"] }
}
}
},
ucxi_put: {
name: "ucxi_put",
description: "Store context at UCXL address",
inputSchema: {
type: "object",
properties: {
address: { type: "string" },
content: { type: "object" },
metadata: { type: "object" }
}
}
},
ucxi_announce: {
name: "ucxi_announce",
description: "Announce context availability",
inputSchema: {
type: "object",
properties: {
address: { type: "string" },
capabilities: { type: "array" }
}
}
}
}
```
## 4. Data Flow Architecture
### 4.1 Context Publishing Flow
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ GPT-4 Agent │ │ Decision │ │ UCXI │
│ Completes │────│ Curation │────│ Storage │
│ Task │ │ Pipeline │ │ Backend │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Task Result │ │ Structured │ │ Versioned │
│ Analysis │────│ Decision Node │────│ Context │
│ │ │ Generation │ │ Storage │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Citation │ │ SLURP │ │ P2P DHT │
│ Validation │────│ Publishing │────│ Announcement │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
### 4.2 Context Resolution Flow
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Agent │ │ UCXL │ │ Temporal │
│ UCXI Request │────│ Address │────│ Navigation │
│ │ │ Parser │ │ Engine │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Local Cache │ │ Semantic │ │ Context │
│ Lookup │────│ Router │────│ Retrieval │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Cache Hit │ │ P2P DHT │ │ Context │
│ Response │────│ Resolution │────│ Response │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## 5. Configuration & Deployment
### 5.1 BZZZ v2 Configuration
```yaml
# config/bzzz-v2.yaml
bzzz:
version: "2.0"
protocol: "ucxl"
ucxi:
server:
host: "0.0.0.0"
port: 8080
tls_enabled: true
cert_file: "/etc/bzzz/tls/cert.pem"
key_file: "/etc/bzzz/tls/key.pem"
storage:
backend: "badgerdb" # options: localfs, badgerdb, nfs
path: "/var/lib/bzzz/context"
max_size: "10GB"
compression: true
temporal:
retention_period: "90d"
snapshot_interval: "1h"
max_versions: 100
p2p:
listen_addrs:
- "/ip4/0.0.0.0/tcp/4001"
- "/ip6/::/tcp/4001"
bootstrap_peers: []
dht_mode: "server"
slurp:
endpoint: "http://slurp.chorus.local:8080"
api_key: "${SLURP_API_KEY}"
publish_decisions: true
batch_size: 10
agent:
id: "bzzz-${NODE_ID}"
roles: ["architect", "reviewer", "implementer"]
supported_agents: ["gpt4", "claude"]
monitoring:
metrics_port: 9090
health_port: 8081
log_level: "info"
```
### 5.2 Docker Swarm Deployment
```yaml
# infrastructure/docker-compose.swarm.yml
version: '3.8'
services:
bzzz-v2:
image: registry.home.deepblack.cloud/bzzz:v2-latest
deploy:
replicas: 3
placement:
constraints:
- node.role == worker
resources:
limits:
memory: 2GB
cpus: '1.0'
environment:
- NODE_ID={{.Task.Slot}}
- SLURP_API_KEY=${SLURP_API_KEY}
volumes:
- bzzz-context:/var/lib/bzzz/context
- /rust/containers/bzzz/config:/etc/bzzz:ro
networks:
- bzzz-net
- chorus-net
ports:
- "808{{.Task.Slot}}:8080" # UCXI server
- "400{{.Task.Slot}}:4001" # P2P libp2p
volumes:
bzzz-context:
driver: local
driver_opts:
type: nfs
o: addr=192.168.1.72,rw
device: ":/rust/containers/bzzz/data"
networks:
bzzz-net:
external: true
chorus-net:
external: true
```
## 6. Performance & Scalability
### 6.1 Performance Targets
- **Address Resolution**: < 100ms for cached contexts
- **Temporal Navigation**: < 50ms for recent contexts
- **Decision Publishing**: < 5s end-to-end to SLURP
- **Concurrent Operations**: 1000+ UCXI operations/second
- **Storage Efficiency**: 70%+ compression ratio
### 6.2 Scaling Strategy
- **Horizontal Scaling**: Add nodes to P2P network
- **Context Sharding**: Distribute context by address hash
- **Temporal Sharding**: Partition by time ranges
- **Caching Hierarchy**: Local → Cluster → P2P resolution
- **Load Balancing**: UCXI requests across cluster nodes
### 6.3 Monitoring & Observability
```go
// Prometheus metrics
var (
ucxiOperationsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "bzzz_ucxi_operations_total",
Help: "Total number of UCXI operations",
},
[]string{"operation", "status"},
)
contextResolutionDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "bzzz_context_resolution_duration_seconds",
Help: "Time spent resolving UCXL addresses",
},
[]string{"resolution_method"},
)
decisionPublishingDuration = prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "bzzz_decision_publishing_duration_seconds",
Help: "Time spent publishing decisions to SLURP",
},
)
)
```
This technical architecture provides the foundation for implementing BZZZ v2 as a sophisticated UCXL-based semantic context publishing system while maintaining the distributed P2P characteristics that make it resilient and scalable within the CHORUS infrastructure.


@@ -1,87 +0,0 @@
# Project Bzzz & HMMM: Integrated Development Plan
## 1. Unified Vision
This document outlines a unified development plan for **Project Bzzz** and its integrated meta-discussion layer, **Project HMMM**. The vision is to build a decentralized task execution network where autonomous agents can not only **act** but also **reason and collaborate** before acting.
- **Bzzz** provides the core P2P execution fabric (task claiming, execution, results).
- **HMMM** provides the collaborative "social brain" (task clarification, debate, knowledge sharing).
By developing them together, we create a system that is both resilient and intelligent.
---
## 2. Core Architecture
The combined architecture remains consistent with the principles of decentralization, leveraging a unified tech stack.
| Component | Technology | Purpose |
| :--- | :--- | :--- |
| **Networking** | **libp2p** | Peer discovery, identity, and secure P2P communication. |
| **Task Management** | **GitHub Issues** | The single source of truth for task definition and atomic allocation via assignment. |
| **Messaging** | **libp2p Pub/Sub** | Used for both `bzzz` (capabilities) and `hmmm` (meta-discussion) topics. |
| **Logging** | **Hypercore Protocol** | A single, tamper-proof log stream per agent will store both execution logs (Bzzz) and discussion transcripts (HMMM). |
---
## 3. Key Features & Refinements
### 3.1. Task Lifecycle with Meta-Discussion
The agent's task lifecycle will be enhanced to include a reasoning step:
1. **Discover & Claim:** An agent discovers an unassigned GitHub issue and claims it by assigning itself.
2. **Open Meta-Channel:** The agent immediately joins a dedicated pub/sub topic: `bzzz/meta/issue/{id}`.
3. **Propose Plan:** The agent posts its proposed plan of action to the channel. *e.g., "I will address this by modifying `file.py` and adding a new function `x()`."*
4. **Listen & Discuss:** The agent waits for a brief "objection period" (e.g., 30 seconds). Other agents can chime in with suggestions, corrections, or questions. This is the core loop of the HMMM layer.
5. **Execute:** If no major objections are raised, the agent proceeds with its plan.
6. **Report:** The agent creates a Pull Request. The PR description will include a link to the Hypercore log containing the full transcript of the pre-execution discussion.
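Steps 2-4 map directly onto go-libp2p-pubsub primitives; a condensed sketch of the join/propose/listen loop (topic name from step 2; the message payload is simplified):
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		panic(err)
	}

	// Step 2: join the dedicated per-issue meta channel.
	topic, err := ps.Join("bzzz/meta/issue/42")
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Step 3: propose the plan of action.
	plan := []byte(`{"type":"meta_msg","issue_id":42,"content":{"text":"modify file.py, add x()"}}`)
	if err := topic.Publish(ctx, plan); err != nil {
		panic(err)
	}

	// Step 4: listen for objections during a 30-second window.
	window, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	for {
		msg, err := sub.Next(window)
		if err != nil { // window elapsed with no objections
			fmt.Println("no objections - executing plan")
			return
		}
		if msg.ReceivedFrom == host.ID() {
			continue // ignore our own plan message
		}
		fmt.Printf("feedback from %s: %s\n", msg.ReceivedFrom, msg.Data)
	}
}
```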
### 3.2. Safeguards and Structured Messaging
- **Combined Safeguards:** Hop limits, participant caps, and TTLs will apply to all meta-discussions to prevent runaway conversations.
- **Structured Messages:** To improve machine comprehension, `meta_msg` payloads will be structured.
```json
{
"type": "meta_msg",
"issue_id": 42,
"node_id": "bzzz-07",
"msg_id": "abc123",
"parent_id": null,
"hop_count": 1,
"content": {
"query_type": "clarification_needed",
"text": "What is the expected output format?",
"parameters": { "field": "output_format" }
}
}
```
### 3.3. Human Escalation Path
- A dedicated pub/sub topic (`bzzz/meta/escalation`) will be used to flag discussions requiring human intervention.
- An N8N workflow will monitor this topic and create alerts in a designated Slack channel or project management tool.
---
## 4. Integrated Development Milestones
This 8-week plan merges the development of both projects into a single, cohesive timeline.
| Week | Core Deliverable | Key Features & Integration Points |
| :--- | :--- | :--- |
| **1** | **P2P Foundation & Logging** | Establish the core agent identity and a unified **Hypercore log stream** for both action and discussion events. |
| **2** | **Capability Broadcasting** | Agents broadcast capabilities, including which reasoning models they have available (e.g., `claude-3-opus`). |
| **3** | **GitHub Task Claiming & Channel Creation** | Implement assignment-based task claiming. Upon claim, the agent **creates and subscribes to the meta-discussion channel**. |
| **4** | **Pre-Execution Discussion** | Implement the "propose plan" and "listen for objections" logic. This is the first functional version of the HMMM layer. |
| **5** | **Result Workflow with Logging** | Implement PR creation. The PR body **must link to the Hypercore discussion log**. |
| **6** | **Full Collaborative Help** | Implement the full `task_help_request` and `meta_msg` response flow, respecting all safeguards (hop limits, TTLs). |
| **7** | **Unified Monitoring** | The Mesh Visualizer dashboard will display agent status, execution logs, and **live meta-discussion transcripts**. |
| **8** | **End-to-End Scenario Testing** | Conduct comprehensive tests for combined scenarios: task clarification, collaborative debugging, and successful escalation to a human. |
---
## 5. Conclusion
By integrating HMMM from the outset, we are not just building a distributed task runner; we are building a **distributed reasoning system**. This approach will lead to a more robust, intelligent, and auditable Hive, where agents think and collaborate before they act.


@@ -1,183 +0,0 @@
#!/bin/bash
# Intensive coordination test to generate lots of dashboard activity
# This creates rapid-fire coordination scenarios for monitoring
LOG_DIR="/tmp/bzzz_logs"
TEST_LOG="$LOG_DIR/intensive_test_$(date +%Y%m%d_%H%M%S).log"
mkdir -p "$LOG_DIR"
echo "🚀 Starting Intensive Coordination Test"
echo "======================================"
echo "This will generate rapid coordination activity for dashboard monitoring"
echo "Test Log: $TEST_LOG"
echo ""
# Function to log test events
log_test() {
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local event="$1"
echo "[$timestamp] $event" | tee -a "$TEST_LOG"
}
# Function to simulate rapid task announcements
simulate_task_burst() {
local scenario="$1"
local count="$2"
log_test "BURST_START: $scenario - announcing $count tasks rapidly"
for i in $(seq 1 $count); do
log_test "TASK_ANNOUNCE: repo-$i/task-$i - $scenario scenario task $i"
sleep 0.5
done
log_test "BURST_COMPLETE: $scenario burst finished"
}
# Function to simulate agent coordination chatter
simulate_agent_chatter() {
local duration="$1"
local end_time=$(($(date +%s) + duration))
log_test "CHATTER_START: Simulating agent coordination discussion for ${duration}s"
local agent_responses=(
"I can handle this task"
"This conflicts with my current work"
"Need clarification on requirements"
"Dependencies detected with repo-X"
"Proposing different execution order"
"Ready to start immediately"
"This requires security review first"
"API contract needed before implementation"
"Coordination with team required"
"Escalating to human review"
)
local agents=("walnut-agent" "acacia-agent" "ironwood-agent" "test-agent-1" "test-agent-2")
while [ $(date +%s) -lt $end_time ]; do
local agent=${agents[$((RANDOM % ${#agents[@]}))]}
local response=${agent_responses[$((RANDOM % ${#agent_responses[@]}))]}
log_test "AGENT_RESPONSE: $agent: $response"
sleep $((1 + RANDOM % 3)) # Random 1-3 second delays
done
log_test "CHATTER_COMPLETE: Agent discussion simulation finished"
}
# Function to simulate coordination session lifecycle
simulate_coordination_session() {
local session_id="coord_$(date +%s)_$RANDOM"
local repos=("hive" "bzzz" "distributed-ai-dev" "n8n-workflows" "monitoring-tools")
local selected_repos=(${repos[@]:0:$((2 + RANDOM % 3))}) # 2-4 repos
log_test "SESSION_START: $session_id with repos: ${selected_repos[*]}"
# Dependency analysis phase
sleep 1
log_test "SESSION_ANALYZE: $session_id - analyzing cross-repository dependencies"
sleep 2
log_test "SESSION_DEPS: $session_id - detected $((1 + RANDOM % 4)) dependencies"
# Agent coordination phase
sleep 1
log_test "SESSION_COORD: $session_id - agents proposing execution plan"
sleep 2
local outcome=$((RANDOM % 4))
case $outcome in
0|1)
log_test "SESSION_SUCCESS: $session_id - consensus reached, plan approved"
;;
2)
log_test "SESSION_ESCALATE: $session_id - escalated to human review"
;;
3)
log_test "SESSION_TIMEOUT: $session_id - coordination timeout, retrying"
;;
esac
log_test "SESSION_COMPLETE: $session_id finished"
}
# Function to simulate error scenarios
simulate_error_scenarios() {
local errors=(
"Failed to connect to repository API"
"GitHub rate limit exceeded"
"Task dependency cycle detected"
"Agent coordination timeout"
"Invalid task specification"
"Network partition detected"
"Consensus algorithm failure"
"Authentication token expired"
)
for error in "${errors[@]}"; do
log_test "ERROR_SIM: $error"
sleep 2
done
}
# Main test execution
main() {
log_test "TEST_START: Intensive coordination test beginning"
echo "🎯 Phase 1: Rapid Task Announcements (30 seconds)"
simulate_task_burst "Cross-Repository API Integration" 8 &
sleep 15
simulate_task_burst "Security-First Development" 6 &
echo ""
echo "🤖 Phase 2: Agent Coordination Chatter (45 seconds)"
simulate_agent_chatter 45 &
echo ""
echo "🔄 Phase 3: Multiple Coordination Sessions (60 seconds)"
for i in {1..5}; do
simulate_coordination_session &
sleep 12
done
echo ""
echo "❌ Phase 4: Error Scenario Simulation (20 seconds)"
simulate_error_scenarios &
echo ""
echo "⚡ Phase 5: High-Intensity Burst (30 seconds)"
# Rapid-fire everything
for i in {1..3}; do
simulate_coordination_session &
sleep 3
simulate_task_burst "Parallel-Development-Conflict" 4 &
sleep 7
done
# Wait for background processes
wait
log_test "TEST_COMPLETE: Intensive coordination test finished"
echo ""
echo "📊 TEST SUMMARY"
echo "==============="
echo "Total Events: $(grep -c '\[.*\]' "$TEST_LOG")"
echo "Task Announcements: $(grep -c 'TASK_ANNOUNCE' "$TEST_LOG")"
echo "Agent Responses: $(grep -c 'AGENT_RESPONSE' "$TEST_LOG")"
echo "Coordination Sessions: $(grep -c 'SESSION_START' "$TEST_LOG")"
echo "Simulated Errors: $(grep -c 'ERROR_SIM' "$TEST_LOG")"
echo ""
echo "🎯 Watch your dashboard for all this activity!"
echo "📝 Detailed log: $TEST_LOG"
}
# Trap Ctrl+C
trap 'echo ""; echo "🛑 Test interrupted"; exit 0' INT
# Run the intensive test
main


@@ -1,34 +0,0 @@
#!/bin/bash
# Script to temporarily run bzzz with mock Hive API for testing
# This lets real bzzz agents do actual coordination with fake data
echo "🔧 Configuring Bzzz to use Mock Hive API"
echo "========================================"
# Stop the current bzzz service
echo "Stopping current bzzz service..."
sudo systemctl stop bzzz.service
# Wait a moment
sleep 2
# Set environment variables for mock API
export BZZZ_HIVE_API_URL="http://localhost:5000"
export BZZZ_LOG_LEVEL="debug"
echo "Starting bzzz with mock Hive API..."
echo "Mock API URL: $BZZZ_HIVE_API_URL"
echo ""
echo "🎯 The real bzzz agents will now:"
echo " - Discover fake projects and tasks from mock API"
echo " - Do actual P2P coordination on real dependencies"
echo " - Perform real antennae meta-discussion"
echo " - Execute real coordination algorithms"
echo ""
echo "Watch your dashboard to see REAL coordination activity!"
echo ""
# Run bzzz directly with mock API configuration
cd /home/tony/AI/projects/Bzzz
/usr/local/bin/bzzz


@@ -1,200 +0,0 @@
#!/bin/bash
# Test script to monitor HMMM coordination activity
# This script monitors the existing bzzz service logs for coordination patterns
LOG_DIR="/tmp/bzzz_logs"
MONITOR_LOG="$LOG_DIR/hmmm_monitor_$(date +%Y%m%d_%H%M%S).log"
# Create log directory
mkdir -p "$LOG_DIR"
echo "🔬 Starting Bzzz HMMM Monitoring Test"
echo "========================================"
echo "Monitor Log: $MONITOR_LOG"
echo ""
# Function to log monitoring events
log_event() {
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local event_type="$1"
local details="$2"
echo "[$timestamp] $event_type: $details" | tee -a "$MONITOR_LOG"
}
# Function to analyze bzzz logs for coordination patterns
analyze_coordination_patterns() {
echo "📊 Analyzing coordination patterns in bzzz logs..."
# Count availability broadcasts (baseline activity)
local availability_count=$(journalctl -u bzzz.service --since "5 minutes ago" | grep "availability_broadcast" | wc -l)
log_event "BASELINE" "Availability broadcasts in last 5 minutes: $availability_count"
# Look for peer connections
local peer_connections=$(journalctl -u bzzz.service --since "5 minutes ago" | grep "Connected Peers" | tail -1)
if [[ -n "$peer_connections" ]]; then
log_event "P2P_STATUS" "$peer_connections"
fi
# Look for task-related activity
local task_activity=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "task\|github\|repository" | wc -l)
log_event "TASK_ACTIVITY" "Task-related log entries: $task_activity"
# Look for coordination messages (HMMM activity)
local coordination_msgs=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "hmmm\|coordination\|meta" | wc -l)
log_event "COORDINATION" "Coordination-related messages: $coordination_msgs"
# Check for error patterns
local errors=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "error\|failed" | wc -l)
if [[ $errors -gt 0 ]]; then
log_event "ERRORS" "Error messages detected: $errors"
fi
}
# Function to simulate coordination scenarios by watching for patterns
simulate_coordination_scenarios() {
echo "🎭 Setting up coordination scenario simulation..."
# Scenario 1: API Contract Coordination
log_event "SCENARIO_START" "API Contract Coordination - Multiple repos need shared API"
# Log simulated task announcements
log_event "TASK_ANNOUNCE" "bzzz#23 - Define coordination API contract (Priority: 1, Blocks: hive#15, distributed-ai-dev#8)"
log_event "TASK_ANNOUNCE" "hive#15 - Add WebSocket support (Priority: 2, Depends: bzzz#23)"
log_event "TASK_ANNOUNCE" "distributed-ai-dev#8 - Bzzz integration (Priority: 3, Depends: bzzz#23, hive#16)"
sleep 2
# Log simulated agent responses
log_event "AGENT_RESPONSE" "Agent walnut-node: I can handle the API contract definition"
log_event "AGENT_RESPONSE" "Agent acacia-node: WebSocket implementation ready after API contract"
log_event "AGENT_RESPONSE" "Agent ironwood-node: Integration work depends on both API and auth"
sleep 2
# Log coordination decision
log_event "COORDINATION" "Meta-coordinator analysis: API contract blocks 2 other tasks"
log_event "COORDINATION" "Consensus reached: Execute bzzz#23 -> hive#15 -> distributed-ai-dev#8"
log_event "SCENARIO_COMPLETE" "API Contract Coordination scenario completed"
echo ""
}
# Function to monitor real bzzz service activity
monitor_live_activity() {
local duration=$1
echo "🔍 Monitoring live bzzz activity for $duration seconds..."
# Monitor bzzz logs in real time
timeout "$duration" journalctl -u bzzz.service -f --since "1 minute ago" | while read -r line; do
local timestamp=$(date '+%H:%M:%S')
# Check for different types of activity
if [[ "$line" =~ "availability_broadcast" ]]; then
log_event "AVAILABILITY" "Agent availability update detected"
elif [[ "$line" =~ "Connected Peers" ]]; then
local peer_count=$(echo "$line" | grep -o "Connected Peers: [0-9]*" | grep -o "[0-9]*")
log_event "P2P_UPDATE" "Peer count: $peer_count"
elif [[ "$line" =~ "Failed to get active repositories" ]]; then
log_event "API_ERROR" "Hive API connection issue (expected due to overlay network)"
elif [[ "$line" =~ "bzzz" ]] && [[ "$line" =~ "task" ]]; then
log_event "TASK_DETECTED" "Task-related activity in logs"
fi
done
}
# Function to generate test metrics
generate_test_metrics() {
echo "📈 Generating test coordination metrics..."
local start_time=$(date +%s)
local total_sessions=3
local completed_sessions=2
local escalated_sessions=0
local failed_sessions=1
local total_messages=12
local task_announcements=6
local dependencies_detected=3
# Create metrics JSON
cat > "$LOG_DIR/test_metrics.json" << EOF
{
"test_run_start": "$start_time",
"monitoring_duration": "300s",
"total_coordination_sessions": $total_sessions,
"completed_sessions": $completed_sessions,
"escalated_sessions": $escalated_sessions,
"failed_sessions": $failed_sessions,
"total_messages": $total_messages,
"task_announcements": $task_announcements,
"dependencies_detected": $dependencies_detected,
"agent_participations": {
"walnut-node": 4,
"acacia-node": 3,
"ironwood-node": 5
},
"scenarios_tested": [
"API Contract Coordination",
"Security-First Development",
"Parallel Development Conflict"
],
"success_rate": 66.7,
"notes": "Test run with simulated coordination scenarios"
}
EOF
log_event "METRICS" "Test metrics saved to $LOG_DIR/test_metrics.json"
}
# Main test execution
main() {
echo "Starting HMMM coordination monitoring test..."
echo ""
# Initial analysis of current activity
analyze_coordination_patterns
echo ""
# Run simulated coordination scenarios
simulate_coordination_scenarios
echo ""
# Monitor live activity for 2 minutes
monitor_live_activity 120 &
MONITOR_PID=$!
# Wait for monitoring to complete
sleep 3
# Run additional analysis
analyze_coordination_patterns
echo ""
# Generate test metrics
generate_test_metrics
echo ""
# Wait for live monitoring to finish
wait $MONITOR_PID 2>/dev/null || true
echo "📊 HMMM MONITORING TEST COMPLETE"
echo "===================================="
echo "Results saved to: $LOG_DIR/"
echo "Monitor Log: $MONITOR_LOG"
echo "Metrics: $LOG_DIR/test_metrics.json"
echo ""
echo "Summary of detected activity:"
grep -c "AVAILABILITY" "$MONITOR_LOG" | xargs echo "- Availability updates:"
grep -c "COORDINATION" "$MONITOR_LOG" | xargs echo "- Coordination events:"
grep -c "TASK_" "$MONITOR_LOG" | xargs echo "- Task-related events:"
grep -c "AGENT_RESPONSE" "$MONITOR_LOG" | xargs echo "- Agent responses:"
echo ""
echo "To view detailed logs: tail -f $MONITOR_LOG"
}
# Trap Ctrl+C to clean up
trap 'echo ""; echo "🛑 Monitoring interrupted"; exit 0' INT
# Run the test
main


@@ -1,118 +0,0 @@
#!/bin/bash
# Script to trigger coordination activity with mock API data
# This simulates task updates to cause real bzzz coordination
MOCK_API="http://localhost:5000"
echo "🎯 Triggering Mock Coordination Test"
echo "===================================="
echo "This will cause real bzzz agents to coordinate on fake tasks"
echo ""
# Function to simulate task claim attempts
simulate_task_claims() {
echo "📋 Simulating task claim attempts..."
# Try to claim tasks from different projects
for project_id in 1 2 3; do
for task_num in 15 23 8; do
echo "🎯 Agent attempting to claim project $project_id task $task_num"
curl -s -X POST "$MOCK_API/api/bzzz/projects/$project_id/claim" \
-H "Content-Type: application/json" \
-d "{\"task_number\": $task_num, \"agent_id\": \"test-agent-$project_id\"}" | jq .
sleep 2
done
done
}
# Function to simulate task status updates
simulate_task_updates() {
echo ""
echo "📊 Simulating task status updates..."
# Update task statuses to trigger coordination
curl -s -X PUT "$MOCK_API/api/bzzz/projects/1/status" \
-H "Content-Type: application/json" \
-d '{"task_number": 15, "status": "in_progress", "metadata": {"progress": 25}}' | jq .
sleep 3
curl -s -X PUT "$MOCK_API/api/bzzz/projects/2/status" \
-H "Content-Type: application/json" \
-d '{"task_number": 23, "status": "completed", "metadata": {"completion_time": "2025-01-14T12:00:00Z"}}' | jq .
sleep 3
curl -s -X PUT "$MOCK_API/api/bzzz/projects/3/status" \
-H "Content-Type: application/json" \
-d '{"task_number": 8, "status": "escalated", "metadata": {"reason": "dependency_conflict"}}' | jq .
}
# Function to add urgent tasks
add_urgent_tasks() {
echo ""
echo "🚨 Adding urgent tasks to trigger immediate coordination..."
# The mock API has background task generation, but we can trigger it manually
# by checking repositories multiple times rapidly
for i in {1..5}; do
echo "🔄 Repository refresh $i/5"
curl -s "$MOCK_API/api/bzzz/active-repos" > /dev/null
curl -s "$MOCK_API/api/bzzz/projects/1/tasks" > /dev/null
curl -s "$MOCK_API/api/bzzz/projects/2/tasks" > /dev/null
sleep 1
done
}
# Function to check bzzz response
check_bzzz_activity() {
echo ""
echo "📡 Checking recent bzzz activity..."
# Check last 30 seconds of bzzz logs for API calls
echo "Recent bzzz log entries:"
journalctl -u bzzz.service --since "30 seconds ago" -n 10 | grep -E "(API|repository|task|coordination)" || echo "No recent coordination activity"
}
# Main execution
main() {
echo "🔍 Testing mock API connectivity..."
curl -s "$MOCK_API/health" | jq .
echo ""
echo "📋 Current active repositories:"
curl -s "$MOCK_API/api/bzzz/active-repos" | jq .repositories[].name
echo ""
echo "🎯 Phase 1: Task Claims"
simulate_task_claims
echo ""
echo "📊 Phase 2: Status Updates"
simulate_task_updates
echo ""
echo "🚨 Phase 3: Urgent Tasks"
add_urgent_tasks
echo ""
echo "📡 Phase 4: Check Results"
check_bzzz_activity
echo ""
echo "✅ Mock coordination test complete!"
echo ""
echo "🎯 Watch your monitoring dashboard for:"
echo " - Task claim attempts"
echo " - Status update processing"
echo " - Coordination session activity"
echo " - Agent availability changes"
echo ""
echo "📝 Check mock API server output for request logs"
}
# Run the test
main

Binary file not shown.


@@ -1,47 +0,0 @@
#!/bin/bash
# Bzzz Chat API Test Runner
# This script builds and runs the chat API integration server
set -e
echo "🔧 Building Bzzz Chat API..."
# Go to Bzzz project root
cd /home/tony/AI/projects/Bzzz
# Add gorilla/mux dependency if not present
if ! grep -q "github.com/gorilla/mux" go.mod; then
echo "📦 Adding gorilla/mux dependency..."
go get github.com/gorilla/mux
fi
# Build the chat API handler
echo "🏗️ Building chat API handler..."
go build -o test/bzzz-chat-api test/chat_api_handler.go
# Check if build succeeded
if [ ! -f "test/bzzz-chat-api" ]; then
echo "❌ Build failed!"
exit 1
fi
echo "✅ Build successful!"
# Create data directory for logs
mkdir -p ./data/chat-api-logs
# Start the server
echo "🚀 Starting Bzzz Chat API server on port 8080..."
echo "📡 API Endpoints:"
echo " POST http://localhost:8080/bzzz/api/execute-task"
echo " GET http://localhost:8080/bzzz/api/health"
echo ""
echo "🔗 For N8N integration, use:"
echo " http://localhost:8080/bzzz/api/execute-task"
echo ""
echo "Press Ctrl+C to stop the server"
echo ""
# Run the server
./test/bzzz-chat-api 8080


@@ -1,197 +0,0 @@
#!/usr/bin/env python3
"""
Test client for Bzzz Chat API integration
This script simulates the N8N workflow calling the Bzzz API
"""
import json
import requests
import time
import sys
# API endpoint
API_URL = "http://localhost:8080/bzzz/api"
def test_health_check():
"""Test the health check endpoint"""
print("🔍 Testing health check endpoint...")
try:
response = requests.get(f"{API_URL}/health", timeout=5)
if response.status_code == 200:
print("✅ Health check passed:", response.json())
return True
else:
print(f"❌ Health check failed: {response.status_code}")
return False
except Exception as e:
print(f"❌ Health check error: {e}")
return False
def create_test_task():
"""Create a simple test task"""
return {
"method": "execute_task_in_sandbox",
"task": {
"task_id": 9999,
"number": 9999,
"title": "Chat API Test Task",
"description": "Create a simple Python hello world function and save it to hello.py",
"repository": {
"owner": "test",
"repository": "chat-test"
},
"git_url": "", # No git repo for simple test
"task_type": "development",
"priority": "medium",
"requirements": [],
"deliverables": ["hello.py with hello_world() function"],
"context": "This is a test task from the chat API integration"
},
"execution_options": {
"sandbox_image": "registry.home.deepblack.cloud/tony/bzzz-sandbox:latest",
"timeout": "300s",
"max_iterations": 5,
"return_full_log": True,
"cleanup_on_complete": True
},
"callback": {
"webhook_url": "http://localhost:8080/test-callback",
"include_artifacts": True
}
}
def test_task_execution():
"""Test task execution endpoint"""
print("\n🚀 Testing task execution...")
task_request = create_test_task()
try:
print("📤 Sending task request...")
print(f"Task: {task_request['task']['description']}")
response = requests.post(
f"{API_URL}/execute-task",
json=task_request,
headers={"Content-Type": "application/json"},
timeout=30
)
if response.status_code == 200:
result = response.json()
print("✅ Task accepted:", result)
print(f" Task ID: {result.get('task_id')}")
print(f" Status: {result.get('status')}")
print(f" Message: {result.get('message')}")
return True
else:
print(f"❌ Task execution failed: {response.status_code}")
print(f" Response: {response.text}")
return False
except Exception as e:
print(f"❌ Task execution error: {e}")
return False
def create_complex_task():
"""Create a more complex test task"""
return {
"method": "execute_task_in_sandbox",
"task": {
"task_id": 9998,
"number": 9998,
"title": "Complex Chat API Test",
"description": "Create a Python script that implements a simple calculator with add, subtract, multiply, and divide functions. Include basic error handling and save to calculator.py",
"repository": {
"owner": "test",
"repository": "calculator-test"
},
"git_url": "",
"task_type": "development",
"priority": "high",
"requirements": [
"Python functions for basic math operations",
"Error handling for division by zero",
"Simple command-line interface"
],
"deliverables": ["calculator.py with Calculator class"],
"context": "Complex test task to validate full execution pipeline"
},
"execution_options": {
"sandbox_image": "registry.home.deepblack.cloud/tony/bzzz-sandbox:latest",
"timeout": "600s",
"max_iterations": 10,
"return_full_log": True,
"cleanup_on_complete": False # Keep sandbox for inspection
},
"callback": {
"webhook_url": "http://localhost:8080/test-callback",
"include_artifacts": True
}
}
def test_complex_execution():
"""Test complex task execution"""
print("\n🧠 Testing complex task execution...")
task_request = create_complex_task()
try:
print("📤 Sending complex task request...")
print(f"Task: {task_request['task']['description']}")
response = requests.post(
f"{API_URL}/execute-task",
json=task_request,
headers={"Content-Type": "application/json"},
timeout=30
)
if response.status_code == 200:
result = response.json()
print("✅ Complex task accepted:", result)
return True
else:
print(f"❌ Complex task failed: {response.status_code}")
print(f" Response: {response.text}")
return False
except Exception as e:
print(f"❌ Complex task error: {e}")
return False
def main():
"""Run all tests"""
print("🧪 Bzzz Chat API Test Suite")
print("=" * 40)
# Test health check
if not test_health_check():
print("❌ Health check failed, is the server running?")
print(" Start with: ./test/run_chat_api.sh")
sys.exit(1)
# Test simple task execution
if not test_task_execution():
print("❌ Simple task execution failed")
sys.exit(1)
# Test complex task execution
if not test_complex_execution():
print("❌ Complex task execution failed")
sys.exit(1)
print("\n✅ All tests passed!")
print("\n📋 Next steps:")
print("1. Import the N8N workflow from chat-to-code-integration.json")
print("2. Configure webhook URLs to point to your N8N instance")
print("3. Test with actual chat interface")
print("4. Monitor execution logs in ./data/chat-api-logs/")
print("\n💬 Example chat messages to try:")
print(' "Create a simple hello world function in Python"')
print(' "Task: Build a REST API endpoint\\nRepo: https://github.com/myorg/api.git\\nLanguage: Python"')
print(' "Fix the memory leak in the session handler"')
if __name__ == "__main__":
main()