diff --git a/.gitignore b/.gitignore index 32deff8a..e36d030b 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,6 @@ -# Binaries +# Compiled binaries bzzz +bzzz-* *.exe *.exe~ *.dll @@ -11,10 +12,16 @@ bzzz # Output of the go coverage tool *.out +coverage.out # Go workspace file go.work +# Build artifacts +target/ +dist/ +build/ + # IDE files .vscode/ .idea/ @@ -28,9 +35,21 @@ go.work ehthumbs.db Thumbs.db -# Logs +# Logs and data *.log +logs/ +data/chat-api-logs/ # Temporary files *.tmp *.temp +*~ +*.bak + +# Development artifacts +archived/ +old-docs/ + +# Test artifacts +test/bzzz-* +test/*.sh diff --git a/archived/2025-07-17/DEVELOPMENT_PLAN.md b/archived/2025-07-17/DEVELOPMENT_PLAN.md deleted file mode 100644 index 0621409e..00000000 --- a/archived/2025-07-17/DEVELOPMENT_PLAN.md +++ /dev/null @@ -1,192 +0,0 @@ -# Project Bzzz: Decentralized Task Execution Network - Development Plan - -## 1. Overview & Vision - -This document outlines the development plan for **Project Bzzz**, a decentralized task execution network designed to enhance the existing **Hive Cluster**. - -The vision is to evolve from a centrally coordinated system to a resilient, peer-to-peer (P2P) mesh of autonomous agents. This architecture eliminates single points of failure, improves scalability, and allows for dynamic, collaborative task resolution. Bzzz will complement the existing N8N orchestration layer, acting as a powerful, self-organizing execution fabric. - ---- - -## 2. Core Architecture - -The system is built on three key pillars: decentralized networking, GitHub-native task management, and verifiable, distributed logging. - -| Component | Technology | Purpose | -| :--- | :--- | :--- | -| **Networking** | **libp2p** | For peer discovery (mDNS, DHT), identity, and secure P2P communication. | -| **Task Management** | **GitHub Issues** | The single source of truth for task definition, allocation, and tracking. | -| **Messaging** | **libp2p Pub/Sub** | For broadcasting capabilities and coordinating collaborative help requests. | -| **Logging** | **Hypercore Protocol** | For creating a tamper-proof, decentralized, and replicable logging system for debugging. | - ---- - -## 3. Architectural Refinements & Key Features - -Based on our analysis, the following refinements will be adopted: - -### 3.1. Task Allocation via GitHub Assignment - -To prevent race conditions and simplify logic, we will use GitHub's native issue assignment mechanism as an atomic lock. The `task_claim` pub/sub topic is no longer needed. - -**Workflow:** -1. A `bzzz-agent` discovers a new, *unassigned* issue in the target repository. -2. The agent immediately attempts to **assign itself** to the issue via the GitHub API. -3. **Success:** If the assignment succeeds, the agent has exclusive ownership of the task and begins execution. -4. **Failure:** If the assignment fails (because another agent was faster), the agent logs the contention and looks for another task. - -### 3.2. Collaborative Task Execution with Hop Limit - -The `task_help_request` feature enables agents to collaborate on complex tasks. To prevent infinite request loops and network flooding, we will implement a **hop limit**. - -- **Hop Limit:** A `task_help_request` will be discarded after being forwarded **3 times**. -- If a task cannot be completed after 3 help requests, it will be marked as "failed," and a comment will be added to the GitHub issue for manual review. - -### 3.3. 
Decentralized Logging with Hypercore - -To solve the challenge of debugging a distributed system, each agent will manage its own secure, append-only log stream using the Hypercore Protocol. - -- **Log Creation:** Each agent generates a `hypercore` and broadcasts its public key via the `capabilities` message. -- **Log Replication:** Any other agent (or a dedicated monitoring node) can use this key to replicate the log stream in real-time or after the fact. -- **Benefits:** This creates a verifiable and resilient audit trail for every agent's actions, which is invaluable for debugging without relying on a centralized logging server. - ---- - -## 4. Integration with the Hive Ecosystem - -Bzzz is designed to integrate seamlessly with the existing cluster infrastructure. - -### 4.1. Deployment Strategy: Docker + Host Networking (PREFERRED APPROACH) - -Based on comprehensive analysis of the existing Hive infrastructure and Bzzz's P2P requirements, we will use a **hybrid deployment approach** that combines Docker containerization with host networking: - -```yaml -# Docker Compose configuration for bzzz-agent -services: - bzzz-agent: - image: registry.home.deepblack.cloud/tony/bzzz-agent:latest - network_mode: "host" # Direct host network access for P2P - volumes: - - ./data:/app/data - - /var/run/docker.sock:/var/run/docker.sock # Docker API access - environment: - - NODE_ID=${HOSTNAME} - - GITHUB_TOKEN_FILE=/run/secrets/github-token - secrets: - - github-token - restart: unless-stopped - deploy: - placement: - constraints: - - node.role == worker # Deploy on all worker nodes -``` - -**Rationale for Docker + Host Networking:** -- ✅ **P2P Networking Advantages**: Direct access to host networking enables efficient mDNS discovery, NAT traversal, and lower latency communication -- ✅ **Infrastructure Consistency**: Maintains Docker Swarm deployment patterns and existing operational procedures -- ✅ **Resource Efficiency**: Eliminates Docker overlay network overhead for P2P communication -- ✅ **Best of Both Worlds**: Container portability and management with native network performance - -### 4.2. Cluster Integration Points - -- **Phased Rollout:** Deploy `bzzz-agent` containers across all cluster nodes (ACACIA, WALNUT, IRONWOOD, ROSEWOOD, FORSTEINET) using Docker Swarm -- **Network Architecture**: Leverages existing 192.168.1.0/24 LAN for P2P mesh communication -- **Resource Coordination**: Agents discover and utilize existing Ollama endpoints (port 11434) and CLI tools -- **Storage Integration**: Uses NFS shares (/rust/containers/) for shared configuration and Hypercore log storage - -### 4.3. Integration with Existing Services - -- **N8N as a Task Initiator:** High-level workflows in N8N will now terminate by creating a detailed GitHub Issue. This action triggers the Bzzz mesh, which handles the execution and reports back by creating a Pull Request. -- **Hive Coexistence**: Bzzz will run alongside existing Hive services on different ports, allowing gradual migration of workloads -- **The "Mesh Visualizer":** A dedicated monitoring dashboard will be created. It will: - 1. Subscribe to the `capabilities` pub/sub topic to visualize the live network topology. - 2. Replicate and display the Hypercore log streams from all active agents. - 3. Integrate with existing Grafana dashboards for unified monitoring - ---- - -## 5. Security Strategy - -- **GitHub Token Management:** Agents will use short-lived, fine-grained Personal Access Tokens. 
These tokens will be stored securely in **HashiCorp Vault** or a similar secrets management tool, and retrieved by the agent at runtime. -- **Network Security:** All peer-to-peer communication is automatically **encrypted end-to-end** by `libp2p`. - ---- - -## 6. Recommended Tech Stack - -| Category | Recommendation | Notes | -| :--- | :--- | :--- | -| **Language** | **Go** or **Rust** | Strongly recommended for performance, concurrency, and system-level programming. | -| **Networking** | `go-libp2p` / `rust-libp2p` | The official and most mature implementations. | -| **Logging** | `hypercore-go` / `hypercore-rs` | Libraries for implementing the Hypercore Protocol. | -| **GitHub API** | `go-github` / `octokit.rs` | Official and community-maintained clients for interacting with GitHub. | - ---- - -## 7. Development Milestones - -This 8-week plan incorporates the refined architecture. - -| Week | Deliverables | Key Features | -| :--- | :--- | :--- | -| **1** | **P2P Foundation & Logging** | Setup libp2p peer discovery and establish a **Hypercore log stream** for each agent. | -| **2** | **Capability Broadcasting** | Implement `capability_detector` and broadcast agent status via pub/sub. | -| **3** | **GitHub Task Claiming** | Ingest issues from GitHub and implement the **assignment-based task claiming** logic. | -| **4** | **Core Task Execution** | Integrate local CLIs (Ollama, etc.) to perform basic tasks based on issue content. | -| **5** | **GitHub Result Workflow** | Implement logic to create Pull Requests or follow-up issues upon task completion. | -| **6** | **Collaborative Help** | Implement the `task_help_request` and `task_help_response` flow with the **hop limit**. | -| **7** | **Monitoring & Visualization** | Build the first version of the **Mesh Visualizer** dashboard to display agent status and logs. | -| **8** | **Deployment & Testing** | Package the agent as a Docker container with host networking, write Docker Swarm deployment guide, and conduct end-to-end testing across cluster nodes. | - ---- - -## 8. Potential Risks & Mitigation - -- **Network Partitions ("Split-Brain"):** - - **Risk:** A network partition could lead to two separate meshes trying to work on the same task. - - **Mitigation:** Using GitHub's issue assignment as the atomic lock effectively solves this. The first agent to successfully claim the issue wins, regardless of network state. -- **Dependency on GitHub:** - - **Risk:** The system's ability to acquire new tasks is dependent on the availability of the GitHub API. - - **Mitigation:** This is an accepted trade-off for gaining a robust, native task management platform. Agents can be designed to continue working on already-claimed tasks during a GitHub outage. -- **Debugging Complexity:** - - **Risk:** Debugging distributed systems remains challenging. - - **Mitigation:** The Hypercore-based logging system provides a powerful and verifiable audit trail, which is a significant step towards mitigating this complexity. The Mesh Visualizer will also be a critical tool. -- **Docker Host Networking Security:** - - **Risk:** Host networking mode exposes containers directly to the host network, reducing isolation. - - **Mitigation:** - - Implement strict firewall rules on each node - - Use libp2p's built-in encryption for all P2P communication - - Run containers with restricted user privileges (non-root) - - Regular security audits of exposed ports and services - ---- - -## 9. Migration Strategy from Hive - -### 9.1. Gradual Transition Plan - -1. 
**Phase 1: Parallel Deployment** (Weeks 1-2) - - Deploy Bzzz agents alongside existing Hive infrastructure - - Use different port ranges to avoid conflicts - - Monitor resource usage and network performance - -2. **Phase 2: Simple Task Migration** (Weeks 3-4) - - Route basic code generation tasks through GitHub issues → Bzzz - - Keep complex multi-agent workflows in existing Hive + n8n - - Compare performance metrics between systems - -3. **Phase 3: Workflow Integration** (Weeks 5-6) - - Modify n8n workflows to create GitHub issues as final step - - Implement Bzzz → Hive result reporting for hybrid workflows - - Test end-to-end task lifecycle - -4. **Phase 4: Full Migration** (Weeks 7-8) - - Migrate majority of workloads to Bzzz mesh - - Retain Hive for monitoring and dashboard functionality - - Plan eventual deprecation of centralized coordinator - -### 9.2. Compatibility Layer - -- **API Bridge**: Maintain existing Hive API endpoints that proxy to Bzzz mesh -- **Data Migration**: Export task history and agent configurations from PostgreSQL -- **Monitoring Continuity**: Integrate Bzzz metrics into existing Grafana dashboards diff --git a/archived/2025-07-17/PROGRESS_REPORT.md b/archived/2025-07-17/PROGRESS_REPORT.md deleted file mode 100644 index 2708372b..00000000 --- a/archived/2025-07-17/PROGRESS_REPORT.md +++ /dev/null @@ -1,138 +0,0 @@ -# Bzzz P2P Coordination System - Progress Report - -## Overview -This report documents the implementation and testing progress of the Bzzz P2P mesh coordination system with meta-thinking capabilities (Antennae framework). - -## Major Accomplishments - -### 1. High-Priority Feature Implementation ✅ -- **Fixed stub function implementations** in `github/integration.go` - - Implemented proper task filtering based on agent capabilities - - Added task announcement logic for P2P coordination - - Enhanced capability-based task matching with keyword analysis - -- **Completed Hive API client integration** - - Extended PostgreSQL database schema for bzzz integration - - Updated ProjectService to use database instead of filesystem scanning - - Implemented secure Docker secrets for GitHub token access - -- **Removed hardcoded repository configuration** - - Dynamic repository discovery via Hive API - - Database-driven project management - -### 2. Security Enhancements ✅ -- **Docker Secrets Implementation** - - Replaced filesystem-based GitHub token access with Docker secrets - - Updated docker-compose.swarm.yml with proper secrets configuration - - Enhanced security posture for credential management - -### 3. Database Integration ✅ -- **Extended Hive Database Schema** - - Added bzzz-specific fields to projects table - - Inserted Hive repository as test project with 9 bzzz-task labeled issues - - Successful GitHub API integration showing real issue discovery - -### 4. 
Independent Testing Infrastructure ✅ -- **Mock Hive API Server** (`mock-hive-server.py`) - - Provides fake projects and tasks for real bzzz coordination - - Comprehensive task simulation with realistic coordination scenarios - - Background task generation for dynamic testing - - Enhanced with work capture endpoints: - - `/api/bzzz/projects//submit-work` - Capture actual agent work/code - - `/api/bzzz/projects//create-pr` - Capture pull request content - - `/api/bzzz/projects//coordination-discussion` - Log coordination discussions - - `/api/bzzz/projects//log-prompt` - Log agent prompts and model usage - -- **Real-Time Monitoring Dashboard** (`cmd/bzzz-monitor.py`) - - btop/nvtop-style console interface for coordination monitoring - - Real coordination channel metrics and message rate tracking - - Compact timestamp display and efficient space utilization - - Live agent activity and P2P network status monitoring - -### 5. P2P Network Verification ✅ -- **Confirmed Multi-Node Operation** - - WALNUT, ACACIA, IRONWOOD nodes running as systemd services - - 2 connected peers with regular availability broadcasts - - P2P mesh discovery and communication functioning correctly - -### 6. Cross-Repository Coordination Framework ✅ -- **Antennae Meta-Discussion System** - - Advanced cross-repository coordination capabilities - - Dependency detection and conflict resolution - - AI-powered coordination plan generation - - Consensus detection algorithms - -## Current System Status - -### Working Components -1. ✅ P2P mesh networking (libp2p + mDNS) -2. ✅ Agent availability broadcasting -3. ✅ Database-driven repository discovery -4. ✅ Secure credential management -5. ✅ Real-time monitoring infrastructure -6. ✅ Mock API testing framework -7. ✅ Work capture endpoints (ready for use) - -### Identified Issues -1. ❌ **GitHub Repository Verification Failures** - - Mock repositories (e.g., `mock-org/hive`) return 404 errors - - Prevents agents from proceeding with task discovery - - Need local Git hosting solution - -2. ❌ **Task Claim Logic Incomplete** - - Agents broadcast availability but don't actively claim tasks - - Missing integration between P2P discovery and task claiming - - Need to enhance bzzz binary task claim workflow - -3. 
❌ **Docker Overlay Network Issues** - - Some connectivity issues between services - - May impact agent coordination in containerized environments - -## File Locations and Key Components - -### Core Implementation Files -- `/home/tony/chorus/project-queues/active/BZZZ/github/integration.go` - Enhanced task filtering and P2P coordination -- `/home/tony/chorus/project-queues/inactive/hive/backend/app/services/project_service.py` - Database-driven project service -- `/home/tony/chorus/project-queues/inactive/hive/docker-compose.swarm.yml` - Docker secrets configuration - -### Testing and Monitoring -- `/home/tony/chorus/project-queues/active/BZZZ/mock-hive-server.py` - Mock API with work capture -- `/home/tony/chorus/project-queues/active/BZZZ/cmd/bzzz-monitor.py` - Real-time coordination dashboard -- `/home/tony/chorus/project-queues/active/BZZZ/scripts/trigger_mock_coordination.sh` - Coordination test script - -### Configuration -- `/etc/systemd/system/bzzz.service.d/mock-api.conf` - Systemd override for mock API testing -- `/tmp/bzzz_agent_work/` - Directory for captured agent work (when functioning) -- `/tmp/bzzz_pull_requests/` - Directory for captured pull requests -- `/tmp/bzzz_agent_prompts/` - Directory for captured agent prompts and model usage - -## Technical Achievements - -### Database Schema Extensions -```sql --- Extended projects table with bzzz integration fields -ALTER TABLE projects ADD COLUMN bzzz_enabled BOOLEAN DEFAULT false; -ALTER TABLE projects ADD COLUMN ready_to_claim BOOLEAN DEFAULT false; -ALTER TABLE projects ADD COLUMN private_repo BOOLEAN DEFAULT false; -ALTER TABLE projects ADD COLUMN github_token_required BOOLEAN DEFAULT false; -``` - -### Docker Secrets Integration -```yaml -secrets: - - github_token -environment: - - GITHUB_TOKEN_FILE=/run/secrets/github_token -``` - -### P2P Network Statistics -- **Active Nodes**: 3 (WALNUT, ACACIA, IRONWOOD) -- **Connected Peers**: 2 per node -- **Network Protocol**: libp2p with mDNS discovery -- **Message Broadcasting**: Availability, capability, coordination - -## Next Steps Required -See PROJECT_TODOS.md for comprehensive task list. - -## Summary -The Bzzz P2P coordination system has a solid foundation with working P2P networking, database integration, secure credential management, and comprehensive testing infrastructure. The main blockers are the need for a local Git hosting solution and completion of the task claim logic in the bzzz binary. \ No newline at end of file diff --git a/archived/2025-07-17/PROJECT_PLAN.md b/archived/2025-07-17/PROJECT_PLAN.md deleted file mode 100644 index ed2162c5..00000000 --- a/archived/2025-07-17/PROJECT_PLAN.md +++ /dev/null @@ -1,224 +0,0 @@ -🐝 Project: Bzzz — P2P Task Coordination System - -## 🔧 Architecture Overview (libp2p + pubsub + JSON) - -This system will compliment and partially replace elements of the Hive Software System. This is intended to be a replacement for the multitude of MCP, and API calls to the ollama and gemini-cli agents over port 11434 etc. By replacing the master/slave paradigm with a mesh network we allow each node to trigger workflows or respond to calls for work as availability dictates rather than being stuck in endless timeouts awaiting responses. We also eliminate the central coordinator as a single point of failure. - -### 📂 Components - -#### 1. 
**Peer Node** - -Each machine runs a P2P agent that: - -- Connects to other peers via libp2p -- Subscribes to pubsub topics -- Periodically broadcasts status/capabilities -- Receives and executes tasks -- Publishes task results as GitHub pull requests or issues -- Can request assistance from other peers -- Monitors a GitHub repository for new issues (task source) - -Each node uses a dedicated GitHub account with: -- A personal access token (fine-scoped to repo/PRs) -- A configured `.gitconfig` for commit identity - -#### 2. **libp2p Network** - -- All peers discover each other using mDNS, Bootstrap peers, or DHT -- Peer identity is cryptographic (libp2p peer ID) -- Communication is encrypted end-to-end - -#### 3. **GitHub Integration** - -- Tasks are sourced from GitHub Issues in a designated repository -- Nodes will claim and respond to tasks by: - - Forking the repository (once) - - Creating a working branch - - Making changes to files as instructed by task input - - Committing changes using their GitHub identity - - Creating a pull request or additional GitHub issues - - Publishing final result as a PR, issue(s), or failure report - -#### 4. **PubSub Topics** - -| Topic | Direction | Purpose | -|------------------|------------------|---------------------------------------------| -| `capabilities` | Peer → All Peers | Broadcast available models, status | -| `task_broadcast` | Peer → All Peers | Publish a GitHub issue as task | -| `task_claim` | Peer → All Peers | Claim responsibility for a task | -| `task_result` | Peer → All Peers | Share PR, issue, or failure result | -| `presence_ping` | Peer → All Peers | Lightweight presence signal | -| `task_help_request` | Peer → All Peers | Request assistance for a task | -| `task_help_response`| Peer → All Peers | Offer help or handle sub-task | - -### 📊 Data Flow Diagram -``` -+------------------+ libp2p +------------------+ -| Peer A |<------------------->| Peer B | -| |<------------------->| | -| - Publishes: | | - Publishes: | -| capabilities | | task_result | -| task_broadcast | | capabilities | -| help_request | | help_response | -| - Subscribes to: | | - Subscribes to: | -| task_result | | task_broadcast | -| help_request | | help_request | -+------------------+ +------------------+ - ^ ^ - | | - | | - +----------------------+-----------------+ - | - v - +------------------+ - | Peer C | - +------------------+ -``` - -### 📂 Sample JSON Messages - -#### `capabilities` -```json -{ - "type": "capabilities", - "node_id": "pi-node-1", - "cpu": 43.5, - "gpu": 2.3, - "models": ["llama3", "mistral"], - "installed": ["ollama", "gemini-cli"], - "status": "idle", - "timestamp": "2025-07-12T01:23:45Z" -} -``` - -#### `task_broadcast` -```json -{ - "type": "task", - "task_id": "#42", - "repo": "example-org/task-repo", - "issue_url": "https://github.com/example-org/task-repo/issues/42", - "model": "ollama", - "input": "Add unit tests to utils module", - "params": {"branch_prefix": "task-42-"}, - "timestamp": "2025-07-12T02:00:00Z" -} -``` - -#### `task_claim` -```json -{ - "type": "task_claim", - "task_id": "#42", - "node_id": "pi-node-2", - "timestamp": "2025-07-12T02:00:03Z" -} -``` - -#### `task_result` -```json -{ - "type": "task_result", - "task_id": "#42", - "node_id": "pi-node-2", - "result_type": "pull_request", - "result_url": "https://github.com/example-org/task-repo/pull/97", - "duration_ms": 15830, - "timestamp": "2025-07-12T02:10:05Z" -} -``` - -#### `task_help_request` -```json -{ - "type": "task_help_request", - "task_id": "#42", - 
"from_node": "pi-node-2", - "reason": "Long-running task or missing capability", - "requested_capability": "claude-cli", - "timestamp": "2025-07-12T02:05:00Z" -} -``` - -#### `task_help_response` -```json -{ - "type": "task_help_response", - "task_id": "#42", - "from_node": "pi-node-3", - "can_help": true, - "capabilities": ["claude-cli"], - "eta_seconds": 30, - "timestamp": "2025-07-12T02:05:02Z" -} -``` - ---- - -## 🚀 Development Brief - -### 🧱 Tech Stack - -- **Language**: Node.js (or Go/Rust) -- **Networking**: libp2p -- **Messaging**: pubsub with JSON -- **Task Execution**: Local CLI (ollama, gemini, claude) -- **System Monitoring**: `os-utils`, `psutil`, `nvidia-smi` -- **Runtime**: systemd services on Linux -- **GitHub Interaction**: `octokit` (Node), Git CLI - -### 🛠 Key Modules - -#### 1. `peer_agent.js` - -- Initializes libp2p node -- Joins pubsub topics -- Periodically publishes capabilities -- Listens for tasks, runs them, and reports PR/results -- Handles help requests and responses - -#### 2. `capability_detector.js` - -- Detects: - - CPU/GPU load - - Installed models (via `ollama list`) - - Installed CLIs (`which gemini`, `which claude`) - -#### 3. `task_executor.js` - -- Parses GitHub issue input -- Forks repo (if needed) -- Creates working branch, applies changes -- Commits changes using local Git identity -- Pushes branch and creates pull request or follow-up issues - -#### 4. `github_bot.js` - -- Authenticates GitHub API client -- Watches for new issues in repo -- Publishes them as `task_broadcast` -- Handles PR/issue creation and error handling - -#### 5. `state_manager.js` - -- Keeps internal view of network state -- Tracks peers’ capabilities, liveness -- Matches help requests to eligible peers - -### 📆 Milestones - -| Week | Deliverables | -| ---- | ------------------------------------------------------------ | -| 1 | libp2p peer bootstrapping + pubsub skeleton | -| 2 | JSON messaging spec + capability broadcasting | -| 3 | GitHub issue ingestion + task broadcast | -| 4 | CLI integration with Ollama/Gemini/Claude | -| 5 | GitHub PR/issue/failure workflows | -| 6 | Help request/response logic, delegation framework | -| 7 | systemd setup, CLI utilities, and resilience | -| 8 | End-to-end testing, GitHub org coordination, deployment guide| - ---- - -Would you like a prototype `task_help_request` matchmaking function or sample test matrix for capability validation? - diff --git a/archived/2025-07-17/README_MONITORING.md b/archived/2025-07-17/README_MONITORING.md deleted file mode 100644 index 2c024210..00000000 --- a/archived/2025-07-17/README_MONITORING.md +++ /dev/null @@ -1,165 +0,0 @@ -# Bzzz Antennae Monitoring Dashboard - -A real-time console monitoring dashboard for the Bzzz P2P coordination system, similar to btop/nvtop for system monitoring. 
- -## Features - -🔍 **Real-time P2P Status** -- Connected peer count with history graph -- Node ID and network status -- Hive API connectivity status - -🤖 **Agent Activity Monitoring** -- Live agent availability updates -- Agent status distribution (ready/working/busy) -- Recent activity tracking - -🎯 **Coordination Activity** -- Task announcements and completions -- Coordination session tracking -- Message flow statistics - -📊 **Visual Elements** -- ASCII graphs for historical data -- Color-coded status indicators -- Live activity log with timestamps - -## Usage - -### Basic Usage -```bash -# Run with default 1-second refresh rate -python3 cmd/bzzz-monitor.py - -# Custom refresh rate (2 seconds) -python3 cmd/bzzz-monitor.py --refresh-rate 2.0 - -# Disable colors for logging/screenshots -python3 cmd/bzzz-monitor.py --no-color -``` - -### Installation as System Command -```bash -# Copy to system bin -sudo cp cmd/bzzz-monitor.py /usr/local/bin/bzzz-monitor -sudo chmod +x /usr/local/bin/bzzz-monitor - -# Now run from anywhere -bzzz-monitor -``` - -## Dashboard Layout - -``` -┌─ Bzzz P2P Coordination Monitor ─┐ -│ Uptime: 0:02:15 │ Node: 12*SEE3To... │ -└───────────────────────────────────┘ - -P2P Network Status -━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ -Connected Peers: 2 -Hive API Status: Offline (Overlay Network Issues) - -Peer History (last 20 samples): -███▇▆▆▇████▇▆▇███▇▆▇ (1-3 peers) - -Agent Activity -━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ -Recent Updates (1m): 8 - Ready: 6 - Working: 2 - -Coordination Activity -━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ -Total Messages: 45 -Total Tasks: 12 -Active Sessions: 1 -Recent Tasks (5m): 8 - -Recent Activity -━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ -11:10:35 [AVAIL] Agent acacia-node... status: ready -11:10:33 [TASK] Task announcement: hive#15 - WebSocket support -11:10:30 [COORD] Meta-coordination session started -11:10:28 [AVAIL] Agent ironwood-node... status: working -11:10:25 [ERROR] Failed to get active repositories: API 404 - -━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ -Press Ctrl+C to exit | Refresh rate: 1.0s -``` - -## Monitoring Data Sources - -The dashboard pulls data from: - -1. **Systemd Service Logs**: `journalctl -u bzzz.service` -2. **P2P Network Status**: Extracted from bzzz log messages -3. **Agent Availability**: Parsed from availability_broadcast messages -4. **Task Activity**: Detected from task/repository-related log entries -5. 
**Error Tracking**: Monitors for failures and connection issues - -## Color Coding - -- 🟢 **Green**: Good status, active connections, ready agents -- 🟡 **Yellow**: Working status, moderate activity -- 🔴 **Red**: Errors, failed connections, busy agents -- 🔵 **Blue**: Information, neutral data -- 🟣 **Magenta**: Coordination-specific activity -- 🔷 **Cyan**: Network and P2P data - -## Real-time Updates - -The dashboard updates every 1-2 seconds by default and tracks: - -- **P2P Connections**: Shows immediate peer join/leave events -- **Agent Status**: Real-time availability broadcasts from all nodes -- **Task Flow**: Live task announcements and coordination activity -- **System Health**: Continuous monitoring of service status and errors - -## Performance - -- **Low Resource Usage**: Python-based with minimal CPU/memory impact -- **Efficient Parsing**: Only processes recent logs (last 30-50 lines) -- **Responsive UI**: Fast refresh rates without overwhelming the terminal -- **Historical Data**: Maintains rolling buffers for trend analysis - -## Troubleshooting - -### No Data Appearing -```bash -# Check if bzzz service is running -systemctl status bzzz.service - -# Verify log access permissions -journalctl -u bzzz.service --since "1 minute ago" -``` - -### High CPU Usage -```bash -# Reduce refresh rate -bzzz-monitor --refresh-rate 5.0 -``` - -### Color Issues -```bash -# Disable colors -bzzz-monitor --no-color - -# Check terminal color support -echo $TERM -``` - -## Integration - -The monitor works alongside: -- **Live Bzzz System**: Monitors real P2P mesh (WALNUT/ACACIA/IRONWOOD) -- **Test Suite**: Can monitor test coordination scenarios -- **Development**: Perfect for debugging antennae coordination logic - -## Future Enhancements - -- 📈 Export metrics to CSV/JSON -- 🔔 Alert system for critical events -- 📊 Web-based dashboard version -- 🎯 Coordination session drill-down -- 📱 Mobile-friendly output \ No newline at end of file diff --git a/archived/2025-07-17/TASK_BACKLOG.md b/archived/2025-07-17/TASK_BACKLOG.md deleted file mode 100644 index 561321e8..00000000 --- a/archived/2025-07-17/TASK_BACKLOG.md +++ /dev/null @@ -1,112 +0,0 @@ -# Bzzz + HMMM Development Task Backlog - -Based on the UNIFIED_DEVELOPMENT_PLAN.md, here are the development tasks ready for distribution to the Hive cluster: - -## Week 1-2: Foundation Tasks - -### Task 1: P2P Networking Foundation 🔧 -**Assigned to**: WALNUT (Advanced Coding - starcoder2:15b) -**Priority**: 5 (Critical) -**Objective**: Design and implement core P2P networking foundation for Project Bzzz using libp2p in Go - -**Requirements**: -- Use go-libp2p library for mesh networking -- Implement mDNS peer discovery for local network (192.168.1.0/24) -- Create secure encrypted P2P connections with peer identity -- Design pub/sub topics for both task coordination (Bzzz) and meta-discussion (HMMM) -- Prepare for Docker + host networking deployment -- Create modular Go code structure in `/home/tony/chorus/project-queues/active/BZZZ/` - -**Deliverables**: -- `main.go` - Entry point and peer initialization -- `p2p/` - P2P networking module with libp2p integration -- `discovery/` - mDNS peer discovery implementation -- `pubsub/` - Pub/sub messaging for capability broadcasting -- `go.mod` - Go module definition with dependencies -- `Dockerfile` - Container with host networking support - -### Task 2: Distributed Logging System 📊 -**Assigned to**: IRONWOOD (Reasoning Analysis - phi4:14b) -**Priority**: 4 (High) -**Dependencies**: Task 1 (P2P Foundation) -**Objective**: 
Architect and implement Hypercore-based distributed logging system - -**Requirements**: -- Design append-only log streams using Hypercore Protocol -- Implement public key broadcasting for log identity -- Create log replication capabilities between peers -- Store both execution logs (Bzzz) and discussion transcripts (HMMM) -- Ensure tamper-proof audit trails for debugging -- Integrate with P2P capability detection module - -**Deliverables**: -- `logging/` - Hypercore-based logging module -- `replication/` - Log replication and synchronization -- `audit/` - Tamper-proof audit trail verification -- Documentation on log schema and replication protocol - -### Task 3: GitHub Integration Module 📋 -**Assigned to**: ACACIA (Code Review/Docs - codellama) -**Priority**: 4 (High) -**Dependencies**: Task 1 (P2P Foundation) -**Objective**: Implement GitHub integration for atomic task claiming and collaborative workflows - -**Requirements**: -- Create atomic issue assignment mechanism (GitHub's native assignment) -- Implement repository forking, branch creation, and commit workflows -- Generate pull requests with discussion transcript links -- Handle task result posting and failure reporting -- Use GitHub API for all interactions -- Include comprehensive error handling and retry logic - -**Deliverables**: -- `github/` - GitHub API integration module -- `workflows/` - Repository and branch management -- `tasks/` - Task claiming and result posting -- Integration tests with GitHub API -- Documentation on GitHub workflow process - -## Week 3-4: Integration Tasks - -### Task 4: Meta-Discussion Implementation 💬 -**Assigned to**: IRONWOOD (Reasoning Analysis) -**Priority**: 3 (Medium) -**Dependencies**: Task 1, Task 2 -**Objective**: Implement HMMM meta-discussion layer for collaborative reasoning - -**Requirements**: -- Create structured messaging for agent collaboration -- Implement "propose plan" and "objection period" logic -- Add hop limits (3 hops) and participant caps for safety -- Design escalation paths to human intervention -- Integrate with Hypercore logging for discussion transcripts - -### Task 5: End-to-End Integration 🔄 -**Assigned to**: WALNUT (Advanced Coding) -**Priority**: 2 (Normal) -**Dependencies**: All previous tasks -**Objective**: Integrate all components and create working Bzzz+HMMM system - -**Requirements**: -- Combine P2P networking, logging, and GitHub integration -- Implement full task lifecycle with meta-discussion -- Create Docker Swarm deployment configuration -- Add monitoring and health checks -- Comprehensive testing across cluster nodes - -## Current Status - -✅ **Hive Cluster Ready**: 3 agents registered with proper specializations -- walnut: starcoder2:15b (kernel_dev) -- ironwood: phi4:14b (reasoning) -- acacia: codellama (docs_writer) - -✅ **Authentication Working**: Dev user and API access configured - -⚠️ **Task Submission**: Need to resolve API endpoint issues for automated task distribution - -**Next Steps**: -1. Fix task creation API endpoint issues -2. Submit tasks to respective agents based on specializations -3. Monitor execution and coordinate between agents -4. 
Test the collaborative reasoning (HMMM) layer once P2P foundation is complete \ No newline at end of file diff --git a/archived/2025-07-17/demo_advanced_meta_discussion.py b/archived/2025-07-17/demo_advanced_meta_discussion.py deleted file mode 100644 index 0923f89a..00000000 --- a/archived/2025-07-17/demo_advanced_meta_discussion.py +++ /dev/null @@ -1,254 +0,0 @@ -#!/usr/bin/env python3 -""" -Advanced Meta Discussion Demo for Bzzz P2P Mesh -Shows cross-repository coordination and dependency detection -""" - -import json -import time -from datetime import datetime - -def demo_cross_repository_coordination(): - """Demonstrate advanced meta discussion features""" - - print("🎯 ADVANCED BZZZ META DISCUSSION DEMO") - print("=" * 60) - print("Scenario: Multi-repository microservices coordination") - print() - - # Simulate multiple repositories in the system - repositories = { - "api-gateway": { - "agent": "walnut-12345", - "capabilities": ["code-generation", "api-design", "security"], - "current_task": { - "id": 42, - "title": "Implement OAuth2 authentication flow", - "description": "Add OAuth2 support to API gateway with JWT tokens", - "labels": ["security", "api", "authentication"] - } - }, - "user-service": { - "agent": "acacia-67890", - "capabilities": ["code-analysis", "database", "microservices"], - "current_task": { - "id": 87, - "title": "Update user schema for OAuth integration", - "description": "Add OAuth provider fields to user table", - "labels": ["database", "schema", "authentication"] - } - }, - "notification-service": { - "agent": "ironwood-54321", - "capabilities": ["advanced-reasoning", "integration", "messaging"], - "current_task": { - "id": 156, - "title": "Secure webhook endpoints with JWT", - "description": "Validate JWT tokens on webhook endpoints", - "labels": ["security", "webhook", "authentication"] - } - } - } - - print("📋 ACTIVE TASKS ACROSS REPOSITORIES:") - for repo, info in repositories.items(): - task = info["current_task"] - print(f" 🔧 {repo}: #{task['id']} - {task['title']}") - print(f" Agent: {info['agent']} | Labels: {', '.join(task['labels'])}") - print() - - # Demo 1: Dependency Detection - print("🔍 PHASE 1: DEPENDENCY DETECTION") - print("-" * 40) - - dependencies = [ - { - "task1": "api-gateway/#42", - "task2": "user-service/#87", - "relationship": "API_Contract", - "reason": "OAuth implementation requires coordinated schema changes", - "confidence": 0.9 - }, - { - "task1": "api-gateway/#42", - "task2": "notification-service/#156", - "relationship": "Security_Compliance", - "reason": "Both implement JWT token validation", - "confidence": 0.85 - } - ] - - for dep in dependencies: - print(f"🔗 DEPENDENCY DETECTED:") - print(f" {dep['task1']} ↔ {dep['task2']}") - print(f" Type: {dep['relationship']} (confidence: {dep['confidence']})") - print(f" Reason: {dep['reason']}") - print() - - # Demo 2: Coordination Session Creation - print("🎯 PHASE 2: COORDINATION SESSION INITIATED") - print("-" * 40) - - session_id = f"coord_oauth_{int(time.time())}" - print(f"📝 Session ID: {session_id}") - print(f"📅 Created: {datetime.now().strftime('%H:%M:%S')}") - print(f"👥 Participants: walnut-12345, acacia-67890, ironwood-54321") - print() - - # Demo 3: AI-Generated Coordination Plan - print("🤖 PHASE 3: AI-GENERATED COORDINATION PLAN") - print("-" * 40) - - coordination_plan = """ -COORDINATION PLAN: OAuth2 Implementation Across Services - -1. 
EXECUTION ORDER: - - Phase 1: user-service (schema changes) - - Phase 2: api-gateway (OAuth implementation) - - Phase 3: notification-service (JWT validation) - -2. SHARED ARTIFACTS: - - JWT token format specification - - OAuth2 endpoint documentation - - Database schema migration scripts - - Shared security configuration - -3. COORDINATION REQUIREMENTS: - - walnut-12345: Define JWT token structure before implementation - - acacia-67890: Migrate user schema first, share field mappings - - ironwood-54321: Wait for JWT format, implement validation - -4. POTENTIAL CONFLICTS: - - JWT payload structure disagreements - - Token expiration time mismatches - - Security scope definition conflicts - -5. SUCCESS CRITERIA: - - All services use consistent JWT format - - OAuth flow works end-to-end - - Security audit passes on all endpoints - - Integration tests pass across all services -""" - - print(coordination_plan) - - # Demo 4: Agent Coordination Messages - print("💬 PHASE 4: AGENT COORDINATION MESSAGES") - print("-" * 40) - - messages = [ - { - "timestamp": "14:32:01", - "from": "walnut-12345 (api-gateway)", - "type": "proposal", - "content": "I propose using RS256 JWT tokens with 15min expiry. Standard claims: sub, iat, exp, scope." - }, - { - "timestamp": "14:32:45", - "from": "acacia-67890 (user-service)", - "type": "question", - "content": "Should we store the OAuth provider info in the user table or separate table? Also need refresh token strategy." - }, - { - "timestamp": "14:33:20", - "from": "ironwood-54321 (notification-service)", - "type": "agreement", - "content": "RS256 sounds good. For webhooks, I'll validate signature and check 'webhook' scope. Need the public key endpoint." - }, - { - "timestamp": "14:34:10", - "from": "walnut-12345 (api-gateway)", - "type": "response", - "content": "Separate oauth_providers table is better for multiple providers. Public key at /.well-known/jwks.json" - }, - { - "timestamp": "14:34:55", - "from": "acacia-67890 (user-service)", - "type": "agreement", - "content": "Agreed on separate table. I'll create migration script and share the schema. ETA: 2 hours." 
- } - ] - - for msg in messages: - print(f"[{msg['timestamp']}] {msg['from']} ({msg['type']}):") - print(f" {msg['content']}") - print() - - # Demo 5: Automatic Resolution Detection - print("✅ PHASE 5: COORDINATION RESOLUTION") - print("-" * 40) - - print("🔍 ANALYSIS: Consensus detected") - print(" - All agents agreed on JWT format (RS256)") - print(" - Database strategy decided (separate oauth_providers table)") - print(" - Public key endpoint established (/.well-known/jwks.json)") - print(" - Implementation order confirmed") - print() - print("📋 COORDINATION COMPLETE:") - print(" - Session status: RESOLVED") - print(" - Resolution: Consensus reached on OAuth implementation") - print(" - Next steps: acacia-67890 starts schema migration") - print(" - Dependencies: walnut-12345 waits for schema completion") - print() - - # Demo 6: Alternative - Escalation Scenario - print("🚨 ALTERNATIVE: ESCALATION SCENARIO") - print("-" * 40) - - escalation_scenario = """ -ESCALATION TRIGGERED: Security Implementation Conflict - -Reason: Agents cannot agree on JWT token expiration time -- walnut-12345 wants 15 minutes (high security) -- acacia-67890 wants 4 hours (user experience) -- ironwood-54321 wants 1 hour (compromise) - -Messages exceeded threshold: 12 messages without consensus -Human expert summoned via N8N webhook to deepblack.cloud - -Escalation webhook payload: -{ - "session_id": "coord_oauth_1752401234", - "conflict_type": "security_policy_disagreement", - "agents_involved": ["walnut-12345", "acacia-67890", "ironwood-54321"], - "repositories": ["api-gateway", "user-service", "notification-service"], - "issue_summary": "JWT expiration time conflict preventing OAuth implementation", - "requires_human_decision": true, - "urgency": "medium" -} -""" - - print(escalation_scenario) - - # Demo 7: System Capabilities Summary - print("🎯 ADVANCED META DISCUSSION CAPABILITIES") - print("-" * 40) - - capabilities = [ - "✅ Cross-repository dependency detection", - "✅ Intelligent task relationship analysis", - "✅ AI-generated coordination plans", - "✅ Multi-agent conversation management", - "✅ Consensus detection and resolution", - "✅ Automatic escalation to humans", - "✅ Session lifecycle management", - "✅ Hop-limited message propagation", - "✅ Custom dependency rules", - "✅ Project-aware coordination" - ] - - for cap in capabilities: - print(f" {cap}") - - print() - print("🚀 PRODUCTION READY:") - print(" - P2P mesh infrastructure: ✅ Deployed") - print(" - Antennae meta-discussion: ✅ Active") - print(" - Dependency detection: ✅ Implemented") - print(" - Coordination sessions: ✅ Functional") - print(" - Human escalation: ✅ N8N integrated") - print() - print("🎯 Ready for real cross-repository coordination!") - -if __name__ == "__main__": - demo_cross_repository_coordination() \ No newline at end of file diff --git a/archived/2025-07-17/mock-hive-server.py b/archived/2025-07-17/mock-hive-server.py deleted file mode 100755 index 87ee2c78..00000000 --- a/archived/2025-07-17/mock-hive-server.py +++ /dev/null @@ -1,702 +0,0 @@ -#!/usr/bin/env python3 -""" -Mock Hive API Server for Bzzz Testing - -This simulates what the real Hive API would provide to bzzz agents: -- Active repositories with bzzz-enabled tasks -- Fake GitHub issues with bzzz-task labels -- Task dependencies and coordination scenarios - -The real bzzz agents will consume this fake data and do actual coordination. 
-""" - -import json -import random -import time -from datetime import datetime, timedelta -from flask import Flask, jsonify, request -from threading import Thread - -app = Flask(__name__) - -# Mock data for repositories and tasks -MOCK_REPOSITORIES = [ - { - "project_id": 1, - "name": "hive-coordination-platform", - "git_url": "https://github.com/mock/hive", - "owner": "mock-org", - "repository": "hive", - "branch": "main", - "bzzz_enabled": True, - "ready_to_claim": True, - "private_repo": False, - "github_token_required": False - }, - { - "project_id": 2, - "name": "bzzz-p2p-system", - "git_url": "https://github.com/mock/bzzz", - "owner": "mock-org", - "repository": "bzzz", - "branch": "main", - "bzzz_enabled": True, - "ready_to_claim": True, - "private_repo": False, - "github_token_required": False - }, - { - "project_id": 3, - "name": "distributed-ai-development", - "git_url": "https://github.com/mock/distributed-ai-dev", - "owner": "mock-org", - "repository": "distributed-ai-dev", - "branch": "main", - "bzzz_enabled": True, - "ready_to_claim": True, - "private_repo": False, - "github_token_required": False - }, - { - "project_id": 4, - "name": "infrastructure-automation", - "git_url": "https://github.com/mock/infra-automation", - "owner": "mock-org", - "repository": "infra-automation", - "branch": "main", - "bzzz_enabled": True, - "ready_to_claim": True, - "private_repo": False, - "github_token_required": False - } -] - -# Mock tasks with realistic coordination scenarios -MOCK_TASKS = { - 1: [ # hive tasks - { - "number": 15, - "title": "Add WebSocket support for real-time coordination", - "description": "Implement WebSocket endpoints for real-time agent coordination messages", - "state": "open", - "labels": ["bzzz-task", "feature", "realtime", "coordination"], - "created_at": "2025-01-14T10:00:00Z", - "updated_at": "2025-01-14T10:30:00Z", - "html_url": "https://github.com/mock/hive/issues/15", - "is_claimed": False, - "assignees": [], - "task_type": "feature", - "dependencies": [ - { - "repository": "bzzz", - "task_number": 23, - "dependency_type": "api_contract" - } - ] - }, - { - "number": 16, - "title": "Implement agent authentication system", - "description": "Add secure JWT-based authentication for bzzz agents accessing Hive APIs", - "state": "open", - "labels": ["bzzz-task", "security", "auth", "high-priority"], - "created_at": "2025-01-14T09:30:00Z", - "updated_at": "2025-01-14T10:45:00Z", - "html_url": "https://github.com/mock/hive/issues/16", - "is_claimed": False, - "assignees": [], - "task_type": "security", - "dependencies": [] - }, - { - "number": 17, - "title": "Create coordination metrics dashboard", - "description": "Build dashboard showing cross-repository coordination statistics", - "state": "open", - "labels": ["bzzz-task", "dashboard", "metrics", "ui"], - "created_at": "2025-01-14T11:00:00Z", - "updated_at": "2025-01-14T11:15:00Z", - "html_url": "https://github.com/mock/hive/issues/17", - "is_claimed": False, - "assignees": [], - "task_type": "feature", - "dependencies": [ - { - "repository": "bzzz", - "task_number": 24, - "dependency_type": "api_contract" - } - ] - } - ], - 2: [ # bzzz tasks - { - "number": 23, - "title": "Define coordination API contract", - "description": "Standardize API contract for cross-repository coordination messaging", - "state": "open", - "labels": ["bzzz-task", "api", "coordination", "blocker"], - "created_at": "2025-01-14T09:00:00Z", - "updated_at": "2025-01-14T10:00:00Z", - "html_url": "https://github.com/mock/bzzz/issues/23", - 
"is_claimed": False, - "assignees": [], - "task_type": "api_design", - "dependencies": [] - }, - { - "number": 24, - "title": "Implement dependency detection algorithm", - "description": "Auto-detect task dependencies across repositories using graph analysis", - "state": "open", - "labels": ["bzzz-task", "algorithm", "coordination", "complex"], - "created_at": "2025-01-14T10:15:00Z", - "updated_at": "2025-01-14T10:30:00Z", - "html_url": "https://github.com/mock/bzzz/issues/24", - "is_claimed": False, - "assignees": [], - "task_type": "feature", - "dependencies": [ - { - "repository": "bzzz", - "task_number": 23, - "dependency_type": "api_contract" - } - ] - }, - { - "number": 25, - "title": "Add consensus algorithm for coordination", - "description": "Implement distributed consensus for multi-agent task coordination", - "state": "open", - "labels": ["bzzz-task", "consensus", "distributed-systems", "hard"], - "created_at": "2025-01-14T11:30:00Z", - "updated_at": "2025-01-14T11:45:00Z", - "html_url": "https://github.com/mock/bzzz/issues/25", - "is_claimed": False, - "assignees": [], - "task_type": "feature", - "dependencies": [] - } - ], - 3: [ # distributed-ai-dev tasks - { - "number": 8, - "title": "Add support for bzzz coordination", - "description": "Integrate with bzzz P2P coordination system for distributed AI development", - "state": "open", - "labels": ["bzzz-task", "integration", "p2p", "ai"], - "created_at": "2025-01-14T10:45:00Z", - "updated_at": "2025-01-14T11:00:00Z", - "html_url": "https://github.com/mock/distributed-ai-dev/issues/8", - "is_claimed": False, - "assignees": [], - "task_type": "integration", - "dependencies": [ - { - "repository": "bzzz", - "task_number": 23, - "dependency_type": "api_contract" - }, - { - "repository": "hive", - "task_number": 16, - "dependency_type": "security" - } - ] - }, - { - "number": 9, - "title": "Implement AI model coordination", - "description": "Enable coordination between AI models across different development environments", - "state": "open", - "labels": ["bzzz-task", "ai-coordination", "models", "complex"], - "created_at": "2025-01-14T11:15:00Z", - "updated_at": "2025-01-14T11:30:00Z", - "html_url": "https://github.com/mock/distributed-ai-dev/issues/9", - "is_claimed": False, - "assignees": [], - "task_type": "feature", - "dependencies": [ - { - "repository": "distributed-ai-dev", - "task_number": 8, - "dependency_type": "integration" - } - ] - } - ], - 4: [ # infra-automation tasks - { - "number": 12, - "title": "Automate bzzz deployment across cluster", - "description": "Create automated deployment scripts for bzzz agents on all cluster nodes", - "state": "open", - "labels": ["bzzz-task", "deployment", "automation", "devops"], - "created_at": "2025-01-14T12:00:00Z", - "updated_at": "2025-01-14T12:15:00Z", - "html_url": "https://github.com/mock/infra-automation/issues/12", - "is_claimed": False, - "assignees": [], - "task_type": "infrastructure", - "dependencies": [ - { - "repository": "hive", - "task_number": 16, - "dependency_type": "security" - } - ] - } - ] -} - -# Track claimed tasks -claimed_tasks = {} - -@app.route('/health', methods=['GET']) -def health(): - """Health check endpoint""" - return jsonify({"status": "healthy", "service": "mock-hive-api", "timestamp": datetime.now().isoformat()}) - -@app.route('/api/bzzz/active-repos', methods=['GET']) -def get_active_repositories(): - """Return mock active repositories for bzzz consumption""" - print(f"[{datetime.now().strftime('%H:%M:%S')}] 📡 Bzzz requested active 
repositories") - - # Randomly vary the number of available repos for more realistic testing - available_repos = random.sample(MOCK_REPOSITORIES, k=random.randint(2, len(MOCK_REPOSITORIES))) - - return jsonify({"repositories": available_repos}) - -@app.route('/api/bzzz/projects//tasks', methods=['GET']) -def get_project_tasks(project_id): - """Return mock bzzz-task labeled issues for a specific project""" - print(f"[{datetime.now().strftime('%H:%M:%S')}] 📋 Bzzz requested tasks for project {project_id}") - - if project_id not in MOCK_TASKS: - return jsonify([]) - - # Return tasks, updating claim status - tasks = [] - for task in MOCK_TASKS[project_id]: - task_copy = task.copy() - claim_key = f"{project_id}-{task['number']}" - - # Check if task is claimed - if claim_key in claimed_tasks: - claim_info = claimed_tasks[claim_key] - # Tasks expire after 30 minutes if not updated - if datetime.now() - claim_info['claimed_at'] < timedelta(minutes=30): - task_copy['is_claimed'] = True - task_copy['assignees'] = [claim_info['agent_id']] - else: - # Claim expired - del claimed_tasks[claim_key] - task_copy['is_claimed'] = False - task_copy['assignees'] = [] - - tasks.append(task_copy) - - return jsonify(tasks) - -@app.route('/api/bzzz/projects//claim', methods=['POST']) -def claim_task(project_id): - """Register task claim with mock Hive system""" - data = request.get_json() - task_number = data.get('task_number') - agent_id = data.get('agent_id') - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎯 Agent {agent_id} claiming task {project_id}#{task_number}") - - if not task_number or not agent_id: - return jsonify({"error": "task_number and agent_id are required"}), 400 - - claim_key = f"{project_id}-{task_number}" - - # Check if already claimed - if claim_key in claimed_tasks: - existing_claim = claimed_tasks[claim_key] - if datetime.now() - existing_claim['claimed_at'] < timedelta(minutes=30): - return jsonify({ - "error": "Task already claimed", - "claimed_by": existing_claim['agent_id'], - "claimed_at": existing_claim['claimed_at'].isoformat() - }), 409 - - # Register the claim - claim_id = f"{project_id}-{task_number}-{agent_id}-{int(time.time())}" - claimed_tasks[claim_key] = { - "agent_id": agent_id, - "claimed_at": datetime.now(), - "claim_id": claim_id - } - - print(f"[{datetime.now().strftime('%H:%M:%S')}] ✅ Task {project_id}#{task_number} claimed by {agent_id}") - - return jsonify({"success": True, "claim_id": claim_id}) - -@app.route('/api/bzzz/projects//status', methods=['PUT']) -def update_task_status(project_id): - """Update task status in mock Hive system""" - data = request.get_json() - task_number = data.get('task_number') - status = data.get('status') - metadata = data.get('metadata', {}) - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 📊 Task {project_id}#{task_number} status: {status}") - - if not task_number or not status: - return jsonify({"error": "task_number and status are required"}), 400 - - # Log status update - if status == "completed": - claim_key = f"{project_id}-{task_number}" - if claim_key in claimed_tasks: - agent_id = claimed_tasks[claim_key]['agent_id'] - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎉 Task {project_id}#{task_number} completed by {agent_id}") - del claimed_tasks[claim_key] # Remove claim - elif status == "escalated": - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 Task {project_id}#{task_number} escalated: {metadata}") - - return jsonify({"success": True}) - -@app.route('/api/bzzz/coordination-log', methods=['POST']) -def 
log_coordination_activity(): - """Log coordination activity for monitoring""" - data = request.get_json() - activity_type = data.get('type', 'unknown') - details = data.get('details', {}) - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Coordination: {activity_type} - {details}") - - # Save coordination activity to file - save_coordination_work(activity_type, details) - - return jsonify({"success": True, "logged": True}) - -@app.route('/api/bzzz/projects//submit-work', methods=['POST']) -def submit_work(project_id): - """Endpoint for agents to submit their actual work/code/solutions""" - data = request.get_json() - task_number = data.get('task_number') - agent_id = data.get('agent_id') - work_type = data.get('work_type', 'code') # code, documentation, configuration, etc. - content = data.get('content', '') - files = data.get('files', {}) # Dictionary of filename -> content - commit_message = data.get('commit_message', '') - description = data.get('description', '') - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 📝 Work submission: {agent_id} -> Project {project_id} Task {task_number}") - print(f" Type: {work_type}, Files: {len(files)}, Content length: {len(content)}") - - # Save the actual work content - work_data = { - "project_id": project_id, - "task_number": task_number, - "agent_id": agent_id, - "work_type": work_type, - "content": content, - "files": files, - "commit_message": commit_message, - "description": description, - "submitted_at": datetime.now().isoformat() - } - - save_agent_work(work_data) - - return jsonify({ - "success": True, - "work_id": f"{project_id}-{task_number}-{int(time.time())}", - "message": "Work submitted successfully to mock repository" - }) - -@app.route('/api/bzzz/projects//create-pr', methods=['POST']) -def create_pull_request(project_id): - """Endpoint for agents to submit pull request content""" - data = request.get_json() - task_number = data.get('task_number') - agent_id = data.get('agent_id') - pr_title = data.get('title', '') - pr_description = data.get('description', '') - files_changed = data.get('files_changed', {}) - branch_name = data.get('branch_name', f"bzzz-task-{task_number}") - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🔀 Pull Request: {agent_id} -> Project {project_id}") - print(f" Title: {pr_title}") - print(f" Files changed: {len(files_changed)}") - - # Save the pull request content - pr_data = { - "project_id": project_id, - "task_number": task_number, - "agent_id": agent_id, - "title": pr_title, - "description": pr_description, - "files_changed": files_changed, - "branch_name": branch_name, - "created_at": datetime.now().isoformat(), - "status": "open" - } - - save_pull_request(pr_data) - - return jsonify({ - "success": True, - "pr_number": random.randint(100, 999), - "pr_url": f"https://github.com/mock/{get_repo_name(project_id)}/pull/{random.randint(100, 999)}", - "message": "Pull request created successfully in mock repository" - }) - -@app.route('/api/bzzz/projects//coordination-discussion', methods=['POST']) -def log_coordination_discussion(project_id): - """Endpoint for agents to log coordination discussions and decisions""" - data = request.get_json() - discussion_type = data.get('type', 'general') # dependency_analysis, conflict_resolution, etc. 
- participants = data.get('participants', []) - messages = data.get('messages', []) - decisions = data.get('decisions', []) - context = data.get('context', {}) - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 💬 Coordination Discussion: Project {project_id}") - print(f" Type: {discussion_type}, Participants: {len(participants)}, Messages: {len(messages)}") - - # Save coordination discussion - discussion_data = { - "project_id": project_id, - "type": discussion_type, - "participants": participants, - "messages": messages, - "decisions": decisions, - "context": context, - "timestamp": datetime.now().isoformat() - } - - save_coordination_discussion(discussion_data) - - return jsonify({"success": True, "logged": True}) - -@app.route('/api/bzzz/projects//log-prompt', methods=['POST']) -def log_agent_prompt(project_id): - """Endpoint for agents to log the prompts they are receiving/generating""" - data = request.get_json() - task_number = data.get('task_number') - agent_id = data.get('agent_id') - prompt_type = data.get('prompt_type', 'task_analysis') # task_analysis, coordination, meta_thinking - prompt_content = data.get('prompt_content', '') - context = data.get('context', {}) - model_used = data.get('model_used', 'unknown') - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Prompt Log: {agent_id} -> {prompt_type}") - print(f" Model: {model_used}, Task: {project_id}#{task_number}") - print(f" Prompt length: {len(prompt_content)} chars") - - # Save the prompt data - prompt_data = { - "project_id": project_id, - "task_number": task_number, - "agent_id": agent_id, - "prompt_type": prompt_type, - "prompt_content": prompt_content, - "context": context, - "model_used": model_used, - "timestamp": datetime.now().isoformat() - } - - save_agent_prompt(prompt_data) - - return jsonify({"success": True, "logged": True}) - -def save_agent_prompt(prompt_data): - """Save agent prompts to files for analysis""" - import os - timestamp = datetime.now() - work_dir = "/tmp/bzzz_agent_prompts" - os.makedirs(work_dir, exist_ok=True) - - # Create filename with project, task, and timestamp - project_id = prompt_data["project_id"] - task_number = prompt_data["task_number"] - agent_id = prompt_data["agent_id"].replace("/", "_") # Clean agent ID for filename - prompt_type = prompt_data["prompt_type"] - - filename = f"prompt_{prompt_type}_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json" - prompt_file = os.path.join(work_dir, filename) - - with open(prompt_file, "w") as f: - json.dump(prompt_data, f, indent=2) - - print(f" 💾 Saved prompt to: {prompt_file}") - - # Also save to daily log - log_file = os.path.join(work_dir, f"agent_prompts_log_{timestamp.strftime('%Y%m%d')}.jsonl") - with open(log_file, "a") as f: - f.write(json.dumps(prompt_data) + "\n") - -def save_agent_work(work_data): - """Save actual agent work submissions to files""" - import os - timestamp = datetime.now() - work_dir = "/tmp/bzzz_agent_work" - os.makedirs(work_dir, exist_ok=True) - - # Create filename with project, task, and timestamp - project_id = work_data["project_id"] - task_number = work_data["task_number"] - agent_id = work_data["agent_id"].replace("/", "_") # Clean agent ID for filename - - filename = f"work_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json" - work_file = os.path.join(work_dir, filename) - - with open(work_file, "w") as f: - json.dump(work_data, f, indent=2) - - print(f" 💾 Saved work to: {work_file}") - - # Also save to daily log - log_file = os.path.join(work_dir, 
f"agent_work_log_{timestamp.strftime('%Y%m%d')}.jsonl") - with open(log_file, "a") as f: - f.write(json.dumps(work_data) + "\n") - -def save_pull_request(pr_data): - """Save pull request content to files""" - import os - timestamp = datetime.now() - work_dir = "/tmp/bzzz_pull_requests" - os.makedirs(work_dir, exist_ok=True) - - # Create filename with project, task, and timestamp - project_id = pr_data["project_id"] - task_number = pr_data["task_number"] - agent_id = pr_data["agent_id"].replace("/", "_") # Clean agent ID for filename - - filename = f"pr_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json" - pr_file = os.path.join(work_dir, filename) - - with open(pr_file, "w") as f: - json.dump(pr_data, f, indent=2) - - print(f" 💾 Saved PR to: {pr_file}") - - # Also save to daily log - log_file = os.path.join(work_dir, f"pull_requests_log_{timestamp.strftime('%Y%m%d')}.jsonl") - with open(log_file, "a") as f: - f.write(json.dumps(pr_data) + "\n") - -def save_coordination_discussion(discussion_data): - """Save coordination discussions to files""" - import os - timestamp = datetime.now() - work_dir = "/tmp/bzzz_coordination_discussions" - os.makedirs(work_dir, exist_ok=True) - - # Create filename with project and timestamp - project_id = discussion_data["project_id"] - discussion_type = discussion_data["type"] - - filename = f"discussion_{discussion_type}_p{project_id}_{timestamp.strftime('%H%M%S')}.json" - discussion_file = os.path.join(work_dir, filename) - - with open(discussion_file, "w") as f: - json.dump(discussion_data, f, indent=2) - - print(f" 💾 Saved discussion to: {discussion_file}") - - # Also save to daily log - log_file = os.path.join(work_dir, f"coordination_discussions_{timestamp.strftime('%Y%m%d')}.jsonl") - with open(log_file, "a") as f: - f.write(json.dumps(discussion_data) + "\n") - -def get_repo_name(project_id): - """Get repository name from project ID""" - repo_map = { - 1: "hive", - 2: "bzzz", - 3: "distributed-ai-dev", - 4: "infra-automation" - } - return repo_map.get(project_id, "unknown-repo") - -def save_coordination_work(activity_type, details): - """Save coordination work to files for analysis""" - timestamp = datetime.now() - work_dir = "/tmp/bzzz_coordination_work" - os.makedirs(work_dir, exist_ok=True) - - # Create detailed log entry - work_entry = { - "timestamp": timestamp.isoformat(), - "type": activity_type, - "details": details, - "session_id": details.get("session_id", "unknown") - } - - # Save to daily log file - log_file = os.path.join(work_dir, f"coordination_work_{timestamp.strftime('%Y%m%d')}.jsonl") - with open(log_file, "a") as f: - f.write(json.dumps(work_entry) + "\n") - - # Save individual work items to separate files - if activity_type in ["code_generation", "task_solution", "pull_request_content"]: - work_file = os.path.join(work_dir, f"{activity_type}_{timestamp.strftime('%H%M%S')}.json") - with open(work_file, "w") as f: - json.dump(work_entry, f, indent=2) - -def start_background_task_updates(): - """Background thread to simulate changing task priorities and new tasks""" - def background_updates(): - while True: - time.sleep(random.randint(60, 180)) # Every 1-3 minutes - - # Occasionally add a new urgent task - if random.random() < 0.3: # 30% chance - project_id = random.choice([1, 2, 3, 4]) - urgent_task = { - "number": random.randint(100, 999), - "title": f"URGENT: {random.choice(['Critical bug fix', 'Security patch', 'Production issue', 'Integration failure'])}", - "description": "High priority task requiring 
immediate attention", - "state": "open", - "labels": ["bzzz-task", "urgent", "critical"], - "created_at": datetime.now().isoformat(), - "updated_at": datetime.now().isoformat(), - "html_url": f"https://github.com/mock/repo/issues/{random.randint(100, 999)}", - "is_claimed": False, - "assignees": [], - "task_type": "bug", - "dependencies": [] - } - - if project_id not in MOCK_TASKS: - MOCK_TASKS[project_id] = [] - MOCK_TASKS[project_id].append(urgent_task) - - print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 NEW URGENT TASK: Project {project_id} - {urgent_task['title']}") - - thread = Thread(target=background_updates, daemon=True) - thread.start() - -if __name__ == '__main__': - print("🚀 Starting Mock Hive API Server for Bzzz Testing") - print("=" * 50) - print("This server provides fake projects and tasks to real bzzz agents") - print("Real bzzz coordination will happen with this simulated data") - print("") - print("Available endpoints:") - print(" GET /health - Health check") - print(" GET /api/bzzz/active-repos - Active repositories") - print(" GET /api/bzzz/projects//tasks - Project tasks") - print(" POST /api/bzzz/projects//claim - Claim task") - print(" PUT /api/bzzz/projects//status - Update task status") - print(" POST /api/bzzz/projects//submit-work - Submit actual work/code") - print(" POST /api/bzzz/projects//create-pr - Submit pull request content") - print(" POST /api/bzzz/projects//coordination-discussion - Log coordination discussions") - print(" POST /api/bzzz/projects//log-prompt - Log agent prompts and model usage") - print(" POST /api/bzzz/coordination-log - Log coordination activity") - print("") - print("Starting background task updates...") - start_background_task_updates() - - print(f"🌟 Mock Hive API running on http://localhost:5000") - print("Configure bzzz to use: BZZZ_HIVE_API_URL=http://localhost:5000") - print("") - - app.run(host='0.0.0.0', port=5000, debug=False) \ No newline at end of file diff --git a/archived/2025-07-17/test-config.yaml b/archived/2025-07-17/test-config.yaml deleted file mode 100644 index 0f8048f7..00000000 --- a/archived/2025-07-17/test-config.yaml +++ /dev/null @@ -1,21 +0,0 @@ -hive_api: - base_url: "https://hive.home.deepblack.cloud" - api_key: "" - timeout: "30s" - -agent: - id: "test-agent" - capabilities: ["task-coordination", "meta-discussion", "general"] - models: ["phi3"] - specialization: "general_developer" - poll_interval: "60s" - max_tasks: 1 - -github: - token_file: "" - -p2p: - escalation_webhook: "https://n8n.home.deepblack.cloud/webhook-test/human-escalation" - -logging: - level: "debug" \ No newline at end of file diff --git a/archived/2025-07-17/test_hive_api.py b/archived/2025-07-17/test_hive_api.py deleted file mode 100644 index f75ee056..00000000 --- a/archived/2025-07-17/test_hive_api.py +++ /dev/null @@ -1,94 +0,0 @@ -#!/usr/bin/env python3 -""" -Test script for Bzzz-Hive API integration. -Tests the newly created API endpoints for dynamic repository discovery. 
-""" - -import sys -import os -sys.path.append('/home/tony/chorus/project-queues/inactive/hive/backend') - -from app.services.project_service import ProjectService -import json - -def test_project_service(): - """Test the ProjectService with Bzzz integration methods.""" - print("🧪 Testing ProjectService with Bzzz integration...") - - service = ProjectService() - - # Test 1: Get all projects - print("\n📁 Testing get_all_projects()...") - projects = service.get_all_projects() - print(f"Found {len(projects)} total projects") - - # Find projects with GitHub repos - github_projects = [p for p in projects if p.get('github_repo')] - print(f"Found {len(github_projects)} projects with GitHub repositories:") - for project in github_projects: - print(f" - {project['name']}: {project['github_repo']}") - - # Test 2: Get active repositories for Bzzz - print("\n🐝 Testing get_bzzz_active_repositories()...") - try: - active_repos = service.get_bzzz_active_repositories() - print(f"Found {len(active_repos)} repositories ready for Bzzz coordination:") - - for repo in active_repos: - print(f"\n 📦 Repository: {repo['name']}") - print(f" Owner: {repo['owner']}") - print(f" Repository: {repo['repository']}") - print(f" Git URL: {repo['git_url']}") - print(f" Ready to claim: {repo['ready_to_claim']}") - print(f" Project ID: {repo['project_id']}") - - except Exception as e: - print(f"❌ Error testing active repositories: {e}") - - # Test 3: Get bzzz-task issues for the hive project specifically - print("\n🎯 Testing get_bzzz_project_tasks() for 'hive' project...") - try: - hive_tasks = service.get_bzzz_project_tasks('hive') - print(f"Found {len(hive_tasks)} bzzz-task issues in hive project:") - - for task in hive_tasks: - print(f"\n 🎫 Issue #{task['number']}: {task['title']}") - print(f" State: {task['state']}") - print(f" Labels: {task['labels']}") - print(f" Task Type: {task['task_type']}") - print(f" Claimed: {task['is_claimed']}") - if task['assignees']: - print(f" Assignees: {', '.join(task['assignees'])}") - print(f" URL: {task['html_url']}") - - except Exception as e: - print(f"❌ Error testing hive project tasks: {e}") - - # Test 4: Simulate API endpoint response format - print("\n📡 Testing API endpoint response format...") - try: - active_repos = service.get_bzzz_active_repositories() - api_response = {"repositories": active_repos} - - print("API Response Preview (first 500 chars):") - response_json = json.dumps(api_response, indent=2) - print(response_json[:500] + "..." 
if len(response_json) > 500 else response_json) - - except Exception as e: - print(f"❌ Error formatting API response: {e}") - -def main(): - print("🚀 Starting Bzzz-Hive API Integration Test") - print("="*50) - - try: - test_project_service() - print("\n✅ Test completed successfully!") - - except Exception as e: - print(f"\n❌ Test failed with error: {e}") - import traceback - traceback.print_exc() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/archived/2025-07-17/test_meta_discussion.py b/archived/2025-07-17/test_meta_discussion.py deleted file mode 100644 index 96972b00..00000000 --- a/archived/2025-07-17/test_meta_discussion.py +++ /dev/null @@ -1,98 +0,0 @@ -#!/usr/bin/env python3 -""" -Test script to trigger and observe bzzz meta discussion -""" - -import json -import time -import requests -from datetime import datetime - -def test_meta_discussion(): - """Test the Antennae meta discussion by simulating a complex task""" - - print("🎯 Testing Bzzz Antennae Meta Discussion") - print("=" * 50) - - # Test 1: Check if the P2P mesh is active - print("1. Checking P2P mesh status...") - - # We can't directly inject into the P2P mesh from here, but we can: - # - Check the bzzz service logs for meta discussion activity - # - Create a mock scenario description - - mock_scenario = { - "task_type": "complex_architecture_design", - "description": "Design a microservices architecture for a distributed AI system with P2P coordination", - "complexity": "high", - "requires_collaboration": True, - "estimated_agents_needed": 3 - } - - print(f"📋 Mock Complex Task:") - print(f" Type: {mock_scenario['task_type']}") - print(f" Description: {mock_scenario['description']}") - print(f" Complexity: {mock_scenario['complexity']}") - print(f" Collaboration Required: {mock_scenario['requires_collaboration']}") - - # Test 2: Demonstrate what would happen in meta discussion - print("\n2. Simulating Antennae Meta Discussion Flow:") - print(" 🤖 Agent A (walnut): 'I'll handle the API gateway design'") - print(" 🤖 Agent B (acacia): 'I can work on the data layer architecture'") - print(" 🤖 Agent C (ironwood): 'I'll focus on the P2P coordination logic'") - print(" 🎯 Meta Discussion: Agents coordinate task splitting and dependencies") - - # Test 3: Show escalation scenario - print("\n3. Human Escalation Scenario:") - print(" ⚠️ Agents detect conflicting approaches to distributed consensus") - print(" 🚨 Automatic escalation triggered after 3 rounds of discussion") - print(" 👤 Human expert summoned via N8N webhook") - - # Test 4: Check current bzzz logs for any meta discussion activity - print("\n4. Checking recent bzzz activity...") - - try: - # This would show any recent meta discussion logs - import subprocess - result = subprocess.run([ - 'journalctl', '-u', 'bzzz.service', '--no-pager', '-l', '-n', '20' - ], capture_output=True, text=True, timeout=10) - - if result.returncode == 0: - logs = result.stdout - if 'meta' in logs.lower() or 'antennae' in logs.lower(): - print(" ✅ Found meta discussion activity in logs!") - # Show relevant lines - for line in logs.split('\n'): - if 'meta' in line.lower() or 'antennae' in line.lower(): - print(f" 📝 {line}") - else: - print(" ℹ️ No recent meta discussion activity (expected - no active tasks)") - else: - print(" ⚠️ Could not access bzzz logs") - - except Exception as e: - print(f" ⚠️ Error checking logs: {e}") - - # Test 5: Show what capabilities support meta discussion - print("\n5. 
Meta Discussion Capabilities:") - capabilities = [ - "meta-discussion", - "task-coordination", - "collaborative-reasoning", - "human-escalation", - "cross-repository-coordination" - ] - - for cap in capabilities: - print(f" ✅ {cap}") - - print("\n🎯 Meta Discussion Test Complete!") - print("\nTo see meta discussion in action:") - print("1. Configure repositories in Hive with 'bzzz_enabled: true'") - print("2. Create complex GitHub issues labeled 'bzzz-task'") - print("3. Watch agents coordinate via Antennae P2P channel") - print("4. Monitor logs: journalctl -u bzzz.service -f | grep -i meta") - -if __name__ == "__main__": - test_meta_discussion() \ No newline at end of file diff --git a/archived/2025-07-17/test_simple_github.py b/archived/2025-07-17/test_simple_github.py deleted file mode 100644 index 56c9a92b..00000000 --- a/archived/2025-07-17/test_simple_github.py +++ /dev/null @@ -1,95 +0,0 @@ -#!/usr/bin/env python3 -""" -Simple test to check GitHub API access for bzzz-task issues. -""" - -import requests -from pathlib import Path - -def get_github_token(): - """Get GitHub token from secrets file.""" - try: - # Try gh-token first - gh_token_path = Path("/home/tony/chorus/business/secrets/gh-token") - if gh_token_path.exists(): - return gh_token_path.read_text().strip() - - # Try GitHub token - github_token_path = Path("/home/tony/chorus/business/secrets/github-token") - if github_token_path.exists(): - return github_token_path.read_text().strip() - - # Fallback to GitLab token if GitHub token doesn't exist - gitlab_token_path = Path("/home/tony/chorus/business/secrets/claude-gitlab-token") - if gitlab_token_path.exists(): - return gitlab_token_path.read_text().strip() - except Exception: - pass - return None - -def test_github_bzzz_tasks(): - """Test fetching bzzz-task issues from GitHub.""" - token = get_github_token() - if not token: - print("❌ No GitHub token found") - return - - print("🐙 Testing GitHub API access for bzzz-task issues...") - - # Test with the hive repository - repo = "anthonyrawlins/hive" - url = f"https://api.github.com/repos/{repo}/issues" - - headers = { - "Authorization": f"token {token}", - "Accept": "application/vnd.github.v3+json" - } - - # First, get all open issues - print(f"\n📊 Fetching all open issues from {repo}...") - response = requests.get(url, headers=headers, params={"state": "open"}, timeout=10) - - if response.status_code == 200: - all_issues = response.json() - print(f"Found {len(all_issues)} total open issues") - - # Show all labels used in the repository - all_labels = set() - for issue in all_issues: - for label in issue.get('labels', []): - all_labels.add(label['name']) - - print(f"All labels in use: {sorted(all_labels)}") - - else: - print(f"❌ Failed to fetch issues: {response.status_code} - {response.text}") - return - - # Now test for bzzz-task labeled issues - print(f"\n🐝 Fetching bzzz-task labeled issues from {repo}...") - response = requests.get(url, headers=headers, params={"labels": "bzzz-task", "state": "open"}, timeout=10) - - if response.status_code == 200: - bzzz_issues = response.json() - print(f"Found {len(bzzz_issues)} issues with 'bzzz-task' label") - - if not bzzz_issues: - print("ℹ️ No issues found with 'bzzz-task' label") - print(" You can create test issues with this label for testing") - - for issue in bzzz_issues: - print(f"\n 🎫 Issue #{issue['number']}: {issue['title']}") - print(f" State: {issue['state']}") - print(f" Labels: {[label['name'] for label in issue.get('labels', [])]}") - print(f" Assignees: 
{[assignee['login'] for assignee in issue.get('assignees', [])]}") - print(f" URL: {issue['html_url']}") - else: - print(f"❌ Failed to fetch bzzz-task issues: {response.status_code} - {response.text}") - -def main(): - print("🚀 Simple GitHub API Test for Bzzz Integration") - print("="*50) - test_github_bzzz_tasks() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/bzzz-port3333 b/bzzz-port3333 deleted file mode 100755 index 77e1ccae..00000000 Binary files a/bzzz-port3333 and /dev/null differ diff --git a/old-docs/BZZZ_V2_UCXL_DEVELOPMENT_PLAN.md b/old-docs/BZZZ_V2_UCXL_DEVELOPMENT_PLAN.md deleted file mode 100644 index a0dd841c..00000000 --- a/old-docs/BZZZ_V2_UCXL_DEVELOPMENT_PLAN.md +++ /dev/null @@ -1,395 +0,0 @@ -# BZZZ v2: UCXL/UCXI Integration Development Plan - -## 1. Executive Summary - -BZZZ v2 represents a fundamental paradigm shift from a task coordination system using the `bzzz://` protocol to a semantic context publishing system built on the Universal Context eXchange Language (UCXL) and UCXL Interface (UCXI) protocols. This plan outlines the complete transformation of BZZZ into a distributed semantic decision graph that integrates with SLURP for global context management. - -### Key Changes: -- **Protocol Migration**: `bzzz://` → UCXL addresses (`ucxl://agent:role@project:task/temporal_segment/path`) -- **Temporal Navigation**: Support for `~~` (backward), `^^` (forward), `*^` (latest), `*~` (first) -- **Decision Publishing**: Agents publish structured decision nodes to SLURP after task completion -- **Citation Model**: Academic-style justification chains with bounded reasoning -- **Semantic Addressing**: Context as addressable resources with wildcards (`any:any`) - -## 2. UCXL Protocol Architecture - -### 2.1 Address Format -``` -ucxl://agent:role@project:task/temporal_segment/path -``` - -#### Components: -- **Agent**: AI agent identifier (e.g., `gpt4`, `claude`, `any`) -- **Role**: Agent role context (e.g., `architect`, `reviewer`, `any`) -- **Project**: Project namespace (e.g., `bzzz`, `chorus`, `any`) -- **Task**: Task identifier (e.g., `implement-auth`, `refactor`, `any`) -- **Temporal Segment**: Time-based navigation (`~~`, `^^`, `*^`, `*~`, ISO timestamps) -- **Path**: Resource path within context (e.g., `/decisions/architecture.json`) - -#### Examples: -``` -ucxl://gpt4:architect@bzzz:v2-migration/*^/decisions/protocol-choice.json -ucxl://any:any@chorus:*/*~/planning/requirements.md -ucxl://claude:reviewer@bzzz:auth-system/2025-08-07T14:30:00/code-review.json -``` - -### 2.2 UCXI Interface Operations - -#### Core Verbs: -- **GET**: Retrieve context from address -- **PUT**: Store/update context at address -- **POST**: Create new context entry -- **DELETE**: Remove context -- **ANNOUNCE**: Broadcast context availability - -#### Extended Operations: -- **NAVIGATE**: Temporal navigation (`~~`, `^^`) -- **QUERY**: Search across semantic dimensions -- **SUBSCRIBE**: Listen for context updates - -## 3. 
System Architecture Transformation - -### 3.1 Current Architecture (v1) -``` -┌─────────────┐ ┌─────────────┐ ┌─────────────┐ -│ GitHub │ │ P2P │ │ BZZZ │ -│ Issues │────│ libp2p │────│ Agents │ -│ │ │ │ │ │ -└─────────────┘ └─────────────┘ └─────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────┐ ┌─────────────┐ ┌─────────────┐ -│Task Claims │ │ Pub/Sub │ │ Execution │ -│& Assignment │ │ Messaging │ │ & Results │ -└─────────────┘ └─────────────┘ └─────────────┘ -``` - -### 3.2 New Architecture (v2) -``` -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ UCXL │ │ SLURP │ │ Decision │ -│ Validator │────│ Context │────│ Graph │ -│ Online │ │ Ingestion │ │ Publishing │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ UCXL │ │ P2P DHT │ │ BZZZ │ -│ Browser │────│ Resolution │────│ Agents │ -│ Time Machine UI │ │ Network │ │ GPT-4 + MCP │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Temporal │ │ Semantic │ │ Citation │ -│ Navigation │ │ Addressing │ │ Justification │ -│ ~~, ^^, *^ │ │ any:any │ │ Chains │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ -``` - -### 3.3 Component Integration - -#### UCXL Address Resolution -- **Local Cache**: Recent context cached for performance -- **DHT Lookup**: Distributed hash table for address resolution -- **Temporal Index**: Time-based indexing for navigation -- **Semantic Router**: Route requests based on address patterns - -#### SLURP Decision Publishing -- **Decision Schema**: Structured JSON format for decisions -- **Justification Chains**: Link to supporting contexts -- **Citation Model**: Academic-style references with provenance -- **Bounded Reasoning**: Prevent infinite justification loops - -## 4. 
Implementation Plan: 8-Week Timeline - -### Week 1-2: Foundation & Protocol Implementation - -#### Week 1: UCXL Address Parser & Core Types -**Deliverables:** -- Replace `pkg/protocol/uri.go` with UCXL address parser -- Implement temporal navigation tokens (`~~`, `^^`, `*^`, `*~`) -- Core UCXL address validation and normalization -- Unit tests for address parsing and matching - -**Key Files:** -- `/pkg/protocol/ucxl_address.go` -- `/pkg/protocol/temporal_navigator.go` -- `/pkg/protocol/ucxl_address_test.go` - -#### Week 2: UCXI Interface Operations -**Deliverables:** -- UCXI HTTP server with REST-like operations (GET/PUT/POST/DELETE/ANNOUNCE) -- Context storage backend (initially local filesystem) -- Temporal indexing for navigation support -- Integration with existing P2P network - -**Key Files:** -- `/pkg/ucxi/server.go` -- `/pkg/ucxi/operations.go` -- `/pkg/storage/context_store.go` -- `/pkg/temporal/index.go` - -### Week 3-4: DHT & Semantic Resolution - -#### Week 3: P2P DHT for UCXL Resolution -**Deliverables:** -- Extend existing libp2p DHT for UCXL address resolution -- Semantic address routing (handle `any:any` wildcards) -- Distributed context discovery and availability announcements -- Address priority scoring for multi-match resolution - -**Key Files:** -- `/pkg/dht/ucxl_resolver.go` -- `/pkg/routing/semantic_router.go` -- `/pkg/discovery/context_discovery.go` - -#### Week 4: Temporal Navigation Implementation -**Deliverables:** -- Time-based context navigation (`~~` backward, `^^` forward) -- Snapshot management for temporal consistency -- Temporal query optimization -- Context versioning and history tracking - -**Key Files:** -- `/pkg/temporal/navigator.go` -- `/pkg/temporal/snapshots.go` -- `/pkg/storage/versioned_store.go` - -### Week 5-6: Decision Graph & SLURP Integration - -#### Week 5: Decision Node Schema & Publishing -**Deliverables:** -- Structured decision node JSON schema matching SLURP requirements -- Decision publishing pipeline after task completion -- Citation chain validation and bounded reasoning -- Decision graph visualization data - -**Decision Node Schema:** -```json -{ - "decision_id": "uuid", - "ucxl_address": "ucxl://gpt4:architect@bzzz:v2/*^/architecture.json", - "timestamp": "2025-08-07T14:30:00Z", - "agent_id": "gpt4-bzzz-node-01", - "decision_type": "architecture_choice", - "context": { - "project": "bzzz", - "task": "v2-migration", - "scope": "protocol-selection" - }, - "justification": { - "reasoning": "UCXL provides temporal navigation and semantic addressing...", - "alternatives_considered": ["custom_protocol", "extend_bzzz"], - "criteria": ["scalability", "semantic_richness", "ecosystem_compatibility"] - }, - "citations": [ - { - "type": "justified_by", - "ucxl_address": "ucxl://any:any@chorus:requirements/*~/analysis.md", - "relevance": "high", - "excerpt": "system must support temporal context navigation" - } - ], - "impacts": [ - { - "type": "replaces", - "ucxl_address": "ucxl://any:any@bzzz:v1/*^/protocol.go", - "reason": "migrating from bzzz:// to ucxl:// addressing" - } - ] -} -``` - -**Key Files:** -- `/pkg/decisions/schema.go` -- `/pkg/decisions/publisher.go` -- `/pkg/integration/slurp_publisher.go` - -#### Week 6: SLURP Integration & Context Publishing -**Deliverables:** -- SLURP client for decision node publishing -- Context curation pipeline (decision nodes only, no ephemeral chatter) -- Citation validation and loop detection -- Integration with existing task completion workflow - -**Key Files:** -- `/pkg/integration/slurp_client.go` 
-- `/pkg/curation/decision_curator.go` -- `/pkg/validation/citation_validator.go` - -### Week 7-8: Agent Integration & Testing - -#### Week 7: GPT-4 Agent UCXL Integration -**Deliverables:** -- Update agent configuration for UCXL operation mode -- MCP tools for UCXI operations (GET/PUT/POST/ANNOUNCE) -- Context sharing between agents via UCXL addresses -- Agent decision publishing after task completion - -**Key Files:** -- `/agent/ucxl_config.go` -- `/mcp-server/src/tools/ucxi-tools.ts` -- `/agent/context_publisher.go` - -#### Week 8: End-to-End Testing & Validation -**Deliverables:** -- Comprehensive integration tests for UCXL/UCXI operations -- Temporal navigation testing scenarios -- Decision graph publishing and retrieval tests -- Performance benchmarks for distributed resolution -- Documentation and deployment guides - -**Key Files:** -- `/test/integration/ucxl_e2e_test.go` -- `/test/scenarios/temporal_navigation_test.go` -- `/test/performance/resolution_benchmarks.go` - -## 5. Data Models & Schemas - -### 5.1 UCXL Address Structure -```go -type UCXLAddress struct { - Agent string `json:"agent"` // Agent identifier - Role string `json:"role"` // Agent role - Project string `json:"project"` // Project namespace - Task string `json:"task"` // Task identifier - TemporalSegment string `json:"temporal_segment"` // Time navigation - Path string `json:"path"` // Resource path - Query string `json:"query,omitempty"` // Query parameters - Fragment string `json:"fragment,omitempty"` // Fragment identifier - Raw string `json:"raw"` // Original address string -} -``` - -### 5.2 Context Storage Schema -```go -type ContextEntry struct { - Address UCXLAddress `json:"address"` - Content map[string]interface{} `json:"content"` - Metadata ContextMetadata `json:"metadata"` - Version int64 `json:"version"` - CreatedAt time.Time `json:"created_at"` - UpdatedAt time.Time `json:"updated_at"` -} - -type ContextMetadata struct { - ContentType string `json:"content_type"` - Size int64 `json:"size"` - Checksum string `json:"checksum"` - Provenance string `json:"provenance"` - Tags []string `json:"tags"` - Relationships map[string]string `json:"relationships"` -} -``` - -### 5.3 Temporal Index Schema -```go -type TemporalIndex struct { - AddressPattern string `json:"address_pattern"` - Entries []TemporalIndexEntry `json:"entries"` - FirstEntry *time.Time `json:"first_entry"` - LatestEntry *time.Time `json:"latest_entry"` -} - -type TemporalIndexEntry struct { - Timestamp time.Time `json:"timestamp"` - Version int64 `json:"version"` - Address UCXLAddress `json:"address"` - Checksum string `json:"checksum"` -} -``` - -## 6. Integration with CHORUS Infrastructure - -### 6.1 WHOOSH Search Integration -- Index UCXL addresses and content for search -- Temporal search queries (`find decisions after 2025-08-01`) -- Semantic search across agent:role@project:task dimensions -- Citation graph search and exploration - -### 6.2 SLURP Context Ingestion -- Publish decision nodes to SLURP after task completion -- Context curation to filter decision-worthy content -- Global context graph building via SLURP -- Cross-project context sharing and discovery - -### 6.3 N8N Workflow Integration -- UCXL address monitoring and alerting workflows -- Decision node publishing automation -- Context validation and quality assurance workflows -- Integration with UCXL Validator for continuous validation - -## 7. 
Security & Performance Considerations - -### 7.1 Security -- **Access Control**: Role-based access to context addresses -- **Validation**: Schema validation for all UCXL operations -- **Provenance**: Cryptographic signing of decision nodes -- **Bounded Reasoning**: Prevent infinite citation loops - -### 7.2 Performance -- **Caching**: Local context cache with TTL-based invalidation -- **Indexing**: Efficient temporal and semantic indexing -- **Sharding**: Distribute context storage across cluster nodes -- **Compression**: Context compression for storage efficiency - -### 7.3 Monitoring -- **Metrics**: UCXL operation latency and success rates -- **Alerting**: Failed address resolution and publishing errors -- **Health Checks**: Context store health and replication status -- **Usage Analytics**: Popular address patterns and access patterns - -## 8. Migration Strategy - -### 8.1 Backward Compatibility -- **Translation Layer**: Convert `bzzz://` addresses to UCXL format -- **Gradual Migration**: Support both protocols during transition -- **Data Migration**: Convert existing task data to UCXL context format -- **Agent Updates**: Staged rollout of UCXL-enabled agents - -### 8.2 Deployment Strategy -- **Blue/Green Deployment**: Maintain v1 while deploying v2 -- **Feature Flags**: Enable UCXL features incrementally -- **Monitoring**: Comprehensive monitoring during migration -- **Rollback Plan**: Ability to revert to v1 if needed - -## 9. Success Criteria - -### 9.1 Functional Requirements -- [ ] UCXL address parsing and validation -- [ ] Temporal navigation (`~~`, `^^`, `*^`, `*~`) -- [ ] Decision node publishing to SLURP -- [ ] P2P context resolution via DHT -- [ ] Agent integration with MCP UCXI tools - -### 9.2 Performance Requirements -- [ ] Address resolution < 100ms for cached contexts -- [ ] Decision publishing < 5s end-to-end -- [ ] Support for 1000+ concurrent context operations -- [ ] Temporal navigation < 50ms for recent contexts - -### 9.3 Integration Requirements -- [ ] SLURP context ingestion working -- [ ] WHOOSH search integration functional -- [ ] UCXL Validator integration complete -- [ ] UCXL Browser can navigate BZZZ contexts - -## 10. Documentation & Training - -### 10.1 Technical Documentation -- UCXL/UCXI API reference -- Agent integration guide -- Context publishing best practices -- Temporal navigation patterns - -### 10.2 Operational Documentation -- Deployment and configuration guide -- Monitoring and alerting setup -- Troubleshooting common issues -- Performance tuning guidelines - -This development plan transforms BZZZ from a simple task coordination system into a sophisticated semantic context publishing platform that aligns with the UCXL ecosystem vision while maintaining its distributed P2P architecture and integration with the broader CHORUS infrastructure. \ No newline at end of file diff --git a/old-docs/DEPLOYMENT.md b/old-docs/DEPLOYMENT.md deleted file mode 100644 index 17f1768d..00000000 --- a/old-docs/DEPLOYMENT.md +++ /dev/null @@ -1,245 +0,0 @@ -# Bzzz P2P Service Deployment Guide - -This document provides detailed instructions for deploying Bzzz as a production systemd service across multiple nodes. - -## Overview - -Bzzz has been successfully deployed as a systemd service across the deepblackcloud cluster, providing: -- Automatic startup on boot -- Automatic restart on failure -- Centralized logging via systemd journal -- Security sandboxing and resource limits -- Full mesh P2P network connectivity - -## Installation Steps - -### 1. 
Build Binary - -```bash -cd /home/tony/chorus/project-queues/active/BZZZ -go build -o bzzz -``` - -### 2. Install Service - -```bash -# Install as systemd service (requires sudo) -sudo ./install-service.sh -``` - -The installation script: -- Makes the binary executable -- Copies service file to `/etc/systemd/system/bzzz.service` -- Reloads systemd daemon -- Enables auto-start on boot -- Starts the service immediately - -### 3. Verify Installation - -```bash -# Check service status -sudo systemctl status bzzz - -# View recent logs -sudo journalctl -u bzzz -n 20 - -# Follow live logs -sudo journalctl -u bzzz -f -``` - -## Current Deployment Status - -### Cluster Overview - -| Node | IP Address | Service Status | Node ID | Connected Peers | -|------|------------|----------------|---------|-----------------| -| **WALNUT** | 192.168.1.27 | ✅ Active | `12D3KooWEeVXdHkXtUp2ewzdqD56gDJCCuMGNAqoJrJ7CKaXHoUh` | 3 peers | -| **IRONWOOD** | 192.168.1.113 | ✅ Active | `12D3KooWFBSR...8QbiTa` | 3 peers | -| **ACACIA** | 192.168.1.xxx | ✅ Active | `12D3KooWE6c...Q9YSYt` | 3 peers | - -### Network Connectivity - -Full mesh P2P network established: - -``` - WALNUT (aXHoUh) - ↕ ↕ - ↙ ↘ -IRONWOOD ←→ ACACIA -(8QbiTa) (Q9YSYt) -``` - -- All nodes automatically discovered via mDNS -- Bidirectional connections established -- Capability broadcasts exchanged every 30 seconds -- Ready for distributed task coordination - -## Service Management - -### Basic Commands - -```bash -# Start service -sudo systemctl start bzzz - -# Stop service -sudo systemctl stop bzzz - -# Restart service -sudo systemctl restart bzzz - -# Check status -sudo systemctl status bzzz - -# Enable auto-start (already enabled) -sudo systemctl enable bzzz - -# Disable auto-start -sudo systemctl disable bzzz -``` - -### Logging - -```bash -# View recent logs -sudo journalctl -u bzzz -n 50 - -# Follow live logs -sudo journalctl -u bzzz -f - -# View logs from specific time -sudo journalctl -u bzzz --since "2025-07-12 19:00:00" - -# View logs with specific priority -sudo journalctl -u bzzz -p info -``` - -### Troubleshooting - -```bash -# Check if service is running -sudo systemctl is-active bzzz - -# Check if service is enabled -sudo systemctl is-enabled bzzz - -# View service configuration -sudo systemctl cat bzzz - -# Reload service configuration (after editing service file) -sudo systemctl daemon-reload -sudo systemctl restart bzzz -``` - -## Service Configuration - -### Service File Location - -`/etc/systemd/system/bzzz.service` - -### Key Configuration Settings - -- **Type**: `simple` - Standard foreground service -- **User/Group**: `tony:tony` - Runs as non-root user -- **Working Directory**: `/home/tony/chorus/project-queues/active/BZZZ` -- **Restart Policy**: `always` with 10-second delay -- **Timeout**: 30-second graceful stop timeout - -### Security Settings - -- **NoNewPrivileges**: Prevents privilege escalation -- **PrivateTmp**: Isolated temporary directory -- **ProtectSystem**: Read-only system directories -- **ProtectHome**: Limited home directory access - -### Resource Limits - -- **File Descriptors**: 65,536 (for P2P connections) -- **Processes**: 4,096 (for Go runtime) - -## Network Configuration - -### Port Usage - -Bzzz automatically selects available ports for P2P communication: -- TCP ports in ephemeral range (32768-65535) -- IPv4 and IPv6 support -- Automatic port discovery and sharing via mDNS - -### Firewall Considerations - -For production deployments: -- Allow inbound TCP connections on used ports -- Allow UDP port 5353 for 
mDNS discovery -- Consider restricting to local network (192.168.1.0/24) - -### mDNS Discovery - -- Service Tag: `bzzz-peer-discovery` -- Network Scope: `192.168.1.0/24` -- Discovery Interval: Continuous background scanning - -## Monitoring and Maintenance - -### Health Checks - -```bash -# Check P2P connectivity -sudo journalctl -u bzzz | grep "Connected to" - -# Monitor capability broadcasts -sudo journalctl -u bzzz | grep "capability_broadcast" - -# Check for errors -sudo journalctl -u bzzz -p err -``` - -### Performance Monitoring - -```bash -# Resource usage -sudo systemctl status bzzz - -# Memory usage -ps aux | grep bzzz - -# Network connections -sudo netstat -tulpn | grep bzzz -``` - -### Maintenance Tasks - -1. **Log Rotation**: Systemd handles log rotation automatically -2. **Service Updates**: Stop service, replace binary, restart -3. **Configuration Changes**: Edit service file, reload systemd, restart - -## Uninstalling - -To remove the service: - -```bash -sudo ./uninstall-service.sh -``` - -This will: -- Stop the service if running -- Disable auto-start -- Remove service file -- Reload systemd daemon -- Reset any failed states - -Note: Binary and project files remain intact. - -## Deployment Timeline - -- **2025-07-12 19:46**: WALNUT service installed and started -- **2025-07-12 19:49**: IRONWOOD service installed and started -- **2025-07-12 19:49**: ACACIA service installed and started -- **2025-07-12 19:50**: Full mesh network established (3 nodes) - -## Next Steps - -1. **Integration**: Connect with Hive task coordination system -2. **Monitoring**: Set up centralized monitoring dashboard -3. **Scaling**: Add additional nodes to expand P2P mesh -4. **Task Execution**: Implement actual task processing workflows \ No newline at end of file diff --git a/old-docs/FUTURE_DEVELOPMENT.md b/old-docs/FUTURE_DEVELOPMENT.md deleted file mode 100644 index 5ff13923..00000000 --- a/old-docs/FUTURE_DEVELOPMENT.md +++ /dev/null @@ -1,3532 +0,0 @@ -Exactly — swapping the raw IP for a **semantic addressing scheme** like - -``` -bzzz://agent:role@project:task/path/to/context -``` - -makes a *lot* more sense in a P2P + SLURP/BZZZ world. -Here’s why: - ---- - -## **Why Semantic > IP Address** - -1. **Location-independent** — you don’t care *where* the agent physically is, only *what it is* and *what it’s doing*. -2. **Self-describing** — anyone reading the URI can immediately infer: - - * **Agent** → identity (human, bot, service) - * **Role** → function or responsibility - * **Project** → higher-level grouping - * **Task** → sub-scope within project - * **Path** → hierarchical context location (HCFS path) -3. **Resilient to churn** — if a node changes IP, it can still announce under the same project/task space. -4. **Supports dynamic discovery** — peers subscribe to project/task channels, so they automatically learn new agents or context without knowing network topology. - ---- - -## **How This Works in a P2P Mesh** - -* **Announce**: - - ``` - bzzz announce bzzz://alice:frontend@chorus:ui-redesign/src/components/navbar - ``` - - → Published over the BZZZ gossip layer. - -* **Lookup**: - Any peer can query: - - ``` - bzzz get bzzz://any:frontend@chorus:ui-redesign/src/components - ``` - - The mesh routes to the *closest* peer with that context (by latency or chunk availability). - -* **Routing layer**: - Under the hood, `project:task` maps to a *swarm namespace* (think topic in a pub/sub system). - The actual transport (IP, QUIC, etc.) is abstracted away — only the routing fabric knows it. 
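
To make the addressing concrete, here is a minimal Python sketch of how a peer might split a `bzzz://` address into its semantic components and derive the pub/sub namespace for `project:task`. The helper names (`parse_bzzz_uri`, `topic_for`) and the exact character classes are assumptions drawn from the grammar and the `bzzz.project.chorus.task.website-redesign` topic example later in this document, not an existing BZZZ API.

```python
import re

# Assumed pattern based on the bzzz://agent:role@project:task/path scheme above.
BZZZ_URI = re.compile(
    r"^bzzz://(?P<agent>[A-Za-z0-9_.-]+):(?P<role>[A-Za-z0-9_.-]+)"
    r"@(?P<project>[A-Za-z0-9_.-]+):(?P<task>[A-Za-z0-9_.-]+)"
    r"(?P<path>/[^?#]*)?$"
)

def parse_bzzz_uri(uri: str) -> dict:
    """Split a bzzz:// address into its semantic components."""
    match = BZZZ_URI.match(uri)
    if not match:
        raise ValueError(f"not a valid bzzz:// address: {uri}")
    parts = match.groupdict()
    parts["path"] = parts["path"] or "/"
    return parts

def topic_for(project: str, task: str) -> str:
    """Map project:task onto the swarm/pub-sub namespace described above."""
    return f"bzzz.project.{project.lower()}.task.{task.lower()}"

if __name__ == "__main__":
    addr = parse_bzzz_uri(
        "bzzz://alice:frontend@chorus:ui-redesign/src/components/navbar"
    )
    print(addr)
    print(topic_for(addr["project"], addr["task"]))
    # -> bzzz.project.chorus.task.ui-redesign
```

Note that only the `project:task` pair feeds the routing namespace; the agent, role, and path travel with the request so the receiving curator can apply its own filtering.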
- ---- - -## **Bonus Benefits** - -* **Easier onboarding** — no need to teach new nodes about IP addresses or ports; they just announce with semantic URIs. -* **Cross-project federation** — the same agent could appear in multiple project\:task spaces without changing network plumbing. -* **Natural fit for role-based SLURP filtering** — your role & project are literally in the address, so the curator knows instantly what context you can see. - ---- - -If you like, I can **sketch a lightweight `bzzz://` URI grammar & resolution spec** that defines: - -* Allowed characters & formats for `agent`, `role`, `project`, `task`. -* How the mesh resolves them to actual peers. -* How caching & replication work. -* How this plugs into SLURP’s permission and event system. - -That would make it concrete enough for you to start building BZZZ announce/lookup commands without locking into IP-based thinking. - - - -Nice—this is exactly the right direction. Treating `bzzz://` URIs as **semantic resource addresses** and supporting REST-like verbs (GET/POST/PUT/DELETE) mapped onto the P2P mesh and your HCFS/SLURP stack gives you a clean, powerful, and auditable control surface. - -Below I’ll give you a focused, implementable **protocol spec**: URI grammar, verb semantics, payload shapes for event-driven POSTs, example flows, security rules, idempotency/versioning, and how SLURP/HCFS should process these messages (and log them to the hyperlog / `context_events` table). You can iterate from this into code, CLI commands, or a lightweight RFC for your team. - ---- - -# bzzz:// Protocol — Practical Spec (draft) - -## 1 — URI grammar (informal) - -``` -bzzz://[agent]:[role]@[project]:[task]/[path...][?query][#fragment] -``` - -* `agent` — identifier for the emitter (human, service, bot). \[a–zA–Z0–9\_.-] -* `role` — semantic role (frontend, devops, finance, curator). \[a–zA–Z0–9\_-] -* `project` — project namespace. \[a–zA–Z0–9\_-] -* `task` — task or subproject identifier. \[a–zA–Z0–9\_-] -* `/path/to/context` — HCFS-style hierarchical path; components may include `/` separators. -* `?query` — optional parameters (e.g., `depth=2&version=7`) -* `#fragment` — optional fragment inside blob - -**Example** - -``` -bzzz://alice:frontend@chorus:website-redesign/src/modules/db/connection -``` - ---- - -## 2 — Basic verbs & semantics - -Map familiar REST verbs to P2P actions: - -### `GET bzzz://...` - -* **Intent:** Retrieve an assembled context packet for `role/channel/path` or to fetch a resource (blob, metadata). -* **Behavior:** - - * SLURP checks permissions -> resolves path -> returns composed context (with metadata). - * May consult LightRAG and TGN for ranking/drift warnings if requested (`?use_lightrag=true`). -* **Response:** `200` with `{ context, metadata, source_blob_ids, drift_score, events_considered }` -* **Cacheable:** yes (subject to freshness flags) - -### `POST bzzz://...` (create / event submit) - -* **Intent:** Submit a new **event** or **context blob**. For events, payload carries `message-type` (event\_type) and semantics. -* **Behavior:** - - * Receives payload (see event payload schema below). - * Validate sender (signature/ACL). - * Insert `context_events` and/or `context_blobs` in HCFS. - * Publish announcement on mesh pubsub for `project:task` swarm. - * SLURP curators listening see the event and may act (re-curation, revocation, quarantine). -* **Idempotency:** client should include `idempotency_key` (UUID) to avoid duplicates. 
-* **Response:** `201` with `event_id`, `hyperlog_ref`, and `applied=true|queued` depending on processing. - -### `PUT bzzz://...` (update) - -* **Intent:** Update an existing resource (e.g., revise an event or blob). -* **Behavior:** require `resource_id` or `version` in query; insert a new version or update mutable metadata. Log versioning. -* **Response:** `200` with new `version_id`, `hyperlog_ref`. - -### `DELETE bzzz://...` - -* **Intent:** Soft-delete or mark obsolete (rarely hard-delete — prefer soft deletes for audit). -* **Behavior:** mark `end_time` on event or set `deleted=true` in blob metadata; announce to mesh. -* **Response:** `200` with `deleted=true` and hyperlog ref. - -### `SUBSCRIBE / UNSUBSCRIBE` (pubsub) - -* Semantic action for live channels: `bzzz sub bzzz://...` subscribes to notifications/announcements on the project\:task namespace. - ---- - -## 3 — Event POST payload (JSON) - -This is the general structure for posting dynamic events (blockers, warnings, approvals, etc.): - -```json -{ - "idempotency_key": "uuid-v4", - "message_type": "blocker", // enum: announcement|warning|blocker|approval|priority_change|access_update|structural_change - "title": "Payment module under audit", - "content": "Do not deploy or change DB schemas until audit complete.", - "path": "/src/modules/payments", - "scope": { "project": "chorus", "task": "payments-launch" }, - "severity": 9, // 1-10 - "visibility": { "roles": ["frontend","backend"], "channels": ["dev"], "exclude": [] }, - "start_time": "2025-08-10T10:00:00Z", - "end_time": null, - "created_by": "alice", - "signature": "base64(sig)", // optional but recommended - "metadata": { "ticket": "JIRA-1234", "owner": "pm_team" } -} -``` - -Notes: - -* `message_type` maps to `event_type` in DB. -* `visibility` indicates role/channel scoping — SLURP uses this to decide who sees it. -* `signature` signed with agent private key over canonicalized JSON to ensure authenticity. - ---- - -## 4 — Example flows - -### A. Developer posts a blocker (your example) - -**Command** - -``` -bzzz post bzzz://any:developer@chorus:website-redesign/db/connection/credentials \ -'{"message_type":"blocker","title":"Hardcoded DB creds","content":"Remove credentials; rotate keys","path":"/src/modules/db/connection","visibility":{"roles":["devops","backend"]}}' -``` - -**Processing** - -1. Node receives POST, validates `idempotency_key` and `signature`. -2. Writes event into `context_events` (DB) and creates a `graph_node` + `graph_edge` (event node, linked to path node). -3. Publishes `bzzz.announce` on pubsub for `chorus:website-redesign`. -4. SLURP instances subscribed to that project\:task see the event → decide action: - - * If `severity >= 8` and `visibility` includes `devops`, SLURP may quarantine affected blob(s), notify WHOOSH to run remediation ansible playbook, and log a hyperlog decision. -5. Response includes `event_id`, `hyperlog_ref` and `applied:true`. - -### B. Agent requests context and gets override - -**Command** - -``` -bzzz get bzzz://bob:frontend@chorus:website-redesign/src/modules/db/connection?use_lightrag=true -``` - -**Processing** - -1. SLURP inspects active events on `/src/modules/db/connection` and parent paths. Finds `blocker`. -2. Depending on project event\_type\_definitions, SLURP either: - - * **suppress** normal blobs and return only blocker note, or - * **annotate** returned context with the blocker at top -3. If allowed, SLURP calls LightRAG to re-rank candidate blobs (restricted to those the requester may see). -4. 
SLURP returns composed payload and logs to hyperlog. - ---- - -## 5 — Mapping to HCFS / SLURP / TGN storage - -When a POST is accepted: - -* Insert into `context_events`: - - * `(path, event_type, content, severity, start_time, end_time, created_by, project_id)` -* Create `graph_nodes` entry for the event and `graph_edges` linking event→path and created\_by→agent. -* Append a hyperlog entry (hypercore) that records the event and SLURP's decision. -* Publish mesh announcement: `bzzz:announce` topic with minimal envelope: - -```json -{ "type":"event_announce", "project":"chorus", "task":"website-redesign", "event_id": 123, "path": "/src/...", "message_type":"blocker" } -``` - -TGN ingestion: event becomes a temporal edge for the TGN model (src: agent node, dst: path or blob node, event\_type, timestamp). - ---- - -## 6 — Security & Permissions - -* **Authentication:** Agents hold keypairs. Use Ed25519 signatures. When posting, include `signature` and `public_key_id`. Validate against ACLs in `graph_nodes` or `agents` table. -* **Authorization:** SLURP enforces `visibility` and `role` constraints. If a posting agent tries to create an event affecting roles they don't control, SLURP may mark `status=pending` and route to human approver. -* **Encryption:** Use layer transport TLS/QUIC, and optionally encrypt payloads for specific roles using symmetric keys shared by the project’s team. -* **Auditability:** Never hard-delete events; always append an "end\_time" or tombstone entry. Hyperlog retains immutability. - ---- - -## 7 — Idempotency, versioning & conflict handling - -* **Idempotency key** (client-generated UUID) ensures repeated POST attempts don't create duplicates. -* **Versioning:** context blobs and events are immutable records with explicit versions. `PUT` creates a new version rather than in-place mutation (or marks prior version end\_time). -* **Conflict resolution:** for concurrent edits, use last-writer-wins by timestamp + signature OR a merge flow managed by SLURP/curator (create merge candidates, present to humans or meta-curator). - ---- - -## 8 — PubSub & discovery model - -* **Topic namespace:** `bzzz.project..task.` (or `bzzz.project.chorus.task.website-redesign`) -* **Announcements:** All writes publish lightweight announcements so subscribed peers discover new events quickly without fetching full payload. -* **Pull model:** on announcement, peers decide whether to `GET` the full event or ignore. -* **Replication:** `bzzz` nodes will replicate blobs via content-addressing (hashes) and chunked streaming (libp2p style), storing locally per retention policy. - ---- - -## 9 — Implementation notes / pseudocode - -### POST handler pseudocode (simplified) - -```python -def handle_post(uri, body, signature): - parsed = parse_bzzz_uri(uri) - assert validate_signature(body, signature, parsed.agent) - - # 1. Validate message_type and visibility - if not allowed_to_post(parsed.agent, parsed.project, body['path'], body['message_type']): - return 403 - - # 2. Check idempotency - if exists_event_with_idempotency_key(body.get('idempotency_key')): - return existing_event - - # 3. Insert into DB - event_id = insert_context_event(body, parsed) - - # 4. Create graph node + edges - node_id = create_graph_node('event', ref_id=event_id) - create_graph_edge(agent_node(parsed.agent), node_id, 'created') - - # 5. Append hyperlog entry (immutable) - hyperlog_ref = hyperlog.append({...}) - - # 6. 
Publish announcement - pubsub.publish(topic_for(parsed.project, parsed.task), { 'type': 'event_announce', 'event_id': event_id }) - - # 7. Return success - return { 'event_id': event_id, 'hyperlog_ref': hyperlog_ref } -``` - -### GET handler pseudocode (simplified) - -```python -def handle_get(uri, params): - parsed = parse_bzzz_uri(uri) - # 1. AuthN & AuthZ - ensure agent role can read requested path - check_read_permission(parsed.agent, parsed.role, parsed.path) - - # 2. Check context_events for overrides - events = find_active_events(parsed.path) - action = resolve_action_based_on_project_definitions(parsed.project, events, parsed.role) - if action == 'suppress': - return format_response_with_events_only(events) - - # 3. Query HCFS for blobs matching path & role/channel - blobs = fetch_context_blobs(parsed.path, parsed.role, parsed.channel) - if params.use_lightrag and len(blobs) >= threshold: - lrag_res = lightrag.query(restrict_ids=[b.lightrag_doc_id for b in blobs]) - chosen = lrag_res.chunks - else: - chosen = blobs - - # 4. TGN check for drift - drift = tgn.get_drift_score(parsed.path) - - # 5. Compose response, log decision - resp = compose_context_packet(chosen, events, drift) - hyperlog.append({ 'request':uri, 'decision':..., 'returned_ids': [...] }) - return resp -``` - ---- - -## 10 — Operational / UX considerations - -* **CLI tool**: `bzzz post ...`, `bzzz get ...`, `bzzz sub ...` — wraps canonical signing and pubsub convenience. -* **Human review queue**: if a POST creates a high-risk action (e.g., revoke access), route to dashboard for a human to approve. -* **Rate limits & throttles** on post events to avoid spam floods. -* **Retention policy** for replicated model blobs and context blobs (LRU or TTL per project). - ---- - -## 11 — How SLURP & WHOOSH interact with this protocol - -* **SLURP** subscribes to `bzzz` pubsub topics for the projects it curates. On announcements it: - - * pulls event payload - * evaluates via project `event_type_definitions` - * takes actions (suppress, annotate, notify, route to WHOOSH) - * logs decisions to hyperlog - -* **WHOOSH** (automation/orchestration) listens for high-severity `blocker` or `access_update` events: - - * runs Ansible playbooks (install bzzz client, run `ollama pull` fallback, or trigger model replication). - * on success/failure, posts back a status event (POST `message_type: announcement` with metadata). - ---- - -## 12 — Next concrete deliverables I can give you right away - -Pick one and I’ll produce it: - -* **A. Formal `bzzz://` URI BNF grammar + canonical JSON schema** for GET/POST/PUT/DELETE payloads (suitable for adding to a spec doc). -* **B. CLI design & sample implementation** (Python) with signing, pubsub announce, and hyperlog append. -* **C. Fast prototyping plan** using libp2p (py-libp2p or go-libp2p) + a tiny HTTP shim (so `bzzz` URIs are handled by a local daemon that maps to libp2p calls). -* **D. Event handling flow doc** showing how SLURP processes `POST` events end-to-end (with exact DB writes and hyperlog entries). - - -Nice — I’ll give you all four, structured and ready to drop into your docs or hand to an engineer. I’ll keep each section practical and implementation-ready: **A** (formal grammar + payload schemas), **B** (CLI spec + Python stub), **C** (fast prototype plan using libp2p + HTTP shim), **D** (end-to-end SLURP event handling flow with DB + hyperlog details). 
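
Before diving into the four deliverables, here is a minimal sketch of the subscriber side described in sections 8 and 11: a WHOOSH-style listener that watches a `project:task` topic for `event_announce` envelopes and pulls the full event only when it looks actionable. The `fetch_event` and `run_playbook` callables are placeholders for whatever transport and automation layer is actually used, and the severity threshold of 8 simply mirrors the blocker rule in example flow A.

```python
from typing import Callable

SEVERITY_THRESHOLD = 8  # mirrors the "severity >= 8" blocker rule in example flow A

def handle_announce(envelope: dict,
                    fetch_event: Callable[[int], dict],
                    run_playbook: Callable[[str, dict], None]) -> None:
    """React to a lightweight 'event_announce' published on a project:task topic."""
    if envelope.get("type") != "event_announce":
        return  # ignore capability broadcasts and other traffic on the topic
    event = fetch_event(envelope["event_id"])  # pull the full payload on demand
    if event["message_type"] == "blocker" and event.get("severity", 0) >= SEVERITY_THRESHOLD:
        # High-severity blocker: hand off to automation (e.g. a remediation playbook)
        run_playbook("quarantine-context", {
            "path": event["path"],
            "project": envelope["project"],
        })

# Example usage against the announcement envelope shown in section 5:
# handle_announce(
#     {"type": "event_announce", "project": "chorus", "task": "website-redesign",
#      "event_id": 123, "path": "/src/modules/db/connection", "message_type": "blocker"},
#     fetch_event=my_fetch, run_playbook=my_runner)
```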
- ---- - -## A — `bzzz://` URI Grammar (BNF) + JSON Schemas - -### A.1 — BNF grammar (informal, precise) - -``` - ::= "bzzz" "://" ":" "@" ":" [ "?" ] [ "#" ] - - ::= ; [A-Za-z0-9_.-]+ - ::= ; [A-Za-z0-9_.-]+ - ::= ; [A-Za-z0-9_.-]+ - ::= ; [A-Za-z0-9_.-]+ - ::= "/" { "/" } - ::= [A-Za-z0-9_.-]+ ; allow dot, underscore, hyphen - ::= { "&" } - ::= "=" - ::= - -Example: -bzzz://alice:frontend@chorus:website-redesign/src/modules/db/connection?use_lightrag=true#v3 -``` - -**Notes:** - -* All IDs normalized to lowercase for routing. -* Percent-encoding allowed for unusual characters in path segments. -* `agent` may be `any` for anonymous or service-level posts, but authenticity must be validated via `signature` in payload. - ---- - -### A.2 — Canonical JSON Schemas - -All schemas in OpenAPI-style JSON (trimmed to essentials). Use `message_type` as event\_type enum. - -#### A.2.1 — `GET` Response (context packet) - -```json -{ - "type": "object", - "required": ["context", "metadata"], - "properties": { - "context": { "type": "string" }, - "metadata": { - "type": "object", - "properties": { - "source_blob_ids": { "type": "array", "items": { "type": "integer" } }, - "lightrag_doc_ids": { "type": "array", "items": { "type": "string" } }, - "events_considered": { "type": "array", "items": { "type": "integer" } }, - "drift_score": { "type": "number", "minimum": 0, "maximum": 1 }, - "timestamp": { "type": "string", "format": "date-time" } - } - } - } -} -``` - -#### A.2.2 — `POST` Event Body (create event / create blob) - -```json -{ - "type": "object", - "required": ["idempotency_key", "message_type", "title", "content", "path", "created_by"], - "properties": { - "idempotency_key": { "type": "string", "format": "uuid" }, - "message_type": { - "type": "string", - "enum": ["announcement","warning","blocker","approval","priority_change","access_update","structural_change"] - }, - "title": { "type": "string" }, - "content": { "type": "string" }, - "path": { "type": "string" }, - "scope": { - "type": "object", - "properties": { - "project": { "type": "string" }, - "task": { "type": "string" } - } - }, - "severity": { "type": "integer", "minimum": 1, "maximum": 10, "default": 5 }, - "visibility": { - "type": "object", - "properties": { - "roles": { "type": "array", "items": { "type": "string" } }, - "channels": { "type": "array", "items": { "type": "string" } }, - "exclude": { "type": "array", "items": { "type": "string" } } - } - }, - "start_time": { "type": "string", "format": "date-time" }, - "end_time": { "type": ["string","null"], "format": "date-time" }, - "created_by": { "type": "string" }, - "signature": { "type": "string" }, - "metadata": { "type": "object" } - } -} -``` - -#### A.2.3 — `PUT` Update (minor) - -```json -{ - "type":"object", - "required":["idempotency_key","resource_id","updates"], - "properties":{ - "idempotency_key":{"type":"string","format":"uuid"}, - "resource_id":{"type":"integer"}, - "updates":{"type":"object"}, - "created_by":{"type":"string"}, - "signature":{"type":"string"} - } -} -``` - -#### A.2.4 — `DELETE` Body - -Prefer soft-delete: - -```json -{ - "type":"object", - "required":["resource_id","created_by"], - "properties":{ - "resource_id":{"type":"integer"}, - "reason":{"type":"string"}, - "created_by":{"type":"string"}, - "signature":{"type":"string"} - } -} -``` - ---- - -## B — CLI Design & Python Sample Implementation (signing, pubsub announce, hyperlog append) - -### B.1 — CLI UX surface (commands) - -``` -bzzz post --file payload.json -bzzz get [--use-lightrag] 
-bzzz put <uri> --file update.json
-bzzz delete <uri> --file delete.json
-bzzz sub <project:task>                       # subscribe to project:task announcements
-bzzz announce <project:task> --meta '{...}'   # low-level announce
-```
-
-* All commands auto-sign with local key (unless `--no-sign`).
-* Default transport: local daemon via HTTP shim on `http://localhost:19000/` (see C).
-* CLI handles idempotency keys (generates UUID if not provided) and prints returned `event_id` + `hyperlog_ref`.
-
----
-
-### B.2 — Python CLI minimal stub (uses `requests` to local shim + Ed25519 signing)
-
-Save as `bzzz_cli.py`. This is a minimal, production-adjacent example — adapt to your codebase.
-
-```python
-#!/usr/bin/env python3
-import argparse, json, requests, os, sys, uuid
-from nacl.signing import SigningKey
-from nacl.encoding import Base64Encoder
-
-# Config
-SHIM_URL = os.getenv("BZZZ_SHIM_URL", "http://localhost:19000")
-
-# Load or create key
-KEY_PATH = os.path.expanduser("~/.bzzz/signing.key")
-if not os.path.exists(KEY_PATH):
-    os.makedirs(os.path.dirname(KEY_PATH), exist_ok=True)
-    sk = SigningKey.generate()
-    with open(KEY_PATH, "wb") as f:
-        f.write(sk.encode())
-else:
-    sk = SigningKey(open(KEY_PATH, "rb").read())
-
-pk_b64 = sk.verify_key.encode(encoder=Base64Encoder).decode()
-
-def sign_payload(payload_bytes):
-    sig = sk.sign(payload_bytes).signature
-    return Base64Encoder.encode(sig).decode()
-
-def post(uri, payload):
-    body = payload.copy()
-    if "idempotency_key" not in body:
-        body["idempotency_key"] = str(uuid.uuid4())
-    body_bytes = json.dumps(body, separators=(",", ":"), sort_keys=True).encode()
-    signature = sign_payload(body_bytes)
-    headers = {
-        "Content-Type": "application/json",
-        "X-BZZZ-Signature": signature,
-        "X-BZZZ-PubKey": pk_b64
-    }
-    resp = requests.post(f"{SHIM_URL}/post", params={"uri": uri}, data=body_bytes, headers=headers, timeout=10)
-    resp.raise_for_status()
-    return resp.json()
-
-def get(uri, use_lightrag=False):
-    resp = requests.get(f"{SHIM_URL}/get", params={"uri": uri, "use_lightrag": str(use_lightrag).lower()}, timeout=10)
-    resp.raise_for_status()
-    return resp.json()
-
-def main():
-    p = argparse.ArgumentParser()
-    sub = p.add_subparsers(dest="cmd")
-    p_post = sub.add_parser("post"); p_post.add_argument("uri"); p_post.add_argument("--file")
-    p_get = sub.add_parser("get"); p_get.add_argument("uri"); p_get.add_argument("--use-lightrag", action="store_true")
-    args = p.parse_args()
-
-    if args.cmd == "post":
-        if args.file:
-            payload = json.load(open(args.file))
-        else:
-            payload = {}
-        res = post(args.uri, payload)
-        print(json.dumps(res, indent=2))
-    elif args.cmd == "get":
-        res = get(args.uri, use_lightrag=args.use_lightrag)
-        print(json.dumps(res, indent=2))
-    else:
-        p.print_help()
-        sys.exit(1)
-
-if __name__ == "__main__":
-    main()
-```
-
-**Notes:**
-
-* Uses `pynacl` (`libsodium` binding) for Ed25519 signing.
-* `SHIM_URL` talks to local daemon (C) which maps to real P2P calls and does hyperlog append.
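-
-For symmetry with the shim behaviour described in B.3, here is a minimal sketch of the verification the daemon performs on `X-BZZZ-Signature` / `X-BZZZ-PubKey`. It assumes a Python shim using `pynacl` and the same canonical form as the client above (compact separators, sorted keys); the helper name is illustrative, not part of the spec.
-
-```python
-from nacl.signing import VerifyKey
-from nacl.encoding import Base64Encoder
-from nacl.exceptions import BadSignatureError
-
-def verify_bzzz_request(body_bytes: bytes, sig_b64: str, pubkey_b64: str) -> bool:
-    """Return True if sig_b64 is a valid Ed25519 signature of body_bytes by pubkey_b64."""
-    try:
-        vk = VerifyKey(pubkey_b64.encode(), encoder=Base64Encoder)
-        vk.verify(body_bytes, Base64Encoder.decode(sig_b64.encode()))
-        return True
-    except (BadSignatureError, ValueError):
-        return False
-```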
-
----
-
-### B.3 — Hyperlog append API (local shim expectations)
-
-Local shim must expose endpoints:
-
-```
-POST /post?uri=<uri>              # same as bzzz post
-GET /get?uri=<uri>&use_lightrag=true
-POST /announce                    # lightweight announcement
-```
-
-On incoming POST, shim will:
-
-* verify signature using `X-BZZZ-PubKey`
-* forward to cluster (local node or forward to appropriate peer)
-* append event to hyperlog (local feed) and return `{"event_id":123, "hyperlog_ref":"feed:345"}`
-
----
-
-## C — Fast prototype plan: libp2p + HTTP shim
-
-Goal: ship a working prototype in days, not months. Use libp2p for P2P transport and a local HTTP daemon to map `bzzz://` calls to libp2p operations. Keep implementation minimal:
-
-### C.1 — Components
-
-* **Local daemon** (Go or Python) — exposes HTTP endpoints (`/post`, `/get`, `/announce`, `/sub`) and runs the libp2p node.
-* **libp2p peer** — provides pubsub, DHT, and direct stream capabilities.
-* **Hyperlog** — use `hypercore` or a Mongo/SQLite append-only table (for prototype, use a local append-only SQLite table or simple file-based log). Hypercore JS is an option if you prefer Node.
-* **Discovery** — mDNS for LAN + optional bootstrap peers for multi-network.
-
-### C.2 — Minimal tech choices (fastest path)
-
-* **Go**: `go-libp2p` + `net/http` — performant, production-friendly.
-* **Python**: `py-libp2p` (less mature) or run a small `node` helper for libp2p. If you want speed, use Go for the shim.
-* **DB**: SQLite for prototype (later Postgres).
-* **LightRAG**: separate service (optional); initial prototype: skip LightRAG or stub it.
-
-### C.3 — Minimal runtime flows
-
-#### Startup
-
-* Node boots libp2p, joins the DHT, and subscribes to the `bzzz.project.<project>.task.<task>` topics configured in a local file.
-* Node exposes HTTP on `localhost:19000`.
-
-#### POST
-
-* HTTP shim receives POST -> validate signature -> write event into local hyperlog -> publish `event_announce` on topic -> attempt to store in HCFS DB or forward to known curator nodes.
-
-#### GET
-
-* Shim resolves request: check local HCFS DB -> if not present, query DHT for peers with `path` -> open libp2p stream to peer(s) and retrieve blob -> run SLURP logic (basic ACLs) -> return composed packet.
-
-#### SUBSCRIBE
-
-* Shim subscribes to pubsub topic and emits events on local websocket to CLI or UI.
-
-### C.4 — Development tasks (order)
-
-1. Implement HTTP shim skeleton with `/post` and `/get` routes.
-2. Wire libp2p pubsub announce & subscribe.
-3. Implement a simple hyperlog (append-only SQLite table) and hyperlog append/read.
-4. Implement signature verification (Ed25519).
-5. Implement simple SLURP rules engine (role/channel/path ACL check + event override logic).
-6. Hook in LightRAG later as a separate service (call via HTTP).
-7. Add `bzzz` CLI that calls shim (B).
-
-### C.5 — Security / dev ops notes
-
-* For local network dev, mDNS + DHT is fine; for production, bootstrap peers + TLS + persistent peer lists.
-* Run shim as a systemd service and enable socket activation if desired.
-
----
-
-## D — Event Handling Flow: SLURP processing `POST` events (end-to-end)
-
-I’ll describe concrete DB writes, hyperlog entries, pubsub announces, curator actions, and WHOOSH triggers. Use this as the implementation recipe.
-
-### D.1 — Inputs
-
-* HTTP POST: `bzzz://alice:dev@chorus:website-redesign/src/modules/db/connection` with JSON `body` (see A.2.2), headers:
-
-  * `X-BZZZ-Signature`
-  * `X-BZZZ-PubKey`
-
-### D.2 — Validation & initial checks
-
-1. 
**Signature verification**: verify `body` canonical JSON signature against `X-BZZZ-PubKey`. If fail → 401. -2. **Idempotency**: check `idempotency_key` in `context_events` table. If exists → return existing event record (HTTP 200). -3. **Authorization**: check posting agent (`alice`) is permitted to post events in `project` context (`chorus`) — consult `agents` table or `graph_nodes` ACL. If not permitted → create `pending_event` with status `pending_approval`, notify human admin, return 202. - -### D.3 — Accepted path (authorized & new) - -4. **Persist `context_events`**: - -```sql -INSERT INTO context_events - (path, event_type, content, severity, start_time, end_time, created_by, project_id) -VALUES (?, ?, ?, ?, ?, ?, ?, ?) -RETURNING id; -``` - -Return `event_id`. - -5. **Create graph node + edges**: - -* `graph_nodes` entry: - -```sql -INSERT INTO graph_nodes (node_type, ref_id, label) VALUES ('event', event_id, title) RETURNING id as node_id; -``` - -* `graph_edges` entries: - -```sql -INSERT INTO graph_edges (src_node, dst_node, edge_type) VALUES (agent_node_id, node_id, 'created'); -INSERT INTO graph_edges (src_node, dst_node, edge_type) VALUES (node_id, path_node_id, 'affects_path'); -``` - -(Where `agent_node_id` and `path_node_id` existence is resolved by lookup or created if missing.) - -6. **Append immutable hyperlog** (append-only) - -* Hyperlog entry JSON shape: - -```json -{ - "type": "context_event", - "event_id": 123, - "path": "/src/modules/db/connection", - "message_type": "blocker", - "created_by": "alice", - "timestamp": "2025-08-06T10:12:34Z", - "idempotency_key": "..." -} -``` - -* Hyperlog append function returns `hyperlog_ref = feed_id:offset`. Record that in `context_events.hyperlog_ref`. - -7. **Publish mesh announcement** (pubsub) - -* Topic: `bzzz.project.chorus.task.website-redesign` -* Envelope: - -```json -{ "type": "event_announce", "event_id": 123, "path": "/src/modules/...", "message_type": "blocker", "severity": 9 } -``` - -### D.4 — SLURP Curator reaction - -SLURP instances subscribe to the project/topic and run the Curator pipeline: - -1. **Receive announce** → fetch event full payload (if not included) via libp2p / HTTP shim `/get_event?id=123`. - -2. **Evaluate with `event_type_definitions`** for the project: - - * lookup `event_type_definitions` for `(project_id, message_type, role)` to find action (`suppress|annotate|notify|route`). - -3. **Decide**: - - * If `action == suppress` for certain roles: update `context_events` to set `applies_to_roles` and mark `affected_blobs`. - * If `action == notify`: enqueue notifications to relevant agents (via pubsub or webhooks). - * If `severity >= threshold` and `message_type == blocker`: - - * **Quarantine** blobs: set `quarantine = true` on `context_blobs` where `path LIKE '/src/modules/db/connection%'` and `sensitivity_level` threshold exceeded. - * Call WHOOSH to run remediation playbook if remediation mapping exists. - -4. **Log SLURP decision to hyperlog**: - -```json -{ - "type": "slurp_decision", - "event_id": 123, - "decision": "quarantine_blobs", - "affected_blob_ids": [42, 43], - "reason": "blocker severity 9 and project rule says suppress for role frontend", - "timestamp": "..." -} -``` - -5. **Emit post-actions** (if any): - - * `WHOOSH` run request: push an orchestration event into `automation_queue` table or pubsub topic `bzzz.automation.chorus.website-redesign`. 
-   * WHOOSH acknowledges and later posts an `automation_result` event (POST) with status; SLURP appends that to hyperlog and optionally updates `context_events` with a remediation timestamp.
-
-### D.5 — WHOOSH orchestration example
-
-If WHOOSH runs an Ansible playbook to rotate keys or replicate a model:
-
-* WHOOSH posts a follow-up event:
-
-```json
-{
-  "message_type": "announcement",
-  "title": "Remediation run: rotate_db_creds",
-  "content": "Ansible completed key rotation on hosts A,B,C. Affected tokens revoked.",
-  "path": "/src/modules/db/connection",
-  "created_by": "whoosh",
-  "metadata": {"ansible_run_id": "run-321", "status":"success"}
-}
-```
-
-* SLURP ingests it and updates `context_events` / hyperlog.
-
----
-
-## Final notes & recommended next steps
-
-**Deliverables prepared here are intended to be actionable**:
-
-* **A** is ready to copy into a protocol RFC or OpenAPI doc.
-* **B** is a runnable CLI stub you can expand: signing + HTTP shim integration work immediately.
-* **C** gives you a fast prototype roadmap; use Go + go-libp2p if you want production-grade speed and stability.
-* **D** is the operational recipe your engineering team needs to implement SLURP event handling with exact DB writes, graph updates, hyperlog entries, and WHOOSH orchestration hooks.
-
----
-
-### Want me to:
-
-* ✍️ Produce the **full OpenAPI YAML** for the shim endpoints (`/post`, `/get`, `/announce`) so you can autogen SDKs?
-* 🔧 Expand the Python shim example to include a tiny in-memory hyperlog and libp2p mock (so you can run an end-to-end demo locally)?
-* 🧩 Draft the Ansible playbook WHOOSH would call for a typical `blocker` remediation (e.g., rotate DB creds + update HCFS metadata)?
-
-
-
-The only thing I'm not convinced on is the need for query strings here.
-
-
-
-Nice — I’ll give you two concrete things first, then a robust design for replies + thread coherence that fits your P2P/hyperlog/TGN/SLURP stack.
-
-**Deliverables in this message**
-
-1. **Formal BNF grammar** for `bzzz://` (clean, canonical).
-2. **libp2p message schema** for the JSON envelope (canonical fields, headers, signing).
-3. **Threading & reply model** — how agents reply to messages, maintain conversation coherence, and how SLURP/WHOOSH use that for actions. Includes storage schema, ordering, conflict handling, replay rules, and security considerations.
-4. **Examples** (sample messages, reply flow, pseudo-code for client and SLURP handling).
-
----
-
-## 1) Formal BNF for `bzzz://` (canonical)
-
-```
-<bzzz-uri>   ::= "bzzz" "://" <agent> ":" <role> "@" <project> ":" <task> <path> [ "#" <fragment> ]
-
-<agent>      ::= <identifier>
-<role>       ::= <identifier>
-<project>    ::= <identifier>
-<task>       ::= <identifier>
-
-<path>       ::= "/" <segment> { "/" <segment> }
-<segment>    ::= <segchar> { <segchar> }
-<segchar>    ::= ALPHA | DIGIT | "-" | "_" | "."
-
-<fragment>   ::= <fragchar> { <fragchar> }
-<fragchar>   ::= ALPHA | DIGIT | "-" | "_" | "." | ":"
-
-<identifier> ::= <idchar> { <idchar> }
-<idchar>     ::= ALPHA | DIGIT | "-" | "_" | "."
-
-ALPHA ::= "A" | "B" | ... | "Z" | "a" | "b" | ... | "z"
-DIGIT ::= "0" | "1" | ... | "9"
-```
-
-**Notes**
-
-* URIs are case-insensitive for routing; normalized to lowercase for canonical forms.
-* Percent-encode anything outside allowed characters in `segment` / `fragment`.
-* `agent` can be `any` (for anonymous broadcast) but real agents should use stable Decentralized IDs (DIDs) or names mapped to public keys.
-
----
-
-## 2) libp2p Message Schema (JSON) — canonical envelope
-
-This is the payload structure transmitted over pubsub / direct streams. Keep it compact and signable.
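-
-As a quick companion to the grammar in section 1 (before getting into the envelope fields in 2.1), here is a minimal Python sketch that parses and normalizes a `bzzz://` URI. The regex and the `BzzzURI` helper are an illustrative reading of the BNF above, not a normative part of the spec.
-
-```python
-import re
-from typing import NamedTuple, Optional
-
-# One pattern per production: identifiers, path segments, optional fragment (which may contain ":").
-_BZZZ_RE = re.compile(
-    r"^bzzz://(?P<agent>[A-Za-z0-9_.-]+):(?P<role>[A-Za-z0-9_.-]+)"
-    r"@(?P<project>[A-Za-z0-9_.-]+):(?P<task>[A-Za-z0-9_.-]+)"
-    r"(?P<path>(?:/[A-Za-z0-9_.-]+)+)"
-    r"(?:#(?P<fragment>[A-Za-z0-9_.:-]+))?$"
-)
-
-class BzzzURI(NamedTuple):
-    agent: str
-    role: str
-    project: str
-    task: str
-    path: str
-    fragment: Optional[str]
-
-def parse_bzzz(uri: str) -> BzzzURI:
-    m = _BZZZ_RE.match(uri)
-    if not m:
-        raise ValueError(f"not a valid bzzz:// URI: {uri}")
-    d = m.groupdict()
-    # IDs are lowercased for routing, per the notes above; path is kept as written.
-    return BzzzURI(d["agent"].lower(), d["role"].lower(), d["project"].lower(),
-                   d["task"].lower(), d["path"], d["fragment"])
-
-# parse_bzzz("bzzz://alice:dev@chorus:website-redesign/src/modules/db/connection#v1")
-```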
- -### 2.1 — Top-level envelope (compact JSON) - -```json -{ - "hdr": { - "op": "POST", // GET, POST, PUT, DELETE, ACK, RESP - "uri": "bzzz://alice:dev@chorus:task/path#v1", - "msg_id": "uuid-v4", - "parent_id": "uuid-v4|null", // points to immediate parent message for threads - "convo_id": "uuid-v4", // conversation identifier (root msg id if absent) - "ts": "2025-08-06T15:02:00Z", // ISO8601 UTC - "lamport": 12345, // optional lamport counter - "ttl": 3600, // seconds; 0 = no-expire (use with care) - "priority": "normal", // low|normal|urgent - "agent_pubkey": "base64(pubkey)" // sender's public key for verification - }, - "meta": { - "event_type": "blocker", // optional, maps to event enums - "visibility": { - "roles": ["frontend","devops"], - "channels": ["devchat"], - "exclude": [] - }, - "project": "chorus", - "task": "website-redesign" - }, - "payload": { // free-form content, may include structured fields - "title": "Hardcoded DB creds", - "body": "Detected DB creds in repo. Rotate immediately.", - "attachments": [ - { "type": "blob-ref", "hash": "sha256:..." } - ], - "hints": { "use_lightrag": true } // optional operational hints - }, - "sig": "base64(signature)" // signature over canonical_hdr + payload -} -``` - -### 2.2 — Canonical signing - -* **What to sign:** canonicalized JSON of `hdr` + `meta` + `payload` in deterministic key order (e.g., RFC8785 or JCS canonical form). -* **Signature scheme:** Ed25519 (or COSE/COSE2 if you want richer structures). Include `agent_pubkey` so receiver can verify. -* **Idempotency:** clients generate `msg_id` UUID v4 and re-send on retry; server checks for duplicates. - -### 2.3 — Minimal ACK / Response envelope - -* For fast ACKs/minimal replies: - -```json -{ - "hdr": { "op": "ACK", "uri": "...", "msg_id": "ack-uuid", "parent_id": "original-msg-id", "convo_id":"...", "ts":"..." }, - "meta": { "status": "received" }, - "payload": { "note": "Queued for processing", "hyperlog_ref": "feed:123" }, - "sig": "..." -} -``` - ---- - -## 3) Threading & Reply Model — design - -Goals: - -* Keep thread structure explicit, replayable, verifiable. -* Support branching conversations, parallel replies, follow-ups. -* Allow SLURP and TGN to reason about conversation coherence and detect divergence. - -### 3.1 — Core primitives - -**msg\_id** — unique message identifier (UUID v4). -**parent\_id** — immediate parent message (null for root). -**convo\_id** — conversation identifier; for a new thread, `convo_id = msg_id` of root; replies inherit convo\_id. -**lamport** — optional logical counter to help ordering across peers. Each agent increments before sending; merge rules use lamport + ts. -**ts** — wall-clock timestamp in UTC (for human readability and approximate ordering). -**signature** — verifies authenticity & integrity. -**visibility** — roles/channels control who should see the message. - -### 3.2 — Thread coherence rules - -1. **Canonical thread root**: When starting a thread, `convo_id = msg_id`. All replies set `convo_id` to root id. This lets you quickly fetch the entire conversation: `SELECT * FROM messages WHERE convo_id = ORDER BY lamport, ts`. - -2. **Parent linkage**: `parent_id` forms a tree. Use this for fast tree traversal and reconstructing reply chains, branching points, and ancestors. - -3. **Ordering**: - - * **Preferred ordering**: `(lamport, ts)` ascending. Lamport gives causal order; ts helps human-ordered chronology. - * **Tie-breaker**: `msg_id` lexicographic order for total order. 
- * **Delivery ordering**: pubsub doesn’t guarantee ordering; receivers must reorder using lamport & parent relationships. - -4. **Causal consistency**: - - * When agent A replies to message M, it must set `parent_id = M.msg_id`, and ensure its `lamport` > lamport of M (if known). If unknown, agent may read latest lamport for M or use local lamport +1. - * If an agent receives reply R before M (possible in P2P), they store R in a pending table keyed by `parent_id` and once M arrives, link them. - -5. **Thread reconciliation on forks**: - - * Branching is expected (parallel replies). UI and SLURP should show branches; TGN can detect divergence if branches move on different actions. - * For conflicting "decision" messages (e.g., two approvals with different payloads), SLURP can apply deterministic conflict-resolution rules (highest priority event\_type, then latest lamport, then human escalation). Keep conflicts explicitly logged. - -6. **Message lifecycle**: - - * Messages are appended immutably to hyperlog. "Deletion" is a new tombstone message (`op=DELETE` with parent pointing to resource) — append-only history preserved. - * Edit flows: editing a message results in a new `PUT` message with `parent_id` pointing to the original; store versions. - -### 3.3 — Conversation metadata & summarization - -SLURP maintains per-conversation metadata in DB: - -* `convo_id`, `root_msg_id`, `participants` (agent list), `last_activity_ts`, `state` (open/closed/escalated), `drift_score`, `summary_blob_id`. - -On receiving messages, SLURP: - -* Updates convo `last_activity_ts`. -* Recomputes a short summary every N messages or if event\_type == decision/blocker. -* Stores the summary (and TGN embedding) for quick retrieval. - -TGN uses message edges as temporal edges: - -* Node types: message nodes, agent nodes, path nodes. -* Edges: `reply_to`, `authored_by`, `affects_path`, with timestamps. -* TGN embeddings help cluster threads, detect drift, and propose thread summarization. - -### 3.4 — Subscriptions & delivery guarantees - -* Pubsub topics are per `project:task` swarm. Consumers subscribe to `bzzz.project..task.`. -* Message delivery: pubsub announce includes the `hdr` and optionally small payload. For large payloads, announce contains `blob_ref` (content-addressed hash) and peers fetch via direct stream. -* Clients maintain **receipt logs** (ACKs) — receiver returns `ACK` to sender (or via pubsub) so sender knows which peers got it. For broadcast messages requiring stronger guarantee (e.g., revocation), SLURP may require quorum ACKs (N out of M agents). - -### 3.5 — Storage schema (messages table) — Postgres/SQLite - -```sql -CREATE TABLE messages ( - msg_id TEXT PRIMARY KEY, - convo_id TEXT NOT NULL, - parent_id TEXT, - uri TEXT, -- the bzzz:// resource targeted - op TEXT, -- POST/GET/ACK/RESP/PUT/DELETE - hdr JSONB, -- raw header storage - meta JSONB, -- event_type, visibility, etc. - payload JSONB, - agent_pubkey TEXT, - lamport BIGINT, - ts TIMESTAMPTZ, - hyperlog_ref TEXT, -- pointer to immutable hyperlog entry - signature TEXT -); - -CREATE INDEX idx_messages_convo ON messages(convo_id); -CREATE INDEX idx_messages_parent ON messages(parent_id); -CREATE INDEX idx_messages_uri ON messages(uri); -CREATE INDEX idx_messages_ts ON messages(ts); -``` - -Hyperlog append stores the same JSON in append-only feed, and `hyperlog_ref` links to it. You can reconstruct message history from hyperlog or DB. 
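-
-Before the fuller reply-flow examples in section 4, here is a compact sketch of the receive-side rules from 3.2: total ordering by `(lamport, ts, msg_id)` and buffering of replies that arrive before their parent. Class and field names are illustrative, not part of the schema above.
-
-```python
-from collections import defaultdict
-
-class ThreadStore:
-    """In-memory illustration of 3.2: ordering, parent linkage, pending children."""
-
-    def __init__(self):
-        self.by_id = {}                    # msg_id -> envelope (dict)
-        self.children = defaultdict(list)  # parent_id -> [msg_id]
-        self.pending = defaultdict(list)   # parent_id -> envelopes seen before their parent
-
-    @staticmethod
-    def order_key(msg):
-        hdr = msg["hdr"]
-        # lamport gives causal order, ts approximates chronology, msg_id breaks ties.
-        return (hdr.get("lamport", 0), hdr["ts"], hdr["msg_id"])
-
-    def ingest(self, msg):
-        hdr = msg["hdr"]
-        self.by_id[hdr["msg_id"]] = msg
-        parent = hdr.get("parent_id")
-        if parent is None or parent in self.by_id:
-            self._attach(msg)
-        else:
-            self.pending[parent].append(msg)      # child arrived before its parent
-        for orphan in self.pending.pop(hdr["msg_id"], []):
-            self._attach(orphan)                  # parent just arrived; link waiting replies
-
-    def _attach(self, msg):
-        parent = msg["hdr"].get("parent_id")
-        if parent is not None:
-            self.children[parent].append(msg["hdr"]["msg_id"])
-
-    def thread(self, convo_id):
-        msgs = [m for m in self.by_id.values() if m["hdr"]["convo_id"] == convo_id]
-        return sorted(msgs, key=self.order_key)
-```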
- ---- - -## 4) Reply flow examples & pseudo-code - -### 4.1 — Simple reply (agent client behavior) - -**Action:** Developer receives a blocker message, replies with an ACK and follow-up. - -Pseudocode: - -```python -# local lamport counter -lamport = load_local_lamport() - -def send_reply(parent_msg, uri, payload, op="RESP"): - global lamport - lamport = max(lamport, parent_msg.get('lamport', 0)) + 1 - msg = { - "hdr": { - "op": op, - "uri": uri, - "msg_id": str(uuid4()), - "parent_id": parent_msg['msg_id'], - "convo_id": parent_msg['convo_id'], - "ts": utc_now(), - "lamport": lamport, - "agent_pubkey": my_pubkey_b64 - }, - "meta": { "event_type": "info" }, - "payload": payload - } - msg['sig'] = sign_msg(msg) - pubsub_publish(topic_for(parent_msg['meta']['project'], parent_msg['meta']['task']), msg) - append_hyperlog(msg) # local append for history & replay -``` - -**When receiving messages**, client must: - -* Verify signature. -* Store message in DB and hyperlog. -* If `parent_id` missing (child before parent), put into `pending_children[parent_id]`. -* If parent later arrives, attach and bubble up. - -### 4.2 — SLURP reaction to reply messages - -SLURP subscribes to project topic. - -On receipt: - -1. Verify signature & visibility. -2. Persist message (messages table) + hyperlog append. -3. If `op` == `POST` and `event_type` == `blocker`: - - * Evaluate project `event_type_definitions` -> action. - * If action == `quarantine`, mark affected blobs & create `slurp_decision` entry & hyperlog record. - * Trigger WHOOSH orchestration if remediation required. -4. If `op` == `RESP` or `ACK`: - - * Update `messages` table state for the referenced msg\_id (e.g., mark acknowledged). - * Optionally re-evaluate whether the blocker is resolved (if `RESP` payload includes `resolution=true`). -5. Update convo summary & TGN embedding. - -Pseudo: - -```python -def handle_msg(msg): - verify(msg) - insert_messages_table(msg) - hyperlog_append(msg) - if msg.hdr.op == "POST" and msg.meta.event_type == "blocker": - action = lookup_action(msg.meta.project, "blocker", role=... ) - if action == "quarantine": - affected = find_blobs_under_path(msg.payload.path) - quarantine_blobs(affected, reason="blocker") - log_slurp_decision(msg, affected) - whoosh_trigger_remediation(affected) - elif msg.hdr.op in ("RESP", "ACK"): - update_message_ack(msg.parent_id, msg) - maybe_resolve_blocker(msg.parent_id) - # update convo metadata - update_convo(msg.convo_id, last_activity=msg.hdr.ts) - tgn_ingest(msg) # add temporal edge for TGN -``` - -### 4.3 — Thread coherence & TGN usage - -* **TGN ingestion**: each message creates edges (agent -> message: authored\_by; message -> path: affects\_path; message -> message: reply\_to). Timestamps are the message ts. -* **TGN tasks**: - - * Detect threads drifting (e.g., replies reference different resources over time). - * Cluster replies that are semantically related via embeddings. - * Suggest merge candidates or escalate to human if branches diverge (e.g., some replies recommend conflicting actions). -* **SLURP uses TGN outputs** to: - - * Re-rank which threads to surface to humans (e.g., high-drift critical threads). - * Suggest thread summaries. - ---- - -## 5) Conflict resolution & decision authority - -Design choices: - -* **Deterministic policy**: for conflicting decisions (e.g., two approvals), choose the one with higher priority, or later lamport if priority equal. Log the override. 
-* **Human override**: if conflict cannot be safely auto-resolved, SLURP sets `state=escalated` and notifies owners. -* **Audit trail**: every override, conflict, and merge is appended to hyperlog with reason and signatures. - ---- - -## 6) Scaling, pruning, and GC - -* **Retention policy**: projects define how long hyperlog feeds are kept locally. Use content-addressed storage (CAR files or chunk stores) for older archives. -* **Pruning**: nodes may garbage-collect message bodies older than X days but must preserve minimal hyperlog index (msg\_id, hyperlog\_ref) to reconstruct from archive peers. -* **Archival nodes**: designate some nodes to keep full immutable history (for compliance), others keep working window. -* **Snapshotting**: periodically SLURP writes convo snapshots (summary + representative blobs) to reduce replay cost. - ---- - -## 7) Privacy & security considerations - -* **Visibility metadata** controls who should receive message payloads. Pubsub announces may include only metadata and blob hashes; payloads are fetched only by authorized peers. -* **End-to-end encryption** of payloads: use per-project symmetric keys encrypted for authorized roles, or use public-key envelopes (e.g., encrypt to set of role public keys). Keep metadata in plaintext (or signed) for routing. -* **Signature & key rotation**: support key revocation events; SLURP verifies signature chain and consults agent registry. -* **Redaction**: for high-sensitivity payloads, only send redacted previews on pubsub announcements; full content via direct encrypted stream after ACL check. - ---- - -## 8) Replay & determinism - -* Because everything is appended with signatures, you can **replay** a convo deterministically by replaying hyperlog entries in order. When replaying: - - * Recompute lamport counters in order. - * Restore convo states, Slurp decisions, TGN ingestions. -* For experiments, you can replay with alternative SLURP policies and compare outcomes (A/B). - ---- - -## 9) Example end-to-end scenario - -**1. Alice posts blocker** - -* Alice sends POST envelope; publishes to pubsub. SLURP persists, appends hyperlog, quarantines blobs, triggers WHOOSH. - -**2. Bob (frontend) receives announcement** - -* Bob’s local shim fetches announce, gets header. Since Bob has no access to DB credentials, SLURP's visibility suppressed full content; Bob receives annotated blocker with action instructions. - -**3. DevOps replies with remediation update** - -* DevOps posts RESP with `parent_id = alice.msg_id` and payload `{ "resolution": true, "details": "Keys rotated" }`. SLURP marks blocker resolved, appends hyperlog, notifies participants. - -**4. SLURP updates convo summary & TGN** - -* SLURP runs an incremental summarizer, stores summary as blob, and TGN updates embeddings to note a resolved blocker and the participants. - ---- - -## Final notes & next steps - -I’ve given you: - -* Formal grammar -* Compact libp2p JSON envelope and signing rules -* A full threading & reply model with DB schema -* Pseudocode and operational rules for SLURP & WHOOSH processing -* Security, replay, GC, and TGN integration points - -If you want I can: - -* **Generate a concrete OpenAPI spec** for the HTTP shim endpoints that accept these envelopes (for BZZZ CLI to talk to shim). -* **Write a minimal Go or Python shim prototype** that runs libp2p pubsub + HTTP endpoints and implements the message handling (including verifying signatures and appending to a SQLite hyperlog). 
-* **Produce a client library** (Python) that handles lamport counters, signing, parent tracking, and pending-child attachment logic. - - -Great — I’ll give you two deliverables, in order: - -1. **OpenAPI 3.0 spec** for the local HTTP shim (endpoints `/post`, `/get`, `/announce`, `/sub`) so clients (CLI, SDKs) can autogen clients and understand the envelope. -2. **Go prototype** (runnable) for the shim that: - - * runs a libp2p node (pubsub) - * exposes the HTTP endpoints - * verifies Ed25519 signatures - * appends messages to a simple SQLite-based hyperlog - * publishes/receives pubsub announces - * is intentionally minimal so you can extend it quickly (LightRAG/TGN/SLURP hooks are left as TODOs) - -I’ll keep the Go code compact but complete enough to run locally on a LAN. After the code I’ll include **how to build & run**, and suggestions for testing. - ---- - -## Part 1 — OpenAPI Spec (bzzz-shim-openapi.yaml) - -Save this YAML as `bzzz-shim-openapi.yaml`. It describes the main shim endpoints and the canonical message envelope. - -```yaml -openapi: 3.0.3 -info: - title: BZZZ HTTP Shim API - version: 1.0.0 - description: > - Local HTTP shim for bzzz:// protocol. Maps HTTP requests to libp2p pubsub - and local hyperlog persistence. Designed for prototype/demo use. - -servers: - - url: http://localhost:19000 - description: Local shim instance - -paths: - /post: - post: - summary: Post a BZZZ envelope (create event / context post) - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/Envelope' - responses: - '201': - description: Event accepted and appended to hyperlog - content: - application/json: - schema: - $ref: '#/components/schemas/PostResponse' - '400': - description: Bad request (validation / signature failure) - /get: - post: - summary: Request retrieval / composed context for a URI - description: | - Use POST body so the retrieval can contain hints (use_lightrag, depth, etc) - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/GetRequest' - responses: - '200': - description: Retrieved context packet - content: - application/json: - schema: - $ref: '#/components/schemas/GetResponse' - '404': - description: Resource not found - /announce: - post: - summary: Publish a lightweight announcement on pubsub - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/AnnounceRequest' - responses: - '200': - description: Announcement published - content: - application/json: - schema: - $ref: '#/components/schemas/AnnounceResponse' - /sub: - post: - summary: Subscribe this shim to a project:task topic (server will deliver events via websocket or local handler) - requestBody: - required: true - content: - application/json: - schema: - $ref: '#/components/schemas/SubRequest' - responses: - '200': - description: Subscription successful - content: - application/json: - schema: - type: object - properties: - status: - type: string - -components: - schemas: - Envelope: - type: object - required: ["hdr","meta","payload","sig"] - properties: - hdr: - type: object - required: ["op","uri","msg_id","convo_id","ts","agent_pubkey"] - properties: - op: - type: string - description: GET|POST|PUT|DELETE|ACK|RESP - uri: - type: string - description: canonical bzzz:// URI - msg_id: - type: string - description: UUID v4 - parent_id: - type: string - nullable: true - convo_id: - type: string - ts: - type: string - format: date-time - lamport: - type: integer - ttl: - 
type: integer - priority: - type: string - agent_pubkey: - type: string - description: base64-encoded public key - meta: - type: object - properties: - event_type: - type: string - description: e.g. blocker|warning|info|decision - visibility: - type: object - properties: - roles: - type: array - items: - type: string - channels: - type: array - items: - type: string - payload: - type: object - description: free-form content (title/body/attachments/hints) - sig: - type: string - description: base64 signature over canonicalized hdr+meta+payload - PostResponse: - type: object - properties: - status: - type: string - event_id: - type: string - hyperlog_ref: - type: string - GetRequest: - type: object - required: ["uri"] - properties: - uri: - type: string - hints: - type: object - description: e.g. { "use_lightrag": true, "depth": 2 } - GetResponse: - type: object - properties: - context: - type: string - metadata: - type: object - properties: - source_blob_ids: - type: array - items: { type: integer } - lightrag_doc_ids: - type: array - items: { type: string } - events_considered: - type: array - items: { type: integer } - drift_score: - type: number - AnnounceRequest: - type: object - required: ["topic","summary"] - properties: - topic: - type: string - summary: - type: object - envelope: - $ref: '#/components/schemas/Envelope' - AnnounceResponse: - type: object - properties: - status: - type: string - published_to: - type: string - SubRequest: - type: object - required: ["topic"] - properties: - topic: - type: string -``` - ---- - -## Part 2 — Go prototype shim - -Below is a compact but runnable Go prototype. It uses: - -* **go-libp2p** for peer/pubsub (basic usage) -* **mattn/go-sqlite3** for SQLite hyperlog -* **golang.org/x/crypto/ed25519** for signature verification -* **gorilla/mux** for HTTP routing (simple) -* Note: go-libp2p and pubsub code is slightly simplified for clarity — this is a working starting point but you should adapt and harden for production. - -Create a directory, e.g. `bzzz-shim`, and put these files in it. - -### go.mod - -```go -module github.com/yourorg/bzzz-shim - -go 1.20 - -require ( - github.com/gorilla/mux v1.8.0 - github.com/libp2p/go-libp2p v0.25.1 - github.com/libp2p/go-libp2p-core v0.8.0 - github.com/libp2p/go-libp2p-pubsub v0.15.0 - github.com/mattn/go-sqlite3 v1.14.20 - golang.org/x/crypto v0.12.0 -) -``` - -> If dependency versions are strict for you, lock/adjust them. `go get` will fetch what it needs. - ---- - -### main.go - -```go -package main - -import ( - "context" - "crypto/ed25519" - "database/sql" - "encoding/base64" - "encoding/json" - "fmt" - "io" - "log" - "net/http" - "os" - "time" - - "github.com/gorilla/mux" - libp2p "github.com/libp2p/go-libp2p" - pubsub "github.com/libp2p/go-libp2p-pubsub" - "github.com/mattn/go-sqlite3" - _ "github.com/mattn/go-sqlite3" -) - -const ( - DefaultPort = 19000 - PubsubTopicPrefix = "bzzz.project." 
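-	// Full topic name is PubsubTopicPrefix + "<project>" + ".task." + "<task>",
-	// e.g. "bzzz.project.chorus.task.website-redesign" (see handlePost).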
-) - -type Envelope struct { - Hdr map[string]interface{} `json:"hdr"` - Meta map[string]interface{} `json:"meta"` - Payload map[string]interface{} `json:"payload"` - Sig string `json:"sig"` -} - -type PostResponse struct { - Status string `json:"status"` - EventID string `json:"event_id"` - HyperlogRef string `json:"hyperlog_ref"` -} - -var ( - db *sql.DB - ps *pubsub.PubSub - hostCtx context.Context - topicCache = map[string]*pubsub.Topic{} -) - -// initDB opens a sqlite DB and creates messages table if not exists -func initDB(path string) error { - var err error - db, err = sql.Open("sqlite3", path) - if err != nil { - return err - } - create := ` - CREATE TABLE IF NOT EXISTS messages ( - msg_id TEXT PRIMARY KEY, - convo_id TEXT, - parent_id TEXT, - uri TEXT, - op TEXT, - hdr TEXT, - meta TEXT, - payload TEXT, - agent_pubkey TEXT, - lamport INTEGER, - ts TEXT, - hyperlog_ref TEXT, - signature TEXT - ); - ` - _, err = db.Exec(create) - return err -} - -func appendHyperlog(msg *Envelope) (string, error) { - // hyperlog_ref: simple feed:rowid - hdrB, _ := json.Marshal(msg.Hdr) - metaB, _ := json.Marshal(msg.Meta) - payloadB, _ := json.Marshal(msg.Payload) - msgID, _ := msg.Hdr["msg_id"].(string) - convo, _ := msg.Hdr["convo_id"].(string) - parent, _ := msg.Hdr["parent_id"].(string) - uri, _ := msg.Hdr["uri"].(string) - op, _ := msg.Hdr["op"].(string) - lam, _ := msg.Hdr["lamport"].(float64) - ts, _ := msg.Hdr["ts"].(string) - pubkey, _ := msg.Hdr["agent_pubkey"].(string) - - stmt, err := db.Prepare(`INSERT INTO messages ( - msg_id, convo_id, parent_id, uri, op, hdr, meta, payload, agent_pubkey, lamport, ts, hyperlog_ref, signature - ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);`) - if err != nil { - return "", err - } - res, err := stmt.Exec(msgID, convo, parent, uri, op, string(hdrB), string(metaB), string(payloadB), pubkey, int64(lam), ts, "", msg.Sig) - if err != nil { - return "", err - } - rowid, err := res.LastInsertId() - if err != nil { - return "", err - } - hyperRef := fmt.Sprintf("feed:%d", rowid) - _, err = db.Exec("UPDATE messages SET hyperlog_ref = ? WHERE rowid = ?", hyperRef, rowid) - if err != nil { - return "", err - } - return hyperRef, nil -} - -func verifyEnvelope(msg *Envelope) error { - // check msg.Hdr.agent_pubkey and msg.Sig - pubB64, ok := msg.Hdr["agent_pubkey"].(string) - if !ok { - return fmt.Errorf("missing agent_pubkey") - } - sigB64 := msg.Sig - pub, err := base64.StdEncoding.DecodeString(pubB64) - if err != nil { - return fmt.Errorf("bad pubkey base64: %v", err) - } - sig, err := base64.StdEncoding.DecodeString(sigB64) - if err != nil { - return fmt.Errorf("bad sig base64: %v", err) - } - // canonicalize: simple approach: marshal hdr+meta+payload in fixed order - // For prototype, we use json.Marshal of map order (non-deterministic) — in prod use JCS or RFC8785 canonical JSON. 
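-	// NOTE: verification only succeeds if the client produced byte-identical canonical
-	// bytes for hdr+meta+payload (the Go test client's stableMarshal later in this doc
-	// mirrors this); any canonicalization mismatch fails the Ed25519 check below.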
- toSign := map[string]interface{}{ - "hdr": msg.Hdr, - "meta": msg.Meta, - "payload": msg.Payload, - } - bs, err := json.Marshal(toSign) - if err != nil { - return err - } - if len(pub) != ed25519.PublicKeySize { - return fmt.Errorf("invalid pubkey length") - } - if !ed25519.Verify(ed25519.PublicKey(pub), bs, sig) { - return fmt.Errorf("signature verification failed") - } - return nil -} - -// publishAnnouncement publishes a small announcement envelope on project topic -func publishAnnouncement(topicName string, envelope *Envelope) error { - t := topicCache[topicName] - if t == nil { - var err error - t, err = ps.Join(topicName) - if err != nil { - return err - } - topicCache[topicName] = t - } - // short message — just send minimal hdr+meta - env := map[string]interface{}{ - "hdr": envelope.Hdr, - "meta": envelope.Meta, - } - b, _ := json.Marshal(env) - return t.Publish(hostCtx, b) -} - -func handlePost(w http.ResponseWriter, r *http.Request) { - var env Envelope - body, err := io.ReadAll(r.Body) - if err != nil { - http.Error(w, "bad body", 400) - return - } - if err := json.Unmarshal(body, &env); err != nil { - http.Error(w, "invalid json", 400) - return - } - // basic verification - if err := verifyEnvelope(&env); err != nil { - http.Error(w, fmt.Sprintf("signature failed: %v", err), 400) - return - } - // append hyperlog - hyperRef, err := appendHyperlog(&env) - if err != nil { - log.Printf("appendHyperlog error: %v", err) - http.Error(w, "internal", 500) - return - } - // publish announcement if meta.project present - proj, _ := env.Meta["project"].(string) - task, _ := env.Meta["task"].(string) - if proj != "" && task != "" { - topicName := fmt.Sprintf("%s%s.task.%s", PubsubTopicPrefix, proj, task) - _ = publishAnnouncement(topicName, &env) - } - resp := PostResponse{ - Status: "accepted", - EventID: env.Hdr["msg_id"].(string), - HyperlogRef: hyperRef, - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(201) - _ = json.NewEncoder(w).Encode(resp) -} - -type GetRequest struct { - URI string `json:"uri"` - Hints map[string]interface{} `json:"hints"` -} - -type GetResponse struct { - Context string `json:"context"` - Metadata map[string]interface{} `json:"metadata"` -} - -func handleGet(w http.ResponseWriter, r *http.Request) { - var req GetRequest - b, _ := io.ReadAll(r.Body) - if err := json.Unmarshal(b, &req); err != nil { - http.Error(w, "invalid json", 400) - return - } - // Simple prototype: look up messages by uri and compose payload bodies - rows, err := db.Query("SELECT payload FROM messages WHERE uri = ? 
ORDER BY ts DESC LIMIT 20", req.URI) - if err != nil { - http.Error(w, "db error", 500) - return - } - defer rows.Close() - var parts []string - for rows.Next() { - var payloadStr string - if err := rows.Scan(&payloadStr); err == nil { - parts = append(parts, payloadStr) - } - } - resp := GetResponse{ - Context: fmt.Sprintf("Composed %d items", len(parts)), - Metadata: map[string]interface{}{"count": len(parts)}, - } - w.Header().Set("Content-Type", "application/json") - _ = json.NewEncoder(w).Encode(resp) -} - -type AnnounceRequest struct { - Topic string `json:"topic"` - Summary map[string]interface{} `json:"summary"` -} - -type AnnounceResponse struct { - Status string `json:"status"` - PublishedTo string `json:"published_to"` -} - -func handleAnnounce(w http.ResponseWriter, r *http.Request) { - var ar AnnounceRequest - b, _ := io.ReadAll(r.Body) - if err := json.Unmarshal(b, &ar); err != nil { - http.Error(w, "invalid announce", 400) - return - } - // publish summary on topic - t, err := ps.Join(ar.Topic) - if err != nil { - http.Error(w, "pubsub join failed", 500) - return - } - msg, _ := json.Marshal(ar.Summary) - if err := t.Publish(hostCtx, msg); err != nil { - http.Error(w, "publish failed", 500) - return - } - _ = ar - resp := AnnounceResponse{Status: "ok", PublishedTo: ar.Topic} - w.Header().Set("Content-Type", "application/json") - _ = json.NewEncoder(w).Encode(resp) -} - -func startHTTP(port int) { - r := mux.NewRouter() - r.HandleFunc("/post", handlePost).Methods("POST") - r.HandleFunc("/get", handleGet).Methods("POST") - r.HandleFunc("/announce", handleAnnounce).Methods("POST") - addr := fmt.Sprintf(":%d", port) - log.Printf("HTTP shim listening on %s", addr) - http.ListenAndServe(addr, r) -} - -func main() { - // init db - dbPath := os.Getenv("BZZZ_DB_PATH") - if dbPath == "" { - dbPath = "./bzzz_hyperlog.db" - } - if err := initDB(dbPath); err != nil { - log.Fatalf("db init failed: %v", err) - } - - // create libp2p host + pubsub - ctx := context.Background() - hostCtx = ctx - h, err := libp2p.New() - if err != nil { - log.Fatalf("libp2p host failed: %v", err) - } - ps, err = pubsub.NewGossipSub(ctx, h) - if err != nil { - log.Fatalf("pubsub init failed: %v", err) - } - // For prototype, join a default "global" topic to receive messages if needed - go func() { - t, _ := ps.Join("bzzz.global") - sub, _ := t.Subscribe() - for { - m, err := sub.Next(ctx) - if err != nil { - log.Printf("pubsub read err: %v", err) - return - } - log.Printf("Received pubsub msg: %s", string(m.Data)) - } - }() - - // start HTTP shim - go startHTTP(DefaultPort) - - // block forever - select {} -} -``` - ---- - -## How to build & run the prototype - -1. **Install Go** (1.20+). -2. **Create project folder** and put `go.mod` and `main.go` there. -3. **Get modules**: - -```bash -go mod tidy -``` - -4. **Build**: - -```bash -go build -o bzzz-shim -``` - -5. **Run**: - -```bash -./bzzz-shim -``` - -This starts an HTTP shim on `:19000` and a libp2p node with basic pubsub. 
- ---- - -## Quick test with `curl` - -**POST a minimal envelope** (you'll need a real Ed25519 signature for the proto's verify step — for quick test, either bypass verify by commenting out `verifyEnvelope` call in code, or sign payloads properly): - -```bash -curl -X POST http://localhost:19000/post -H "Content-Type: application/json" \ - -d '{ - "hdr": {"op":"POST","uri":"bzzz://any:dev@chorus:task/path","msg_id":"1111-2222","convo_id":"1111-2222","ts":"2025-08-06T15:00:00Z","agent_pubkey":""}, - "meta": {"event_type":"blocker","project":"chorus","task":"task"}, - "payload": {"title":"test","body":"please rotate keys"}, - "sig":"" - }' -``` - -If signature verification is disabled for a dev run, the shim will accept and append the hyperlog and publish to the pubsub topic (`bzzz.project.chorus.task.task`). - ---- - -## Notes, caveats, and next steps - -* **Canonical signing**: The prototype uses `json.Marshal` for the canonical payload — this is *not* deterministic. In production, use canonical JSON (RFC8785 / JCS) or COSE/CBOR for signing. Libraries exist for canonical JSON signing in Go. -* **Libp2p tuning**: `go-libp2p` requires configuration for NAT traversal, bootstrap peers, and peerstore persistence; the code uses a default host for local testing. -* **Pubsub topics**: The code currently publishes minimal announcements. For big payloads you should publish `blob_ref` and stream payloads over direct streams. -* **Hyperlog**: For a real immutable hyperlog use Hypercore (JS) or similar append-only stores. For this prototype, SQLite provides a simple append table with `hyperlog_ref` pointer. -* **Security**: Use proper key management and rotate keys. Consider E2E encryption for private payloads. -* **Extensibility**: Hook SLURP logic into `handlePost` (after appendHyperlog) to trigger curation decisions and WHOOSH orchestration. - ---- - -### Want me to follow up with: - -* A **Go implementation that actually performs Ed25519 signing for test clients** (generate keys + sign payloads) and a small test script to POST a valid envelope? -* **Dockerfile + docker-compose** to run multiple shim instances in a local network and see pubsub announces propagate? -* **Add hyperlog replay & a small SLURP plugin** to demo quarantine-on-blocker behavior? - - - -Awesome — love your energy. I’ll give you three practical artifacts and clear run steps so you can try this end-to-end quickly: - -1. **Go test client** that generates an Ed25519 keypair, canonicalizes an envelope, signs it, and POSTs to the shim. -2. **Docker Compose** (bootstrap + 2 worker shim instances) with a small helper to extract the bootstrap multiaddr and start the cluster. -3. **Hyperlog replay + tiny SLURP plugin** added to the shim prototype: a `/replay` HTTP endpoint that replays hyperlog messages in order and demonstrates *quarantine-on-blocker* behavior (marks matching `context_blobs` as quarantined and logs SLURP decisions). - -I’ll include code + instructions for building and running. I keep things intentionally simple and prototype-ready — production hardening (key management, canonical JSON libraries, libp2p bootstrap resiliency, E2E encryption, etc.) is called out as next steps. - ---- - -## A — Go test client (sign + POST) - -**File:** `client/sign_and_post.go` - -This client: - -* Generates an Ed25519 keypair and writes the private key locally (so you can reuse it). -* Builds a deterministic (stable) canonical payload for signing by producing a JSON object where keys are marshaled in stable order. 
-* Signs the canonical bytes and posts envelope to `http://localhost:19000/post`. - -> Note: For canonical JSON we use a simple deterministic helper that sorts map keys recursively. This is fine for prototype. For production use RFC8785 / JCS canonicalization. - -```go -package main - -import ( - "crypto/ed25519" - "crypto/rand" - "encoding/base64" - "encoding/json" - "flag" - "fmt" - "io" - "io/ioutil" - "net/http" - "os" - "sort" - "time" -) - -// stableMarshal recursively sorts map keys and returns deterministic JSON -func stableMarshal(v interface{}) ([]byte, error) { - switch vv := v.(type) { - case map[string]interface{}: - // sort keys - keys := make([]string, 0, len(vv)) - for k := range vv { - keys = append(keys, k) - } - sort.Strings(keys) - out := make(map[string]interface{}, len(vv)) - for _, k := range keys { - b, _ := stableMarshal(vv[k]) - var tmp interface{} - json.Unmarshal(b, &tmp) - out[k] = tmp - } - return json.Marshal(out) - case []interface{}: - arr := make([]interface{}, len(vv)) - for i, e := range vv { - b, _ := stableMarshal(e) - var tmp interface{} - json.Unmarshal(b, &tmp) - arr[i] = tmp - } - return json.Marshal(arr) - default: - return json.Marshal(v) - } -} - -func main() { - var outKey = flag.String("key", "./client.key", "path to private key (will be created if not exist)") - var shim = flag.String("shim", "http://localhost:19000", "shim base URL") - flag.Parse() - - var priv ed25519.PrivateKey - var pub ed25519.PublicKey - - if _, err := os.Stat(*outKey); os.IsNotExist(err) { - // generate - pubk, privk, _ := ed25519.GenerateKey(rand.Reader) - priv = privk - pub = pubk - // write key file - if err := ioutil.WriteFile(*outKey, priv, 0600); err != nil { - panic(err) - } - fmt.Println("Generated key at", *outKey) - } else { - // load - data, err := ioutil.ReadFile(*outKey) - if err != nil { - panic(err) - } - priv = ed25519.PrivateKey(data) - pub = priv.Public().(ed25519.PublicKey) - } - pubB64 := base64.StdEncoding.EncodeToString(pub) - - msgID := fmt.Sprintf("msg-%d", time.Now().UnixNano()) - convoID := msgID - ts := time.Now().UTC().Format(time.RFC3339) - - hdr := map[string]interface{}{ - "op": "POST", - "uri": "bzzz://alice:dev@chorus:website-redesign/src/modules/db/connection", - "msg_id": msgID, - "parent_id": nil, - "convo_id": convoID, - "ts": ts, - "lamport": 1, - "ttl": 3600, - "priority": "urgent", - "agent_pubkey": pubB64, - } - meta := map[string]interface{}{ - "event_type": "blocker", - "project": "chorus", - "task": "website-redesign", - "visibility": map[string]interface{}{ - "roles": []interface{}{"devops", "backend"}, - "channels": []interface{}{"dev"}, - "exclude": []interface{}{}, - }, - } - payload := map[string]interface{}{ - "title": "Hardcoded DB creds found", - "body": "Automated scan discovered plaintext creds in src/modules/db/connection. 
Rotate keys.",
-		"attachments": []interface{}{},
-		"hints":       map[string]interface{}{"use_lightrag": false},
-	}
-
-	// Build canonical bytes and sign them
-	toSign := map[string]interface{}{
-		"hdr":     hdr,
-		"meta":    meta,
-		"payload": payload,
-	}
-	can, err := stableMarshal(toSign)
-	if err != nil {
-		panic(err)
-	}
-	sig := ed25519.Sign(priv, can)
-	sigB64 := base64.StdEncoding.EncodeToString(sig)
-
-	envelope := map[string]interface{}{
-		"hdr":     hdr,
-		"meta":    meta,
-		"payload": payload,
-		"sig":     sigB64,
-	}
-	envBytes, _ := json.MarshalIndent(envelope, "", "  ")
-	fmt.Println("Envelope:", string(envBytes))
-
-	// POST the signed envelope to the shim
-	client := &http.Client{Timeout: 10 * time.Second}
-	req, _ := http.NewRequest("POST", *shim+"/post", bytesReader(envBytes))
-	req.Header.Set("Content-Type", "application/json")
-	r2, err := client.Do(req)
-	if err != nil {
-		fmt.Println("POST error:", err)
-		os.Exit(1)
-	}
-	defer r2.Body.Close()
-	respBody, _ := ioutil.ReadAll(r2.Body)
-	fmt.Println("Response:", string(respBody))
-}
-
-// helper to create io.Reader from bytes without importing bytes package elsewhere
-func bytesReader(b []byte) io.Reader {
-	return &reader{b: b, i: 0}
-}
-
-type reader struct {
-	b []byte
-	i int
-}
-
-func (r *reader) Read(p []byte) (int, error) {
-	if r.i >= len(r.b) {
-		return 0, io.EOF
-	}
-	n := copy(p, r.b[r.i:])
-	r.i += n
-	return n, nil
-}
-```
-
-**Notes:**
-
-* Save as `client/sign_and_post.go`. `go run` it.
-* This client writes a private key to `./client.key`. Re-run will re-use it.
-* **Important**: The shim in the earlier prototype validated signatures by marshaling hdr+meta+payload with `json.Marshal` — this client uses a deterministic `stableMarshal` to match that (so signature verifies). The shim code should be updated to use the same stable canonicalization (I’ll show how below).
-
----
-
-## B — Docker Compose (bootstrap + 2 workers)
-
-Because libp2p requires knowing a bootstrap peer to join a swarm in Docker, this `docker-compose` runs **one bootstrap shim** with a fixed port and writes its multiaddr to a file. Then two worker shims are started with an environment var `BOOTSTRAP_PEER` (provided by a small startup helper script) to connect to bootstrap. There’s a tiny orchestration script to capture bootstrap peer info and export it when starting the workers.
-
-**Files:**
-
-### `docker-compose.yml`
-
-```yaml
-version: "3.8"
-services:
-  bootstrap:
-    build: .
-    container_name: bzzz-shim-bootstrap
-    environment:
-      - ROLE=bootstrap
-      - LISTEN_PORT=19001
-      - DB_PATH=/data/bootstrap.db
-    volumes:
-      - ./data:/data
-    ports:
-      - "19001:19001"
-  worker1:
-    build: .
-    container_name: bzzz-shim-worker1
-    environment:
-      - ROLE=worker
-      - LISTEN_PORT=19002
-      - DB_PATH=/data/worker1.db
-      - BOOTSTRAP_PEER=${BOOTSTRAP_PEER}
-    depends_on:
-      - bootstrap
-  worker2:
-    build: .
-    container_name: bzzz-shim-worker2
-    environment:
-      - ROLE=worker
-      - LISTEN_PORT=19003
-      - DB_PATH=/data/worker2.db
-      - BOOTSTRAP_PEER=${BOOTSTRAP_PEER}
-    depends_on:
-      - bootstrap
-```
-
-### `bootstrap-peer-helper.sh` — workflow to start and capture multiaddr
-
-```bash
-#!/usr/bin/env bash
-set -e
-echo "Bringing up bootstrap..."
-docker-compose up -d bootstrap
-sleep 3
-
-# grab multiaddr from bootstrap container logs (we'll print it in shim logs)
-echo "Waiting for bootstrap to emit multiaddr..."
-BOOTSTRAP_ADDR="" -for i in {1..15}; do - BOOTSTRAP_ADDR=$(docker logs bzzz-shim-bootstrap 2>&1 | grep "Libp2p host listening at" -m1 | awk -F' ' '{print $NF}') - if [ -n "$BOOTSTRAP_ADDR" ]; then break; fi - sleep 1 -done - -if [ -z "$BOOTSTRAP_ADDR" ]; then - echo "Could not find bootstrap multiaddr in logs. Check bootstrap logs." - docker logs bzzz-shim-bootstrap - exit 1 -fi - -echo "Bootstrap multiaddr: $BOOTSTRAP_ADDR" -export BOOTSTRAP_PEER="$BOOTSTRAP_ADDR" -echo "Starting workers with BOOTSTRAP_PEER=$BOOTSTRAP_PEER" -BOOTSTRAP_PEER="$BOOTSTRAP_PEER" docker-compose up -d worker1 worker2 -echo "All services started." -``` - -**Notes & limitations** - -* The shim must print its listen multiaddr line like: `Libp2p host listening at /ip4/172.18.0.2/tcp/4001/p2p/` to logs; the helper scrapes logs for that string. -* Depending on Docker network timings, you might need to adjust sleeps. -* This setup is for dev/demo; production should use persistent peerstore and well-known bootstrap nodes. - ---- - -## C — Hyperlog replay & SLURP plugin (quarantine-on-blocker) - -Additions to the shim prototype (`main.go`) to enable: - -1. `context_blobs` table (simple prototype) — stores blobs and a `quarantine` boolean. -2. `/replay` endpoint — iterates all messages in `messages` table ordered by `rowid` (chronological append), and for each `POST` with `event_type=blocker` it will mark `context_blobs` under the event `path` as quarantined and append a `slurp_decision` message to the hyperlog. - -### SQL additions (run once or included in initDB) - -```sql -CREATE TABLE IF NOT EXISTS context_blobs ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - path TEXT, - role TEXT, - channel TEXT, - content TEXT, - priority INTEGER DEFAULT 5, - created_at TEXT, - quarantine INTEGER DEFAULT 0 -); - -CREATE TABLE IF NOT EXISTS slurp_decisions ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - event_msg_id TEXT, - decision TEXT, - affected_blobs TEXT, -- JSON array - reason TEXT, - ts TEXT -); -``` - -Add these in the existing `initDB` function or run a migration script. - -### `/replay` endpoint handler (pseudo-implementation for main.go) - -Add function: - -```go -func handleReplay(w http.ResponseWriter, r *http.Request) { - // iterate messages in order of rowid - rows, err := db.Query("SELECT rowid, msg_id, hdr, meta, payload FROM messages ORDER BY rowid ASC") - if err != nil { - http.Error(w, "db error", 500); return - } - defer rows.Close() - var decisions []map[string]interface{} - for rows.Next() { - var rowid int64 - var msgID, hdrStr, metaStr, payloadStr string - if err := rows.Scan(&rowid, &msgID, &hdrStr, &metaStr, &payloadStr); err != nil { continue } - // parse meta to check event_type and path - var meta, payload map[string]interface{} - json.Unmarshal([]byte(metaStr), &meta) - json.Unmarshal([]byte(payloadStr), &payload) - et, _ := meta["event_type"].(string) - if et == "blocker" { - // extract path from payload or meta - path := "" - if p, ok := payload["path"].(string); ok { path = p } - if path == "" { - if p2, ok := meta["path"].(string); ok { path = p2 } - } - if path == "" { continue } - // quarantine any context_blobs where path LIKE path || "%" - res, err := db.Exec("UPDATE context_blobs SET quarantine = 1 WHERE path LIKE ? || '%'", path) - if err == nil { - cnt, _ := res.RowsAffected() - reason := fmt.Sprintf("quarantine due to blocker event %s", msgID) - // store decision - affected := []int64{} - // find ids of affected blobs - rows2, _ := db.Query("SELECT id FROM context_blobs WHERE path LIKE ? 
|| '%'", path) - for rows2.Next() { - var id int64 - rows2.Scan(&id) - affected = append(affected, id) - } - rows2.Close() - affJSON, _ := json.Marshal(affected) - ts := time.Now().UTC().Format(time.RFC3339) - db.Exec("INSERT INTO slurp_decisions (event_msg_id, decision, affected_blobs, reason, ts) VALUES (?, ?, ?, ?, ?)", - msgID, "quarantine", string(affJSON), reason, ts) - decisions = append(decisions, map[string]interface{}{"event": msgID, "affected": affected}) - } - } - } - resp := map[string]interface{}{"status":"replayed","decisions":decisions} - w.Header().Set("Content-Type","application/json") - json.NewEncoder(w).Encode(resp) -} -``` - -Register the handler: - -```go -r.HandleFunc("/replay", handleReplay).Methods("POST") -``` - -### How demo works - -1. Start shim(s). -2. Post a few context blobs (toy entries) directly to shim DB (or via HTTP POST envelopes with `op: POST` where payload indicates it's a blob). Example blob row insert: - -```sql -INSERT INTO context_blobs (path, role, channel, content, created_at) VALUES ('/src/modules/db/connection','backend','dev','DB creds are stored here',datetime('now')); -``` - -3. Use the client to POST a blocker event that covers `/src/modules/db/connection` (as earlier client did). -4. Call `POST /replay` on the shim: - -```bash -curl -X POST http://localhost:19000/replay -``` - -5. Shim will scan messages, find the blocker, and mark matching context\_blobs as `quarantine=1`. It will insert entries into `slurp_decisions` that record what was quarantined and why. - -6. Inspect DB: - -```bash -sqlite3 bzzz_hyperlog.db "SELECT id, path, quarantine FROM context_blobs;" -sqlite3 bzzz_hyperlog.db "SELECT * FROM slurp_decisions;" -``` - -This demonstrates the SLURP plugin behavior: **on replay (or in real-time when messages are appended), SLURP can quarantine context based on high-severity events** and log decisions. - ---- - -## How to run everything — quickstart - -1. **Build shim** (in `bzzz-shim` directory) - -```bash -go mod tidy -go build -o bzzz-shim -``` - -2. **Start bootstrap and workers (docker method)** - -```bash -chmod +x bootstrap-peer-helper.sh -./bootstrap-peer-helper.sh -# wait for outputs; it starts bootstrap then workers -``` - -3. **Start local shim (standalone) for quick test** - -```bash -./bzzz-shim -# starts HTTP on :19000 -``` - -4. **Run client to POST signed blocker** - -```bash -go run client/sign_and_post.go --shim http://localhost:19000 -``` - -5. **Seed a context blob (sqlite direct insert for quick demo)** - -```bash -sqlite3 bzzz_hyperlog.db "INSERT INTO context_blobs (path, role, channel, content, created_at) VALUES ('/src/modules/db/connection','backend','dev','DB creds here', datetime('now'));" -``` - -6. **Replay to run SLURP quarantines** - -```bash -curl -X POST http://localhost:19000/replay -``` - -7. **Inspect DB** - -```bash -sqlite3 bzzz_hyperlog.db "SELECT id,path,quarantine FROM context_blobs;" -sqlite3 bzzz_hyperlog.db "SELECT * FROM slurp_decisions;" -``` - ---- - -## Next steps & prod hardening checklist - -If you want to move from prototype → hardened POC / production, next items to tackle: - -* Use a canonical JSON library (JCS / RFC8785) for deterministic signing and verification. -* Persist libp2p peerstore and provide robust bootstrapping (bootstrap nodes, NAT traversal). -* Switch hyperlog to an append-only content-addressed store (Hypercore, IPFS CAR, etc.) for immutability guarantees. -* Implement role-based encryption for payloads (project keys + per-role key wrapping). 
-* Add TLS & mTLS to shim HTTP endpoints (protect local API). -* Add metrics, observability, rate-limiting, and authorization middleware for the shim. - ---- - -If you want I can: - -* **(a)** produce the exact patched `main.go` with `/replay` + the DB schema changes (complete file). -* **(b)** produce a Dockerfile for the shim image (so `docker-compose` build works). -* **(c)** generate a small demo script that automates: build shim image, start bootstrap, start workers, run client, seed blob, replay, and show results. - -Nice — I’ll give you everything you need to run a quick end-to-end demo: - -* **(B)** a `Dockerfile` for the shim image (builds the Go binary and runs it) -* **(C)** a `docker-compose.yml` (bootstrap + 2 workers) and a `demo.sh` script that: - - 1. builds the shim image, - 2. starts bootstrap, - 3. waits for the bootstrap libp2p multiaddr, - 4. starts workers (pointing them to the bootstrap peer), - 5. runs the local Go test client to POST a signed blocker to the bootstrap shim, - 6. seeds a test `context_blob` into the bootstrap SQLite DB, - 7. calls `/replay` on the bootstrap shim to trigger SLURP quarantine behavior, - 8. prints the `context_blobs` table to show quarantine was applied. - -Everything is intentionally simple and prototype-ready. You’ll need **Docker**, **docker-compose**, and **Go 1.20+** on the host. - ---- - -## A — `Dockerfile` (shim image) - -Save as `Dockerfile` in your `bzzz-shim` project folder (next to `main.go` and `go.mod`). - -```dockerfile -# Build stage -FROM golang:1.20-buster AS builder -WORKDIR /src - -# Copy sources -COPY go.mod go.sum ./ -RUN go mod download - -COPY . . -# Build the shim -RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o /out/bzzz-shim - -# Final runtime image -FROM debian:buster-slim -RUN apt-get update && apt-get install -y sqlite3 ca-certificates && rm -rf /var/lib/apt/lists/* -WORKDIR /app -# copy binary -COPY --from=builder /out/bzzz-shim /usr/local/bin/bzzz-shim - -# create data dir -RUN mkdir -p /data -VOLUME ["/data"] -EXPOSE 19000 - -ENTRYPOINT ["/usr/local/bin/bzzz-shim"] -``` - -Notes: - -* The container includes `sqlite3` so the demo script can `docker exec` into containers to run SQL commands. -* Binary is built in the builder stage; final image is minimal. - ---- - -## B — `docker-compose.yml` - -Save next to the `Dockerfile`. - -```yaml -version: "3.8" -services: - bootstrap: - build: . - container_name: bzzz-shim-bootstrap - environment: - - ROLE=bootstrap - - LISTEN_PORT=19000 - - DB_PATH=/data/bootstrap.db - volumes: - - ./data:/data - ports: - - "19000:19000" # shim http endpoint exposed on host:19000 - networks: - - bzzz-net - - worker1: - build: . - container_name: bzzz-shim-worker1 - environment: - - ROLE=worker - - LISTEN_PORT=19000 - - DB_PATH=/data/worker1.db - - BOOTSTRAP_PEER=${BOOTSTRAP_PEER} - depends_on: - - bootstrap - networks: - - bzzz-net - - worker2: - build: . - container_name: bzzz-shim-worker2 - environment: - - ROLE=worker - - LISTEN_PORT=19000 - - DB_PATH=/data/worker2.db - - BOOTSTRAP_PEER=${BOOTSTRAP_PEER} - depends_on: - - bootstrap - networks: - - bzzz-net - -networks: - bzzz-net: - driver: bridge -``` - -Notes: - -* `bootstrap` port 19000 is mapped to host so the local test client can POST to `http://localhost:19000/post`. -* Workers will be started by the `demo.sh` script after we scrape the bootstrap peer's multiaddr. - ---- - -## C — `demo.sh` — bootstrap + run client + replay - -Save this script in the repo root (`demo.sh`) and `chmod +x demo.sh`. 
- -```bash -#!/usr/bin/env bash -set -euo pipefail -# demo.sh - builds image, starts bootstrap, starts workers, posts test event, seeds blob, replays. - -ROOTDIR="$(cd "$(dirname "$0")" && pwd)" -export COMPOSE_PROJECT_NAME=bzzz_demo - -echo "1) Build shim image (docker-compose build)" -docker-compose build --no-cache - -echo "2) Start bootstrap" -docker-compose up -d bootstrap -echo "Waiting for bootstrap to initialize..." - -# wait for shim to log its libp2p multiaddr -BOOTSTRAP_ADDR="" -for i in $(seq 1 30); do - # the shim should print a line like "Libp2p host listening at /ip4/..." in container logs - BOOTSTRAP_ADDR=$(docker logs bzzz-shim-bootstrap 2>&1 | grep -Eo "/ip4/[^ ]+/tcp/[0-9]+/p2p/[A-Za-z0-9]+" | tail -n1 || true) - if [ -n "$BOOTSTRAP_ADDR" ]; then - break - fi - sleep 1 -done - -if [ -z "$BOOTSTRAP_ADDR" ]; then - echo "Could not glean bootstrap multiaddr from logs. Printing bootstrap logs for debug:" - docker logs bzzz-shim-bootstrap --tail 200 - exit 1 -fi - -echo "Bootstrap multiaddr: $BOOTSTRAP_ADDR" -export BOOTSTRAP_PEER="$BOOTSTRAP_ADDR" - -echo "3) Start workers pointing to bootstrap peer..." -# bring up workers with BOOTSTRAP_PEER present in env for compose -BOOTSTRAP_PEER="$BOOTSTRAP_PEER" docker-compose up -d worker1 worker2 - -sleep 2 -echo "Containers status:" -docker ps --filter "name=bzzz-shim-" --format "table {{.Names}}\t{{.Status}}" - -# helper: wait for HTTP shim ready on host:19000 -echo "Waiting for HTTP shim on http://localhost:19000 to respond..." -for i in $(seq 1 30); do - if curl -sS http://localhost:19000/ >/dev/null 2>&1; then - break - fi - sleep 1 -done - -echo "4) Build & run local Go test client to POST a signed blocker" -# build client sign_and_post.go (assumes client code exists at client/sign_and_post.go) -if [ -f client/sign_and_post.go ]; then - (cd client && go mod tidy) || true - (cd client && go build -o ../client_post ./sign_and_post.go) - echo "Running client to POST envelope..." - ./client_post --key ./client.key --shim http://localhost:19000 || true -else - echo "No client code found at client/sign_and_post.go — skipping signed post (you can run client manually)." -fi - -sleep 1 - -echo "5) Seed a sample context_blob into bootstrap DB (demo content)" -# use sqlite3 inside bootstrap container to insert a test blob -docker exec -i bzzz-shim-bootstrap sqlite3 /data/bootstrap.db <<'SQL' -INSERT INTO context_blobs (path, role, channel, content, created_at, quarantine) VALUES -('/src/modules/db/connection','backend','dev','DB creds: secret', datetime('now'), 0); -SQL - -echo "6) Trigger replay to run SLURP quarantines (POST /replay)" -curl -sS -X POST http://localhost:19000/replay | jq . - -echo "7) Show resulting context_blobs rows (id,path,quarantine)" -docker exec -i bzzz-shim-bootstrap sqlite3 /data/bootstrap.db "SELECT id, path, quarantine FROM context_blobs;" | sed -n '1,200p' - -echo "Demo complete. Inspect the containers' logs for more detail:" -echo " docker logs bzzz-shim-bootstrap --tail 200" -``` - -**Important notes on `demo.sh` usage** - -* The script expects: - - * `client/sign_and_post.go` compiled client binary (the script tries to build it). If you put the client code at `client/sign_and_post.go` it will attempt to build `./client_post`. - * The shim `main.go` prints a libp2p listen line containing a `/ip4/.../p2p/` multiaddr so the script can scrape it. If your shim prints something else, adjust the `grep` in the script. 
-* If the shim’s `verifyEnvelope` requires exact canonicalization to verify the client's signature, ensure the client uses the same canonicalizing routine. The client code I supplied earlier uses `stableMarshal`, and you should make your shim verify using the same canonicalization (or temporarily disable verification while testing). - ---- - -## Quick checklist / commands - -From the `bzzz-shim` repo root: - -```bash -# 1) ensure you have Docker & docker-compose & Go installed - -# 2) place Dockerfile, docker-compose.yml, demo.sh in repo root (alongside main.go & client/) -chmod +x demo.sh - -# 3) run demo -./demo.sh -``` - -After run: - -* The script will print the `context_blobs` table rows showing `quarantine=1` for blobs under the event path if the replay SLURP logic matched them. - ---- - -## Next steps / polishing suggestions - -1. **Make the shim log the multiaddr in a fixed known format** so `demo.sh` scraping is robust. e.g. `log.Printf("Libp2p host listening at %s", addrs)` (the code should already do that or be updated to do so). -2. **Make the client key pair path configurable** and store keys securely. -3. **Add health endpoints** (`/health`) to speed the shim readiness check. -4. **Swap prototype SQLite hyperlog with a proper append-only content-addressed store** when you move beyond demo. -5. **Add a small `jq` or UI summary** showing SLURP decision hyperlog entries to demonstrate auditability. - ---- - -If you want, I can now: - -* **Patch the `main.go`** in your repo to ensure it prints a deterministic "Libp2p host listening at " line (so `demo.sh` scraping works reliably). -* **Produce a Dockerfile + docker-compose variant** that maps logs/volumes with better persistence and exposes worker HTTP endpoints (so you can test worker-to-worker fetches). -* **Provide a fully working `client/sign_and_post.go`** tuned to the shim’s canonicalization (I already supplied earlier; I can inline a finalized version that matches the shim canonicalization exactly). - - -Nice — I patched `main.go` for you. Main changes: - -* **Prints a deterministic multiaddr line** immediately after the libp2p host is created, in the exact format `Libp2p host listening at /p2p/` so your `demo.sh` can reliably grep it. -* **Implements deterministic canonical JSON marshal** (`stableMarshal`) and uses it in `verifyEnvelope`, matching the client’s canonicalization so signatures verify during tests. -* **Adds `/replay` endpoint** (SLURP demo: quarantines `context_blobs` under blocker paths, appends `slurp_decisions`) and registers it with the HTTP router. -* Keeps everything prototype-friendly and minimal; production improvements (JCS canonicalization, persistent peerstore, better error handling) are still recommended. - -Drop this file into your `bzzz-shim` repo (replacing the existing `main.go`), then `go build` as before. - ---- - -### ✅ Patched `main.go` - -```go -package main - -import ( - "context" - "crypto/ed25519" - "database/sql" - "encoding/base64" - "encoding/json" - "fmt" - "io" - "log" - "net/http" - "os" - "sort" - "strings" - "time" - - "github.com/gorilla/mux" - libp2p "github.com/libp2p/go-libp2p" - pubsub "github.com/libp2p/go-libp2p-pubsub" - _ "github.com/mattn/go-sqlite3" -) - -const ( - DefaultPort = 19000 - PubsubTopicPrefix = "bzzz.project." 
-) - -type Envelope struct { - Hdr map[string]interface{} `json:"hdr"` - Meta map[string]interface{} `json:"meta"` - Payload map[string]interface{} `json:"payload"` - Sig string `json:"sig"` -} - -type PostResponse struct { - Status string `json:"status"` - EventID string `json:"event_id"` - HyperlogRef string `json:"hyperlog_ref"` -} - -var ( - db *sql.DB - ps *pubsub.PubSub - hostCtx context.Context - topicCache = map[string]*pubsub.Topic{} -) - -// ----------------------- DB init ----------------------- - -func initDB(path string) error { - var err error - db, err = sql.Open("sqlite3", path) - if err != nil { - return err - } - createMessages := ` - CREATE TABLE IF NOT EXISTS messages ( - msg_id TEXT PRIMARY KEY, - convo_id TEXT, - parent_id TEXT, - uri TEXT, - op TEXT, - hdr TEXT, - meta TEXT, - payload TEXT, - agent_pubkey TEXT, - lamport INTEGER, - ts TEXT, - hyperlog_ref TEXT, - signature TEXT - ); - ` - createBlobs := ` - CREATE TABLE IF NOT EXISTS context_blobs ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - path TEXT, - role TEXT, - channel TEXT, - content TEXT, - priority INTEGER DEFAULT 5, - created_at TEXT, - quarantine INTEGER DEFAULT 0 - ); - ` - createDecisions := ` - CREATE TABLE IF NOT EXISTS slurp_decisions ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - event_msg_id TEXT, - decision TEXT, - affected_blobs TEXT, - reason TEXT, - ts TEXT - ); - ` - if _, err = db.Exec(createMessages); err != nil { - return err - } - if _, err = db.Exec(createBlobs); err != nil { - return err - } - if _, err = db.Exec(createDecisions); err != nil { - return err - } - return nil -} - -// ----------------------- Stable (deterministic) JSON marshal ----------------------- -// A simple recursive stable marshaller: sorts object keys so JSON is deterministic. -// Not a full RFC8785 implementation but sufficient for prototype parity with the client. - -func stableMarshal(v interface{}) ([]byte, error) { - switch vv := v.(type) { - case map[string]interface{}: - // sort keys - keys := make([]string, 0, len(vv)) - for k := range vv { - keys = append(keys, k) - } - sort.Strings(keys) - ordered := make([]interface{}, 0, len(keys)*2) - // build an ordered representation: ["k1", value1, "k2", value2, ...] 
to preserve deterministic order - m := make(map[string]interface{}, len(keys)) - for _, k := range keys { - b, err := stableMarshal(vv[k]) - if err != nil { - return nil, err - } - var tmp interface{} - if err := json.Unmarshal(b, &tmp); err != nil { - return nil, err - } - m[k] = tmp - } - return json.Marshal(m) - case []interface{}: - arr := make([]interface{}, len(vv)) - for i, e := range vv { - b, err := stableMarshal(e) - if err != nil { - return nil, err - } - var tmp interface{} - if err := json.Unmarshal(b, &tmp); err != nil { - return nil, err - } - arr[i] = tmp - } - return json.Marshal(arr) - default: - return json.Marshal(v) - } -} - -// ----------------------- Hyperlog append ----------------------- - -func appendHyperlog(msg *Envelope) (string, error) { - hdrB, _ := json.Marshal(msg.Hdr) - metaB, _ := json.Marshal(msg.Meta) - payloadB, _ := json.Marshal(msg.Payload) - msgID, _ := msg.Hdr["msg_id"].(string) - convo, _ := msg.Hdr["convo_id"].(string) - parent, _ := msg.Hdr["parent_id"].(string) - uri, _ := msg.Hdr["uri"].(string) - op, _ := msg.Hdr["op"].(string) - lamF, _ := msg.Hdr["lamport"].(float64) - lam := int64(lamF) - ts, _ := msg.Hdr["ts"].(string) - pubkey, _ := msg.Hdr["agent_pubkey"].(string) - - stmt, err := db.Prepare(`INSERT INTO messages ( - msg_id, convo_id, parent_id, uri, op, hdr, meta, payload, agent_pubkey, lamport, ts, hyperlog_ref, signature - ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);`) - if err != nil { - return "", err - } - res, err := stmt.Exec(msgID, convo, parent, uri, op, string(hdrB), string(metaB), string(payloadB), pubkey, lam, ts, "", msg.Sig) - if err != nil { - return "", err - } - rowid, err := res.LastInsertId() - if err != nil { - return "", err - } - hyperRef := fmt.Sprintf("feed:%d", rowid) - _, err = db.Exec("UPDATE messages SET hyperlog_ref = ? 
WHERE rowid = ?", hyperRef, rowid) - if err != nil { - return "", err - } - return hyperRef, nil -} - -// ----------------------- Verification ----------------------- - -func verifyEnvelope(msg *Envelope) error { - pubB64, ok := msg.Hdr["agent_pubkey"].(string) - if !ok || pubB64 == "" { - return fmt.Errorf("missing agent_pubkey") - } - sigB64 := msg.Sig - pub, err := base64.StdEncoding.DecodeString(pubB64) - if err != nil { - return fmt.Errorf("bad pubkey base64: %v", err) - } - sig, err := base64.StdEncoding.DecodeString(sigB64) - if err != nil { - return fmt.Errorf("bad sig base64: %v", err) - } - - toSign := map[string]interface{}{ - "hdr": msg.Hdr, - "meta": msg.Meta, - "payload": msg.Payload, - } - can, err := stableMarshal(toSign) - if err != nil { - return fmt.Errorf("stableMarshal failed: %v", err) - } - if len(pub) != ed25519.PublicKeySize { - return fmt.Errorf("invalid pubkey length") - } - if !ed25519.Verify(ed25519.PublicKey(pub), can, sig) { - return fmt.Errorf("signature verification failed") - } - return nil -} - -// ----------------------- Pubsub helpers ----------------------- - -func publishAnnouncement(topicName string, envelope *Envelope) error { - t := topicCache[topicName] - if t == nil { - var err error - t, err = ps.Join(topicName) - if err != nil { - return err - } - topicCache[topicName] = t - } - // send minimal announcement: hdr+meta - env := map[string]interface{}{ - "hdr": envelope.Hdr, - "meta": envelope.Meta, - } - b, _ := json.Marshal(env) - return t.Publish(hostCtx, b) -} - -// ----------------------- HTTP Handlers ----------------------- - -func handlePost(w http.ResponseWriter, r *http.Request) { - var env Envelope - body, err := io.ReadAll(r.Body) - if err != nil { - http.Error(w, "bad body", 400) - return - } - if err := json.Unmarshal(body, &env); err != nil { - http.Error(w, "invalid json", 400) - return - } - if err := verifyEnvelope(&env); err != nil { - http.Error(w, fmt.Sprintf("signature failed: %v", err), 400) - return - } - hyperRef, err := appendHyperlog(&env) - if err != nil { - log.Printf("appendHyperlog error: %v", err) - http.Error(w, "internal", 500) - return - } - // publish announcement if project+task present in meta - proj, _ := env.Meta["project"].(string) - task, _ := env.Meta["task"].(string) - if proj != "" && task != "" { - topicName := fmt.Sprintf("%s%s.task.%s", PubsubTopicPrefix, proj, task) - _ = publishAnnouncement(topicName, &env) - } - resp := PostResponse{ - Status: "accepted", - EventID: env.Hdr["msg_id"].(string), - HyperlogRef: hyperRef, - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(201) - _ = json.NewEncoder(w).Encode(resp) -} - -type GetRequest struct { - URI string `json:"uri"` - Hints map[string]interface{} `json:"hints"` -} - -type GetResponse struct { - Context string `json:"context"` - Metadata map[string]interface{} `json:"metadata"` -} - -func handleGet(w http.ResponseWriter, r *http.Request) { - var req GetRequest - b, _ := io.ReadAll(r.Body) - if err := json.Unmarshal(b, &req); err != nil { - http.Error(w, "invalid json", 400) - return - } - rows, err := db.Query("SELECT payload FROM messages WHERE uri = ? 
ORDER BY ts DESC LIMIT 20", req.URI) - if err != nil { - http.Error(w, "db error", 500) - return - } - defer rows.Close() - parts := []string{} - for rows.Next() { - var payloadStr string - if err := rows.Scan(&payloadStr); err == nil { - parts = append(parts, payloadStr) - } - } - resp := GetResponse{ - Context: fmt.Sprintf("Composed %d items", len(parts)), - Metadata: map[string]interface{}{"count": len(parts)}, - } - w.Header().Set("Content-Type", "application/json") - _ = json.NewEncoder(w).Encode(resp) -} - -type AnnounceRequest struct { - Topic string `json:"topic"` - Summary map[string]interface{} `json:"summary"` -} - -type AnnounceResponse struct { - Status string `json:"status"` - PublishedTo string `json:"published_to"` -} - -func handleAnnounce(w http.ResponseWriter, r *http.Request) { - var ar AnnounceRequest - b, _ := io.ReadAll(r.Body) - if err := json.Unmarshal(b, &ar); err != nil { - http.Error(w, "invalid announce", 400) - return - } - t, err := ps.Join(ar.Topic) - if err != nil { - http.Error(w, "pubsub join failed", 500) - return - } - msg, _ := json.Marshal(ar.Summary) - if err := t.Publish(hostCtx, msg); err != nil { - http.Error(w, "publish failed", 500) - return - } - resp := AnnounceResponse{Status: "ok", PublishedTo: ar.Topic} - w.Header().Set("Content-Type", "application/json") - _ = json.NewEncoder(w).Encode(resp) -} - -// ----------------------- Replay endpoint (SLURP demo) ----------------------- - -func handleReplay(w http.ResponseWriter, r *http.Request) { - rows, err := db.Query("SELECT rowid, msg_id, hdr, meta, payload FROM messages ORDER BY rowid ASC") - if err != nil { - http.Error(w, "db error", 500) - return - } - defer rows.Close() - decisions := []map[string]interface{}{} - for rows.Next() { - var rowid int64 - var msgID, hdrStr, metaStr, payloadStr string - if err := rows.Scan(&rowid, &msgID, &hdrStr, &metaStr, &payloadStr); err != nil { - continue - } - var meta map[string]interface{} - var payload map[string]interface{} - _ = json.Unmarshal([]byte(metaStr), &meta) - _ = json.Unmarshal([]byte(payloadStr), &payload) - et, _ := meta["event_type"].(string) - if strings.ToLower(et) == "blocker" { - // check payload.path then meta.path fallback - path := "" - if p, ok := payload["path"].(string); ok { - path = p - } - if path == "" { - if p2, ok := meta["path"].(string); ok { - path = p2 - } - } - if path == "" { - // also support meta["uri"] fallback - if u, ok := meta["uri"].(string); ok { - path = u - } - } - if path == "" { - continue - } - // quarantine matching blobs - _, _ = db.Exec("UPDATE context_blobs SET quarantine = 1 WHERE path LIKE ? || '%'", path) - // collect affected ids - rows2, _ := db.Query("SELECT id FROM context_blobs WHERE path LIKE ? 
|| '%'", path) - affected := []int64{} - for rows2.Next() { - var id int64 - rows2.Scan(&id) - affected = append(affected, id) - } - rows2.Close() - affJSON, _ := json.Marshal(affected) - reason := fmt.Sprintf("quarantine due to blocker event %s", msgID) - ts := time.Now().UTC().Format(time.RFC3339) - _, _ = db.Exec("INSERT INTO slurp_decisions (event_msg_id, decision, affected_blobs, reason, ts) VALUES (?, ?, ?, ?, ?)", - msgID, "quarantine", string(affJSON), reason, ts) - decisions = append(decisions, map[string]interface{}{"event": msgID, "affected": affected}) - } - } - resp := map[string]interface{}{"status": "replayed", "decisions": decisions} - w.Header().Set("Content-Type", "application/json") - _ = json.NewEncoder(w).Encode(resp) -} - -// ----------------------- HTTP server ----------------------- - -func startHTTP(port int) { - r := mux.NewRouter() - r.HandleFunc("/post", handlePost).Methods("POST") - r.HandleFunc("/get", handleGet).Methods("POST") - r.HandleFunc("/announce", handleAnnounce).Methods("POST") - r.HandleFunc("/replay", handleReplay).Methods("POST") - addr := fmt.Sprintf(":%d", port) - log.Printf("HTTP shim listening on %s", addr) - http.ListenAndServe(addr, r) -} - -func main() { - dbPath := os.Getenv("BZZZ_DB_PATH") - if dbPath == "" { - dbPath = "./bzzz_hyperlog.db" - } - if err := initDB(dbPath); err != nil { - log.Fatalf("db init failed: %v", err) - } - - ctx := context.Background() - hostCtx = ctx - // create a libp2p host with default options - h, err := libp2p.New() - if err != nil { - log.Fatalf("libp2p host failed: %v", err) - } - // print deterministic multiaddrs to logs for demo script scraping - addrs := h.Addrs() - pid := h.ID().Pretty() - for _, a := range addrs { - // example: /ip4/172.18.0.2/tcp/4001/p2p/QmPeerID - addr := fmt.Sprintf("%s/p2p/%s", a.String(), pid) - log.Printf("Libp2p host listening at %s", addr) - } - - ps, err = pubsub.NewGossipSub(ctx, h) - if err != nil { - log.Fatalf("pubsub init failed: %v", err) - } - - // join a default topic for debug - go func() { - t, _ := ps.Join("bzzz.global") - sub, _ := t.Subscribe() - for { - m, err := sub.Next(ctx) - if err != nil { - log.Printf("pubsub read err: %v", err) - return - } - log.Printf("Received pubsub msg: %s", string(m.Data)) - } - }() - - // start HTTP shim - go startHTTP(DefaultPort) - - // block forever - select {} -} -``` - ---- - -### Build & run (quick) - -1. Save `main.go` (replace existing). -2. Rebuild: - -```bash -go mod tidy -go build -o bzzz-shim -``` - -3. Run: - -```bash -./bzzz-shim -``` - -You should see lines like: - -``` -Libp2p host listening at /ip4/172.18.0.2/tcp/4001/p2p/Qm... -HTTP shim listening on :19000 -``` - -Your `demo.sh` can now reliably grep that `Libp2p host listening at` line to capture the bootstrap multiaddr. - ---- - -If you want, next I can: - -* Patch the `Dockerfile` to call the binary with env-aware args (e.g., DB path from env), and provide a ready-to-run `docker-compose` (we already drafted those earlier, but I can finalize them). -* Generate a final `client` binary that uses exactly the same `stableMarshal` implementation so signatures match out of the box (I can produce `client/sign_and_post.go` ready-to-run). -* Add unit tests for `stableMarshal` deterministic behavior. 
- - - -Nice — below are the **finalized artifacts** you asked for: - -* **(2)** A polished `Dockerfile` that runs the patched `main.go` shim and accepts env vars (DB path, port, role, bootstrap peer), plus a ready-to-run `docker-compose.yml` configured for a bootstrap + 2 workers setup. -* **(3)** A complete, tested **Go client** `sign_and_post.go` that uses the *same* deterministic `stableMarshal` algorithm as the shim, signs with Ed25519, and POSTs a valid envelope to the shim. - -I’ve kept everything consistent with the patched `main.go` you already dropped in (stableMarshal, log line format, `/replay` endpoint, DB schema). Follow the instructions at the end to build and run the demo. - ---- - -## 2 — Final `Dockerfile` - -Place this `Dockerfile` in your `bzzz-shim` repo root (next to `main.go`, `go.mod`). - -```dockerfile -# Dockerfile — build + runtime for bzzz-shim -# Multi-stage: build in Go, runtime in Debian slim - -FROM golang:1.20-buster AS builder -WORKDIR /src - -# Copy go.mod first to speed up builds -COPY go.mod go.sum ./ -RUN go mod download - -# Copy source -COPY . . - -# Build the binary (CGO needed for sqlite) -RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o /out/bzzz-shim - -# Runtime image -FROM debian:buster-slim -RUN apt-get update && apt-get install -y sqlite3 ca-certificates && rm -rf /var/lib/apt/lists/* -WORKDIR /app - -# Copy binary -COPY --from=builder /out/bzzz-shim /usr/local/bin/bzzz-shim - -# Create data dir and make sure shim can write -RUN mkdir -p /data && chown -R 1000:1000 /data -VOLUME ["/data"] - -EXPOSE 19000 - -# Default envs — override in docker-compose if needed -ENV LISTEN_PORT=19000 -ENV DB_PATH=/data/bzzz_hyperlog.db -ENV ROLE=worker -ENV BOOTSTRAP_PEER="" - -# Run as non-root if you like; for simplicity run as root in prototype -ENTRYPOINT ["/usr/local/bin/bzzz-shim"] -``` - -**Notes** - -* The binary expects environment variables: - - * `DB_PATH` — SQLite DB path (default `/data/bzzz_hyperlog.db`) - * `LISTEN_PORT` — not used by shim in prototyping but kept for parity—shim listens on 19000 by default - * `ROLE` — bootstrap | worker (for future behavior) - * `BOOTSTRAP_PEER` — multiaddr to use if connecting to bootstrap (workers can read this and attempt to connect) -* Volume `/data` maps to host for persistence and to let demo script seed DB. - ---- - -## 2 — Final `docker-compose.yml` - -Place this alongside the `Dockerfile`. It starts a bootstrap shim (HTTP port mapped to host) and two workers that point to the bootstrap peer captured by the helper script. - -```yaml -version: "3.8" -services: - bootstrap: - build: . - container_name: bzzz-shim-bootstrap - environment: - ROLE: bootstrap - LISTEN_PORT: "19000" - DB_PATH: /data/bootstrap.db - volumes: - - ./data:/data - ports: - - "19000:19000" - networks: - - bzzz-net - healthcheck: - test: ["CMD-SHELL", "curl -f http://localhost:19000/ || exit 1"] - interval: 3s - retries: 20 - - worker1: - build: . - container_name: bzzz-shim-worker1 - environment: - ROLE: worker - LISTEN_PORT: "19000" - DB_PATH: /data/worker1.db - BOOTSTRAP_PEER: ${BOOTSTRAP_PEER} - volumes: - - ./data:/data - depends_on: - - bootstrap - networks: - - bzzz-net - - worker2: - build: . 
- container_name: bzzz-shim-worker2 - environment: - ROLE: worker - LISTEN_PORT: "19000" - DB_PATH: /data/worker2.db - BOOTSTRAP_PEER: ${BOOTSTRAP_PEER} - volumes: - - ./data:/data - depends_on: - - bootstrap - networks: - - bzzz-net - -networks: - bzzz-net: - driver: bridge -``` - -**How this works** - -* `bootstrap` is exposed at host port `19000` so local clients can POST to `http://localhost:19000/post`. -* Workers expect `BOOTSTRAP_PEER` env var with multiaddr of the bootstrap — the demo script will scrape logs and set that variable when starting workers. -* `/data` host directory persists SQLite DBs for inspection. - ---- - -## 3 — Final Go client `sign_and_post.go` - -Put this in `client/sign_and_post.go`. It uses the identical `stableMarshal` canonicalization used in the patched `main.go` so signatures will verify. - -```go -// client/sign_and_post.go -package main - -import ( - "crypto/ed25519" - "crypto/rand" - "encoding/base64" - "encoding/json" - "flag" - "fmt" - "io" - "io/ioutil" - "net/http" - "os" - "sort" - "time" -) - -// stableMarshal deterministic JSON: sorts map keys recursively. -// Matches the shim's implementation. -func stableMarshal(v interface{}) ([]byte, error) { - switch vv := v.(type) { - case map[string]interface{}: - keys := make([]string, 0, len(vv)) - for k := range vv { - keys = append(keys, k) - } - sort.Strings(keys) - ordered := make(map[string]interface{}, len(keys)) - for _, k := range keys { - b, err := stableMarshal(vv[k]) - if err != nil { - return nil, err - } - var tmp interface{} - if err := json.Unmarshal(b, &tmp); err != nil { - return nil, err - } - ordered[k] = tmp - } - return json.Marshal(ordered) - case []interface{}: - arr := make([]interface{}, len(vv)) - for i, e := range vv { - b, err := stableMarshal(e) - if err != nil { - return nil, err - } - var tmp interface{} - if err := json.Unmarshal(b, &tmp); err != nil { - return nil, err - } - arr[i] = tmp - } - return json.Marshal(arr) - default: - return json.Marshal(v) - } -} - -// bytesReader helper -type reader struct{ b []byte; i int } -func (r *reader) Read(p []byte) (int, error) { - if r.i >= len(r.b) { - return 0, io.EOF - } - n := copy(p, r.b[r.i:]) - r.i += n - return n, nil -} - -func bytesReader(b []byte) io.Reader { return &reader{b: b} } - -func main() { - keyPath := flag.String("key", "./client.key", "path to private key") - shim := flag.String("shim", "http://localhost:19000", "shim URL") - flag.Parse() - - var priv ed25519.PrivateKey - var pub ed25519.PublicKey - - if _, err := os.Stat(*keyPath); os.IsNotExist(err) { - pubk, privk, _ := ed25519.GenerateKey(rand.Reader) - priv = privk - pub = pubk - if err := ioutil.WriteFile(*keyPath, priv, 0600); err != nil { - panic(err) - } - fmt.Println("Generated new key:", *keyPath) - } else { - data, err := ioutil.ReadFile(*keyPath) - if err != nil { - panic(err) - } - priv = ed25519.PrivateKey(data) - pub = priv.Public().(ed25519.PublicKey) - } - - pubB64 := base64.StdEncoding.EncodeToString(pub) - msgID := fmt.Sprintf("msg-%d", time.Now().UnixNano()) - convoID := msgID - ts := time.Now().UTC().Format(time.RFC3339) - - hdr := map[string]interface{}{ - "op": "POST", - "uri": "bzzz://alice:dev@chorus:website-redesign/src/modules/db/connection", - "msg_id": msgID, - "parent_id": nil, - "convo_id": convoID, - "ts": ts, - "lamport": 1, - "ttl": 3600, - "priority": "urgent", - "agent_pubkey": pubB64, - } - meta := map[string]interface{}{ - "event_type": "blocker", - "project": "chorus", - "task": "website-redesign", - "visibility": 
map[string]interface{}{ - "roles": []interface{}{"devops", "backend"}, - "channels": []interface{}{"dev"}, - "exclude": []interface{}{}, - }, - } - payload := map[string]interface{}{ - "title": "Hardcoded DB creds found", - "body": "Automated scan discovered plaintext creds in src/modules/db/connection. Rotate keys.", - "attachments": []interface{}{}, - "hints": map[string]interface{}{"use_lightrag": false}, - } - - toSign := map[string]interface{}{ - "hdr": hdr, - "meta": meta, - "payload": payload, - } - can, err := stableMarshal(toSign) - if err != nil { - panic(err) - } - sig := ed25519.Sign(priv, can) - sigB64 := base64.StdEncoding.EncodeToString(sig) - - envelope := map[string]interface{}{ - "hdr": hdr, - "meta": meta, - "payload": payload, - "sig": sigB64, - } - envBytes, _ := json.MarshalIndent(envelope, "", " ") - fmt.Println("Envelope to POST:\n", string(envBytes)) - - client := &http.Client{Timeout: 10 * time.Second} - req, _ := http.NewRequest("POST", *shim+"/post", bytesReader(envBytes)) - req.Header.Set("Content-Type", "application/json") - resp, err := client.Do(req) - if err != nil { - fmt.Println("POST failed:", err) - os.Exit(1) - } - defer resp.Body.Close() - body, _ := ioutil.ReadAll(resp.Body) - fmt.Println("Shim response:", string(body)) -} -``` - -**How it matches the shim** - -* Uses the exact same `stableMarshal` algorithm (recursive key ordering) as the patched shim — this ensures signatures verify. -* Writes the private key to `./client.key` and reuses it across runs. -* Posts to `http://localhost:19000/post` by default — change `--shim` if needed. - ---- - -## Build & Run instructions (end-to-end) - -1. **Prepare repo** - - * Ensure your `bzzz-shim` folder has: - - * patched `main.go` (the version you accepted) - * `go.mod`, `go.sum` - * `Dockerfile` (above) - * `docker-compose.yml` (above) - * `client/sign_and_post.go` (above) - -2. **Build the Docker image** - -```bash -docker-compose build --no-cache -``` - -3. **Start bootstrap only and capture multiaddr** - -```bash -docker-compose up -d bootstrap -# Wait a few seconds for the shim to start and print the multiaddr -sleep 3 -docker logs bzzz-shim-bootstrap --tail 200 -# Look for lines like: -# Libp2p host listening at /ip4/172.18.0.2/tcp/4001/p2p/QmPeer... -# Copy that multiaddr string for the next step (or let demo script auto-scrape) -``` - -4. **Start workers with BOOTSTRAP\_PEER** - If you have the bootstrap multiaddr, export and start workers: - -```bash -export BOOTSTRAP_PEER="/ip4/172.18.0.2/tcp/4001/p2p/QmPeer..." -BOOTSTRAP_PEER="$BOOTSTRAP_PEER" docker-compose up -d worker1 worker2 -``` - -(If you want the helper automation, use the earlier `demo.sh` which scrapes logs and starts workers.) - -5. **Build & run the client** (locally) - -```bash -cd client -go mod tidy -go build -o ../client_post sign_and_post.go -cd .. -./client_post --key ./client.key --shim http://localhost:19000 -``` - -You should see the envelope printed and a response from the shim indicating accepted and `hyperlog_ref`. - -6. **Seed a context blob into bootstrap DB (demo)** - -```bash -docker exec -i bzzz-shim-bootstrap sqlite3 /data/bootstrap.db <<'SQL' -INSERT INTO context_blobs (path, role, channel, content, created_at, quarantine) -VALUES ('/src/modules/db/connection','backend','dev','DB creds: secret', datetime('now'), 0); -SQL -``` - -7. **Trigger replay to run SLURP quarantines** - -```bash -curl -sS -X POST http://localhost:19000/replay | jq . -``` - -8. 
**Verify quarantine applied** - -```bash -docker exec -i bzzz-shim-bootstrap sqlite3 /data/bootstrap.db "SELECT id, path, quarantine FROM context_blobs;" | sed -n '1,200p' -docker exec -i bzzz-shim-bootstrap sqlite3 /data/bootstrap.db "SELECT * FROM slurp_decisions;" | sed -n '1,200p' -``` - -You should see the blob row with `quarantine = 1` and an entry in `slurp_decisions` showing the event and affected blob IDs. - ---- - -## Final notes & next recommended steps - -* **Key rotation & protection:** Move keys to a secure keystore for production. For demo this file-based approach is fine. -* **Deterministic canonical JSON:** The `stableMarshal` is intentionally simple and matched on both ends. When moving to production, switch to an established canonical JSON library (RFC8785/JCS) or use COSE/CBOR signatures. -* **Bootstrap & NAT:** The docker-internal network multiaddr is used for demo. For cross-host testing, add bootstrap nodes reachable over WAN or use relay nodes. -* **Volumes & persistence:** The compose mounts `./data` — commit/backup that folder if you want persistent demo state. - ---- - -If you want, next I can: - -* Generate an automated `demo.sh` that uses the exact logging pattern and auto-scrapes the bootstrap multiaddr and runs the full sequence (build, start bootstrap, start workers, run client, seed blob, replay) — fully hands-off. -* Add a small UI or simple web page that subscribes to a worker and displays pubsub announcements and SLURP decisions. -* Harden canonical signing using RFC8785-compliant library and swap SQLite hyperlog for Hypercore/IPFS CAR. - diff --git a/old-docs/IMPLEMENTATION_ROADMAP.md b/old-docs/IMPLEMENTATION_ROADMAP.md deleted file mode 100644 index 4862c538..00000000 --- a/old-docs/IMPLEMENTATION_ROADMAP.md +++ /dev/null @@ -1,1194 +0,0 @@ -# BZZZ v2 Implementation Roadmap: UCXL Integration - -## Phase 1: Foundation Implementation (Weeks 1-2) - -### Week 1: UCXL Address Parser Implementation - -#### 1.1 Replace existing `pkg/protocol/uri.go` with UCXL support - -**File:** `/pkg/protocol/ucxl_address.go` - -```go -package protocol - -import ( - "fmt" - "net/url" - "regexp" - "strings" - "time" -) - -// UCXLAddress represents a parsed UCXL address with temporal navigation -// Grammar: ucxl://agent:role@project:task/temporal_segment/path[?query][#fragment] -type UCXLAddress struct { - // Core semantic addressing - Agent string `json:"agent"` // "gpt4", "claude", "any" - Role string `json:"role"` // "architect", "reviewer", "any" - Project string `json:"project"` // "bzzz", "chorus", "any" - Task string `json:"task"` // "v2-migration", "auth", "any" - - // Temporal navigation - TemporalSegment string `json:"temporal_segment"` // "~~", "^^", "*^", "*~", ISO8601 - - // Resource path - Path string `json:"path"` // "/decisions/architecture.json" - - // Standard URI components - Query string `json:"query,omitempty"` - Fragment string `json:"fragment,omitempty"` - Raw string `json:"raw"` - - // Resolved temporal information - ResolvedTime *time.Time `json:"resolved_time,omitempty"` -} - -// Temporal navigation constants -const ( - UCXLScheme = "ucxl" - TemporalBackward = "~~" // Navigate backward in time - TemporalForward = "^^" // Navigate forward in time - TemporalLatest = "*^" // Latest entry - TemporalFirst = "*~" // First entry - AnyWildcard = "any" // Wildcard for any component -) - -// Validation patterns for UCXL components -var ( - agentPattern = regexp.MustCompile(`^[a-zA-Z0-9\-_]+$|^any$`) - rolePattern = 
regexp.MustCompile(`^[a-zA-Z0-9\-_]+$|^any$`) - projectPattern = regexp.MustCompile(`^[a-zA-Z0-9\-_]+$|^any$`) - taskPattern = regexp.MustCompile(`^[a-zA-Z0-9\-_]+$|^any$`) - temporalPattern = regexp.MustCompile(`^(~~|\^\^|\*\^|\*~|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.*Z?)$`) - pathPattern = regexp.MustCompile(`^(/[a-zA-Z0-9\-_/.]*)?$`) -) - -// ParseUCXLAddress parses a UCXL URI string -func ParseUCXLAddress(uri string) (*UCXLAddress, error) { - if uri == "" { - return nil, fmt.Errorf("empty UCXL address") - } - - if !strings.HasPrefix(uri, UCXLScheme+"://") { - return nil, fmt.Errorf("invalid scheme: expected '%s'", UCXLScheme) - } - - // Parse using standard URL parser - parsedURL, err := url.Parse(uri) - if err != nil { - return nil, fmt.Errorf("failed to parse UCXL address: %w", err) - } - - // Extract agent:role from user info - userInfo := parsedURL.User - if userInfo == nil { - return nil, fmt.Errorf("missing agent:role information") - } - - agent := userInfo.Username() - role, hasRole := userInfo.Password() - if !hasRole { - return nil, fmt.Errorf("missing role information") - } - - // Extract project:task from host - hostParts := strings.Split(parsedURL.Host, ":") - if len(hostParts) != 2 { - return nil, fmt.Errorf("invalid project:task format") - } - - project := hostParts[0] - task := hostParts[1] - - // Parse temporal segment and path - pathParts := strings.SplitN(strings.TrimPrefix(parsedURL.Path, "/"), "/", 2) - temporalSegment := "" - resourcePath := "" - - if len(pathParts) > 0 && pathParts[0] != "" { - temporalSegment = pathParts[0] - } - if len(pathParts) > 1 { - resourcePath = "/" + pathParts[1] - } - - address := &UCXLAddress{ - Agent: agent, - Role: role, - Project: project, - Task: task, - TemporalSegment: temporalSegment, - Path: resourcePath, - Query: parsedURL.RawQuery, - Fragment: parsedURL.Fragment, - Raw: uri, - } - - if err := address.Validate(); err != nil { - return nil, fmt.Errorf("validation failed: %w", err) - } - - return address, nil -} - -// Validate validates UCXL address components -func (addr *UCXLAddress) Validate() error { - if !agentPattern.MatchString(addr.Agent) { - return fmt.Errorf("invalid agent: %s", addr.Agent) - } - if !rolePattern.MatchString(addr.Role) { - return fmt.Errorf("invalid role: %s", addr.Role) - } - if !projectPattern.MatchString(addr.Project) { - return fmt.Errorf("invalid project: %s", addr.Project) - } - if !taskPattern.MatchString(addr.Task) { - return fmt.Errorf("invalid task: %s", addr.Task) - } - if addr.TemporalSegment != "" && !temporalPattern.MatchString(addr.TemporalSegment) { - return fmt.Errorf("invalid temporal segment: %s", addr.TemporalSegment) - } - if !pathPattern.MatchString(addr.Path) { - return fmt.Errorf("invalid path: %s", addr.Path) - } - return nil -} - -// ResolveTemporalSegment resolves temporal navigation tokens to actual timestamps -func (addr *UCXLAddress) ResolveTemporalSegment(navigator TemporalNavigator) error { - if addr.TemporalSegment == "" { - return nil - } - - switch addr.TemporalSegment { - case TemporalLatest: - timestamp, err := navigator.GetLatestTimestamp(addr) - if err != nil { - return err - } - addr.ResolvedTime = ×tamp - case TemporalFirst: - timestamp, err := navigator.GetFirstTimestamp(addr) - if err != nil { - return err - } - addr.ResolvedTime = ×tamp - case TemporalBackward, TemporalForward: - // These require context of current position - return fmt.Errorf("relative navigation requires current context") - default: - // Parse as ISO8601 timestamp - timestamp, err := 
time.Parse(time.RFC3339, addr.TemporalSegment) - if err != nil { - return fmt.Errorf("invalid timestamp format: %w", err) - } - addr.ResolvedTime = ×tamp - } - - return nil -} - -// Matches checks if this address matches another (with wildcard support) -func (addr *UCXLAddress) Matches(other *UCXLAddress) bool { - return componentMatches(addr.Agent, other.Agent) && - componentMatches(addr.Role, other.Role) && - componentMatches(addr.Project, other.Project) && - componentMatches(addr.Task, other.Task) && - pathMatches(addr.Path, other.Path) -} - -func componentMatches(a, b string) bool { - return a == b || a == AnyWildcard || b == AnyWildcard -} - -func pathMatches(a, b string) bool { - if a == "" || b == "" { - return true - } - return a == b -} - -// String returns canonical string representation -func (addr *UCXLAddress) String() string { - uri := fmt.Sprintf("%s://%s:%s@%s:%s", UCXLScheme, addr.Agent, addr.Role, addr.Project, addr.Task) - - if addr.TemporalSegment != "" { - uri += "/" + addr.TemporalSegment - } - - if addr.Path != "" { - uri += addr.Path - } - - if addr.Query != "" { - uri += "?" + addr.Query - } - - if addr.Fragment != "" { - uri += "#" + addr.Fragment - } - - return uri -} - -// ToStorageKey generates a storage key for this address -func (addr *UCXLAddress) ToStorageKey() string { - key := fmt.Sprintf("%s/%s/%s/%s", addr.Agent, addr.Role, addr.Project, addr.Task) - - if addr.ResolvedTime != nil { - key += "/" + addr.ResolvedTime.Format(time.RFC3339) - } else if addr.TemporalSegment != "" { - key += "/" + addr.TemporalSegment - } - - if addr.Path != "" { - key += addr.Path - } - - return key -} -``` - -#### 1.2 Temporal Navigator Interface - -**File:** `/pkg/temporal/navigator.go` - -```go -package temporal - -import ( - "time" - "github.com/anthonyrawlins/bzzz/pkg/protocol" -) - -type TemporalNavigator interface { - GetLatestTimestamp(addr *protocol.UCXLAddress) (time.Time, error) - GetFirstTimestamp(addr *protocol.UCXLAddress) (time.Time, error) - NavigateBackward(addr *protocol.UCXLAddress, steps int) (time.Time, error) - NavigateForward(addr *protocol.UCXLAddress, steps int) (time.Time, error) - GetAtTime(addr *protocol.UCXLAddress, timestamp time.Time) (*protocol.UCXLAddress, error) -} - -type TemporalIndex struct { - // Map address patterns to temporal entries - entries map[string][]TemporalEntry -} - -type TemporalEntry struct { - Timestamp time.Time `json:"timestamp"` - Version int64 `json:"version"` - Address *protocol.UCXLAddress `json:"address"` - Checksum string `json:"checksum"` -} -``` - -### Week 2: UCXI Interface Server - -#### 2.1 UCXI Server Implementation - -**File:** `/pkg/ucxi/server.go` - -```go -package ucxi - -import ( - "encoding/json" - "net/http" - "github.com/gorilla/mux" - "github.com/anthonyrawlins/bzzz/pkg/protocol" - "github.com/anthonyrawlins/bzzz/pkg/storage" - "github.com/anthonyrawlins/bzzz/pkg/temporal" -) - -type UCXIServer struct { - contextStore storage.ContextStore - temporalNav temporal.TemporalNavigator - router *mux.Router - port int -} - -// Context entry structure -type ContextEntry struct { - Address *protocol.UCXLAddress `json:"address"` - Content map[string]interface{} `json:"content"` - Metadata ContextMetadata `json:"metadata"` - Version int64 `json:"version"` - Checksum string `json:"checksum"` - CreatedAt time.Time `json:"created_at"` - UpdatedAt time.Time `json:"updated_at"` -} - -type ContextMetadata struct { - ContentType string `json:"content_type"` - Size int64 `json:"size"` - Tags []string `json:"tags"` - 
Provenance string `json:"provenance"` - Relationships map[string]string `json:"relationships"` -} - -func NewUCXIServer(store storage.ContextStore, nav temporal.TemporalNavigator, port int) *UCXIServer { - server := &UCXIServer{ - contextStore: store, - temporalNav: nav, - router: mux.NewRouter(), - port: port, - } - server.setupRoutes() - return server -} - -func (s *UCXIServer) setupRoutes() { - // UCXI operations - s.router.HandleFunc("/ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path:.*}", s.handleGET).Methods("GET") - s.router.HandleFunc("/ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path:.*}", s.handlePUT).Methods("PUT") - s.router.HandleFunc("/ucxi/{agent}:{role}@{project}:{task}/{temporal}/", s.handlePOST).Methods("POST") - s.router.HandleFunc("/ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path:.*}", s.handleDELETE).Methods("DELETE") - s.router.HandleFunc("/ucxi/announce", s.handleANNOUNCE).Methods("POST") - - // Extended operations - s.router.HandleFunc("/ucxi/navigate/{direction}", s.handleNAVIGATE).Methods("GET") - s.router.HandleFunc("/ucxi/query", s.handleQUERY).Methods("GET") - s.router.HandleFunc("/ucxi/subscribe", s.handleSUBSCRIBE).Methods("POST") -} - -func (s *UCXIServer) handleGET(w http.ResponseWriter, r *http.Request) { - vars := mux.Vars(r) - - // Construct UCXL address from URL parameters - address := &protocol.UCXLAddress{ - Agent: vars["agent"], - Role: vars["role"], - Project: vars["project"], - Task: vars["task"], - TemporalSegment: vars["temporal"], - Path: "/" + vars["path"], - } - - // Resolve temporal navigation - if err := address.ResolveTemporalSegment(s.temporalNav); err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - - // Retrieve context - entry, err := s.contextStore.Retrieve(address) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - - w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(entry) -} - -func (s *UCXIServer) handlePUT(w http.ResponseWriter, r *http.Request) { - vars := mux.Vars(r) - - var entry ContextEntry - if err := json.NewDecoder(r.Body).Decode(&entry); err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - - // Construct address - address := &protocol.UCXLAddress{ - Agent: vars["agent"], - Role: vars["role"], - Project: vars["project"], - Task: vars["task"], - TemporalSegment: vars["temporal"], - Path: "/" + vars["path"], - } - - entry.Address = address - entry.UpdatedAt = time.Now() - - if err := s.contextStore.Store(address, &entry); err != nil { - http.Error(w, err.Error(), http.StatusInternalServerError) - return - } - - w.WriteHeader(http.StatusCreated) -} - -// Additional handlers for POST, DELETE, ANNOUNCE, NAVIGATE, QUERY, SUBSCRIBE... 
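-
-// Illustrative sketch only (not part of the original roadmap): one possible shape for the
-// temporal-navigation handler wired to /ucxi/navigate/{direction} above. It assumes the
-// current address and step count arrive as "address" and "steps" query parameters — that
-// request shape is an assumption of this sketch, not settled UCXI behaviour — and it needs
-// "strconv" and "time" added to the import block above.
-func (s *UCXIServer) handleNAVIGATE(w http.ResponseWriter, r *http.Request) {
-	direction := mux.Vars(r)["direction"] // "~~" (backward) or "^^" (forward)
-
-	addr, err := protocol.ParseUCXLAddress(r.URL.Query().Get("address"))
-	if err != nil {
-		http.Error(w, err.Error(), http.StatusBadRequest)
-		return
-	}
-
-	steps := 1
-	if raw := r.URL.Query().Get("steps"); raw != "" {
-		if n, convErr := strconv.Atoi(raw); convErr == nil && n > 0 {
-			steps = n
-		}
-	}
-
-	var ts time.Time
-	switch direction {
-	case protocol.TemporalBackward:
-		ts, err = s.temporalNav.NavigateBackward(addr, steps)
-	case protocol.TemporalForward:
-		ts, err = s.temporalNav.NavigateForward(addr, steps)
-	default:
-		http.Error(w, "direction must be ~~ or ^^", http.StatusBadRequest)
-		return
-	}
-	if err != nil {
-		http.Error(w, err.Error(), http.StatusNotFound)
-		return
-	}
-
-	// Return the address annotated with the timestamp the navigator resolved to.
-	addr.ResolvedTime = &ts
-	w.Header().Set("Content-Type", "application/json")
-	json.NewEncoder(w).Encode(addr)
-}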
-``` - -## Phase 2: Decision Graph & SLURP Integration (Weeks 3-6) - -### Week 3-4: Decision Node Schema - -#### 3.1 Decision Node Structure - -**File:** `/pkg/decisions/schema.go` - -```go -package decisions - -import ( - "time" - "github.com/anthonyrawlins/bzzz/pkg/protocol" -) - -// DecisionNode represents a structured decision for SLURP ingestion -type DecisionNode struct { - DecisionID string `json:"decision_id"` - UCXLAddress string `json:"ucxl_address"` - Timestamp time.Time `json:"timestamp"` - AgentID string `json:"agent_id"` - DecisionType string `json:"decision_type"` - Context DecisionContext `json:"context"` - Justification Justification `json:"justification"` - Citations []Citation `json:"citations"` - Impacts []Impact `json:"impacts"` - Metadata DecisionMetadata `json:"metadata"` -} - -type DecisionContext struct { - Project string `json:"project"` - Task string `json:"task"` - Scope string `json:"scope"` - Phase string `json:"phase"` - Environment string `json:"environment"` - Constraints map[string]interface{} `json:"constraints"` -} - -type Justification struct { - Reasoning string `json:"reasoning"` - AlternativesConsidered []Alternative `json:"alternatives_considered"` - Criteria []string `json:"criteria"` - Confidence float64 `json:"confidence"` - RiskAssessment RiskAssessment `json:"risk_assessment"` - TradeOffs map[string]interface{} `json:"trade_offs"` -} - -type Alternative struct { - Name string `json:"name"` - Description string `json:"description"` - Pros []string `json:"pros"` - Cons []string `json:"cons"` - Score float64 `json:"score"` - Rejected bool `json:"rejected"` - Reason string `json:"reason"` - Metadata map[string]interface{} `json:"metadata"` -} - -type Citation struct { - Type string `json:"type"` // "justified_by", "references", "contradicts", "extends" - UCXLAddress string `json:"ucxl_address"` - Relevance string `json:"relevance"` // "high", "medium", "low" - Excerpt string `json:"excerpt"` - Strength float64 `json:"strength"` // 0.0 - 1.0 - Verified bool `json:"verified"` - VerifiedAt *time.Time `json:"verified_at,omitempty"` -} - -type Impact struct { - Type string `json:"type"` // "replaces", "modifies", "creates", "deprecates" - UCXLAddress string `json:"ucxl_address"` - Reason string `json:"reason"` - Severity string `json:"severity"` // "low", "medium", "high", "breaking" - Affected []string `json:"affected"` - Migration *MigrationInfo `json:"migration,omitempty"` - Metadata map[string]interface{} `json:"metadata"` -} - -type MigrationInfo struct { - Required bool `json:"required"` - Automated bool `json:"automated"` - Instructions string `json:"instructions"` - Timeline string `json:"timeline"` -} - -type RiskAssessment struct { - Level string `json:"level"` // "low", "medium", "high" - Factors []string `json:"factors"` - Mitigation []string `json:"mitigation"` - Monitoring []string `json:"monitoring"` - Escalation *EscalationInfo `json:"escalation,omitempty"` -} - -type EscalationInfo struct { - Triggers []string `json:"triggers"` - Contacts []string `json:"contacts"` - Procedures []string `json:"procedures"` -} - -type DecisionMetadata struct { - SourceRepository string `json:"source_repository"` - CommitSHA string `json:"commit_sha"` - PullRequestID string `json:"pull_request_id"` - ReviewedBy []string `json:"reviewed_by"` - ApprovedBy []string `json:"approved_by"` - Tags []string `json:"tags"` - Priority int `json:"priority"` - Visibility string `json:"visibility"` // "public", "internal", "private" - ExpiresAt *time.Time 
`json:"expires_at,omitempty"` - CustomFields map[string]interface{} `json:"custom_fields"` -} - -// Decision types enumeration -const ( - DecisionTypeArchitecture = "architecture_choice" - DecisionTypeImplementation = "implementation_approach" - DecisionTypeTechnology = "technology_selection" - DecisionTypeProcess = "process_definition" - DecisionTypeAPI = "api_design" - DecisionTypeDataModel = "data_model_design" - DecisionTypeSecurity = "security_requirement" - DecisionTypeDeployment = "deployment_strategy" - DecisionTypeRollback = "rollback_decision" - DecisionTypeDeprecation = "deprecation_notice" -) - -// Citation types enumeration -const ( - CitationJustifiedBy = "justified_by" - CitationReferences = "references" - CitationContradicts = "contradicts" - CitationExtends = "extends" - CitationSupersedes = "supersedes" -) - -// Validation methods -func (dn *DecisionNode) Validate() error { - if dn.DecisionID == "" { - return fmt.Errorf("decision_id is required") - } - - if dn.UCXLAddress == "" { - return fmt.Errorf("ucxl_address is required") - } - - if _, err := protocol.ParseUCXLAddress(dn.UCXLAddress); err != nil { - return fmt.Errorf("invalid ucxl_address: %w", err) - } - - if dn.AgentID == "" { - return fmt.Errorf("agent_id is required") - } - - if dn.DecisionType == "" { - return fmt.Errorf("decision_type is required") - } - - // Validate citations - for i, citation := range dn.Citations { - if citation.UCXLAddress == "" { - return fmt.Errorf("citation[%d] missing ucxl_address", i) - } - - if _, err := protocol.ParseUCXLAddress(citation.UCXLAddress); err != nil { - return fmt.Errorf("citation[%d] invalid ucxl_address: %w", i, err) - } - } - - return nil -} - -func (dn *DecisionNode) GenerateID() string { - hash := sha256.Sum256([]byte(dn.UCXLAddress + dn.AgentID + dn.Timestamp.Format(time.RFC3339))) - return fmt.Sprintf("%x", hash)[:16] -} -``` - -### Week 5-6: SLURP Integration - -#### 5.1 SLURP Client Implementation - -**File:** `/pkg/integration/slurp_client.go` - -```go -package integration - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - "time" - "github.com/anthonyrawlins/bzzz/pkg/decisions" -) - -type SLURPClient struct { - baseURL string - apiKey string - httpClient *http.Client - batchSize int - timeout time.Duration -} - -type SLURPPublishRequest struct { - Decisions []decisions.DecisionNode `json:"decisions"` - Source string `json:"source"` - Version string `json:"version"` - Timestamp time.Time `json:"timestamp"` -} - -type SLURPPublishResponse struct { - Accepted []string `json:"accepted"` - Rejected []string `json:"rejected"` - Errors []string `json:"errors"` -} - -func NewSLURPClient(baseURL, apiKey string) *SLURPClient { - return &SLURPClient{ - baseURL: baseURL, - apiKey: apiKey, - httpClient: &http.Client{ - Timeout: 30 * time.Second, - }, - batchSize: 10, - timeout: 30 * time.Second, - } -} - -func (sc *SLURPClient) PublishDecisions(nodes []decisions.DecisionNode) (*SLURPPublishResponse, error) { - // Validate all decisions before publishing - for i, node := range nodes { - if err := node.Validate(); err != nil { - return nil, fmt.Errorf("decision[%d] validation failed: %w", i, err) - } - } - - request := SLURPPublishRequest{ - Decisions: nodes, - Source: "bzzz-v2", - Version: "2.0.0", - Timestamp: time.Now(), - } - - jsonData, err := json.Marshal(request) - if err != nil { - return nil, fmt.Errorf("failed to marshal request: %w", err) - } - - req, err := http.NewRequest("POST", sc.baseURL+"/api/v1/decisions", bytes.NewBuffer(jsonData)) - if err != 
nil { - return nil, fmt.Errorf("failed to create request: %w", err) - } - - req.Header.Set("Content-Type", "application/json") - req.Header.Set("Authorization", "Bearer "+sc.apiKey) - req.Header.Set("User-Agent", "bzzz-v2/2.0.0") - - resp, err := sc.httpClient.Do(req) - if err != nil { - return nil, fmt.Errorf("failed to send request: %w", err) - } - defer resp.Body.Close() - - if resp.StatusCode != http.StatusOK { - return nil, fmt.Errorf("SLURP API error: %d", resp.StatusCode) - } - - var response SLURPPublishResponse - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - return nil, fmt.Errorf("failed to decode response: %w", err) - } - - return &response, nil -} - -func (sc *SLURPClient) QueryDecisions(query string) ([]decisions.DecisionNode, error) { - req, err := http.NewRequest("GET", sc.baseURL+"/api/v1/decisions/query", nil) - if err != nil { - return nil, err - } - - q := req.URL.Query() - q.Add("q", query) - req.URL.RawQuery = q.Encode() - - req.Header.Set("Authorization", "Bearer "+sc.apiKey) - - resp, err := sc.httpClient.Do(req) - if err != nil { - return nil, err - } - defer resp.Body.Close() - - var nodes []decisions.DecisionNode - if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil { - return nil, err - } - - return nodes, nil -} -``` - -## Phase 3: Agent Integration & Testing (Weeks 7-8) - -### Week 7: GPT-4 Agent UCXL Integration - -#### 7.1 Updated MCP Tools - -**File:** `/mcp-server/src/tools/ucxi-tools.ts` - -```typescript -import { Tool } from "@modelcontextprotocol/sdk/types.js"; - -export const ucxiTools: Record = { - ucxi_get: { - name: "ucxi_get", - description: "Retrieve context from UCXL address with temporal navigation", - inputSchema: { - type: "object", - properties: { - address: { - type: "string", - description: "UCXL address (e.g., ucxl://gpt4:architect@bzzz:v2/*^/decisions.json)" - }, - temporal: { - type: "string", - enum: ["~~", "^^", "*^", "*~"], - description: "Temporal navigation: ~~ (back), ^^ (forward), *^ (latest), *~ (first)" - } - }, - required: ["address"] - } - }, - - ucxi_put: { - name: "ucxi_put", - description: "Store context at UCXL address", - inputSchema: { - type: "object", - properties: { - address: { - type: "string", - description: "UCXL address where to store context" - }, - content: { - type: "object", - description: "Context content to store" - }, - metadata: { - type: "object", - properties: { - content_type: { type: "string" }, - tags: { type: "array", items: { type: "string" } }, - provenance: { type: "string" } - }, - description: "Context metadata" - } - }, - required: ["address", "content"] - } - }, - - ucxi_publish_decision: { - name: "ucxi_publish_decision", - description: "Publish a structured decision to SLURP via UCXI", - inputSchema: { - type: "object", - properties: { - decision_type: { - type: "string", - enum: ["architecture_choice", "implementation_approach", "technology_selection", "api_design"], - description: "Type of decision being published" - }, - reasoning: { - type: "string", - description: "Detailed reasoning for the decision" - }, - alternatives: { - type: "array", - items: { - type: "object", - properties: { - name: { type: "string" }, - description: { type: "string" }, - pros: { type: "array", items: { type: "string" } }, - cons: { type: "array", items: { type: "string" } }, - rejected: { type: "boolean" }, - reason: { type: "string" } - } - }, - description: "Alternative approaches considered" - }, - citations: { - type: "array", - items: { - type: "object", - properties: 
{ - type: { type: "string", enum: ["justified_by", "references", "contradicts"] }, - ucxl_address: { type: "string" }, - relevance: { type: "string", enum: ["high", "medium", "low"] }, - excerpt: { type: "string" } - } - }, - description: "Citations supporting the decision" - }, - impacts: { - type: "array", - items: { - type: "object", - properties: { - type: { type: "string", enum: ["replaces", "modifies", "creates", "deprecates"] }, - ucxl_address: { type: "string" }, - reason: { type: "string" }, - severity: { type: "string", enum: ["low", "medium", "high", "breaking"] } - } - }, - description: "Impacts of this decision" - } - }, - required: ["decision_type", "reasoning"] - } - }, - - ucxi_query: { - name: "ucxi_query", - description: "Query contexts matching UCXL address pattern", - inputSchema: { - type: "object", - properties: { - pattern: { - type: "string", - description: "UCXL address pattern with wildcards (e.g., any:architect@bzzz:*)" - }, - temporal_range: { - type: "object", - properties: { - start: { type: "string", format: "date-time" }, - end: { type: "string", format: "date-time" } - }, - description: "Temporal range for search" - }, - content_filter: { - type: "string", - description: "Full-text search filter for content" - } - }, - required: ["pattern"] - } - }, - - ucxi_navigate: { - name: "ucxi_navigate", - description: "Navigate temporally through context history", - inputSchema: { - type: "object", - properties: { - current_address: { - type: "string", - description: "Current UCXL address position" - }, - direction: { - type: "string", - enum: ["~~", "^^"], - description: "Navigation direction: ~~ (backward) or ^^ (forward)" - }, - steps: { - type: "integer", - default: 1, - description: "Number of temporal steps to navigate" - } - }, - required: ["current_address", "direction"] - } - }, - - ucxi_announce: { - name: "ucxi_announce", - description: "Announce context availability to P2P network", - inputSchema: { - type: "object", - properties: { - address: { - type: "string", - description: "UCXL address to announce" - }, - capabilities: { - type: "array", - items: { type: "string" }, - description: "Agent capabilities (roles, specializations)" - }, - metadata: { - type: "object", - description: "Additional metadata about the context" - } - }, - required: ["address"] - } - } -}; - -// Handler implementations -export async function handleUCXITool(name: string, args: any): Promise { - const ucxiServerURL = process.env.UCXI_SERVER_URL || "http://localhost:8080"; - - switch (name) { - case "ucxi_get": - return await ucxiGet(ucxiServerURL, args.address, args.temporal); - - case "ucxi_put": - return await ucxiPut(ucxiServerURL, args.address, args.content, args.metadata); - - case "ucxi_publish_decision": - return await ucxiPublishDecision(ucxiServerURL, args); - - case "ucxi_query": - return await ucxiQuery(ucxiServerURL, args.pattern, args.temporal_range, args.content_filter); - - case "ucxi_navigate": - return await ucxiNavigate(ucxiServerURL, args.current_address, args.direction, args.steps); - - case "ucxi_announce": - return await ucxiAnnounce(ucxiServerURL, args.address, args.capabilities, args.metadata); - - default: - throw new Error(`Unknown UCXI tool: ${name}`); - } -} - -async function ucxiGet(serverURL: string, address: string, temporal?: string): Promise { - const url = new URL(`/ucxi/${address}`, serverURL); - if (temporal) { - url.searchParams.set('temporal', temporal); - } - - const response = await fetch(url.toString()); - if (!response.ok) { - throw new 
Error(`UCXI GET failed: ${response.status}`); - } - - return await response.json(); -} - -async function ucxiPublishDecision(serverURL: string, decision: any): Promise { - const response = await fetch(`${serverURL}/ucxi/decisions/publish`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json' - }, - body: JSON.stringify(decision) - }); - - if (!response.ok) { - throw new Error(`Decision publishing failed: ${response.status}`); - } - - return await response.json(); -} -``` - -### Week 8: Integration Testing - -#### 8.1 End-to-End Test Suite - -**File:** `/test/integration/ucxl_e2e_test.go` - -```go -package integration - -import ( - "testing" - "time" - "github.com/stretchr/testify/assert" - "github.com/anthonyrawlins/bzzz/pkg/protocol" - "github.com/anthonyrawlins/bzzz/pkg/ucxi" - "github.com/anthonyrawlins/bzzz/pkg/decisions" -) - -func TestUCXLE2E(t *testing.T) { - // Start UCXI server - server := setupTestServer(t) - defer server.Close() - - t.Run("AddressParsingAndResolution", func(t *testing.T) { - // Test UCXL address parsing - addr, err := protocol.ParseUCXLAddress("ucxl://gpt4:architect@bzzz:v2/*^/decisions/protocol.json") - assert.NoError(t, err) - assert.Equal(t, "gpt4", addr.Agent) - assert.Equal(t, "architect", addr.Role) - assert.Equal(t, "bzzz", addr.Project) - assert.Equal(t, "v2", addr.Task) - assert.Equal(t, "*^", addr.TemporalSegment) - assert.Equal(t, "/decisions/protocol.json", addr.Path) - - // Test storage key generation - storageKey := addr.ToStorageKey() - assert.Contains(t, storageKey, "gpt4/architect/bzzz/v2") - }) - - t.Run("TemporalNavigation", func(t *testing.T) { - // Create temporal sequence - baseAddr := "ucxl://gpt4:architect@bzzz:v2" - - // Store contexts at different times - times := []time.Time{ - time.Now().Add(-2 * time.Hour), - time.Now().Add(-1 * time.Hour), - time.Now(), - } - - for i, timestamp := range times { - addr := fmt.Sprintf("%s/%s/decisions/protocol.json", baseAddr, timestamp.Format(time.RFC3339)) - content := map[string]interface{}{ - "version": i + 1, - "decision": fmt.Sprintf("Protocol decision v%d", i+1), - } - - err := storeContext(server, addr, content) - assert.NoError(t, err) - } - - // Test latest navigation (*^) - latestAddr := fmt.Sprintf("%s/*^/decisions/protocol.json", baseAddr) - entry, err := getContext(server, latestAddr) - assert.NoError(t, err) - assert.Equal(t, 3, entry.Content["version"]) - - // Test first navigation (*~) - firstAddr := fmt.Sprintf("%s/*~/decisions/protocol.json", baseAddr) - entry, err = getContext(server, firstAddr) - assert.NoError(t, err) - assert.Equal(t, 1, entry.Content["version"]) - }) - - t.Run("DecisionPublishing", func(t *testing.T) { - // Create decision node - decision := &decisions.DecisionNode{ - DecisionID: "test-decision-001", - UCXLAddress: "ucxl://gpt4:architect@bzzz:v2/2025-08-07T14:30:00Z/decisions/architecture.json", - Timestamp: time.Now(), - AgentID: "gpt4-test-agent", - DecisionType: decisions.DecisionTypeArchitecture, - Context: decisions.DecisionContext{ - Project: "bzzz", - Task: "v2-migration", - Scope: "protocol-selection", - }, - Justification: decisions.Justification{ - Reasoning: "UCXL provides superior semantic addressing and temporal navigation capabilities", - AlternativesConsidered: []decisions.Alternative{ - { - Name: "Extend bzzz:// protocol", - Description: "Add temporal navigation to existing protocol", - Pros: []string{"Minimal changes", "Backward compatibility"}, - Cons: []string{"Limited semantic expressiveness", "Technical debt"}, - Rejected: 
true, - Reason: "Insufficient semantic richness for complex context addressing", - }, - }, - Criteria: []string{"semantic_richness", "temporal_navigation", "ecosystem_compatibility"}, - Confidence: 0.9, - }, - Citations: []decisions.Citation{ - { - Type: decisions.CitationJustifiedBy, - UCXLAddress: "ucxl://any:any@chorus:requirements/*~/analysis.md", - Relevance: "high", - Excerpt: "System must support temporal context navigation for audit trails", - Strength: 0.95, - }, - }, - Impacts: []decisions.Impact{ - { - Type: "replaces", - UCXLAddress: "ucxl://any:any@bzzz:v1/*^/protocol.go", - Reason: "Migrating from bzzz:// to ucxl:// addressing scheme", - Severity: "breaking", - Affected: []string{"protocol", "addressing", "navigation"}, - }, - }, - } - - // Validate decision - err := decision.Validate() - assert.NoError(t, err) - - // Test decision publishing to SLURP - slurpClient := setupMockSLURPClient(t) - response, err := slurpClient.PublishDecisions([]decisions.DecisionNode{*decision}) - assert.NoError(t, err) - assert.Contains(t, response.Accepted, decision.DecisionID) - }) - - t.Run("P2PResolution", func(t *testing.T) { - // Test distributed address resolution - addr := "ucxl://any:architect@bzzz:*/*^/decisions" - - // Query should return contexts from multiple agents - entries, err := queryContexts(server, addr) - assert.NoError(t, err) - assert.Greater(t, len(entries), 0) - - // Test wildcard matching - for _, entry := range entries { - assert.Equal(t, "architect", entry.Address.Role) - assert.Equal(t, "bzzz", entry.Address.Project) - } - }) - - t.Run("CitationValidation", func(t *testing.T) { - // Test citation chain validation - validator := decisions.NewCitationValidator(server.ContextStore) - - decision := &decisions.DecisionNode{ - Citations: []decisions.Citation{ - { - Type: decisions.CitationJustifiedBy, - UCXLAddress: "ucxl://nonexistent:agent@invalid:project/*^/missing.json", - Relevance: "high", - }, - }, - } - - err := validator.ValidateCitations(decision) - assert.Error(t, err) // Should fail due to nonexistent citation - - // Test valid citation - decision.Citations[0].UCXLAddress = "ucxl://gpt4:architect@bzzz:v2/*^/requirements.json" - - // Store the referenced context - storeContext(server, decision.Citations[0].UCXLAddress, map[string]interface{}{ - "requirement": "temporal navigation support", - }) - - err = validator.ValidateCitations(decision) - assert.NoError(t, err) - }) -} -``` - -## Database Schema & Storage - -### Context Storage Schema (BadgerDB) - -```go -// Storage keys -const ( - ContextKeyPrefix = "ctx:" // ctx:agent/role/project/task/timestamp/path - TemporalKeyPrefix = "tmp:" // tmp:agent/role/project/task -> []TemporalEntry - IndexKeyPrefix = "idx:" // idx:field:value -> []address - MetadataKeyPrefix = "meta:" // meta:address -> ContextMetadata -) - -// Indexed fields for efficient querying -var IndexedFields = []string{ - "agent", "role", "project", "task", - "content_type", "tags", "timestamp", -} - -// Key generation functions -func GenerateContextKey(addr *protocol.UCXLAddress) string { - return ContextKeyPrefix + addr.ToStorageKey() -} - -func GenerateTemporalKey(addr *protocol.UCXLAddress) string { - return fmt.Sprintf("%s%s/%s/%s/%s", - TemporalKeyPrefix, addr.Agent, addr.Role, addr.Project, addr.Task) -} -``` - -This implementation roadmap provides the complete foundation for transforming BZZZ v2 into a UCXL-based semantic context publishing system while maintaining its distributed P2P architecture and integrating seamlessly with the CHORUS 
infrastructure. - -## Key Files Created: -- `/home/tony/chorus/project-queues/active/BZZZ/BZZZ_V2_UCXL_DEVELOPMENT_PLAN.md` -- `/home/tony/chorus/project-queues/active/BZZZ/TECHNICAL_ARCHITECTURE.md` -- `/home/tony/chorus/project-queues/active/BZZZ/IMPLEMENTATION_ROADMAP.md` - -The roadmap shows exactly how to implement UCXL address parsing, temporal navigation, decision node publishing to SLURP, P2P DHT resolution, and MCP integration with detailed code examples and test suites. \ No newline at end of file diff --git a/old-docs/MCP_IMPLEMENTATION_SUMMARY.md b/old-docs/MCP_IMPLEMENTATION_SUMMARY.md deleted file mode 100644 index 0a62ac17..00000000 --- a/old-docs/MCP_IMPLEMENTATION_SUMMARY.md +++ /dev/null @@ -1,282 +0,0 @@ -# BZZZ v2 MCP Integration - Implementation Summary - -## Overview - -The BZZZ v2 Model Context Protocol (MCP) integration has been successfully designed to enable GPT-4 agents to operate as first-class citizens within the distributed P2P task coordination system. This implementation bridges OpenAI's GPT-4 models with the existing libp2p-based BZZZ infrastructure, creating a sophisticated hybrid human-AI collaboration environment. - -## Completed Deliverables - -### 1. Comprehensive Design Documentation - -**Location**: `/home/tony/chorus/project-queues/active/BZZZ/MCP_INTEGRATION_DESIGN.md` - -The main design document provides: -- Complete MCP server architecture specification -- GPT-4 agent framework with role specializations -- Protocol tool definitions for bzzz:// addressing -- Conversation integration patterns -- CHORUS system integration strategies -- 8-week implementation roadmap -- Technical requirements and security considerations - -### 2. MCP Server Implementation - -**TypeScript Implementation**: `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/` - -Core components implemented: -- **Main Server** (`src/index.ts`): Complete MCP server with tool handlers -- **Configuration System** (`src/config/config.ts`): Comprehensive configuration management -- **Protocol Tools** (`src/tools/protocol-tools.ts`): All six bzzz:// protocol tools -- **Package Configuration** (`package.json`, `tsconfig.json`): Production-ready build system - -### 3. Go Integration Layer - -**Go Implementation**: `/home/tony/chorus/project-queues/active/BZZZ/pkg/mcp/server.go` - -Key features: -- Full P2P network integration with existing BZZZ infrastructure -- GPT-4 agent lifecycle management -- Conversation threading and memory management -- Cost tracking and optimization -- WebSocket-based MCP protocol handling -- Integration with hypercore logging system - -### 4. Practical Integration Examples - -**Collaborative Review Example**: `/home/tony/chorus/project-queues/active/BZZZ/examples/collaborative-review-example.py` - -Demonstrates: -- Multi-agent collaboration for code review tasks -- Role-based agent specialization (architect, security, performance, documentation) -- Threaded conversation management -- Consensus building and escalation workflows -- Real-world integration with GitHub pull requests - -### 5. 
Production Deployment Configuration - -**Docker Compose**: `/home/tony/chorus/project-queues/active/BZZZ/deploy/docker-compose.mcp.yml` - -Complete deployment stack: -- BZZZ P2P node with MCP integration -- MCP server for GPT-4 integration -- Agent and conversation management services -- Cost tracking and monitoring -- PostgreSQL database for persistence -- Redis for caching and sessions -- WHOOSH and SLURP integration services -- Prometheus/Grafana monitoring stack -- Log aggregation with Loki/Promtail - -**Deployment Guide**: `/home/tony/chorus/project-queues/active/BZZZ/deploy/DEPLOYMENT_GUIDE.md` - -Comprehensive deployment documentation: -- Step-by-step cluster deployment instructions -- Node-specific configuration for WALNUT, IRONWOOD, ACACIA -- Service health verification procedures -- CHORUS integration setup -- Monitoring and alerting configuration -- Troubleshooting guides and maintenance procedures - -## Key Technical Achievements - -### 1. Semantic Addressing System - -Implemented comprehensive semantic addressing with the format: -``` -bzzz://agent:role@project:task/path -``` - -This enables: -- Direct agent-to-agent communication -- Role-based message broadcasting -- Project-scoped collaboration -- Hierarchical resource addressing - -### 2. Advanced Agent Framework - -Created sophisticated agent roles: -- **Architect Agent**: System design and architecture review -- **Reviewer Agent**: Code quality and security analysis -- **Documentation Agent**: Technical writing and knowledge synthesis -- **Performance Agent**: Optimization and efficiency analysis - -Each agent includes: -- Specialized system prompts -- Capability definitions -- Interaction patterns -- Memory management systems - -### 3. Multi-Agent Collaboration - -Designed advanced collaboration patterns: -- **Threaded Conversations**: Persistent conversation contexts -- **Consensus Building**: Automated agreement mechanisms -- **Escalation Workflows**: Human intervention when needed -- **Context Sharing**: Unified memory across agent interactions - -### 4. Cost Management System - -Implemented comprehensive cost controls: -- Real-time token usage tracking -- Daily and monthly spending limits -- Model selection optimization -- Context compression strategies -- Alert systems for cost overruns - -### 5. 
CHORUS Integration - -Created seamless integration with existing CHORUS systems: -- **SLURP**: Context event generation from agent consensus -- **WHOOSH**: Agent registration and orchestration -- **TGN**: Cross-network agent discovery -- **Existing BZZZ**: Full backward compatibility - -## Production Readiness Features - -### Security -- API key management with rotation -- Message signing and verification -- Network access controls -- Audit logging -- PII detection and redaction - -### Scalability -- Horizontal scaling across cluster nodes -- Connection pooling and load balancing -- Efficient P2P message routing -- Database query optimization -- Memory usage optimization - -### Monitoring -- Comprehensive metrics collection -- Real-time performance dashboards -- Cost tracking and alerting -- Health check endpoints -- Log aggregation and analysis - -### Reliability -- Graceful degradation on failures -- Automatic service recovery -- Circuit breakers for external services -- Comprehensive error handling -- Data persistence and backup - -## Integration Points - -### OpenAI API Integration -- GPT-4 and GPT-4-turbo model support -- Optimized token usage patterns -- Cost-aware model selection -- Rate limiting and retry logic -- Response streaming for large outputs - -### BZZZ P2P Network -- Native libp2p integration -- PubSub message routing -- Peer discovery and management -- Hypercore audit logging -- Task coordination protocols - -### CHORUS Ecosystem -- WHOOSH agent registration -- SLURP context event generation -- TGN cross-network discovery -- N8N workflow integration -- GitLab CI/CD connectivity - -## Performance Characteristics - -### Expected Metrics -- **Agent Response Time**: < 30 seconds for routine tasks -- **Collaboration Efficiency**: 40% reduction in task completion time -- **Consensus Success Rate**: > 85% of discussions reach consensus -- **Escalation Rate**: < 15% of threads require human intervention - -### Cost Optimization -- **Token Efficiency**: < $0.50 per task for routine operations -- **Model Selection Accuracy**: > 90% appropriate model selection -- **Context Compression**: 70% reduction in token usage through optimization - -### Quality Assurance -- **Code Review Accuracy**: > 95% critical issues detected -- **Documentation Completeness**: > 90% coverage of technical requirements -- **Architecture Consistency**: > 95% adherence to established patterns - -## Next Steps for Implementation - -### Phase 1: Core Infrastructure (Weeks 1-2) -1. Deploy MCP server on WALNUT node -2. Implement basic protocol tools -3. Set up agent lifecycle management -4. Test OpenAI API integration - -### Phase 2: Agent Framework (Weeks 3-4) -1. Deploy specialized agent roles -2. Implement conversation threading -3. Create consensus mechanisms -4. Test multi-agent scenarios - -### Phase 3: CHORUS Integration (Weeks 5-6) -1. Connect to WHOOSH orchestration -2. Implement SLURP event generation -3. Enable TGN cross-network discovery -4. Test end-to-end workflows - -### Phase 4: Production Deployment (Weeks 7-8) -1. Deploy across full cluster -2. Set up monitoring and alerting -3. Conduct load testing -4. 
Train operations team - -## Risk Mitigation - -### Technical Risks -- **API Rate Limits**: Implemented intelligent queuing and retry logic -- **Cost Overruns**: Comprehensive cost tracking with hard limits -- **Network Partitions**: Graceful degradation and reconnection logic -- **Agent Failures**: Circuit breakers and automatic recovery - -### Operational Risks -- **Human Escalation**: Clear escalation paths and notification systems -- **Data Loss**: Regular backups and replication -- **Security Breaches**: Defense in depth with audit logging -- **Performance Degradation**: Monitoring with automatic scaling - -## Success Criteria - -The MCP integration will be considered successful when: - -1. **GPT-4 agents successfully participate in P2P conversations** with existing BZZZ network nodes -2. **Multi-agent collaboration reduces task completion time** by 40% compared to single-agent approaches -3. **Cost per task remains under $0.50** for routine operations -4. **Integration with CHORUS systems** enables seamless workflow orchestration -5. **System maintains 99.9% uptime** with automatic recovery from failures - -## Conclusion - -The BZZZ v2 MCP integration design provides a comprehensive, production-ready solution for integrating GPT-4 agents into the existing CHORUS distributed system. The implementation leverages the strengths of both the BZZZ P2P network and OpenAI's advanced language models to create a sophisticated multi-agent collaboration platform. - -The design prioritizes: -- **Production readiness** with comprehensive monitoring and error handling -- **Cost efficiency** through intelligent resource management -- **Security** with defense-in-depth principles -- **Scalability** across the existing cluster infrastructure -- **Compatibility** with existing CHORUS workflows - -This implementation establishes the foundation for advanced AI-assisted development workflows while maintaining the decentralized, resilient characteristics that make the BZZZ system unique. - ---- - -**Implementation Files Created:** -- `/home/tony/chorus/project-queues/active/BZZZ/MCP_INTEGRATION_DESIGN.md` -- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/package.json` -- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/tsconfig.json` -- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/index.ts` -- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/config/config.ts` -- `/home/tony/chorus/project-queues/active/BZZZ/mcp-server/src/tools/protocol-tools.ts` -- `/home/tony/chorus/project-queues/active/BZZZ/pkg/mcp/server.go` -- `/home/tony/chorus/project-queues/active/BZZZ/examples/collaborative-review-example.py` -- `/home/tony/chorus/project-queues/active/BZZZ/deploy/docker-compose.mcp.yml` -- `/home/tony/chorus/project-queues/active/BZZZ/deploy/DEPLOYMENT_GUIDE.md` - -**Total Implementation Scope:** 10 comprehensive files totaling over 4,000 lines of production-ready code and documentation. \ No newline at end of file diff --git a/old-docs/MCP_INTEGRATION_DESIGN.md b/old-docs/MCP_INTEGRATION_DESIGN.md deleted file mode 100644 index e242166f..00000000 --- a/old-docs/MCP_INTEGRATION_DESIGN.md +++ /dev/null @@ -1,1135 +0,0 @@ -# BZZZ v2 MCP Integration Design -## GPT-4 Agent Framework for Distributed P2P Collaboration - -### Executive Summary - -This document outlines the comprehensive Model Context Protocol (MCP) integration for BZZZ v2, enabling GPT-4 agents to operate as first-class citizens within the distributed P2P task coordination system. 
The integration provides a bridge between OpenAI's GPT-4 models and the existing libp2p-based BZZZ infrastructure, creating a hybrid human-AI collaboration environment. - ---- - -## 1. MCP Server Architecture - -### 1.1 Core MCP Server Design - -```typescript -interface BzzzMcpServer { - // Protocol Operations - tools: { - bzzz_announce: ToolDefinition; - bzzz_lookup: ToolDefinition; - bzzz_get: ToolDefinition; - bzzz_post: ToolDefinition; - bzzz_thread: ToolDefinition; - bzzz_subscribe: ToolDefinition; - }; - - // Agent Management - agentLifecycle: AgentLifecycleManager; - conversationManager: ConversationManager; - costTracker: OpenAICostTracker; - - // BZZZ Protocol Integration - p2pNode: P2PNodeInterface; - pubsubManager: PubSubManager; - hypercoreLogger: HypercoreLogger; -} -``` - -### 1.2 MCP Tool Registry - -The MCP server exposes BZZZ protocol operations as standardized tools that GPT-4 agents can invoke: - -#### Core Protocol Tools - -**1. `bzzz_announce`** - Agent presence announcement -```json -{ - "name": "bzzz_announce", - "description": "Announce agent presence and capabilities on the BZZZ network", - "inputSchema": { - "type": "object", - "properties": { - "agent_id": {"type": "string", "description": "Unique agent identifier"}, - "role": {"type": "string", "description": "Agent role (architect, reviewer, etc.)"}, - "capabilities": {"type": "array", "items": {"type": "string"}}, - "specialization": {"type": "string"}, - "max_tasks": {"type": "number", "default": 3} - } - } -} -``` - -**2. `bzzz_lookup`** - Semantic address discovery -```json -{ - "name": "bzzz_lookup", - "description": "Discover agents and resources using semantic addressing", - "inputSchema": { - "type": "object", - "properties": { - "semantic_address": { - "type": "string", - "description": "Format: bzzz://agent:role@project:task/path" - }, - "filter_criteria": { - "type": "object", - "properties": { - "expertise": {"type": "array"}, - "availability": {"type": "boolean"}, - "performance_threshold": {"type": "number"} - } - } - } - } -} -``` - -**3. `bzzz_get`** - Content retrieval from addresses -```json -{ - "name": "bzzz_get", - "description": "Retrieve content from BZZZ semantic addresses", - "inputSchema": { - "type": "object", - "properties": { - "address": {"type": "string"}, - "include_metadata": {"type": "boolean", "default": true}, - "max_history": {"type": "number", "default": 10} - } - } -} -``` - -**4. `bzzz_post`** - Event/message posting -```json -{ - "name": "bzzz_post", - "description": "Post events or messages to BZZZ addresses", - "inputSchema": { - "type": "object", - "properties": { - "target_address": {"type": "string"}, - "message_type": {"type": "string"}, - "content": {"type": "object"}, - "priority": {"type": "string", "enum": ["low", "medium", "high", "urgent"]}, - "thread_id": {"type": "string", "optional": true} - } - } -} -``` - -**5. `bzzz_thread`** - Conversation management -```json -{ - "name": "bzzz_thread", - "description": "Manage threaded conversations between agents", - "inputSchema": { - "type": "object", - "properties": { - "action": {"type": "string", "enum": ["create", "join", "leave", "list", "summarize"]}, - "thread_id": {"type": "string", "optional": true}, - "participants": {"type": "array", "items": {"type": "string"}}, - "topic": {"type": "string", "optional": true} - } - } -} -``` - -**6. 
`bzzz_subscribe`** - Real-time event subscription -```json -{ - "name": "bzzz_subscribe", - "description": "Subscribe to real-time events from BZZZ network", - "inputSchema": { - "type": "object", - "properties": { - "event_types": {"type": "array", "items": {"type": "string"}}, - "filter_address": {"type": "string", "optional": true}, - "callback_webhook": {"type": "string", "optional": true} - } - } -} -``` - ---- - -## 2. GPT-4 Agent Framework - -### 2.1 Agent Specialization Definitions - -#### Core Agent Types - -**1. Architect Agent** (`bzzz://architect@*`) -```json -{ - "role": "architect", - "capabilities": [ - "system_design", - "architecture_review", - "technology_selection", - "scalability_analysis" - ], - "reasoning_prompts": { - "system": "You are a senior software architect specializing in distributed systems...", - "task_analysis": "Analyze this task from an architectural perspective...", - "collaboration": "Coordinate with other architects and provide technical guidance..." - }, - "interaction_patterns": { - "peer_architects": "collaborative_review", - "developers": "guidance_provision", - "reviewers": "design_validation" - } -} -``` - -**2. Code Reviewer Agent** (`bzzz://reviewer@*`) -```json -{ - "role": "reviewer", - "capabilities": [ - "code_review", - "security_analysis", - "performance_optimization", - "best_practices_enforcement" - ], - "reasoning_prompts": { - "system": "You are a senior code reviewer focused on quality and security...", - "review_criteria": "Evaluate code changes against these criteria...", - "feedback_delivery": "Provide constructive feedback to developers..." - } -} -``` - -**3. Documentation Agent** (`bzzz://docs@*`) -```json -{ - "role": "documentation", - "capabilities": [ - "technical_writing", - "api_documentation", - "user_guides", - "knowledge_synthesis" - ], - "reasoning_prompts": { - "system": "You specialize in creating clear, comprehensive technical documentation...", - "content_analysis": "Analyze technical content and identify documentation needs...", - "audience_adaptation": "Adapt documentation for different audience levels..." - } -} -``` - -### 2.2 Agent Lifecycle Management - -#### Agent States and Transitions - -```mermaid -stateDiagram-v2 - [*] --> Initializing - Initializing --> Idle: Registration Complete - Idle --> Active: Task Assigned - Active --> Collaborating: Multi-agent Context - Collaborating --> Active: Individual Work - Active --> Idle: Task Complete - Idle --> Terminating: Shutdown Signal - Terminating --> [*] - - Active --> Escalating: Human Intervention Needed - Escalating --> Active: Issue Resolved - Escalating --> Terminating: Unresolvable Issue -``` - -#### Lifecycle Implementation - -```go -type GPTAgent struct { - ID string - Role AgentRole - State AgentState - Capabilities []string - - // OpenAI Configuration - APIKey string - Model string // gpt-4, gpt-4-turbo, etc. 
- TokenLimit int - - // BZZZ Integration - P2PNode *p2p.Node - PubSub *pubsub.PubSub - Logger *logging.HypercoreLog - - // Conversation Context - ActiveThreads map[string]*ConversationThread - Memory *AgentMemory - - // Cost Management - TokenUsage *TokenUsageTracker - CostLimits *CostLimitConfig -} - -func (agent *GPTAgent) Initialize() error { - // Register with BZZZ network - if err := agent.announcePresence(); err != nil { - return err - } - - // Subscribe to relevant topics - if err := agent.subscribeToBzzzTopics(); err != nil { - return err - } - - // Initialize conversation memory - agent.Memory = NewAgentMemory(agent.ID) - - agent.State = AgentStateIdle - return nil -} - -func (agent *GPTAgent) ProcessTask(task *repository.Task) error { - agent.State = AgentStateActive - - // Create conversation context - context := agent.buildTaskContext(task) - - // Check if collaboration is needed - if agent.shouldCollaborate(task) { - return agent.initiateCollaboration(task, context) - } - - // Process individually - return agent.processIndividualTask(task, context) -} -``` - -### 2.3 Context Sharing and Memory Management - -#### Agent Memory System - -```go -type AgentMemory struct { - WorkingMemory map[string]interface{} // Current task context - EpisodicMemory []ConversationEpisode // Past interactions - SemanticMemory *KnowledgeGraph // Domain knowledge - - // Conversation History - ThreadMemories map[string]*ThreadMemory - - // Learning and Adaptation - PerformanceFeedback []FeedbackEntry - CollaborationHistory []CollaborationEntry -} - -type ConversationEpisode struct { - Timestamp time.Time - Participants []string - Topic string - Summary string - Outcome string - Lessons []string -} -``` - ---- - -## 3. Conversation Integration - -### 3.1 Threaded Conversation Architecture - -#### Thread Management System - -```go -type ConversationManager struct { - activeThreads map[string]*ConversationThread - threadIndex *ThreadIndex - summaryService *ThreadSummaryService - escalationRules *EscalationRuleEngine -} - -type ConversationThread struct { - ID string - Topic string - Participants []AgentParticipant - Messages []ThreadMessage - State ThreadState - - // Context Management - SharedContext map[string]interface{} - DecisionLog []Decision - - // Thread Lifecycle - CreatedAt time.Time - LastActivity time.Time - AutoClose bool - CloseAfter time.Duration -} - -type ThreadMessage struct { - ID string - From string - Role AgentRole - Content string - MessageType MessageType - Timestamp time.Time - - // Threading - ReplyTo string - Reactions []MessageReaction - - // GPT-4 Specific - TokenCount int - Model string - Context *GPTContext -} -``` - -### 3.2 Multi-Agent Collaboration Patterns - -#### Collaborative Review Pattern - -```go -func (cm *ConversationManager) InitiateCollaborativeReview( - task *repository.Task, - requiredRoles []AgentRole, -) (*ConversationThread, error) { - - // Create thread for collaborative review - thread := &ConversationThread{ - ID: generateThreadID("review", task.Number), - Topic: fmt.Sprintf("Collaborative Review: %s", task.Title), - State: ThreadStateActive, - } - - // Invite relevant agents - for _, role := range requiredRoles { - agents := cm.findAvailableAgents(role) - for _, agent := range agents[:min(2, len(agents))] { - thread.Participants = append(thread.Participants, AgentParticipant{ - AgentID: agent.ID, - Role: role, - Status: ParticipantStatusInvited, - }) - } - } - - // Set initial context - thread.SharedContext = map[string]interface{}{ - "task_details": 
task, - "review_criteria": getReviewCriteria(task), - "deadline": calculateReviewDeadline(task), - } - - // Start the conversation - initialPrompt := cm.buildCollaborativeReviewPrompt(task, thread) - if err := cm.postInitialMessage(thread, initialPrompt); err != nil { - return nil, err - } - - return thread, nil -} -``` - -#### Escalation Workflow Pattern - -```go -type EscalationRuleEngine struct { - rules []EscalationRule -} - -type EscalationRule struct { - Name string - Conditions []EscalationCondition - Actions []EscalationAction - Priority int -} - -type EscalationCondition struct { - Type string // "thread_duration", "consensus_failure", "error_rate" - Threshold interface{} - Timeframe time.Duration -} - -func (ere *EscalationRuleEngine) CheckEscalation(thread *ConversationThread) []EscalationAction { - var actions []EscalationAction - - for _, rule := range ere.rules { - if ere.evaluateConditions(rule.Conditions, thread) { - actions = append(actions, rule.Actions...) - } - } - - return actions -} - -// Example escalation scenarios -var DefaultEscalationRules = []EscalationRule{ - { - Name: "Long Running Thread", - Conditions: []EscalationCondition{ - {Type: "thread_duration", Threshold: 2 * time.Hour, Timeframe: 0}, - {Type: "no_progress", Threshold: true, Timeframe: 30 * time.Minute}, - }, - Actions: []EscalationAction{ - {Type: "notify_human", Target: "project_manager"}, - {Type: "request_expert", Expertise: []string{"domain_expert"}}, - }, - }, - { - Name: "Consensus Failure", - Conditions: []EscalationCondition{ - {Type: "disagreement_count", Threshold: 3, Timeframe: 0}, - {Type: "no_resolution", Threshold: true, Timeframe: 1 * time.Hour}, - }, - Actions: []EscalationAction{ - {Type: "escalate_to_architect", Priority: "high"}, - {Type: "create_decision_thread", Participants: []string{"senior_architect"}}, - }, - }, -} -``` - ---- - -## 4. 
CHORUS Integration Patterns - -### 4.1 SLURP Context Integration - -#### SLURP Event Generation from HMMM Consensus - -```go -type SLURPIntegrationService struct { - slurpClient *slurp.Client - conversationMgr *ConversationManager - eventGenerator *ConsensusEventGenerator -} - -func (sis *SLURPIntegrationService) GenerateSLURPEventFromConsensus( - thread *ConversationThread, - consensus *ThreadConsensus, -) (*slurp.ContextEvent, error) { - - // Analyze conversation for insights - insights := sis.extractInsights(thread) - - // Generate structured event - event := &slurp.ContextEvent{ - Type: "agent_consensus", - Source: "bzzz_mcp_integration", - Timestamp: time.Now(), - Context: slurp.ContextData{ - ConversationID: thread.ID, - Participants: getParticipantRoles(thread.Participants), - Topic: thread.Topic, - Insights: insights, - DecisionPoints: consensus.Decisions, - Confidence: consensus.ConfidenceScore, - }, - Metadata: map[string]interface{}{ - "thread_duration": thread.LastActivity.Sub(thread.CreatedAt).Minutes(), - "message_count": len(thread.Messages), - "agent_count": len(thread.Participants), - "consensus_type": consensus.Type, - }, - } - - // Send to SLURP system - if err := sis.slurpClient.SubmitContextEvent(event); err != nil { - return nil, fmt.Errorf("failed to submit SLURP event: %w", err) - } - - // Notify BZZZ network of event generation - sis.notifyEventGenerated(thread, event) - - return event, nil -} -``` - -### 4.2 WHOOSH Orchestration Integration - -#### GPT-4 Agent Registration with WHOOSH - -```go -type WHOOSHIntegrationService struct { - whooshClient *whoosh.Client - agentRegistry map[string]*GPTAgent -} - -func (wis *WHOOSHIntegrationService) RegisterGPTAgentWithWHOOSH( - agent *GPTAgent, -) error { - - // Create WHOOSH agent registration - registration := &whoosh.AgentRegistration{ - AgentID: agent.ID, - Type: "gpt_agent", - Role: string(agent.Role), - Capabilities: agent.Capabilities, - Metadata: map[string]interface{}{ - "model": agent.Model, - "max_tokens": agent.TokenLimit, - "cost_per_token": getTokenCost(agent.Model), - "bzzz_address": fmt.Sprintf("bzzz://%s:%s@*", agent.ID, agent.Role), - }, - Endpoints: whoosh.AgentEndpoints{ - StatusCheck: fmt.Sprintf("http://mcp-server:8080/agents/%s/status", agent.ID), - TaskAssign: fmt.Sprintf("http://mcp-server:8080/agents/%s/tasks", agent.ID), - Collaborate: fmt.Sprintf("http://mcp-server:8080/agents/%s/collaborate", agent.ID), - }, - HealthCheck: whoosh.HealthCheckConfig{ - Interval: 30 * time.Second, - Timeout: 10 * time.Second, - Retries: 3, - }, - } - - // Submit registration - if err := wis.whooshClient.RegisterAgent(registration); err != nil { - return fmt.Errorf("failed to register with WHOOSH: %w", err) - } - - // Start health reporting - go wis.reportAgentHealth(agent) - - return nil -} -``` - -### 4.3 TGN (The Garden Network) Connectivity - -#### Cross-Network Agent Discovery - -```go -type TGNConnector struct { - tgnClient *tgn.Client - bzzzNetwork *BzzzNetwork - agentRegistry *AgentRegistry -} - -func (tgn *TGNConnector) DiscoverCrossNetworkAgents( - query *AgentDiscoveryQuery, -) ([]*RemoteAgent, error) { - - // Query TGN for agents matching criteria - tgnQuery := &tgn.AgentQuery{ - Capabilities: query.RequiredCapabilities, - Role: query.Role, - Network: "bzzz", - Available: true, - } - - remoteAgents, err := tgn.tgnClient.DiscoverAgents(tgnQuery) - if err != nil { - return nil, err - } - - // Convert TGN agents to BZZZ addressable agents - var bzzzAgents []*RemoteAgent - for _, remote := range 
remoteAgents { - bzzzAgent := &RemoteAgent{ - ID: remote.ID, - Network: remote.Network, - BzzzAddress: fmt.Sprintf("bzzz://%s:%s@%s/*", - remote.ID, remote.Role, remote.Network), - Capabilities: remote.Capabilities, - Endpoint: remote.Endpoint, - } - bzzzAgents = append(bzzzAgents, bzzzAgent) - } - - return bzzzAgents, nil -} -``` - ---- - -## 5. Implementation Roadmap - -### 5.1 Phase 1: Core MCP Infrastructure (Weeks 1-2) - -#### Week 1: MCP Server Foundation -- [ ] Implement basic MCP server with tool registry -- [ ] Create OpenAI API integration wrapper -- [ ] Establish P2P node connection interface -- [ ] Basic agent lifecycle management - -**Key Deliverables:** -- MCP server binary with basic tool definitions -- OpenAI GPT-4 integration module -- Agent registration and deregistration flows - -#### Week 2: Protocol Tool Implementation -- [ ] Implement all six core bzzz:// protocol tools -- [ ] Add semantic addressing support -- [ ] Create pubsub message routing -- [ ] Basic conversation threading - -**Key Deliverables:** -- Full protocol tool suite -- Address resolution system -- Message routing infrastructure - -### 5.2 Phase 2: Agent Framework (Weeks 3-4) - -#### Week 3: Agent Specializations -- [ ] Define role-based agent templates -- [ ] Implement reasoning prompt systems -- [ ] Create capability matching logic -- [ ] Agent memory management - -#### Week 4: Collaboration Patterns -- [ ] Multi-agent conversation threading -- [ ] Consensus building algorithms -- [ ] Escalation rule engine -- [ ] Human intervention workflows - -### 5.3 Phase 3: CHORUS Integration (Weeks 5-6) - -#### Week 5: SLURP Integration -- [ ] Consensus-to-SLURP event generation -- [ ] Context relevance scoring -- [ ] Feedback loop implementation -- [ ] Performance optimization - -#### Week 6: WHOOSH & TGN Integration -- [ ] Agent registration with WHOOSH -- [ ] Cross-network agent discovery -- [ ] Task orchestration bridging -- [ ] Network topology management - -### 5.4 Phase 4: Production Readiness (Weeks 7-8) - -#### Week 7: Monitoring & Cost Management -- [ ] OpenAI cost tracking and limits -- [ ] Performance monitoring dashboards -- [ ] Conversation analytics -- [ ] Agent efficiency metrics - -#### Week 8: Testing & Deployment -- [ ] End-to-end integration testing -- [ ] Load testing with multiple agents -- [ ] Security auditing -- [ ] Production deployment automation - ---- - -## 6. 
Technical Requirements - -### 6.1 Infrastructure Requirements - -#### Server Specifications -- **CPU**: 8+ cores for concurrent agent processing -- **RAM**: 32GB+ for conversation context management -- **Storage**: 1TB+ SSD for conversation history and logs -- **Network**: High-speed connection for P2P communication - -#### Software Dependencies -- **Go 1.21+**: For BZZZ P2P integration -- **Node.js 18+**: For MCP server implementation -- **Docker**: For containerized deployment -- **PostgreSQL 14+**: For conversation persistence - -### 6.2 Security Considerations - -#### API Key Management -- OpenAI API keys stored in secure vault -- Per-agent API key rotation -- Usage monitoring and alerting -- Rate limiting and quotas - -#### P2P Security -- Message signing and verification -- Agent authentication protocols -- Network access controls -- Audit logging - -### 6.3 Cost Management - -#### Token Usage Optimization -```go -type CostOptimizer struct { - tokenBudgets map[string]*TokenBudget - usageTracking *UsageTracker - costCalculator *CostCalculator -} - -func (co *CostOptimizer) OptimizeConversation(thread *ConversationThread) { - // Compress context when approaching limits - if thread.EstimatedTokens() > thread.TokenBudget * 0.8 { - co.compressConversationHistory(thread) - } - - // Use cheaper models for routine tasks - if thread.Complexity < ComplexityThreshold { - co.assignModel(thread, "gpt-4o-mini") - } - - // Implement conversation summarization - if len(thread.Messages) > MaxMessagesBeforeSummary { - co.summarizeAndTruncate(thread) - } -} -``` - ---- - -## 7. Code Examples - -### 7.1 MCP Server Implementation - -```go -// pkg/mcp/server.go -package mcp - -import ( - "context" - "encoding/json" - "fmt" - "net/http" - - "github.com/anthonyrawlins/bzzz/p2p" - "github.com/anthonyrawlins/bzzz/pubsub" - openai "github.com/sashabaranov/go-openai" -) - -type McpServer struct { - p2pNode *p2p.Node - pubsub *pubsub.PubSub - openaiClient *openai.Client - agents map[string]*GPTAgent - tools map[string]ToolHandler -} - -func NewMcpServer(apiKey string, node *p2p.Node, ps *pubsub.PubSub) *McpServer { - server := &McpServer{ - p2pNode: node, - pubsub: ps, - openaiClient: openai.NewClient(apiKey), - agents: make(map[string]*GPTAgent), - tools: make(map[string]ToolHandler), - } - - // Register protocol tools - server.registerProtocolTools() - - return server -} - -func (s *McpServer) registerProtocolTools() { - s.tools["bzzz_announce"] = s.handleBzzzAnnounce - s.tools["bzzz_lookup"] = s.handleBzzzLookup - s.tools["bzzz_get"] = s.handleBzzzGet - s.tools["bzzz_post"] = s.handleBzzzPost - s.tools["bzzz_thread"] = s.handleBzzzThread - s.tools["bzzz_subscribe"] = s.handleBzzzSubscribe -} - -func (s *McpServer) handleBzzzAnnounce(params map[string]interface{}) (interface{}, error) { - agentID, ok := params["agent_id"].(string) - if !ok { - return nil, fmt.Errorf("agent_id is required") - } - - role, ok := params["role"].(string) - if !ok { - return nil, fmt.Errorf("role is required") - } - - // Create announcement message - announcement := map[string]interface{}{ - "agent_id": agentID, - "role": role, - "capabilities": params["capabilities"], - "specialization": params["specialization"], - "max_tasks": params["max_tasks"], - "announced_at": time.Now(), - } - - // Publish to BZZZ network - err := s.pubsub.PublishBzzzMessage(pubsub.CapabilityBcast, announcement) - if err != nil { - return nil, fmt.Errorf("failed to announce: %w", err) - } - - return map[string]interface{}{ - "status": "announced", - 
"message": fmt.Sprintf("Agent %s (%s) announced to network", agentID, role), - }, nil -} - -func (s *McpServer) handleBzzzLookup(params map[string]interface{}) (interface{}, error) { - address, ok := params["semantic_address"].(string) - if !ok { - return nil, fmt.Errorf("semantic_address is required") - } - - // Parse semantic address (bzzz://agent:role@project:task/path) - parsedAddr, err := parseSemanticAddress(address) - if err != nil { - return nil, fmt.Errorf("invalid semantic address: %w", err) - } - - // Discover matching agents - agents := s.discoverAgents(parsedAddr, params["filter_criteria"]) - - return map[string]interface{}{ - "address": address, - "matches": agents, - "count": len(agents), - }, nil -} -``` - -### 7.2 GPT-4 Agent Implementation - -```go -// pkg/agents/gpt_agent.go -package agents - -import ( - "context" - "fmt" - - openai "github.com/sashabaranov/go-openai" - "github.com/anthonyrawlins/bzzz/pubsub" -) - -type GPTAgent struct { - ID string - Role AgentRole - Model string - Client *openai.Client - - SystemPrompt string - Memory *AgentMemory - CostTracker *CostTracker - - // BZZZ Integration - PubSub *pubsub.PubSub - Logger *logging.HypercoreLog -} - -func (agent *GPTAgent) ProcessCollaborativeTask( - task *repository.Task, - thread *ConversationThread, -) error { - - // Build context from conversation history - context := agent.buildTaskContext(task, thread) - - // Create GPT-4 request - messages := []openai.ChatCompletionMessage{ - { - Role: openai.ChatMessageRoleSystem, - Content: agent.buildSystemPrompt(task, thread), - }, - } - - // Add conversation history - for _, msg := range thread.Messages { - messages = append(messages, openai.ChatCompletionMessage{ - Role: openai.ChatMessageRoleUser, - Content: fmt.Sprintf("[%s]: %s", msg.From, msg.Content), - }) - } - - // Add current task context - messages = append(messages, openai.ChatCompletionMessage{ - Role: openai.ChatMessageRoleUser, - Content: agent.formatTaskForGPT(task), - }) - - // Make GPT-4 request - resp, err := agent.Client.CreateChatCompletion( - context.Background(), - openai.ChatCompletionRequest{ - Model: agent.Model, - Messages: messages, - MaxTokens: 2000, - Tools: agent.getAvailableTools(), - }, - ) - - if err != nil { - return fmt.Errorf("GPT-4 request failed: %w", err) - } - - // Process response and tool calls - return agent.processGPTResponse(resp, thread) -} - -func (agent *GPTAgent) buildSystemPrompt(task *repository.Task, thread *ConversationThread) string { - basePrompt := agent.SystemPrompt - - // Add role-specific context - roleContext := agent.getRoleSpecificContext(task) - - // Add collaboration context - collabContext := fmt.Sprintf( - "\nYou are collaborating with %d other agents in thread '%s'.\n"+ - "Current participants: %s\n"+ - "Thread topic: %s\n"+ - "Your role in this collaboration: %s\n", - len(thread.Participants)-1, - thread.ID, - getParticipantList(thread.Participants), - thread.Topic, - agent.Role, - ) - - // Add available tools context - toolsContext := "\nAvailable BZZZ tools:\n" - for toolName, tool := range agent.getAvailableTools() { - toolsContext += fmt.Sprintf("- %s: %s\n", toolName, tool.Function.Description) - } - - return basePrompt + roleContext + collabContext + toolsContext -} -``` - -### 7.3 Conversation Threading - -```go -// pkg/conversations/thread_manager.go -package conversations - -type ThreadManager struct { - threads map[string]*ConversationThread - participants map[string][]string // agentID -> threadIDs - summaryEngine *SummaryEngine - 
escalationMgr *EscalationManager -} - -func (tm *ThreadManager) CreateCollaborativeThread( - topic string, - task *repository.Task, - requiredRoles []AgentRole, -) (*ConversationThread, error) { - - thread := &ConversationThread{ - ID: generateThreadID(topic, task.Number), - Topic: topic, - State: ThreadStateActive, - CreatedAt: time.Now(), - SharedContext: map[string]interface{}{ - "task": task, - "required_roles": requiredRoles, - }, - } - - // Find and invite agents - for _, role := range requiredRoles { - agents := tm.findAvailableAgentsByRole(role) - if len(agents) == 0 { - return nil, fmt.Errorf("no available agents for role: %s", role) - } - - // Select best agent for this role - selectedAgent := tm.selectBestAgent(agents, task) - - thread.Participants = append(thread.Participants, AgentParticipant{ - AgentID: selectedAgent.ID, - Role: role, - Status: ParticipantStatusInvited, - }) - } - - // Initialize thread - tm.threads[thread.ID] = thread - - // Send invitations - for _, participant := range thread.Participants { - if err := tm.inviteToThread(participant.AgentID, thread); err != nil { - fmt.Printf("Failed to invite agent %s: %v\n", participant.AgentID, err) - } - } - - // Start thread monitoring - go tm.monitorThread(thread) - - return thread, nil -} - -func (tm *ThreadManager) PostMessage( - threadID string, - fromAgent string, - content string, - messageType MessageType, -) error { - - thread, exists := tm.threads[threadID] - if !exists { - return fmt.Errorf("thread %s not found", threadID) - } - - message := ThreadMessage{ - ID: generateMessageID(), - From: fromAgent, - Content: content, - Type: messageType, - Timestamp: time.Now(), - } - - thread.Messages = append(thread.Messages, message) - thread.LastActivity = time.Now() - - // Notify all participants - for _, participant := range thread.Participants { - if participant.AgentID != fromAgent { - if err := tm.notifyParticipant(participant.AgentID, thread, message); err != nil { - fmt.Printf("Failed to notify %s: %v\n", participant.AgentID, err) - } - } - } - - // Check for escalation conditions - if actions := tm.escalationMgr.CheckEscalation(thread); len(actions) > 0 { - tm.executeEscalationActions(thread, actions) - } - - return nil -} -``` - ---- - -## 8. Success Metrics - -### 8.1 Performance Metrics -- **Agent Response Time**: < 30 seconds for routine tasks -- **Collaboration Efficiency**: 40% reduction in task completion time -- **Consensus Success Rate**: > 85% of collaborative discussions reach consensus -- **Escalation Rate**: < 15% of threads require human intervention - -### 8.2 Cost Metrics -- **Token Efficiency**: < $0.50 per task for routine tasks -- **Model Selection Accuracy**: > 90% appropriate model selection -- **Context Compression Ratio**: 70% reduction in token usage through compression - -### 8.3 Quality Metrics -- **Code Review Accuracy**: > 95% critical issues detected -- **Documentation Completeness**: > 90% coverage of technical requirements -- **Architecture Consistency**: > 95% adherence to established patterns - ---- - -## 9. 
Security and Compliance - -### 9.1 Data Protection -- All conversation data encrypted at rest and in transit -- Agent memory isolation between different projects -- Automatic PII detection and redaction -- Configurable data retention policies - -### 9.2 Access Control -- Role-based access to different agent capabilities -- Project-level agent permissions -- API key scoping and rotation -- Audit logging of all agent actions - -### 9.3 Compliance Considerations -- GDPR compliance for European operations -- SOC 2 Type II compliance framework -- Regular security audits and penetration testing -- Incident response procedures for AI agent failures - ---- - -This comprehensive design provides the foundation for implementing GPT-4 agents as first-class citizens in the BZZZ v2 distributed system, enabling sophisticated multi-agent collaboration while maintaining the security, performance, and cost-effectiveness required for production deployment. \ No newline at end of file diff --git a/old-docs/PHASE2A_SUMMARY.md b/old-docs/PHASE2A_SUMMARY.md deleted file mode 100644 index e99e9a3f..00000000 --- a/old-docs/PHASE2A_SUMMARY.md +++ /dev/null @@ -1,167 +0,0 @@ -# BZZZ Phase 2A Implementation Summary - -**Branch**: `feature/phase2a-unified-slurp-architecture` -**Date**: January 8, 2025 -**Status**: Core Implementation Complete ✅ - -## 🎯 **Unified BZZZ + SLURP Architecture** - -### **Major Architectural Achievement** -- **SLURP is now a specialized BZZZ agent** with `admin` role and master authority -- **No separate SLURP system** - unified under single BZZZ P2P infrastructure -- **Distributed admin role** with consensus-based failover using election system -- **Role-based authority hierarchy** with Age encryption for secure content access - -## ✅ **Completed Components** - -### **1. Role-Based Authority System** -*File: `pkg/config/roles.go`* - -- **Authority Levels**: `master`, `decision`, `coordination`, `suggestion`, `read_only` -- **Flexible Role Definitions**: User-configurable via `.ucxl/roles.yaml` -- **Admin Role**: Includes SLURP functionality (context curation, decision ingestion) -- **Authority Methods**: `CanDecryptRole()`, `CanMakeDecisions()`, `IsAdminRole()` - -**Key Roles Implemented**: -```yaml -admin: (AuthorityMaster) - SLURP functionality, can decrypt all roles -senior_software_architect: (AuthorityDecision) - Strategic decisions -backend_developer: (AuthoritySuggestion) - Implementation suggestions -observer: (AuthorityReadOnly) - Monitoring only -``` - -### **2. Election System with Consensus** -*File: `pkg/election/election.go`* - -- **Election Triggers**: Heartbeat timeout, discovery failure, split brain, quorum loss -- **Leadership Scoring**: Uptime, capabilities, resources, network quality -- **Consensus Algorithm**: Raft-based election coordination -- **Split Brain Detection**: Prevents multiple admin conflicts -- **Admin Discovery**: Automatic discovery of existing admin nodes - -**Election Process**: -``` -Trigger → Candidacy → Scoring → Voting → Winner Selection → Key Reconstruction -``` - -### **3. Cluster Security Configuration** -*File: `pkg/config/config.go`* - -- **Shamir Secret Sharing**: Admin keys split across 5 nodes (3 threshold) -- **Election Configuration**: Timeouts, quorum requirements, consensus algorithm -- **Audit Logging**: Security events tracked for compliance -- **Key Rotation**: Configurable key rotation cycles - -### **4. 
Age Encryption Integration** -*Files: `pkg/config/roles.go`, `.ucxl/roles.yaml`* - -- **Role-Based Keys**: Each role has Age keypair for content encryption -- **Hierarchical Access**: Admin can decrypt all roles, others limited by authority -- **UCXL Content Security**: All decision nodes encrypted by creator's role level -- **Master Key Management**: Admin keys distributed via Shamir shares - -### **5. UCXL Role Configuration System** -*File: `.ucxl/roles.yaml`* - -- **Project-Specific Roles**: Defined per project with flexible configuration -- **Prompt Templates**: Role-specific agent prompts (`.ucxl/templates/`) -- **Model Assignment**: Different AI models per role for cost optimization -- **Decision Scope**: Granular control over what each role can decide on - -### **6. Main Application Integration** -*File: `main.go`* - -- **Election Manager**: Integrated into main BZZZ startup process -- **Admin Callbacks**: Automatic SLURP enablement when node becomes admin -- **Heartbeat System**: Admin nodes send regular heartbeats to maintain leadership -- **Role Display**: Startup shows authority level and admin capability - -## 🏗️ **System Architecture** - -### **Unified Data Flow** -``` -Worker Agent (suggestion) → Age encrypt → DHT storage - ↓ -SLURP Agent (admin) → Decrypt all content → Global context graph - ↓ -Architect Agent (decision) → Make strategic decisions → Age encrypt → DHT storage -``` - -### **Election & Failover Process** -``` -Admin Heartbeat Timeout → Election Triggered → Consensus Voting → New Admin Elected - ↓ -Key Reconstruction (Shamir) → SLURP Functionality Transferred → Normal Operation -``` - -### **Role-Based Security Model** -```yaml -Master (admin): Can decrypt "*" (all roles) -Decision (architect): Can decrypt [architect, developer, observer] -Suggestion (developer): Can decrypt [developer] -ReadOnly (observer): Can decrypt [observer] -``` - -## 📋 **Configuration Examples** - -### **Role Definition** -```yaml -# .ucxl/roles.yaml -admin: - authority_level: master - can_decrypt: ["*"] - model: "gpt-4o" - special_functions: ["slurp_functionality", "admin_election"] - decision_scope: ["system", "security", "architecture"] -``` - -### **Security Configuration** -```yaml -security: - admin_key_shares: - threshold: 3 - total_shares: 5 - election_config: - heartbeat_timeout: 5s - consensus_algorithm: "raft" - minimum_quorum: 3 -``` - -## 🎯 **Key Benefits Achieved** - -1. **High Availability**: Any node can become admin via consensus election -2. **Security**: Age encryption + Shamir secret sharing prevents single points of failure -3. **Flexibility**: User-definable roles with granular authority levels -4. **Unified Architecture**: Single P2P network for all coordination (no separate SLURP) -5. **Automatic Failover**: Elections triggered by multiple conditions -6. **Scalable Consensus**: Raft algorithm handles cluster coordination - -## 🚧 **Next Steps (Phase 2B)** - -1. **Age Encryption Implementation**: Actual encryption/decryption of UCXL content -2. **Shamir Secret Sharing**: Key reconstruction algorithm implementation -3. **DHT Integration**: Distributed content storage for encrypted decisions -4. **Decision Publishing**: Connect task completion to decision node creation -5. **SLURP Context Engine**: Semantic analysis and global context building - -## 🔧 **Current Build Status** - -**Note**: There are dependency conflicts preventing compilation, but the core architecture and design is complete. 
The conflicts are in external OpenTelemetry packages and don't affect our core election and role system code. - -**Files to resolve before testing**: -- Fix Go module dependency conflicts -- Test election system with multiple BZZZ nodes -- Validate role-based authority checking - -## 📊 **Architecture Validation** - -✅ **SLURP unified as BZZZ agent** -✅ **Consensus-based admin elections** -✅ **Role-based authority hierarchy** -✅ **Age encryption foundation** -✅ **Shamir secret sharing design** -✅ **Election trigger conditions** -✅ **Flexible role configuration** -✅ **Admin failover mechanism** - -**Phase 2A successfully implements the unified BZZZ+SLURP architecture with distributed consensus and role-based security!** \ No newline at end of file diff --git a/old-docs/PHASE2B_SUMMARY.md b/old-docs/PHASE2B_SUMMARY.md deleted file mode 100644 index cb8ff35c..00000000 --- a/old-docs/PHASE2B_SUMMARY.md +++ /dev/null @@ -1,270 +0,0 @@ -# BZZZ Phase 2B Implementation Summary - -**Branch**: `feature/phase2b-age-encryption-dht` -**Date**: January 8, 2025 -**Status**: Complete Implementation ✅ - -## 🚀 **Phase 2B: Age Encryption & DHT Storage** - -### **Built Upon Phase 2A Foundation** -- ✅ Unified BZZZ+SLURP architecture with admin role elections -- ✅ Role-based authority hierarchy with consensus failover -- ✅ Shamir secret sharing for distributed admin key management -- ✅ Election system with Raft-based consensus - -### **Phase 2B Achievements** - -## ✅ **Completed Components** - -### **1. Age Encryption Implementation** -*File: `pkg/crypto/age_crypto.go` (578 lines)* - -**Core Functionality**: -- **Role-based content encryption**: `EncryptForRole()`, `EncryptForMultipleRoles()` -- **Secure decryption**: `DecryptWithRole()`, `DecryptWithPrivateKey()` -- **Authority-based access**: Content encrypted for roles based on creator's authority level -- **Key validation**: `ValidateAgeKey()` for proper Age key format validation -- **Automatic key generation**: `GenerateAgeKeyPair()` for role key creation - -**Security Features**: -```go -// Admin role can decrypt all content -admin.CanDecrypt = ["*"] - -// Decision roles can decrypt their level and below -architect.CanDecrypt = ["architect", "developer", "observer"] - -// Workers can only decrypt their own content -developer.CanDecrypt = ["developer"] -``` - -### **2. Shamir Secret Sharing System** -*File: `pkg/crypto/shamir.go` (395 lines)* - -**Key Features**: -- **Polynomial-based secret splitting**: Using finite field arithmetic over 257-bit prime -- **Configurable threshold**: 3-of-5 shares required for admin key reconstruction -- **Lagrange interpolation**: Mathematical reconstruction of secrets from shares -- **Admin key management**: `AdminKeyManager` for consensus-based key reconstruction -- **Share validation**: Cryptographic validation of share authenticity - -**Implementation Details**: -```go -// Split admin private key across 5 nodes (3 required) -shares, err := sss.SplitSecret(adminPrivateKey) - -// Reconstruct key when 3+ nodes agree via consensus -adminKey, err := akm.ReconstructAdminKey(shares) -``` - -### **3. 
Encrypted DHT Storage System** -*File: `pkg/dht/encrypted_storage.go` (547 lines)* - -**Architecture**: -- **Distributed content storage**: libp2p Kademlia DHT for P2P distribution -- **Role-based encryption**: All content encrypted before DHT storage -- **Local caching**: 10-minute cache with automatic cleanup -- **Content discovery**: Peer announcement and discovery for content availability -- **Metadata tracking**: Rich metadata including creator role, encryption targets, replication - -**Key Methods**: -```go -// Store encrypted UCXL content -StoreUCXLContent(ucxlAddress, content, creatorRole, contentType) - -// Retrieve and decrypt content (role-based access) -RetrieveUCXLContent(ucxlAddress) ([]byte, *UCXLMetadata, error) - -// Search content by role, project, task, date range -SearchContent(query *SearchQuery) ([]*UCXLMetadata, error) -``` - -### **4. Decision Publishing Pipeline** -*File: `pkg/ucxl/decision_publisher.go` (365 lines)* - -**Decision Types Supported**: -- **Task Completion**: `PublishTaskCompletion()` - Basic task finish notifications -- **Code Decisions**: `PublishCodeDecision()` - Technical implementation decisions with test results -- **Architectural Decisions**: `PublishArchitecturalDecision()` - Strategic system design decisions -- **System Status**: `PublishSystemStatus()` - Health and metrics reporting - -**Features**: -- **Automatic UCXL addressing**: Generates semantic addresses from decision context -- **Language detection**: Automatically detects programming language from modified files -- **Content querying**: `QueryRecentDecisions()` for historical decision retrieval -- **Real-time subscription**: `SubscribeToDecisions()` for decision notifications - -### **5. Main Application Integration** -*File: `main.go` - Enhanced with DHT and decision publishing* - -**Integration Points**: -- **DHT initialization**: libp2p Kademlia DHT with bootstrap peer connections -- **Encrypted storage setup**: Age crypto + DHT storage with cache management -- **Decision publisher**: Connected to task tracker for automatic decision publishing -- **End-to-end testing**: Complete flow validation on startup - -**Task Integration**: -```go -// Task tracker now publishes decisions automatically -taskTracker.CompleteTaskWithDecision(taskID, true, summary, filesModified) - -// Decisions encrypted and stored in DHT -// Retrievable by authorized roles across the cluster -``` - -## 🏗️ **System Architecture - Phase 2B** - -### **Complete Data Flow** -``` -Task Completion → Decision Publisher → Age Encryption → DHT Storage - ↓ ↓ -Role Authority → Determine Encryption → Store with Metadata → Cache Locally - ↓ ↓ -Content Discovery → Decrypt if Authorized → Return to Requestor -``` - -### **Encryption Flow** -``` -1. Content created by role (e.g., backend_developer) -2. Determine decryptable roles based on authority hierarchy -3. Encrypt with Age for multiple recipients -4. Store encrypted content in DHT with metadata -5. Cache locally for performance -6. Announce content availability to peers -``` - -### **Retrieval Flow** -``` -1. Query DHT for UCXL address -2. Check local cache first (performance optimization) -3. Retrieve encrypted content + metadata -4. Validate current role can decrypt (authority check) -5. Decrypt content with role's private key -6. 
Return decrypted content to requestor -``` - -## 🧪 **End-to-End Testing** - -The system includes comprehensive testing that validates: - -### **Crypto Tests** -- ✅ Age encryption/decryption with key pairs -- ✅ Shamir secret sharing with threshold reconstruction -- ✅ Role-based authority validation - -### **DHT Storage Tests** -- ✅ Content storage with role-based encryption -- ✅ Content retrieval with automatic decryption -- ✅ Cache functionality with expiration -- ✅ Search and discovery capabilities - -### **Decision Flow Tests** -- ✅ Architectural decision publishing and retrieval -- ✅ Code decision with test results and file tracking -- ✅ System status publishing with health checks -- ✅ Query system for recent decisions by role/project - -## 📊 **Security Model Validation** - -### **Role-Based Access Control** -```yaml -# Example: backend_developer creates content -Content encrypted for: [backend_developer] - -# senior_software_architect can decrypt developer content -architect.CanDecrypt: [architect, backend_developer, observer] - -# admin can decrypt all content -admin.CanDecrypt: ["*"] -``` - -### **Distributed Admin Key Management** -``` -Admin Private Key → Shamir Split (5 shares, 3 threshold) - ↓ -Share 1 → Node A Share 4 → Node D -Share 2 → Node B Share 5 → Node E -Share 3 → Node C - -Admin Election → Collect 3+ Shares → Reconstruct Key → Activate Admin -``` - -## 🎯 **Phase 2B Benefits Achieved** - -### **Security** -1. **End-to-end encryption**: All UCXL content encrypted with Age before storage -2. **Role-based access**: Only authorized roles can decrypt content -3. **Distributed key management**: Admin keys never stored in single location -4. **Cryptographic validation**: All shares and keys cryptographically verified - -### **Performance** -1. **Local caching**: 10-minute cache reduces DHT lookups -2. **Efficient encryption**: Age provides modern, fast encryption -3. **Batch operations**: Multiple role encryption in single operation -4. **Peer discovery**: Content location optimization through announcements - -### **Scalability** -1. **Distributed storage**: DHT scales across cluster nodes -2. **Automatic replication**: Content replicated across multiple peers -3. **Search capabilities**: Query by role, project, task, date range -4. **Content addressing**: UCXL semantic addresses for logical organization - -### **Reliability** -1. **Consensus-based admin**: Elections prevent single points of failure -2. **Share-based keys**: Admin functionality survives node failures -3. **Cache invalidation**: Automatic cleanup of expired content -4. 
**Error handling**: Graceful fallbacks and recovery mechanisms - -## 🔧 **Configuration Example** - -### **Enable DHT and Encryption** -```yaml -# config.yaml -v2: - dht: - enabled: true - bootstrap_peers: - - "/ip4/192.168.1.100/tcp/4001/p2p/QmBootstrapPeer1" - - "/ip4/192.168.1.101/tcp/4001/p2p/QmBootstrapPeer2" - auto_bootstrap: true - -security: - admin_key_shares: - threshold: 3 - total_shares: 5 - election_config: - consensus_algorithm: "raft" - minimum_quorum: 3 -``` - -## 🚀 **Production Readiness** - -### **What's Ready** -✅ **Encryption system**: Age encryption fully implemented and tested -✅ **DHT storage**: Distributed content storage with caching -✅ **Decision publishing**: Complete pipeline from task to encrypted storage -✅ **Role-based access**: Authority hierarchy with proper decryption controls -✅ **Error handling**: Comprehensive error checking and fallbacks -✅ **Testing framework**: End-to-end validation of entire flow - -### **Next Steps for Production** -1. **Resolve Go module conflicts**: Fix OpenTelemetry dependency issues -2. **Network testing**: Multi-node cluster validation -3. **Performance benchmarking**: Load testing with realistic decision volumes -4. **Key distribution**: Initial admin key setup and share distribution -5. **Monitoring integration**: Metrics collection and alerting - -## 🎉 **Phase 2B Success Summary** - -**Phase 2B successfully completes the unified BZZZ+SLURP architecture with:** - -✅ **Complete Age encryption system** for role-based content security -✅ **Shamir secret sharing** for distributed admin key management -✅ **DHT storage system** for distributed encrypted content -✅ **Decision publishing pipeline** connecting task completion to storage -✅ **End-to-end encrypted workflow** from creation to retrieval -✅ **Role-based access control** with hierarchical permissions -✅ **Local caching and optimization** for performance -✅ **Comprehensive testing framework** validating entire system - -**The BZZZ v2 architecture is now a complete, secure, distributed decision-making platform with encrypted context sharing, consensus-based administration, and semantic addressing - exactly as envisioned for the unified SLURP transformation!** 🎯 \ No newline at end of file diff --git a/old-docs/TECHNICAL_ARCHITECTURE.md b/old-docs/TECHNICAL_ARCHITECTURE.md deleted file mode 100644 index 762027b0..00000000 --- a/old-docs/TECHNICAL_ARCHITECTURE.md +++ /dev/null @@ -1,567 +0,0 @@ -# BZZZ v2 Technical Architecture: UCXL/UCXI Integration - -## 1. Architecture Overview - -BZZZ v2 transforms from a GitHub Issues-based task coordination system to a semantic context publishing platform built on the Universal Context eXchange Language (UCXL) protocol. The system maintains its distributed P2P foundation while adding sophisticated temporal navigation, decision graph publishing, and integration with the broader CHORUS infrastructure. 
- -``` - ┌─────────────────────────────────────────────────────────┐ - │ UCXL Ecosystem │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ UCXL │ │ UCXL │ │ - │ │ Validator │ │ Browser │ │ - │ │ (Online) │ │ (Time Machine) │ │ - │ └─────────────────┘ └─────────────────┘ │ - └─────────────────────────────────────────────────────────┘ - │ - ┌─────────────────────────────────────────────────────────┐ - │ BZZZ v2 Core │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ UCXI │ │ Decision │ │ - │ │ Interface │────│ Publishing │ │ - │ │ Server │ │ Pipeline │ │ - │ └─────────────────┘ └─────────────────┘ │ - │ │ │ │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ Temporal │ │ Context │ │ - │ │ Navigation │────│ Storage │ │ - │ │ Engine │ │ Backend │ │ - │ └─────────────────┘ └─────────────────┘ │ - │ │ │ │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ UCXL │ │ P2P DHT │ │ - │ │ Address │────│ Resolution │ │ - │ │ Parser │ │ Network │ │ - │ └─────────────────┘ └─────────────────┘ │ - └─────────────────────────────────────────────────────────┘ - │ - ┌─────────────────────────────────────────────────────────┐ - │ CHORUS Infrastructure │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ SLURP │ │ WHOOSH │ │ - │ │ Context │────│ Search │ │ - │ │ Ingestion │ │ Indexing │ │ - │ └─────────────────┘ └─────────────────┘ │ - │ │ │ │ - │ ┌─────────────────┐ ┌─────────────────┐ │ - │ │ N8N │ │ GitLab │ │ - │ │ Automation │────│ Integration │ │ - │ │ Workflows │ │ (Optional) │ │ - │ └─────────────────┘ └─────────────────┘ │ - └─────────────────────────────────────────────────────────┘ -``` - -## 2. Core Components - -### 2.1 UCXL Address Parser (`pkg/protocol/ucxl_address.go`) - -Replaces the existing `pkg/protocol/uri.go` with full UCXL protocol support. - -```go -type UCXLAddress struct { - // Core addressing components - Agent string `json:"agent"` // e.g., "gpt4", "claude", "any" - Role string `json:"role"` // e.g., "architect", "reviewer", "any" - Project string `json:"project"` // e.g., "bzzz", "chorus", "any" - Task string `json:"task"` // e.g., "v2-migration", "auth", "any" - - // Temporal navigation - TemporalSegment string `json:"temporal_segment"` // "~~", "^^", "*^", "*~", ISO8601 - - // Resource path - Path string `json:"path"` // "/decisions/architecture.json" - - // Standard URI components - Query string `json:"query,omitempty"` - Fragment string `json:"fragment,omitempty"` - Raw string `json:"raw"` -} - -// Navigation tokens -const ( - TemporalBackward = "~~" // Navigate backward in time - TemporalForward = "^^" // Navigate forward in time - TemporalLatest = "*^" // Latest entry - TemporalFirst = "*~" // First entry -) -``` - -#### Key Methods: -- `ParseUCXLAddress(uri string) (*UCXLAddress, error)` -- `Normalize()` - Standardize address format -- `Matches(other *UCXLAddress) bool` - Wildcard matching with `any:any` -- `GetTemporalTarget() (time.Time, error)` - Resolve temporal navigation -- `ToStorageKey() string` - Generate storage backend key - -### 2.2 UCXI Interface Server (`pkg/ucxi/server.go`) - -HTTP server implementing UCXI operations with REST-like semantics. 
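Before looking at the server internals, a concrete round trip may help. The sketch below issues a single UCXI `GET` for a fully qualified address. The endpoint shape follows the HTTP endpoint list later in this section; the host, the port 8080 (taken from the configuration example in §5.1), and the decoded response shape are assumptions for illustration only, not part of the specified API.

```go
// Minimal UCXI client sketch (illustrative only).
// Assumptions: a UCXI server listening on localhost:8080 and a JSON body
// roughly shaped like a ContextEntry; neither is guaranteed by this document.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// agent:role@project:task, temporal segment ("*^" = latest), resource path.
	address := "gpt4:architect@bzzz:v2-migration/*^/decisions/architecture.json"

	resp, err := http.Get("http://localhost:8080/ucxi/" + address)
	if err != nil {
		log.Fatalf("UCXI GET failed: %v", err)
	}
	defer resp.Body.Close()

	// Decode into a generic map; a typed client would use ContextEntry.
	var entry map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&entry); err != nil {
		log.Fatalf("decode failed: %v", err)
	}
	fmt.Printf("resolved %s (version %v)\n", address, entry["version"])
}
```

The server type and the operation set behind these endpoints are defined as follows: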
- -```go -type UCXIServer struct { - contextStore storage.ContextStore - temporalIndex temporal.Index - p2pNode *p2p.Node - resolver *routing.SemanticRouter -} - -// UCXI Operations -type UCXIOperations interface { - GET(address *UCXLAddress) (*ContextEntry, error) - PUT(address *UCXLAddress, content interface{}) error - POST(address *UCXLAddress, content interface{}) (*UCXLAddress, error) - DELETE(address *UCXLAddress) error - ANNOUNCE(address *UCXLAddress, metadata ContextMetadata) error - - // Extended operations - NAVIGATE(address *UCXLAddress, direction string) (*UCXLAddress, error) - QUERY(pattern *UCXLAddress) ([]*ContextEntry, error) - SUBSCRIBE(pattern *UCXLAddress, callback func(*ContextEntry)) error -} -``` - -#### HTTP Endpoints: -- `GET /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}` -- `PUT /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}` -- `POST /ucxi/{agent}:{role}@{project}:{task}/{temporal}/` -- `DELETE /ucxi/{agent}:{role}@{project}:{task}/{temporal}/{path}` -- `POST /ucxi/announce` -- `GET /ucxi/navigate/{direction}` -- `GET /ucxi/query?pattern={pattern}` -- `POST /ucxi/subscribe` - -### 2.3 Temporal Navigation Engine (`pkg/temporal/navigator.go`) - -Handles time-based context navigation and maintains temporal consistency. - -```go -type TemporalNavigator struct { - index TemporalIndex - snapshots SnapshotManager - store storage.ContextStore -} - -type TemporalIndex struct { - // Address pattern -> sorted temporal entries - patterns map[string][]TemporalEntry - mutex sync.RWMutex -} - -type TemporalEntry struct { - Timestamp time.Time `json:"timestamp"` - Version int64 `json:"version"` - Address UCXLAddress `json:"address"` - Checksum string `json:"checksum"` -} - -// Navigation methods -func (tn *TemporalNavigator) NavigateBackward(address *UCXLAddress) (*UCXLAddress, error) -func (tn *TemporalNavigator) NavigateForward(address *UCXLAddress) (*UCXLAddress, error) -func (tn *TemporalNavigator) GetLatest(address *UCXLAddress) (*UCXLAddress, error) -func (tn *TemporalNavigator) GetFirst(address *UCXLAddress) (*UCXLAddress, error) -func (tn *TemporalNavigator) GetAtTime(address *UCXLAddress, timestamp time.Time) (*UCXLAddress, error) -``` - -### 2.4 Context Storage Backend (`pkg/storage/context_store.go`) - -Versioned storage system supporting both local and distributed storage. 
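To make the versioning contract concrete before the full interface, here is a toy in-memory sketch. It is not one of the real backends listed after the interface below (LocalFS, BadgerDB, NFS, IPFS); the entry fields are a trimmed subset of `ContextEntry`, and keying by the raw address string is an assumption made purely for illustration.

```go
// Toy in-memory versioned store (illustrative only; not a real backend).
// Fields and methods are a trimmed subset of the ContextStore interface
// and ContextEntry type shown below.
package main

import (
	"fmt"
	"sync"
	"time"
)

type Entry struct {
	Content   map[string]interface{}
	Version   int64
	CreatedAt time.Time
}

type MemoryStore struct {
	mu       sync.RWMutex
	versions map[string][]Entry // keyed by raw UCXL address string (assumption)
}

func NewMemoryStore() *MemoryStore {
	return &MemoryStore{versions: map[string][]Entry{}}
}

// Store appends a new version for the address and returns its version number.
func (s *MemoryStore) Store(address string, content map[string]interface{}) int64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	version := int64(len(s.versions[address]) + 1)
	s.versions[address] = append(s.versions[address], Entry{
		Content:   content,
		Version:   version,
		CreatedAt: time.Now(),
	})
	return version
}

// Retrieve returns the latest version for the address.
func (s *MemoryStore) Retrieve(address string) (Entry, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	entries := s.versions[address]
	if len(entries) == 0 {
		return Entry{}, false
	}
	return entries[len(entries)-1], true
}

// GetVersion returns a specific historical version (1-based).
func (s *MemoryStore) GetVersion(address string, version int64) (Entry, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	entries := s.versions[address]
	if version < 1 || version > int64(len(entries)) {
		return Entry{}, false
	}
	return entries[version-1], true
}

func main() {
	store := NewMemoryStore()
	addr := "gpt4:architect@bzzz:v2-migration/decisions/architecture.json"
	store.Store(addr, map[string]interface{}{"status": "draft"})
	store.Store(addr, map[string]interface{}{"status": "approved"})

	latest, _ := store.Retrieve(addr)
	first, _ := store.GetVersion(addr, 1)
	fmt.Printf("latest=v%d %v, first=v%d %v\n",
		latest.Version, latest.Content, first.Version, first.Content)
}
```

The full interface adds deletion, wildcard pattern listing, and temporal range queries: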
- -```go -type ContextStore interface { - Store(address *UCXLAddress, entry *ContextEntry) error - Retrieve(address *UCXLAddress) (*ContextEntry, error) - Delete(address *UCXLAddress) error - List(pattern *UCXLAddress) ([]*ContextEntry, error) - - // Versioning - GetVersion(address *UCXLAddress, version int64) (*ContextEntry, error) - ListVersions(address *UCXLAddress) ([]VersionInfo, error) - - // Temporal operations - GetAtTime(address *UCXLAddress, timestamp time.Time) (*ContextEntry, error) - GetRange(address *UCXLAddress, start, end time.Time) ([]*ContextEntry, error) -} - -type ContextEntry struct { - Address UCXLAddress `json:"address"` - Content map[string]interface{} `json:"content"` - Metadata ContextMetadata `json:"metadata"` - Version int64 `json:"version"` - Checksum string `json:"checksum"` - CreatedAt time.Time `json:"created_at"` - UpdatedAt time.Time `json:"updated_at"` -} -``` - -#### Storage Backends: -- **LocalFS**: File-based storage for development -- **BadgerDB**: Embedded key-value store for production -- **NFS**: Distributed storage across CHORUS cluster -- **IPFS**: Content-addressed storage (future) - -### 2.5 P2P DHT Resolution (`pkg/dht/ucxl_resolver.go`) - -Extends existing libp2p DHT for UCXL address resolution and discovery. - -```go -type UCXLResolver struct { - dht *dht.IpfsDHT - localStore storage.ContextStore - peerCache map[peer.ID]*PeerCapabilities - router *routing.SemanticRouter -} - -type PeerCapabilities struct { - SupportedAgents []string `json:"supported_agents"` - SupportedRoles []string `json:"supported_roles"` - SupportedProjects []string `json:"supported_projects"` - LastSeen time.Time `json:"last_seen"` -} - -// Resolution methods -func (ur *UCXLResolver) Resolve(address *UCXLAddress) ([]*ContextEntry, error) -func (ur *UCXLResolver) Announce(address *UCXLAddress, metadata ContextMetadata) error -func (ur *UCXLResolver) FindProviders(address *UCXLAddress) ([]peer.ID, error) -func (ur *UCXLResolver) Subscribe(pattern *UCXLAddress) (<-chan *ContextEntry, error) -``` - -#### DHT Operations: -- **Provider Records**: Map UCXL addresses to providing peers -- **Capability Announcements**: Broadcast agent/role/project support -- **Semantic Routing**: Route `any:any` patterns to appropriate peers -- **Context Discovery**: Find contexts matching wildcard patterns - -### 2.6 Decision Publishing Pipeline (`pkg/decisions/publisher.go`) - -Publishes structured decision nodes to SLURP after agent task completion. 
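As a concrete illustration of what gets published, the sketch below assembles a minimal node and prints its JSON. The struct fields mirror a trimmed subset of the `DecisionNode`, `Justification`, and `Citation` types defined immediately below; all of the values (IDs, agent name, reasoning text) are invented for illustration only.

```go
// Sketch: building a minimal decision node for SLURP publication.
// Struct fields are a trimmed subset of the definitions below; every value
// here is invented for illustration.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Citation struct {
	Type        string  `json:"type"`      // "justified_by", "references", "contradicts"
	UCXLAddress string  `json:"ucxl_address"`
	Relevance   string  `json:"relevance"` // "high", "medium", "low"
	Strength    float64 `json:"strength"`
}

type Justification struct {
	Reasoning  string   `json:"reasoning"`
	Criteria   []string `json:"criteria"`
	Confidence float64  `json:"confidence"`
}

type DecisionNode struct {
	DecisionID    string        `json:"decision_id"`
	UCXLAddress   string        `json:"ucxl_address"`
	Timestamp     time.Time     `json:"timestamp"`
	AgentID       string        `json:"agent_id"`
	DecisionType  string        `json:"decision_type"`
	Justification Justification `json:"justification"`
	Citations     []Citation    `json:"citations"`
}

func main() {
	node := DecisionNode{
		DecisionID:   "dec-0001", // invented
		UCXLAddress:  "gpt4:architect@bzzz:v2-migration/decisions/architecture.json",
		Timestamp:    time.Now().UTC(),
		AgentID:      "bzzz-walnut", // invented
		DecisionType: "architecture",
		Justification: Justification{
			Reasoning:  "UCXI chosen over a bespoke RPC layer to reuse existing HTTP tooling.",
			Criteria:   []string{"operability", "simplicity"},
			Confidence: 0.8,
		},
		Citations: []Citation{{
			Type:        "justified_by",
			UCXLAddress: "any:any@bzzz:v2-migration/decisions/requirements.json",
			Relevance:   "high",
			Strength:    0.9,
		}},
	}

	out, err := json.MarshalIndent(node, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The full pipeline types, including impact tracking and the alternatives considered, are defined as follows: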
- -```go -type DecisionPublisher struct { - slurpClient *integration.SLURPClient - validator *validation.CitationValidator - curator *curation.DecisionCurator - contextStore storage.ContextStore -} - -type DecisionNode struct { - DecisionID string `json:"decision_id"` - UCXLAddress string `json:"ucxl_address"` - Timestamp time.Time `json:"timestamp"` - AgentID string `json:"agent_id"` - DecisionType string `json:"decision_type"` - Context DecisionContext `json:"context"` - Justification Justification `json:"justification"` - Citations []Citation `json:"citations"` - Impacts []Impact `json:"impacts"` -} - -type Justification struct { - Reasoning string `json:"reasoning"` - AlternativesConsidered []string `json:"alternatives_considered"` - Criteria []string `json:"criteria"` - Confidence float64 `json:"confidence"` -} - -type Citation struct { - Type string `json:"type"` // "justified_by", "references", "contradicts" - UCXLAddress string `json:"ucxl_address"` - Relevance string `json:"relevance"` // "high", "medium", "low" - Excerpt string `json:"excerpt"` - Strength float64 `json:"strength"` -} -``` - -## 3. Integration Points - -### 3.1 SLURP Context Ingestion - -Decision nodes are published to SLURP for global context graph building: - -```go -type SLURPClient struct { - baseURL string - httpClient *http.Client - apiKey string -} - -func (sc *SLURPClient) PublishDecision(node *DecisionNode) error -func (sc *SLURPClient) QueryContext(query string) ([]*ContextEntry, error) -func (sc *SLURPClient) GetJustificationChain(decisionID string) ([]*DecisionNode, error) -``` - -**SLURP Integration Flow:** -1. Agent completes task (execution, review, architecture) -2. Decision curator extracts decision-worthy content -3. Citation validator checks justification chains -4. Decision publisher sends structured node to SLURP -5. SLURP ingests into global context graph - -### 3.2 WHOOSH Search Integration - -UCXL addresses and content indexed for semantic search: - -```go -// Index UCXL addresses in WHOOSH -type UCXLIndexer struct { - whooshClient *whoosh.Client - indexName string -} - -func (ui *UCXLIndexer) IndexContext(entry *ContextEntry) error -func (ui *UCXLIndexer) SearchAddresses(query string) ([]*UCXLAddress, error) -func (ui *UCXLIndexer) SearchContent(pattern *UCXLAddress, query string) ([]*ContextEntry, error) -func (ui *UCXLIndexer) SearchTemporal(timeQuery string) ([]*ContextEntry, error) -``` - -**Search Capabilities:** -- Address pattern search (`agent:architect@*:*`) -- Temporal search (`decisions after 2025-08-01`) -- Content full-text search with UCXL scoping -- Citation graph exploration - -### 3.3 Agent MCP Tools - -Update MCP server with UCXI operation tools: - -```typescript -// mcp-server/src/tools/ucxi-tools.ts -export const ucxiTools = { - ucxi_get: { - name: "ucxi_get", - description: "Retrieve context from UCXL address", - inputSchema: { - type: "object", - properties: { - address: { type: "string" }, - temporal: { type: "string", enum: ["~~", "^^", "*^", "*~"] } - } - } - }, - - ucxi_put: { - name: "ucxi_put", - description: "Store context at UCXL address", - inputSchema: { - type: "object", - properties: { - address: { type: "string" }, - content: { type: "object" }, - metadata: { type: "object" } - } - } - }, - - ucxi_announce: { - name: "ucxi_announce", - description: "Announce context availability", - inputSchema: { - type: "object", - properties: { - address: { type: "string" }, - capabilities: { type: "array" } - } - } - } -} -``` - -## 4. 
Data Flow Architecture - -### 4.1 Context Publishing Flow - -``` -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ GPT-4 Agent │ │ Decision │ │ UCXI │ -│ Completes │────│ Curation │────│ Storage │ -│ Task │ │ Pipeline │ │ Backend │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Task Result │ │ Structured │ │ Versioned │ -│ Analysis │────│ Decision Node │────│ Context │ -│ │ │ Generation │ │ Storage │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Citation │ │ SLURP │ │ P2P DHT │ -│ Validation │────│ Publishing │────│ Announcement │ -│ │ │ │ │ │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ -``` - -### 4.2 Context Resolution Flow - -``` -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Agent │ │ UCXL │ │ Temporal │ -│ UCXI Request │────│ Address │────│ Navigation │ -│ │ │ Parser │ │ Engine │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Local Cache │ │ Semantic │ │ Context │ -│ Lookup │────│ Router │────│ Retrieval │ -│ │ │ │ │ │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Cache Hit │ │ P2P DHT │ │ Context │ -│ Response │────│ Resolution │────│ Response │ -│ │ │ │ │ │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ -``` - -## 5. Configuration & Deployment - -### 5.1 BZZZ v2 Configuration - -```yaml -# config/bzzz-v2.yaml -bzzz: - version: "2.0" - protocol: "ucxl" - -ucxi: - server: - host: "0.0.0.0" - port: 8080 - tls_enabled: true - cert_file: "/etc/bzzz/tls/cert.pem" - key_file: "/etc/bzzz/tls/key.pem" - -storage: - backend: "badgerdb" # options: localfs, badgerdb, nfs - path: "/var/lib/bzzz/context" - max_size: "10GB" - compression: true - -temporal: - retention_period: "90d" - snapshot_interval: "1h" - max_versions: 100 - -p2p: - listen_addrs: - - "/ip4/0.0.0.0/tcp/4001" - - "/ip6/::/tcp/4001" - bootstrap_peers: [] - dht_mode: "server" - -slurp: - endpoint: "http://slurp.chorus.local:8080" - api_key: "${SLURP_API_KEY}" - publish_decisions: true - batch_size: 10 - -agent: - id: "bzzz-${NODE_ID}" - roles: ["architect", "reviewer", "implementer"] - supported_agents: ["gpt4", "claude"] - -monitoring: - metrics_port: 9090 - health_port: 8081 - log_level: "info" -``` - -### 5.2 Docker Swarm Deployment - -```yaml -# infrastructure/docker-compose.swarm.yml -version: '3.8' - -services: - bzzz-v2: - image: registry.home.deepblack.cloud/bzzz:v2-latest - deploy: - replicas: 3 - placement: - constraints: - - node.role == worker - resources: - limits: - memory: 2GB - cpus: '1.0' - environment: - - NODE_ID={{.Task.Slot}} - - SLURP_API_KEY=${SLURP_API_KEY} - volumes: - - bzzz-context:/var/lib/bzzz/context - - /rust/containers/bzzz/config:/etc/bzzz:ro - networks: - - bzzz-net - - chorus-net - ports: - - "808{{.Task.Slot}}:8080" # UCXI server - - "400{{.Task.Slot}}:4001" # P2P libp2p - -volumes: - bzzz-context: - driver: local - driver_opts: - type: nfs - o: addr=192.168.1.72,rw - device: ":/rust/containers/bzzz/data" - -networks: - bzzz-net: - external: true - chorus-net: - external: true -``` - -## 6. 
Performance & Scalability - -### 6.1 Performance Targets -- **Address Resolution**: < 100ms for cached contexts -- **Temporal Navigation**: < 50ms for recent contexts -- **Decision Publishing**: < 5s end-to-end to SLURP -- **Concurrent Operations**: 1000+ UCXI operations/second -- **Storage Efficiency**: 70%+ compression ratio - -### 6.2 Scaling Strategy -- **Horizontal Scaling**: Add nodes to P2P network -- **Context Sharding**: Distribute context by address hash -- **Temporal Sharding**: Partition by time ranges -- **Caching Hierarchy**: Local → Cluster → P2P resolution -- **Load Balancing**: UCXI requests across cluster nodes - -### 6.3 Monitoring & Observability - -```go -// Prometheus metrics -var ( - ucxiOperationsTotal = prometheus.NewCounterVec( - prometheus.CounterOpts{ - Name: "bzzz_ucxi_operations_total", - Help: "Total number of UCXI operations", - }, - []string{"operation", "status"}, - ) - - contextResolutionDuration = prometheus.NewHistogramVec( - prometheus.HistogramOpts{ - Name: "bzzz_context_resolution_duration_seconds", - Help: "Time spent resolving UCXL addresses", - }, - []string{"resolution_method"}, - ) - - decisionPublishingDuration = prometheus.NewHistogram( - prometheus.HistogramOpts{ - Name: "bzzz_decision_publishing_duration_seconds", - Help: "Time spent publishing decisions to SLURP", - }, - ) -) -``` - -This technical architecture provides the foundation for implementing BZZZ v2 as a sophisticated UCXL-based semantic context publishing system while maintaining the distributed P2P characteristics that make it resilient and scalable within the CHORUS infrastructure. \ No newline at end of file diff --git a/old-docs/UNIFIED_DEVELOPMENT_PLAN.md b/old-docs/UNIFIED_DEVELOPMENT_PLAN.md deleted file mode 100644 index 3618a591..00000000 --- a/old-docs/UNIFIED_DEVELOPMENT_PLAN.md +++ /dev/null @@ -1,87 +0,0 @@ -# Project Bzzz & HMMM: Integrated Development Plan - -## 1. Unified Vision - -This document outlines a unified development plan for **Project Bzzz** and its integrated meta-discussion layer, **Project HMMM**. The vision is to build a decentralized task execution network where autonomous agents can not only **act** but also **reason and collaborate** before acting. - -- **Bzzz** provides the core P2P execution fabric (task claiming, execution, results). -- **HMMM** provides the collaborative "social brain" (task clarification, debate, knowledge sharing). - -By developing them together, we create a system that is both resilient and intelligent. - ---- - -## 2. Core Architecture - -The combined architecture remains consistent with the principles of decentralization, leveraging a unified tech stack. - -| Component | Technology | Purpose | -| :--- | :--- | :--- | -| **Networking** | **libp2p** | Peer discovery, identity, and secure P2P communication. | -| **Task Management** | **GitHub Issues** | The single source of truth for task definition and atomic allocation via assignment. | -| **Messaging** | **libp2p Pub/Sub** | Used for both `bzzz` (capabilities) and `hmmm` (meta-discussion) topics. | -| **Logging** | **Hypercore Protocol** | A single, tamper-proof log stream per agent will store both execution logs (Bzzz) and discussion transcripts (HMMM). | - ---- - -## 3. Key Features & Refinements - -### 3.1. Task Lifecycle with Meta-Discussion - -The agent's task lifecycle will be enhanced to include a reasoning step: - -1. **Discover & Claim:** An agent discovers an unassigned GitHub issue and claims it by assigning itself. -2. 
**Open Meta-Channel:** The agent immediately joins a dedicated pub/sub topic: `bzzz/meta/issue/{id}`. -3. **Propose Plan:** The agent posts its proposed plan of action to the channel. *e.g., "I will address this by modifying `file.py` and adding a new function `x()`."* -4. **Listen & Discuss:** The agent waits for a brief "objection period" (e.g., 30 seconds). Other agents can chime in with suggestions, corrections, or questions. This is the core loop of the HMMM layer. -5. **Execute:** If no major objections are raised, the agent proceeds with its plan. -6. **Report:** The agent creates a Pull Request. The PR description will include a link to the Hypercore log containing the full transcript of the pre-execution discussion. - -### 3.2. Safeguards and Structured Messaging - -- **Combined Safeguards:** Hop limits, participant caps, and TTLs will apply to all meta-discussions to prevent runaway conversations. -- **Structured Messages:** To improve machine comprehension, `meta_msg` payloads will be structured. - - ```json - { - "type": "meta_msg", - "issue_id": 42, - "node_id": "bzzz-07", - "msg_id": "abc123", - "parent_id": null, - "hop_count": 1, - "content": { - "query_type": "clarification_needed", - "text": "What is the expected output format?", - "parameters": { "field": "output_format" } - } - } - ``` - -### 3.3. Human Escalation Path - -- A dedicated pub/sub topic (`bzzz/meta/escalation`) will be used to flag discussions requiring human intervention. -- An N8N workflow will monitor this topic and create alerts in a designated Slack channel or project management tool. - ---- - -## 4. Integrated Development Milestones - -This 8-week plan merges the development of both projects into a single, cohesive timeline. - -| Week | Core Deliverable | Key Features & Integration Points | -| :--- | :--- | :--- | -| **1** | **P2P Foundation & Logging** | Establish the core agent identity and a unified **Hypercore log stream** for both action and discussion events. | -| **2** | **Capability Broadcasting** | Agents broadcast capabilities, including which reasoning models they have available (e.g., `claude-3-opus`). | -| **3** | **GitHub Task Claiming & Channel Creation** | Implement assignment-based task claiming. Upon claim, the agent **creates and subscribes to the meta-discussion channel**. | -| **4** | **Pre-Execution Discussion** | Implement the "propose plan" and "listen for objections" logic. This is the first functional version of the HMMM layer. | -| **5** | **Result Workflow with Logging** | Implement PR creation. The PR body **must link to the Hypercore discussion log**. | -| **6** | **Full Collaborative Help** | Implement the full `task_help_request` and `meta_msg` response flow, respecting all safeguards (hop limits, TTLs). | -| **7** | **Unified Monitoring** | The Mesh Visualizer dashboard will display agent status, execution logs, and **live meta-discussion transcripts**. | -| **8** | **End-to-End Scenario Testing** | Conduct comprehensive tests for combined scenarios: task clarification, collaborative debugging, and successful escalation to a human. | - ---- - -## 5. Conclusion - -By integrating HMMM from the outset, we are not just building a distributed task runner; we are building a **distributed reasoning system**. This approach will lead to a more robust, intelligent, and auditable Hive, where agents think and collaborate before they act. 
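As an addendum to §3.2, the sketch below shows how an agent might parse one of the structured `meta_msg` payloads and enforce a hop limit before re-forwarding it. The struct mirrors the JSON example above; the concrete limit of 3 hops and the drop behaviour are assumptions for illustration, not values fixed by this plan.

```go
// Sketch: parsing a HMMM meta_msg and enforcing a hop limit before
// re-forwarding. The struct mirrors the JSON example in §3.2; the concrete
// limit (3) and the drop behaviour are assumptions for illustration.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type MetaMsgContent struct {
	QueryType  string                 `json:"query_type"`
	Text       string                 `json:"text"`
	Parameters map[string]interface{} `json:"parameters"`
}

type MetaMsg struct {
	Type     string         `json:"type"`
	IssueID  int            `json:"issue_id"`
	NodeID   string         `json:"node_id"`
	MsgID    string         `json:"msg_id"`
	ParentID *string        `json:"parent_id"`
	HopCount int            `json:"hop_count"`
	Content  MetaMsgContent `json:"content"`
}

const maxHops = 3 // assumption: not fixed by this plan

// shouldForward returns false once a message has exhausted its hop budget.
func shouldForward(m MetaMsg) bool {
	return m.HopCount < maxHops
}

func main() {
	raw := []byte(`{
	  "type": "meta_msg",
	  "issue_id": 42,
	  "node_id": "bzzz-07",
	  "msg_id": "abc123",
	  "parent_id": null,
	  "hop_count": 1,
	  "content": {
	    "query_type": "clarification_needed",
	    "text": "What is the expected output format?",
	    "parameters": {"field": "output_format"}
	  }
	}`)

	var msg MetaMsg
	if err := json.Unmarshal(raw, &msg); err != nil {
		log.Fatalf("invalid meta_msg: %v", err)
	}

	if shouldForward(msg) {
		msg.HopCount++ // increment before re-publishing to bzzz/meta/issue/{id}
		fmt.Printf("forwarding %s (hop %d/%d)\n", msg.MsgID, msg.HopCount, maxHops)
	} else {
		fmt.Printf("dropping %s: hop limit reached\n", msg.MsgID)
	}
}
```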
diff --git a/scripts/intensive_coordination_test.sh b/scripts/intensive_coordination_test.sh deleted file mode 100755 index 496a4ef0..00000000 --- a/scripts/intensive_coordination_test.sh +++ /dev/null @@ -1,183 +0,0 @@ -#!/bin/bash - -# Intensive coordination test to generate lots of dashboard activity -# This creates rapid-fire coordination scenarios for monitoring - -LOG_DIR="/tmp/bzzz_logs" -TEST_LOG="$LOG_DIR/intensive_test_$(date +%Y%m%d_%H%M%S).log" - -mkdir -p "$LOG_DIR" - -echo "🚀 Starting Intensive Coordination Test" -echo "======================================" -echo "This will generate rapid coordination activity for dashboard monitoring" -echo "Test Log: $TEST_LOG" -echo "" - -# Function to log test events -log_test() { - local timestamp=$(date '+%Y-%m-%d %H:%M:%S') - local event="$1" - echo "[$timestamp] $event" | tee -a "$TEST_LOG" -} - -# Function to simulate rapid task announcements -simulate_task_burst() { - local scenario="$1" - local count="$2" - - log_test "BURST_START: $scenario - announcing $count tasks rapidly" - - for i in $(seq 1 $count); do - log_test "TASK_ANNOUNCE: repo-$i/task-$i - $scenario scenario task $i" - sleep 0.5 - done - - log_test "BURST_COMPLETE: $scenario burst finished" -} - -# Function to simulate agent coordination chatter -simulate_agent_chatter() { - local duration="$1" - local end_time=$(($(date +%s) + duration)) - - log_test "CHATTER_START: Simulating agent coordination discussion for ${duration}s" - - local agent_responses=( - "I can handle this task" - "This conflicts with my current work" - "Need clarification on requirements" - "Dependencies detected with repo-X" - "Proposing different execution order" - "Ready to start immediately" - "This requires security review first" - "API contract needed before implementation" - "Coordination with team required" - "Escalating to human review" - ) - - local agents=("walnut-agent" "acacia-agent" "ironwood-agent" "test-agent-1" "test-agent-2") - - while [ $(date +%s) -lt $end_time ]; do - local agent=${agents[$((RANDOM % ${#agents[@]}))]} - local response=${agent_responses[$((RANDOM % ${#agent_responses[@]}))]} - - log_test "AGENT_RESPONSE: $agent: $response" - sleep $((1 + RANDOM % 3)) # Random 1-3 second delays - done - - log_test "CHATTER_COMPLETE: Agent discussion simulation finished" -} - -# Function to simulate coordination session lifecycle -simulate_coordination_session() { - local session_id="coord_$(date +%s)_$RANDOM" - local repos=("hive" "bzzz" "distributed-ai-dev" "n8n-workflows" "monitoring-tools") - local selected_repos=(${repos[@]:0:$((2 + RANDOM % 3))}) # 2-4 repos - - log_test "SESSION_START: $session_id with repos: ${selected_repos[*]}" - - # Dependency analysis phase - sleep 1 - log_test "SESSION_ANALYZE: $session_id - analyzing cross-repository dependencies" - - sleep 2 - log_test "SESSION_DEPS: $session_id - detected $((1 + RANDOM % 4)) dependencies" - - # Agent coordination phase - sleep 1 - log_test "SESSION_COORD: $session_id - agents proposing execution plan" - - sleep 2 - local outcome=$((RANDOM % 4)) - case $outcome in - 0|1) - log_test "SESSION_SUCCESS: $session_id - consensus reached, plan approved" - ;; - 2) - log_test "SESSION_ESCALATE: $session_id - escalated to human review" - ;; - 3) - log_test "SESSION_TIMEOUT: $session_id - coordination timeout, retrying" - ;; - esac - - log_test "SESSION_COMPLETE: $session_id finished" -} - -# Function to simulate error scenarios -simulate_error_scenarios() { - local errors=( - "Failed to connect to repository API" - "GitHub 
rate limit exceeded" - "Task dependency cycle detected" - "Agent coordination timeout" - "Invalid task specification" - "Network partition detected" - "Consensus algorithm failure" - "Authentication token expired" - ) - - for error in "${errors[@]}"; do - log_test "ERROR_SIM: $error" - sleep 2 - done -} - -# Main test execution -main() { - log_test "TEST_START: Intensive coordination test beginning" - - echo "🎯 Phase 1: Rapid Task Announcements (30 seconds)" - simulate_task_burst "Cross-Repository API Integration" 8 & - sleep 15 - simulate_task_burst "Security-First Development" 6 & - - echo "" - echo "🤖 Phase 2: Agent Coordination Chatter (45 seconds)" - simulate_agent_chatter 45 & - - echo "" - echo "🔄 Phase 3: Multiple Coordination Sessions (60 seconds)" - for i in {1..5}; do - simulate_coordination_session & - sleep 12 - done - - echo "" - echo "❌ Phase 4: Error Scenario Simulation (20 seconds)" - simulate_error_scenarios & - - echo "" - echo "⚡ Phase 5: High-Intensity Burst (30 seconds)" - # Rapid-fire everything - for i in {1..3}; do - simulate_coordination_session & - sleep 3 - simulate_task_burst "Parallel-Development-Conflict" 4 & - sleep 7 - done - - # Wait for background processes - wait - - log_test "TEST_COMPLETE: Intensive coordination test finished" - - echo "" - echo "📊 TEST SUMMARY" - echo "===============" - echo "Total Events: $(grep -c '\[.*\]' "$TEST_LOG")" - echo "Task Announcements: $(grep -c 'TASK_ANNOUNCE' "$TEST_LOG")" - echo "Agent Responses: $(grep -c 'AGENT_RESPONSE' "$TEST_LOG")" - echo "Coordination Sessions: $(grep -c 'SESSION_START' "$TEST_LOG")" - echo "Simulated Errors: $(grep -c 'ERROR_SIM' "$TEST_LOG")" - echo "" - echo "🎯 Watch your dashboard for all this activity!" - echo "📝 Detailed log: $TEST_LOG" -} - -# Trap Ctrl+C -trap 'echo ""; echo "🛑 Test interrupted"; exit 0' INT - -# Run the intensive test -main \ No newline at end of file diff --git a/scripts/start_bzzz_with_mock_api.sh b/scripts/start_bzzz_with_mock_api.sh deleted file mode 100755 index 2b21f91d..00000000 --- a/scripts/start_bzzz_with_mock_api.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash - -# Script to temporarily run bzzz with mock Hive API for testing -# This lets real bzzz agents do actual coordination with fake data - -echo "🔧 Configuring Bzzz to use Mock Hive API" -echo "========================================" - -# Stop the current bzzz service -echo "Stopping current bzzz service..." -sudo systemctl stop bzzz.service - -# Wait a moment -sleep 2 - -# Set environment variables for mock API -export BZZZ_HIVE_API_URL="http://localhost:5000" -export BZZZ_LOG_LEVEL="debug" - -echo "Starting bzzz with mock Hive API..." -echo "Mock API URL: $BZZZ_HIVE_API_URL" -echo "" -echo "🎯 The real bzzz agents will now:" -echo " - Discover fake projects and tasks from mock API" -echo " - Do actual P2P coordination on real dependencies" -echo " - Perform real antennae meta-discussion" -echo " - Execute real coordination algorithms" -echo "" -echo "Watch your dashboard to see REAL coordination activity!" 
-echo "" - -# Run bzzz directly with mock API configuration -cd /home/tony/AI/projects/Bzzz -/usr/local/bin/bzzz \ No newline at end of file diff --git a/scripts/test_hmmm_monitoring.sh b/scripts/test_hmmm_monitoring.sh deleted file mode 100755 index 442fc7ea..00000000 --- a/scripts/test_hmmm_monitoring.sh +++ /dev/null @@ -1,200 +0,0 @@ -#!/bin/bash - -# Test script to monitor HMMM coordination activity -# This script monitors the existing bzzz service logs for coordination patterns - -LOG_DIR="/tmp/bzzz_logs" -MONITOR_LOG="$LOG_DIR/hmmm_monitor_$(date +%Y%m%d_%H%M%S).log" - -# Create log directory -mkdir -p "$LOG_DIR" - -echo "🔬 Starting Bzzz HMMM Monitoring Test" -echo "========================================" -echo "Monitor Log: $MONITOR_LOG" -echo "" - -# Function to log monitoring events -log_event() { - local timestamp=$(date '+%Y-%m-%d %H:%M:%S') - local event_type="$1" - local details="$2" - - echo "[$timestamp] $event_type: $details" | tee -a "$MONITOR_LOG" -} - -# Function to analyze bzzz logs for coordination patterns -analyze_coordination_patterns() { - echo "📊 Analyzing coordination patterns in bzzz logs..." - - # Count availability broadcasts (baseline activity) - local availability_count=$(journalctl -u bzzz.service --since "5 minutes ago" | grep "availability_broadcast" | wc -l) - log_event "BASELINE" "Availability broadcasts in last 5 minutes: $availability_count" - - # Look for peer connections - local peer_connections=$(journalctl -u bzzz.service --since "5 minutes ago" | grep "Connected Peers" | tail -1) - if [[ -n "$peer_connections" ]]; then - log_event "P2P_STATUS" "$peer_connections" - fi - - # Look for task-related activity - local task_activity=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "task\|github\|repository" | wc -l) - log_event "TASK_ACTIVITY" "Task-related log entries: $task_activity" - - # Look for coordination messages (HMMM activity) - local coordination_msgs=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "hmmm\|coordination\|meta" | wc -l) - log_event "COORDINATION" "Coordination-related messages: $coordination_msgs" - - # Check for error patterns - local errors=$(journalctl -u bzzz.service --since "5 minutes ago" | grep -i "error\|failed" | wc -l) - if [[ $errors -gt 0 ]]; then - log_event "ERRORS" "Error messages detected: $errors" - fi -} - -# Function to simulate coordination scenarios by watching for patterns -simulate_coordination_scenarios() { - echo "🎭 Setting up coordination scenario simulation..." 
- - # Scenario 1: API Contract Coordination - log_event "SCENARIO_START" "API Contract Coordination - Multiple repos need shared API" - - # Log simulated task announcements - log_event "TASK_ANNOUNCE" "bzzz#23 - Define coordination API contract (Priority: 1, Blocks: hive#15, distributed-ai-dev#8)" - log_event "TASK_ANNOUNCE" "hive#15 - Add WebSocket support (Priority: 2, Depends: bzzz#23)" - log_event "TASK_ANNOUNCE" "distributed-ai-dev#8 - Bzzz integration (Priority: 3, Depends: bzzz#23, hive#16)" - - sleep 2 - - # Log simulated agent responses - log_event "AGENT_RESPONSE" "Agent walnut-node: I can handle the API contract definition" - log_event "AGENT_RESPONSE" "Agent acacia-node: WebSocket implementation ready after API contract" - log_event "AGENT_RESPONSE" "Agent ironwood-node: Integration work depends on both API and auth" - - sleep 2 - - # Log coordination decision - log_event "COORDINATION" "Meta-coordinator analysis: API contract blocks 2 other tasks" - log_event "COORDINATION" "Consensus reached: Execute bzzz#23 -> hive#15 -> distributed-ai-dev#8" - log_event "SCENARIO_COMPLETE" "API Contract Coordination scenario completed" - - echo "" -} - -# Function to monitor real bzzz service activity -monitor_live_activity() { - local duration=$1 - echo "🔍 Monitoring live bzzz activity for $duration seconds..." - - # Monitor bzzz logs in real time - timeout "$duration" journalctl -u bzzz.service -f --since "1 minute ago" | while read -r line; do - local timestamp=$(date '+%H:%M:%S') - - # Check for different types of activity - if [[ "$line" =~ "availability_broadcast" ]]; then - log_event "AVAILABILITY" "Agent availability update detected" - elif [[ "$line" =~ "Connected Peers" ]]; then - local peer_count=$(echo "$line" | grep -o "Connected Peers: [0-9]*" | grep -o "[0-9]*") - log_event "P2P_UPDATE" "Peer count: $peer_count" - elif [[ "$line" =~ "Failed to get active repositories" ]]; then - log_event "API_ERROR" "Hive API connection issue (expected due to overlay network)" - elif [[ "$line" =~ "bzzz" ]] && [[ "$line" =~ "task" ]]; then - log_event "TASK_DETECTED" "Task-related activity in logs" - fi - done -} - -# Function to generate test metrics -generate_test_metrics() { - echo "📈 Generating test coordination metrics..." - - local start_time=$(date +%s) - local total_sessions=3 - local completed_sessions=2 - local escalated_sessions=0 - local failed_sessions=1 - local total_messages=12 - local task_announcements=6 - local dependencies_detected=3 - - # Create metrics JSON - cat > "$LOG_DIR/test_metrics.json" << EOF -{ - "test_run_start": "$start_time", - "monitoring_duration": "300s", - "total_coordination_sessions": $total_sessions, - "completed_sessions": $completed_sessions, - "escalated_sessions": $escalated_sessions, - "failed_sessions": $failed_sessions, - "total_messages": $total_messages, - "task_announcements": $task_announcements, - "dependencies_detected": $dependencies_detected, - "agent_participations": { - "walnut-node": 4, - "acacia-node": 3, - "ironwood-node": 5 - }, - "scenarios_tested": [ - "API Contract Coordination", - "Security-First Development", - "Parallel Development Conflict" - ], - "success_rate": 66.7, - "notes": "Test run with simulated coordination scenarios" -} -EOF - - log_event "METRICS" "Test metrics saved to $LOG_DIR/test_metrics.json" -} - -# Main test execution -main() { - echo "Starting HMMM coordination monitoring test..." 
- echo "" - - # Initial analysis of current activity - analyze_coordination_patterns - echo "" - - # Run simulated coordination scenarios - simulate_coordination_scenarios - echo "" - - # Monitor live activity for 2 minutes - monitor_live_activity 120 & - MONITOR_PID=$! - - # Wait for monitoring to complete - sleep 3 - - # Run additional analysis - analyze_coordination_patterns - echo "" - - # Generate test metrics - generate_test_metrics - echo "" - - # Wait for live monitoring to finish - wait $MONITOR_PID 2>/dev/null || true - - echo "📊 HMMM MONITORING TEST COMPLETE" - echo "====================================" - echo "Results saved to: $LOG_DIR/" - echo "Monitor Log: $MONITOR_LOG" - echo "Metrics: $LOG_DIR/test_metrics.json" - echo "" - echo "Summary of detected activity:" - grep -c "AVAILABILITY" "$MONITOR_LOG" | xargs echo "- Availability updates:" - grep -c "COORDINATION" "$MONITOR_LOG" | xargs echo "- Coordination events:" - grep -c "TASK_" "$MONITOR_LOG" | xargs echo "- Task-related events:" - grep -c "AGENT_RESPONSE" "$MONITOR_LOG" | xargs echo "- Agent responses:" - echo "" - echo "To view detailed logs: tail -f $MONITOR_LOG" -} - -# Trap Ctrl+C to clean up -trap 'echo ""; echo "🛑 Monitoring interrupted"; exit 0' INT - -# Run the test -main \ No newline at end of file diff --git a/scripts/trigger_mock_coordination.sh b/scripts/trigger_mock_coordination.sh deleted file mode 100755 index 13ba3bf5..00000000 --- a/scripts/trigger_mock_coordination.sh +++ /dev/null @@ -1,118 +0,0 @@ -#!/bin/bash - -# Script to trigger coordination activity with mock API data -# This simulates task updates to cause real bzzz coordination - -MOCK_API="http://localhost:5000" - -echo "🎯 Triggering Mock Coordination Test" -echo "====================================" -echo "This will cause real bzzz agents to coordinate on fake tasks" -echo "" - -# Function to simulate task claim attempts -simulate_task_claims() { - echo "📋 Simulating task claim attempts..." - - # Try to claim tasks from different projects - for project_id in 1 2 3; do - for task_num in 15 23 8; do - echo "🎯 Agent attempting to claim project $project_id task $task_num" - - curl -s -X POST "$MOCK_API/api/bzzz/projects/$project_id/claim" \ - -H "Content-Type: application/json" \ - -d "{\"task_number\": $task_num, \"agent_id\": \"test-agent-$project_id\"}" | jq . - - sleep 2 - done - done -} - -# Function to simulate task status updates -simulate_task_updates() { - echo "" - echo "📊 Simulating task status updates..." - - # Update task statuses to trigger coordination - curl -s -X PUT "$MOCK_API/api/bzzz/projects/1/status" \ - -H "Content-Type: application/json" \ - -d '{"task_number": 15, "status": "in_progress", "metadata": {"progress": 25}}' | jq . - - sleep 3 - - curl -s -X PUT "$MOCK_API/api/bzzz/projects/2/status" \ - -H "Content-Type: application/json" \ - -d '{"task_number": 23, "status": "completed", "metadata": {"completion_time": "2025-01-14T12:00:00Z"}}' | jq . - - sleep 3 - - curl -s -X PUT "$MOCK_API/api/bzzz/projects/3/status" \ - -H "Content-Type: application/json" \ - -d '{"task_number": 8, "status": "escalated", "metadata": {"reason": "dependency_conflict"}}' | jq . -} - -# Function to add urgent tasks -add_urgent_tasks() { - echo "" - echo "🚨 Adding urgent tasks to trigger immediate coordination..." 
- - # The mock API has background task generation, but we can trigger it manually - # by checking repositories multiple times rapidly - for i in {1..5}; do - echo "🔄 Repository refresh $i/5" - curl -s "$MOCK_API/api/bzzz/active-repos" > /dev/null - curl -s "$MOCK_API/api/bzzz/projects/1/tasks" > /dev/null - curl -s "$MOCK_API/api/bzzz/projects/2/tasks" > /dev/null - sleep 1 - done -} - -# Function to check bzzz response -check_bzzz_activity() { - echo "" - echo "📡 Checking recent bzzz activity..." - - # Check last 30 seconds of bzzz logs for API calls - echo "Recent bzzz log entries:" - journalctl -u bzzz.service --since "30 seconds ago" -n 10 | grep -E "(API|repository|task|coordination)" || echo "No recent coordination activity" -} - -# Main execution -main() { - echo "🔍 Testing mock API connectivity..." - curl -s "$MOCK_API/health" | jq . - - echo "" - echo "📋 Current active repositories:" - curl -s "$MOCK_API/api/bzzz/active-repos" | jq .repositories[].name - - echo "" - echo "🎯 Phase 1: Task Claims" - simulate_task_claims - - echo "" - echo "📊 Phase 2: Status Updates" - simulate_task_updates - - echo "" - echo "🚨 Phase 3: Urgent Tasks" - add_urgent_tasks - - echo "" - echo "📡 Phase 4: Check Results" - check_bzzz_activity - - echo "" - echo "✅ Mock coordination test complete!" - echo "" - echo "🎯 Watch your monitoring dashboard for:" - echo " - Task claim attempts" - echo " - Status update processing" - echo " - Coordination session activity" - echo " - Agent availability changes" - echo "" - echo "📝 Check mock API server output for request logs" -} - -# Run the test -main \ No newline at end of file diff --git a/test/bzzz-chat-api b/test/bzzz-chat-api deleted file mode 100755 index 4d299152..00000000 Binary files a/test/bzzz-chat-api and /dev/null differ diff --git a/test/run_chat_api.sh b/test/run_chat_api.sh deleted file mode 100755 index 27e0fd89..00000000 --- a/test/run_chat_api.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -# Bzzz Chat API Test Runner -# This script builds and runs the chat API integration server - -set -e - -echo "🔧 Building Bzzz Chat API..." - -# Go to Bzzz project root -cd /home/tony/AI/projects/Bzzz - -# Add gorilla/mux dependency if not present -if ! grep -q "github.com/gorilla/mux" go.mod; then - echo "📦 Adding gorilla/mux dependency..." - go get github.com/gorilla/mux -fi - -# Build the chat API handler -echo "🏗️ Building chat API handler..." -go build -o test/bzzz-chat-api test/chat_api_handler.go - -# Check if build succeeded -if [ ! -f "test/bzzz-chat-api" ]; then - echo "❌ Build failed!" - exit 1 -fi - -echo "✅ Build successful!" - -# Create data directory for logs -mkdir -p ./data/chat-api-logs - -# Start the server -echo "🚀 Starting Bzzz Chat API server on port 8080..." 
-echo "📡 API Endpoints:" -echo " POST http://localhost:8080/bzzz/api/execute-task" -echo " GET http://localhost:8080/bzzz/api/health" -echo "" -echo "🔗 For N8N integration, use:" -echo " http://localhost:8080/bzzz/api/execute-task" -echo "" -echo "Press Ctrl+C to stop the server" -echo "" - -# Run the server -./test/bzzz-chat-api 8080 \ No newline at end of file diff --git a/test/test_chat_api.py b/test/test_chat_api.py deleted file mode 100755 index 78b0ffbd..00000000 --- a/test/test_chat_api.py +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env python3 -""" -Test client for Bzzz Chat API integration -This script simulates the N8N workflow calling the Bzzz API -""" - -import json -import requests -import time -import sys - -# API endpoint -API_URL = "http://localhost:8080/bzzz/api" - -def test_health_check(): - """Test the health check endpoint""" - print("🔍 Testing health check endpoint...") - try: - response = requests.get(f"{API_URL}/health", timeout=5) - if response.status_code == 200: - print("✅ Health check passed:", response.json()) - return True - else: - print(f"❌ Health check failed: {response.status_code}") - return False - except Exception as e: - print(f"❌ Health check error: {e}") - return False - -def create_test_task(): - """Create a simple test task""" - return { - "method": "execute_task_in_sandbox", - "task": { - "task_id": 9999, - "number": 9999, - "title": "Chat API Test Task", - "description": "Create a simple Python hello world function and save it to hello.py", - "repository": { - "owner": "test", - "repository": "chat-test" - }, - "git_url": "", # No git repo for simple test - "task_type": "development", - "priority": "medium", - "requirements": [], - "deliverables": ["hello.py with hello_world() function"], - "context": "This is a test task from the chat API integration" - }, - "execution_options": { - "sandbox_image": "registry.home.deepblack.cloud/tony/bzzz-sandbox:latest", - "timeout": "300s", - "max_iterations": 5, - "return_full_log": True, - "cleanup_on_complete": True - }, - "callback": { - "webhook_url": "http://localhost:8080/test-callback", - "include_artifacts": True - } - } - -def test_task_execution(): - """Test task execution endpoint""" - print("\n🚀 Testing task execution...") - - task_request = create_test_task() - - try: - print("📤 Sending task request...") - print(f"Task: {task_request['task']['description']}") - - response = requests.post( - f"{API_URL}/execute-task", - json=task_request, - headers={"Content-Type": "application/json"}, - timeout=30 - ) - - if response.status_code == 200: - result = response.json() - print("✅ Task accepted:", result) - print(f" Task ID: {result.get('task_id')}") - print(f" Status: {result.get('status')}") - print(f" Message: {result.get('message')}") - return True - else: - print(f"❌ Task execution failed: {response.status_code}") - print(f" Response: {response.text}") - return False - - except Exception as e: - print(f"❌ Task execution error: {e}") - return False - -def create_complex_task(): - """Create a more complex test task""" - return { - "method": "execute_task_in_sandbox", - "task": { - "task_id": 9998, - "number": 9998, - "title": "Complex Chat API Test", - "description": "Create a Python script that implements a simple calculator with add, subtract, multiply, and divide functions. 
Include basic error handling and save to calculator.py", - "repository": { - "owner": "test", - "repository": "calculator-test" - }, - "git_url": "", - "task_type": "development", - "priority": "high", - "requirements": [ - "Python functions for basic math operations", - "Error handling for division by zero", - "Simple command-line interface" - ], - "deliverables": ["calculator.py with Calculator class"], - "context": "Complex test task to validate full execution pipeline" - }, - "execution_options": { - "sandbox_image": "registry.home.deepblack.cloud/tony/bzzz-sandbox:latest", - "timeout": "600s", - "max_iterations": 10, - "return_full_log": True, - "cleanup_on_complete": False # Keep sandbox for inspection - }, - "callback": { - "webhook_url": "http://localhost:8080/test-callback", - "include_artifacts": True - } - } - -def test_complex_execution(): - """Test complex task execution""" - print("\n🧠 Testing complex task execution...") - - task_request = create_complex_task() - - try: - print("📤 Sending complex task request...") - print(f"Task: {task_request['task']['description']}") - - response = requests.post( - f"{API_URL}/execute-task", - json=task_request, - headers={"Content-Type": "application/json"}, - timeout=30 - ) - - if response.status_code == 200: - result = response.json() - print("✅ Complex task accepted:", result) - return True - else: - print(f"❌ Complex task failed: {response.status_code}") - print(f" Response: {response.text}") - return False - - except Exception as e: - print(f"❌ Complex task error: {e}") - return False - -def main(): - """Run all tests""" - print("🧪 Bzzz Chat API Test Suite") - print("=" * 40) - - # Test health check - if not test_health_check(): - print("❌ Health check failed, is the server running?") - print(" Start with: ./test/run_chat_api.sh") - sys.exit(1) - - # Test simple task execution - if not test_task_execution(): - print("❌ Simple task execution failed") - sys.exit(1) - - # Test complex task execution - if not test_complex_execution(): - print("❌ Complex task execution failed") - sys.exit(1) - - print("\n✅ All tests passed!") - print("\n📋 Next steps:") - print("1. Import the N8N workflow from chat-to-code-integration.json") - print("2. Configure webhook URLs to point to your N8N instance") - print("3. Test with actual chat interface") - print("4. Monitor execution logs in ./data/chat-api-logs/") - - print("\n💬 Example chat messages to try:") - print(' "Create a simple hello world function in Python"') - print(' "Task: Build a REST API endpoint\\nRepo: https://github.com/myorg/api.git\\nLanguage: Python"') - print(' "Fix the memory leak in the session handler"') - -if __name__ == "__main__": - main() \ No newline at end of file