WIP: Save agent roles integration work before CHORUS rebrand

- Agent roles and coordination features
- Chat API integration testing
- New configuration and workspace management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

archived/2025-07-17/DEVELOPMENT_PLAN.md (new file, 192 lines)

# Project Bzzz: Decentralized Task Execution Network - Development Plan

## 1. Overview & Vision

This document outlines the development plan for **Project Bzzz**, a decentralized task execution network designed to enhance the existing **Hive Cluster**.

The vision is to evolve from a centrally coordinated system to a resilient, peer-to-peer (P2P) mesh of autonomous agents. This architecture eliminates single points of failure, improves scalability, and allows for dynamic, collaborative task resolution. Bzzz will complement the existing N8N orchestration layer, acting as a powerful, self-organizing execution fabric.

---

## 2. Core Architecture

The system is built on three key pillars: decentralized networking, GitHub-native task management, and verifiable, distributed logging.

| Component | Technology | Purpose |
| :--- | :--- | :--- |
| **Networking** | **libp2p** | For peer discovery (mDNS, DHT), identity, and secure P2P communication. |
| **Task Management** | **GitHub Issues** | The single source of truth for task definition, allocation, and tracking. |
| **Messaging** | **libp2p Pub/Sub** | For broadcasting capabilities and coordinating collaborative help requests. |
| **Logging** | **Hypercore Protocol** | For creating a tamper-proof, decentralized, and replicable logging system for debugging. |

---

## 3. Architectural Refinements & Key Features

Based on our analysis, the following refinements will be adopted:

### 3.1. Task Allocation via GitHub Assignment

To prevent race conditions and simplify logic, we will use GitHub's native issue assignment mechanism as an atomic lock. The `task_claim` pub/sub topic is no longer needed.

**Workflow** (a sketch of the claim step follows the list):

1. A `bzzz-agent` discovers a new, *unassigned* issue in the target repository.
2. The agent immediately attempts to **assign itself** to the issue via the GitHub API.
3. **Success:** If the assignment succeeds, the agent has exclusive ownership of the task and begins execution.
4. **Failure:** If the assignment fails (because another agent was faster), the agent logs the contention and looks for another task.
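
As an illustration of step 2, a minimal Go sketch using the `go-github` client; the module version, repository coordinates, and agent login are placeholder assumptions, not project decisions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/google/go-github/v57/github" // pinned version is an assumption
)

// tryClaimIssue attempts to assign this agent to an issue. GitHub applies
// the assignment atomically, so whichever agent's request lands first owns
// the task; everyone else discovers the issue is already assigned.
func tryClaimIssue(ctx context.Context, client *github.Client,
	owner, repo, agentLogin string, issueNumber int) (bool, error) {

	issue, _, err := client.Issues.AddAssignees(ctx, owner, repo, issueNumber, []string{agentLogin})
	if err != nil {
		return false, err
	}
	// Treat the claim as won only if we are the sole assignee; if another
	// agent raced us onto the issue, back off and look for other work.
	if len(issue.Assignees) == 1 && issue.Assignees[0].GetLogin() == agentLogin {
		return true, nil
	}
	return false, nil
}

func main() {
	ctx := context.Background()
	client := github.NewClient(nil) // a real agent would attach its token here
	claimed, err := tryClaimIssue(ctx, client, "example-org", "task-repo", "bzzz-agent-1", 42)
	fmt.Println("claimed:", claimed, "err:", err)
}
```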

### 3.2. Collaborative Task Execution with Hop Limit

The `task_help_request` feature enables agents to collaborate on complex tasks. To prevent infinite request loops and network flooding, we will implement a **hop limit** (see the sketch after this list):

- **Hop Limit:** A `task_help_request` will be discarded after being forwarded **3 times**.
- If a task cannot be completed after 3 help requests, it will be marked as "failed," and a comment will be added to the GitHub issue for manual review.
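
A minimal sketch of the hop-limit check, assuming each forwarded `task_help_request` carries a `hops` counter that every forwarding peer increments (the field is an assumed addition, not part of the message samples elsewhere in this repo):

```go
package main

import "log"

const maxHops = 3 // discard help requests after three forwards

// HelpRequest mirrors the task_help_request message; Hops is an assumed
// field incremented by each peer before re-broadcasting.
type HelpRequest struct {
	TaskID   string `json:"task_id"`
	FromNode string `json:"from_node"`
	Hops     int    `json:"hops"`
}

// shouldForward enforces the hop limit before a request is re-published.
func shouldForward(req HelpRequest) bool {
	if req.Hops >= maxHops {
		log.Printf("dropping help request for %s: hop limit reached", req.TaskID)
		return false
	}
	return true
}

func main() {
	req := HelpRequest{TaskID: "#42", FromNode: "pi-node-2", Hops: 3}
	log.Println("forward:", shouldForward(req)) // forward: false
}
```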

### 3.3. Decentralized Logging with Hypercore

To solve the challenge of debugging a distributed system, each agent will manage its own secure, append-only log stream using the Hypercore Protocol (a conceptual sketch follows the list).

- **Log Creation:** Each agent generates a `hypercore` and broadcasts its public key via the `capabilities` message.
- **Log Replication:** Any other agent (or a dedicated monitoring node) can use this key to replicate the log stream in real time or after the fact.
- **Benefits:** This creates a verifiable and resilient audit trail for every agent's actions, which is invaluable for debugging without relying on a centralized logging server.
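
The `hypercore-go` API is not reproduced here; instead, the toy sketch below uses plain ed25519 signatures as a stand-in for the real Hypercore feed format, to illustrate the core idea: an append-only log whose entries peers can verify using only the public key broadcast in `capabilities`.

```go
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"fmt"
)

// AppendOnlyLog is a toy stand-in for a Hypercore feed: entries are appended
// in order and each is signed with the feed's private key, so replicas can
// verify the stream knowing only the public key.
type AppendOnlyLog struct {
	pub     ed25519.PublicKey
	priv    ed25519.PrivateKey
	entries [][]byte
	sigs    [][]byte
}

func NewAppendOnlyLog() (*AppendOnlyLog, error) {
	pub, priv, err := ed25519.GenerateKey(nil) // nil means crypto/rand
	if err != nil {
		return nil, err
	}
	return &AppendOnlyLog{pub: pub, priv: priv}, nil
}

// Append records an entry together with its signature.
func (l *AppendOnlyLog) Append(entry []byte) {
	l.entries = append(l.entries, entry)
	l.sigs = append(l.sigs, ed25519.Sign(l.priv, entry))
}

// PublicKeyHex is what an agent would embed in its capabilities broadcast
// so that peers can locate and verify its log stream.
func (l *AppendOnlyLog) PublicKeyHex() string {
	return hex.EncodeToString(l.pub)
}

func main() {
	logStream, _ := NewAppendOnlyLog()
	logStream.Append([]byte(`{"event":"task_claimed","task":"#42"}`))
	fmt.Println("log public key:", logStream.PublicKeyHex())
}
```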

---

## 4. Integration with the Hive Ecosystem

Bzzz is designed to integrate seamlessly with the existing cluster infrastructure.

### 4.1. Deployment Strategy: Docker + Host Networking (PREFERRED APPROACH)

Based on comprehensive analysis of the existing Hive infrastructure and Bzzz's P2P requirements, we will use a **hybrid deployment approach** that combines Docker containerization with host networking:

```yaml
# Docker Compose configuration for bzzz-agent
services:
  bzzz-agent:
    image: registry.home.deepblack.cloud/tony/bzzz-agent:latest
    network_mode: "host"  # Direct host network access for P2P
    volumes:
      - ./data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock  # Docker API access
    environment:
      - NODE_ID=${HOSTNAME}
      - GITHUB_TOKEN_FILE=/run/secrets/github-token
    secrets:
      - github-token
    restart: unless-stopped
    deploy:
      placement:
        constraints:
          - node.role == worker  # Deploy on all worker nodes
```

**Rationale for Docker + Host Networking:**

- ✅ **P2P Networking Advantages**: Direct access to host networking enables efficient mDNS discovery, NAT traversal, and lower-latency communication
- ✅ **Infrastructure Consistency**: Maintains Docker Swarm deployment patterns and existing operational procedures
- ✅ **Resource Efficiency**: Eliminates Docker overlay network overhead for P2P communication
- ✅ **Best of Both Worlds**: Container portability and management with native network performance

### 4.2. Cluster Integration Points

- **Phased Rollout:** Deploy `bzzz-agent` containers across all cluster nodes (ACACIA, WALNUT, IRONWOOD, ROSEWOOD, FORSTEINET) using Docker Swarm
- **Network Architecture**: Leverages the existing 192.168.1.0/24 LAN for P2P mesh communication
- **Resource Coordination**: Agents discover and utilize existing Ollama endpoints (port 11434) and CLI tools
- **Storage Integration**: Uses NFS shares (/rust/containers/) for shared configuration and Hypercore log storage

### 4.3. Integration with Existing Services

- **N8N as a Task Initiator:** High-level workflows in N8N will now terminate by creating a detailed GitHub Issue. This action triggers the Bzzz mesh, which handles the execution and reports back by creating a Pull Request.
- **Hive Coexistence**: Bzzz will run alongside existing Hive services on different ports, allowing gradual migration of workloads.
- **The "Mesh Visualizer":** A dedicated monitoring dashboard will be created. It will:
  1. Subscribe to the `capabilities` pub/sub topic to visualize the live network topology.
  2. Replicate and display the Hypercore log streams from all active agents.
  3. Integrate with existing Grafana dashboards for unified monitoring.

---

## 5. Security Strategy

- **GitHub Token Management:** Agents will use short-lived, fine-grained Personal Access Tokens. These tokens will be stored securely in **HashiCorp Vault** or a similar secrets management tool and retrieved by the agent at runtime (see the sketch below).
- **Network Security:** All peer-to-peer communication is automatically **encrypted end-to-end** by `libp2p`.
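
For example, an agent can load its token at startup from the path named by `GITHUB_TOKEN_FILE` (the Docker secret mounted in section 4.1); a minimal sketch:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// loadGitHubToken reads the token from the file named by GITHUB_TOKEN_FILE
// (e.g. /run/secrets/github-token), so the secret never appears directly in
// the environment or in process arguments.
func loadGitHubToken() (string, error) {
	path := os.Getenv("GITHUB_TOKEN_FILE")
	if path == "" {
		return "", fmt.Errorf("GITHUB_TOKEN_FILE is not set")
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	token, err := loadGitHubToken()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("loaded token of length", len(token))
}
```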

---

## 6. Recommended Tech Stack

| Category | Recommendation | Notes |
| :--- | :--- | :--- |
| **Language** | **Go** or **Rust** | Strongly recommended for performance, concurrency, and system-level programming. |
| **Networking** | `go-libp2p` / `rust-libp2p` | The official and most mature implementations. |
| **Logging** | `hypercore-go` / `hypercore-rs` | Libraries for implementing the Hypercore Protocol. |
| **GitHub API** | `go-github` / `octokit.rs` | Official and community-maintained clients for interacting with GitHub. |

---

## 7. Development Milestones

This 8-week plan incorporates the refined architecture.

| Week | Deliverables | Key Features |
| :--- | :--- | :--- |
| **1** | **P2P Foundation & Logging** | Set up libp2p peer discovery and establish a **Hypercore log stream** for each agent. |
| **2** | **Capability Broadcasting** | Implement `capability_detector` and broadcast agent status via pub/sub. |
| **3** | **GitHub Task Claiming** | Ingest issues from GitHub and implement the **assignment-based task claiming** logic. |
| **4** | **Core Task Execution** | Integrate local CLIs (Ollama, etc.) to perform basic tasks based on issue content. |
| **5** | **GitHub Result Workflow** | Implement logic to create Pull Requests or follow-up issues upon task completion. |
| **6** | **Collaborative Help** | Implement the `task_help_request` and `task_help_response` flow with the **hop limit**. |
| **7** | **Monitoring & Visualization** | Build the first version of the **Mesh Visualizer** dashboard to display agent status and logs. |
| **8** | **Deployment & Testing** | Package the agent as a Docker container with host networking, write a Docker Swarm deployment guide, and conduct end-to-end testing across cluster nodes. |

---

## 8. Potential Risks & Mitigation

- **Network Partitions ("Split-Brain"):**
  - **Risk:** A network partition could lead to two separate meshes trying to work on the same task.
  - **Mitigation:** Using GitHub's issue assignment as the atomic lock effectively solves this. The first agent to successfully claim the issue wins, regardless of network state.
- **Dependency on GitHub:**
  - **Risk:** The system's ability to acquire new tasks is dependent on the availability of the GitHub API.
  - **Mitigation:** This is an accepted trade-off for gaining a robust, native task management platform. Agents can be designed to continue working on already-claimed tasks during a GitHub outage.
- **Debugging Complexity:**
  - **Risk:** Debugging distributed systems remains challenging.
  - **Mitigation:** The Hypercore-based logging system provides a powerful and verifiable audit trail, which is a significant step towards mitigating this complexity. The Mesh Visualizer will also be a critical tool.
- **Docker Host Networking Security:**
  - **Risk:** Host networking mode exposes containers directly to the host network, reducing isolation.
  - **Mitigation:**
    - Implement strict firewall rules on each node
    - Use libp2p's built-in encryption for all P2P communication
    - Run containers with restricted user privileges (non-root)
    - Conduct regular security audits of exposed ports and services

---

## 9. Migration Strategy from Hive

### 9.1. Gradual Transition Plan

1. **Phase 1: Parallel Deployment** (Weeks 1-2)
   - Deploy Bzzz agents alongside existing Hive infrastructure
   - Use different port ranges to avoid conflicts
   - Monitor resource usage and network performance

2. **Phase 2: Simple Task Migration** (Weeks 3-4)
   - Route basic code generation tasks through GitHub issues → Bzzz
   - Keep complex multi-agent workflows in existing Hive + n8n
   - Compare performance metrics between the systems

3. **Phase 3: Workflow Integration** (Weeks 5-6)
   - Modify n8n workflows to create GitHub issues as a final step
   - Implement Bzzz → Hive result reporting for hybrid workflows
   - Test the end-to-end task lifecycle

4. **Phase 4: Full Migration** (Weeks 7-8)
   - Migrate the majority of workloads to the Bzzz mesh
   - Retain Hive for monitoring and dashboard functionality
   - Plan the eventual deprecation of the centralized coordinator

### 9.2. Compatibility Layer

- **API Bridge**: Maintain existing Hive API endpoints that proxy to the Bzzz mesh
- **Data Migration**: Export task history and agent configurations from PostgreSQL
- **Monitoring Continuity**: Integrate Bzzz metrics into existing Grafana dashboards

archived/2025-07-17/PROGRESS_REPORT.md (new file, 138 lines)

# Bzzz P2P Coordination System - Progress Report

## Overview

This report documents the implementation and testing progress of the Bzzz P2P mesh coordination system with meta-thinking capabilities (the Antennae framework).

## Major Accomplishments

### 1. High-Priority Feature Implementation ✅

- **Fixed stub function implementations** in `github/integration.go`
  - Implemented proper task filtering based on agent capabilities
  - Added task announcement logic for P2P coordination
  - Enhanced capability-based task matching with keyword analysis

- **Completed Hive API client integration**
  - Extended the PostgreSQL database schema for bzzz integration
  - Updated ProjectService to use the database instead of filesystem scanning
  - Implemented secure Docker secrets for GitHub token access

- **Removed hardcoded repository configuration**
  - Dynamic repository discovery via the Hive API
  - Database-driven project management

### 2. Security Enhancements ✅

- **Docker Secrets Implementation**
  - Replaced filesystem-based GitHub token access with Docker secrets
  - Updated docker-compose.swarm.yml with the proper secrets configuration
  - Enhanced security posture for credential management

### 3. Database Integration ✅

- **Extended Hive Database Schema**
  - Added bzzz-specific fields to the projects table
  - Inserted the Hive repository as a test project with 9 bzzz-task labeled issues
  - Successful GitHub API integration showing real issue discovery

### 4. Independent Testing Infrastructure ✅

- **Mock Hive API Server** (`mock-hive-server.py`)
  - Provides fake projects and tasks for real bzzz coordination
  - Comprehensive task simulation with realistic coordination scenarios
  - Background task generation for dynamic testing
  - Enhanced with work capture endpoints:
    - `/api/bzzz/projects/<id>/submit-work` - Capture actual agent work/code
    - `/api/bzzz/projects/<id>/create-pr` - Capture pull request content
    - `/api/bzzz/projects/<id>/coordination-discussion` - Log coordination discussions
    - `/api/bzzz/projects/<id>/log-prompt` - Log agent prompts and model usage

- **Real-Time Monitoring Dashboard** (`cmd/bzzz-monitor.py`)
  - btop/nvtop-style console interface for coordination monitoring
  - Real coordination channel metrics and message rate tracking
  - Compact timestamp display and efficient space utilization
  - Live agent activity and P2P network status monitoring

### 5. P2P Network Verification ✅

- **Confirmed Multi-Node Operation**
  - WALNUT, ACACIA, and IRONWOOD nodes running as systemd services
  - 2 connected peers with regular availability broadcasts
  - P2P mesh discovery and communication functioning correctly

### 6. Cross-Repository Coordination Framework ✅

- **Antennae Meta-Discussion System**
  - Advanced cross-repository coordination capabilities
  - Dependency detection and conflict resolution
  - AI-powered coordination plan generation
  - Consensus detection algorithms

## Current System Status

### Working Components

1. ✅ P2P mesh networking (libp2p + mDNS)
2. ✅ Agent availability broadcasting
3. ✅ Database-driven repository discovery
4. ✅ Secure credential management
5. ✅ Real-time monitoring infrastructure
6. ✅ Mock API testing framework
7. ✅ Work capture endpoints (ready for use)

### Identified Issues

1. ❌ **GitHub Repository Verification Failures**
   - Mock repositories (e.g., `mock-org/hive`) return 404 errors
   - This prevents agents from proceeding with task discovery
   - Needs a local Git hosting solution

2. ❌ **Task Claim Logic Incomplete**
   - Agents broadcast availability but don't actively claim tasks
   - Missing integration between P2P discovery and task claiming
   - Needs an enhanced task claim workflow in the bzzz binary

3. ❌ **Docker Overlay Network Issues**
   - Some connectivity issues between services
   - May impact agent coordination in containerized environments

## File Locations and Key Components

### Core Implementation Files

- `/home/tony/AI/projects/Bzzz/github/integration.go` - Enhanced task filtering and P2P coordination
- `/home/tony/AI/projects/hive/backend/app/services/project_service.py` - Database-driven project service
- `/home/tony/AI/projects/hive/docker-compose.swarm.yml` - Docker secrets configuration

### Testing and Monitoring

- `/home/tony/AI/projects/Bzzz/mock-hive-server.py` - Mock API with work capture
- `/home/tony/AI/projects/Bzzz/cmd/bzzz-monitor.py` - Real-time coordination dashboard
- `/home/tony/AI/projects/Bzzz/scripts/trigger_mock_coordination.sh` - Coordination test script

### Configuration

- `/etc/systemd/system/bzzz.service.d/mock-api.conf` - Systemd override for mock API testing
- `/tmp/bzzz_agent_work/` - Directory for captured agent work (when functioning)
- `/tmp/bzzz_pull_requests/` - Directory for captured pull requests
- `/tmp/bzzz_agent_prompts/` - Directory for captured agent prompts and model usage

## Technical Achievements

### Database Schema Extensions

```sql
-- Extended projects table with bzzz integration fields
ALTER TABLE projects ADD COLUMN bzzz_enabled BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN ready_to_claim BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN private_repo BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN github_token_required BOOLEAN DEFAULT false;
```
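
For illustration (the real ProjectService is Python, but the query is the same), a Go sketch of the repository-discovery query against the extended schema; the `id` and `name` columns and the connection string are assumptions:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // PostgreSQL driver, registered for database/sql
)

func main() {
	// The DSN is a placeholder; real credentials come from Docker secrets.
	db, err := sql.Open("postgres", "postgres://hive:hive@localhost/hive?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Select projects that bzzz agents may discover and claim, using the
	// columns added by the migration above.
	rows, err := db.Query(`SELECT id, name FROM projects
	                       WHERE bzzz_enabled = true AND ready_to_claim = true`)
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			panic(err)
		}
		fmt.Printf("claimable project %d: %s\n", id, name)
	}
}
```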

### Docker Secrets Integration

```yaml
secrets:
  - github_token
environment:
  - GITHUB_TOKEN_FILE=/run/secrets/github_token
```

### P2P Network Statistics

- **Active Nodes**: 3 (WALNUT, ACACIA, IRONWOOD)
- **Connected Peers**: 2 per node
- **Network Protocol**: libp2p with mDNS discovery
- **Message Broadcasting**: Availability, capability, coordination

## Next Steps Required

See PROJECT_TODOS.md for the comprehensive task list.

## Summary

The Bzzz P2P coordination system has a solid foundation with working P2P networking, database integration, secure credential management, and comprehensive testing infrastructure. The main blockers are the need for a local Git hosting solution and completion of the task claim logic in the bzzz binary.

archived/2025-07-17/PROJECT_PLAN.md (new file, 224 lines)

# 🐝 Project: Bzzz — P2P Task Coordination System

## 🔧 Architecture Overview (libp2p + pubsub + JSON)

This system will complement and partially replace elements of the Hive Software System. It is intended to replace the multitude of MCP and API calls to the ollama and gemini-cli agents (over port 11434 and the like). By replacing the master/slave paradigm with a mesh network, we allow each node to trigger workflows or respond to calls for work as availability dictates, rather than being stuck in endless timeouts awaiting responses. We also eliminate the central coordinator as a single point of failure.

### 📂 Components

#### 1. **Peer Node**

Each machine runs a P2P agent that:

- Connects to other peers via libp2p
- Subscribes to pubsub topics
- Periodically broadcasts status/capabilities
- Receives and executes tasks
- Publishes task results as GitHub pull requests or issues
- Can request assistance from other peers
- Monitors a GitHub repository for new issues (the task source)

Each node uses a dedicated GitHub account with:

- A personal access token (fine-scoped to repo/PRs)
- A configured `.gitconfig` for commit identity

#### 2. **libp2p Network**

- All peers discover each other using mDNS, bootstrap peers, or DHT
- Peer identity is cryptographic (libp2p peer ID)
- Communication is encrypted end-to-end

#### 3. **GitHub Integration**

- Tasks are sourced from GitHub Issues in a designated repository
- Nodes will claim and respond to tasks by:
  - Forking the repository (once)
  - Creating a working branch
  - Making changes to files as instructed by the task input
  - Committing changes using their GitHub identity
  - Creating a pull request or additional GitHub issues
  - Publishing the final result as a PR, issue(s), or failure report

#### 4. **PubSub Topics**

A minimal pub/sub sketch in Go follows the table.

| Topic | Direction | Purpose |
|----------------------|------------------|---------------------------------------------|
| `capabilities` | Peer → All Peers | Broadcast available models, status |
| `task_broadcast` | Peer → All Peers | Publish a GitHub issue as a task |
| `task_claim` | Peer → All Peers | Claim responsibility for a task |
| `task_result` | Peer → All Peers | Share PR, issue, or failure result |
| `presence_ping` | Peer → All Peers | Lightweight presence signal |
| `task_help_request` | Peer → All Peers | Request assistance for a task |
| `task_help_response` | Peer → All Peers | Offer help or handle a sub-task |
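
A minimal go-libp2p sketch of wiring up one of these topics, publishing a `capabilities` message, and reading it back (error handling trimmed to panics for brevity; the message body is illustrative):

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Start a libp2p host with default transports and security.
	host, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer host.Close()

	// GossipSub carries all of the topics in the table above.
	ps, err := pubsub.NewGossipSub(ctx, host)
	if err != nil {
		panic(err)
	}

	capTopic, err := ps.Join("capabilities")
	if err != nil {
		panic(err)
	}
	sub, err := capTopic.Subscribe()
	if err != nil {
		panic(err)
	}

	// Broadcast a capabilities message (schema as in the samples below).
	msg := []byte(`{"type":"capabilities","node_id":"pi-node-1","status":"idle"}`)
	if err := capTopic.Publish(ctx, msg); err != nil {
		panic(err)
	}

	// Receive our own (or a peer's) broadcast.
	received, err := sub.Next(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %s from %s\n", received.Data, received.ReceivedFrom)
}
```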

### 📊 Data Flow Diagram

```
+------------------+       libp2p        +------------------+
|     Peer A       |<------------------->|     Peer B       |
|                  |<------------------->|                  |
| - Publishes:     |                     | - Publishes:     |
|   capabilities   |                     |   task_result    |
|   task_broadcast |                     |   capabilities   |
|   help_request   |                     |   help_response  |
| - Subscribes to: |                     | - Subscribes to: |
|   task_result    |                     |   task_broadcast |
|   help_request   |                     |   help_request   |
+------------------+                     +------------------+
         ^                                        ^
         |                                        |
         +--------------------+-------------------+
                              |
                              v
                    +------------------+
                    |     Peer C       |
                    +------------------+
```

### 📂 Sample JSON Messages

#### `capabilities`

```json
{
  "type": "capabilities",
  "node_id": "pi-node-1",
  "cpu": 43.5,
  "gpu": 2.3,
  "models": ["llama3", "mistral"],
  "installed": ["ollama", "gemini-cli"],
  "status": "idle",
  "timestamp": "2025-07-12T01:23:45Z"
}
```
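
For reference, a matching Go type (field names inferred from the sample above; the `encoding/json` tags follow the wire format):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Capabilities mirrors the `capabilities` pub/sub message above.
type Capabilities struct {
	Type      string    `json:"type"`
	NodeID    string    `json:"node_id"`
	CPU       float64   `json:"cpu"`
	GPU       float64   `json:"gpu"`
	Models    []string  `json:"models"`
	Installed []string  `json:"installed"`
	Status    string    `json:"status"`
	Timestamp time.Time `json:"timestamp"`
}

func main() {
	raw := `{"type":"capabilities","node_id":"pi-node-1","cpu":43.5,"gpu":2.3,
	"models":["llama3","mistral"],"installed":["ollama","gemini-cli"],
	"status":"idle","timestamp":"2025-07-12T01:23:45Z"}`
	var c Capabilities
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s with models %v\n", c.NodeID, c.Status, c.Models)
}
```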

#### `task_broadcast`

```json
{
  "type": "task",
  "task_id": "#42",
  "repo": "example-org/task-repo",
  "issue_url": "https://github.com/example-org/task-repo/issues/42",
  "model": "ollama",
  "input": "Add unit tests to utils module",
  "params": {"branch_prefix": "task-42-"},
  "timestamp": "2025-07-12T02:00:00Z"
}
```

#### `task_claim`

```json
{
  "type": "task_claim",
  "task_id": "#42",
  "node_id": "pi-node-2",
  "timestamp": "2025-07-12T02:00:03Z"
}
```

#### `task_result`

```json
{
  "type": "task_result",
  "task_id": "#42",
  "node_id": "pi-node-2",
  "result_type": "pull_request",
  "result_url": "https://github.com/example-org/task-repo/pull/97",
  "duration_ms": 15830,
  "timestamp": "2025-07-12T02:10:05Z"
}
```

#### `task_help_request`

```json
{
  "type": "task_help_request",
  "task_id": "#42",
  "from_node": "pi-node-2",
  "reason": "Long-running task or missing capability",
  "requested_capability": "claude-cli",
  "timestamp": "2025-07-12T02:05:00Z"
}
```

#### `task_help_response`

```json
{
  "type": "task_help_response",
  "task_id": "#42",
  "from_node": "pi-node-3",
  "can_help": true,
  "capabilities": ["claude-cli"],
  "eta_seconds": 30,
  "timestamp": "2025-07-12T02:05:02Z"
}
```

---

## 🚀 Development Brief

### 🧱 Tech Stack

- **Language**: Node.js (or Go/Rust)
- **Networking**: libp2p
- **Messaging**: pubsub with JSON
- **Task Execution**: Local CLIs (ollama, gemini, claude)
- **System Monitoring**: `os-utils`, `psutil`, `nvidia-smi`
- **Runtime**: systemd services on Linux
- **GitHub Interaction**: `octokit` (Node), Git CLI

### 🛠 Key Modules

#### 1. `peer_agent.js`

- Initializes the libp2p node
- Joins pubsub topics
- Periodically publishes capabilities
- Listens for tasks, runs them, and reports PRs/results
- Handles help requests and responses

#### 2. `capability_detector.js`

- Detects:
  - CPU/GPU load
  - Installed models (via `ollama list`)
  - Installed CLIs (`which gemini`, `which claude`)

#### 3. `task_executor.js`

- Parses GitHub issue input
- Forks the repo (if needed)
- Creates a working branch, applies changes
- Commits changes using the local Git identity
- Pushes the branch and creates a pull request or follow-up issues

#### 4. `github_bot.js`

- Authenticates the GitHub API client
- Watches for new issues in the repo
- Publishes them as `task_broadcast`
- Handles PR/issue creation and error handling

#### 5. `state_manager.js`

- Keeps an internal view of network state
- Tracks peers' capabilities and liveness
- Matches help requests to eligible peers

### 📆 Milestones

| Week | Deliverables |
| ---- | ------------------------------------------------------------ |
| 1 | libp2p peer bootstrapping + pubsub skeleton |
| 2 | JSON messaging spec + capability broadcasting |
| 3 | GitHub issue ingestion + task broadcast |
| 4 | CLI integration with Ollama/Gemini/Claude |
| 5 | GitHub PR/issue/failure workflows |
| 6 | Help request/response logic, delegation framework |
| 7 | systemd setup, CLI utilities, and resilience |
| 8 | End-to-end testing, GitHub org coordination, deployment guide |

---

archived/2025-07-17/README_MONITORING.md (new file, 165 lines)

# Bzzz Antennae Monitoring Dashboard

A real-time console monitoring dashboard for the Bzzz P2P coordination system, similar to btop/nvtop for system monitoring.

## Features

🔍 **Real-time P2P Status**
- Connected peer count with a history graph
- Node ID and network status
- Hive API connectivity status

🤖 **Agent Activity Monitoring**
- Live agent availability updates
- Agent status distribution (ready/working/busy)
- Recent activity tracking

🎯 **Coordination Activity**
- Task announcements and completions
- Coordination session tracking
- Message flow statistics

📊 **Visual Elements**
- ASCII graphs for historical data
- Color-coded status indicators
- Live activity log with timestamps

## Usage

### Basic Usage

```bash
# Run with the default 1-second refresh rate
python3 cmd/bzzz-monitor.py

# Custom refresh rate (2 seconds)
python3 cmd/bzzz-monitor.py --refresh-rate 2.0

# Disable colors for logging/screenshots
python3 cmd/bzzz-monitor.py --no-color
```

### Installation as a System Command

```bash
# Copy to the system bin
sudo cp cmd/bzzz-monitor.py /usr/local/bin/bzzz-monitor
sudo chmod +x /usr/local/bin/bzzz-monitor

# Now run from anywhere
bzzz-monitor
```

## Dashboard Layout

```
┌─ Bzzz P2P Coordination Monitor ─┐
│ Uptime: 0:02:15 │ Node: 12*SEE3To... │
└───────────────────────────────────┘

P2P Network Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Connected Peers: 2
Hive API Status: Offline (Overlay Network Issues)

Peer History (last 20 samples):
███▇▆▆▇████▇▆▇███▇▆▇ (1-3 peers)

Agent Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recent Updates (1m): 8
Ready: 6
Working: 2

Coordination Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Messages: 45
Total Tasks: 12
Active Sessions: 1
Recent Tasks (5m): 8

Recent Activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11:10:35 [AVAIL] Agent acacia-node... status: ready
11:10:33 [TASK]  Task announcement: hive#15 - WebSocket support
11:10:30 [COORD] Meta-coordination session started
11:10:28 [AVAIL] Agent ironwood-node... status: working
11:10:25 [ERROR] Failed to get active repositories: API 404

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Press Ctrl+C to exit | Refresh rate: 1.0s
```

## Monitoring Data Sources

The dashboard pulls data from:

1. **Systemd Service Logs**: `journalctl -u bzzz.service`
2. **P2P Network Status**: Extracted from bzzz log messages
3. **Agent Availability**: Parsed from availability_broadcast messages
4. **Task Activity**: Detected from task/repository-related log entries
5. **Error Tracking**: Monitors for failures and connection issues

## Color Coding

- 🟢 **Green**: Good status, active connections, ready agents
- 🟡 **Yellow**: Working status, moderate activity
- 🔴 **Red**: Errors, failed connections, busy agents
- 🔵 **Blue**: Information, neutral data
- 🟣 **Magenta**: Coordination-specific activity
- 🔷 **Cyan**: Network and P2P data

## Real-time Updates

The dashboard updates every 1-2 seconds by default and tracks:

- **P2P Connections**: Shows immediate peer join/leave events
- **Agent Status**: Real-time availability broadcasts from all nodes
- **Task Flow**: Live task announcements and coordination activity
- **System Health**: Continuous monitoring of service status and errors

## Performance

- **Low Resource Usage**: Python-based with minimal CPU/memory impact
- **Efficient Parsing**: Only processes recent logs (the last 30-50 lines)
- **Responsive UI**: Fast refresh rates without overwhelming the terminal
- **Historical Data**: Maintains rolling buffers for trend analysis

## Troubleshooting

### No Data Appearing

```bash
# Check if the bzzz service is running
systemctl status bzzz.service

# Verify log access permissions
journalctl -u bzzz.service --since "1 minute ago"
```

### High CPU Usage

```bash
# Reduce the refresh rate
bzzz-monitor --refresh-rate 5.0
```

### Color Issues

```bash
# Disable colors
bzzz-monitor --no-color

# Check terminal color support
echo $TERM
```

## Integration

The monitor works alongside:

- **Live Bzzz System**: Monitors the real P2P mesh (WALNUT/ACACIA/IRONWOOD)
- **Test Suite**: Can monitor test coordination scenarios
- **Development**: Perfect for debugging antennae coordination logic

## Future Enhancements

- 📈 Export metrics to CSV/JSON
- 🔔 Alert system for critical events
- 📊 Web-based dashboard version
- 🎯 Coordination session drill-down
- 📱 Mobile-friendly output

archived/2025-07-17/TASK_BACKLOG.md (new file, 112 lines)

# Bzzz + Antennae Development Task Backlog

Based on UNIFIED_DEVELOPMENT_PLAN.md, here are the development tasks ready for distribution to the Hive cluster:

## Week 1-2: Foundation Tasks

### Task 1: P2P Networking Foundation 🔧

**Assigned to**: WALNUT (Advanced Coding - starcoder2:15b)
**Priority**: 5 (Critical)
**Objective**: Design and implement the core P2P networking foundation for Project Bzzz using libp2p in Go

**Requirements**:

- Use the go-libp2p library for mesh networking
- Implement mDNS peer discovery for the local network (192.168.1.0/24)
- Create secure encrypted P2P connections with peer identity
- Design pub/sub topics for both task coordination (Bzzz) and meta-discussion (Antennae)
- Prepare for Docker + host networking deployment
- Create a modular Go code structure in `/home/tony/AI/projects/Bzzz/`

**Deliverables** (an mDNS discovery sketch follows this list):

- `main.go` - Entry point and peer initialization
- `p2p/` - P2P networking module with libp2p integration
- `discovery/` - mDNS peer discovery implementation
- `pubsub/` - Pub/sub messaging for capability broadcasting
- `go.mod` - Go module definition with dependencies
- `Dockerfile` - Container with host networking support
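
As a starting point for the `discovery/` module, a minimal mDNS sketch with go-libp2p (the "bzzz" service tag is an assumption; all agents must share it):

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/p2p/discovery/mdns"
)

// notifee receives callbacks when mDNS finds a peer on the LAN.
type notifee struct{ h host.Host }

func (n *notifee) HandlePeerFound(info peer.AddrInfo) {
	fmt.Println("discovered peer:", info.ID)
	// Dial the peer so pub/sub can use the connection.
	if err := n.h.Connect(context.Background(), info); err != nil {
		fmt.Println("connect failed:", err)
	}
}

func main() {
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Advertise and browse the shared service tag on the local network.
	svc := mdns.NewMdnsService(h, "bzzz", &notifee{h: h})
	if err := svc.Start(); err != nil {
		panic(err)
	}
	fmt.Println("node", h.ID(), "discovering peers on the local network")
	select {} // keep running
}
```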

### Task 2: Distributed Logging System 📊

**Assigned to**: IRONWOOD (Reasoning Analysis - phi4:14b)
**Priority**: 4 (High)
**Dependencies**: Task 1 (P2P Foundation)
**Objective**: Architect and implement a Hypercore-based distributed logging system

**Requirements**:

- Design append-only log streams using the Hypercore Protocol
- Implement public key broadcasting for log identity
- Create log replication capabilities between peers
- Store both execution logs (Bzzz) and discussion transcripts (Antennae)
- Ensure tamper-proof audit trails for debugging
- Integrate with the P2P capability detection module

**Deliverables**:

- `logging/` - Hypercore-based logging module
- `replication/` - Log replication and synchronization
- `audit/` - Tamper-proof audit trail verification
- Documentation on the log schema and replication protocol

### Task 3: GitHub Integration Module 📋

**Assigned to**: ACACIA (Code Review/Docs - codellama)
**Priority**: 4 (High)
**Dependencies**: Task 1 (P2P Foundation)
**Objective**: Implement GitHub integration for atomic task claiming and collaborative workflows

**Requirements**:

- Create an atomic issue assignment mechanism (GitHub's native assignment)
- Implement repository forking, branch creation, and commit workflows
- Generate pull requests with discussion transcript links
- Handle task result posting and failure reporting
- Use the GitHub API for all interactions
- Include comprehensive error handling and retry logic

**Deliverables**:

- `github/` - GitHub API integration module
- `workflows/` - Repository and branch management
- `tasks/` - Task claiming and result posting
- Integration tests with the GitHub API
- Documentation on the GitHub workflow process

## Week 3-4: Integration Tasks

### Task 4: Meta-Discussion Implementation 💬

**Assigned to**: IRONWOOD (Reasoning Analysis)
**Priority**: 3 (Medium)
**Dependencies**: Task 1, Task 2
**Objective**: Implement the Antennae meta-discussion layer for collaborative reasoning

**Requirements**:

- Create structured messaging for agent collaboration
- Implement "propose plan" and "objection period" logic
- Add hop limits (3 hops) and participant caps for safety
- Design escalation paths to human intervention
- Integrate with Hypercore logging for discussion transcripts

### Task 5: End-to-End Integration 🔄

**Assigned to**: WALNUT (Advanced Coding)
**Priority**: 2 (Normal)
**Dependencies**: All previous tasks
**Objective**: Integrate all components and create a working Bzzz+Antennae system

**Requirements**:

- Combine P2P networking, logging, and GitHub integration
- Implement the full task lifecycle with meta-discussion
- Create a Docker Swarm deployment configuration
- Add monitoring and health checks
- Conduct comprehensive testing across cluster nodes

## Current Status

✅ **Hive Cluster Ready**: 3 agents registered with proper specializations

- walnut: starcoder2:15b (kernel_dev)
- ironwood: phi4:14b (reasoning)
- acacia: codellama (docs_writer)

✅ **Authentication Working**: Dev user and API access configured

⚠️ **Task Submission**: Need to resolve API endpoint issues for automated task distribution

**Next Steps**:

1. Fix the task creation API endpoint issues
2. Submit tasks to the respective agents based on specializations
3. Monitor execution and coordinate between agents
4. Test the collaborative reasoning (Antennae) layer once the P2P foundation is complete

archived/2025-07-17/demo_advanced_meta_discussion.py (new file, 254 lines)

#!/usr/bin/env python3
"""
Advanced Meta Discussion Demo for Bzzz P2P Mesh
Shows cross-repository coordination and dependency detection
"""

import json
import time
from datetime import datetime


def demo_cross_repository_coordination():
    """Demonstrate advanced meta discussion features"""

    print("🎯 ADVANCED BZZZ META DISCUSSION DEMO")
    print("=" * 60)
    print("Scenario: Multi-repository microservices coordination")
    print()

    # Simulate multiple repositories in the system
    repositories = {
        "api-gateway": {
            "agent": "walnut-12345",
            "capabilities": ["code-generation", "api-design", "security"],
            "current_task": {
                "id": 42,
                "title": "Implement OAuth2 authentication flow",
                "description": "Add OAuth2 support to API gateway with JWT tokens",
                "labels": ["security", "api", "authentication"]
            }
        },
        "user-service": {
            "agent": "acacia-67890",
            "capabilities": ["code-analysis", "database", "microservices"],
            "current_task": {
                "id": 87,
                "title": "Update user schema for OAuth integration",
                "description": "Add OAuth provider fields to user table",
                "labels": ["database", "schema", "authentication"]
            }
        },
        "notification-service": {
            "agent": "ironwood-54321",
            "capabilities": ["advanced-reasoning", "integration", "messaging"],
            "current_task": {
                "id": 156,
                "title": "Secure webhook endpoints with JWT",
                "description": "Validate JWT tokens on webhook endpoints",
                "labels": ["security", "webhook", "authentication"]
            }
        }
    }

    print("📋 ACTIVE TASKS ACROSS REPOSITORIES:")
    for repo, info in repositories.items():
        task = info["current_task"]
        print(f" 🔧 {repo}: #{task['id']} - {task['title']}")
        print(f"    Agent: {info['agent']} | Labels: {', '.join(task['labels'])}")
    print()

    # Demo 1: Dependency Detection
    print("🔍 PHASE 1: DEPENDENCY DETECTION")
    print("-" * 40)

    dependencies = [
        {
            "task1": "api-gateway/#42",
            "task2": "user-service/#87",
            "relationship": "API_Contract",
            "reason": "OAuth implementation requires coordinated schema changes",
            "confidence": 0.9
        },
        {
            "task1": "api-gateway/#42",
            "task2": "notification-service/#156",
            "relationship": "Security_Compliance",
            "reason": "Both implement JWT token validation",
            "confidence": 0.85
        }
    ]

    for dep in dependencies:
        print("🔗 DEPENDENCY DETECTED:")
        print(f"   {dep['task1']} ↔ {dep['task2']}")
        print(f"   Type: {dep['relationship']} (confidence: {dep['confidence']})")
        print(f"   Reason: {dep['reason']}")
        print()

    # Demo 2: Coordination Session Creation
    print("🎯 PHASE 2: COORDINATION SESSION INITIATED")
    print("-" * 40)

    session_id = f"coord_oauth_{int(time.time())}"
    print(f"📝 Session ID: {session_id}")
    print(f"📅 Created: {datetime.now().strftime('%H:%M:%S')}")
    print("👥 Participants: walnut-12345, acacia-67890, ironwood-54321")
    print()

    # Demo 3: AI-Generated Coordination Plan
    print("🤖 PHASE 3: AI-GENERATED COORDINATION PLAN")
    print("-" * 40)

    coordination_plan = """
COORDINATION PLAN: OAuth2 Implementation Across Services

1. EXECUTION ORDER:
   - Phase 1: user-service (schema changes)
   - Phase 2: api-gateway (OAuth implementation)
   - Phase 3: notification-service (JWT validation)

2. SHARED ARTIFACTS:
   - JWT token format specification
   - OAuth2 endpoint documentation
   - Database schema migration scripts
   - Shared security configuration

3. COORDINATION REQUIREMENTS:
   - walnut-12345: Define JWT token structure before implementation
   - acacia-67890: Migrate user schema first, share field mappings
   - ironwood-54321: Wait for JWT format, implement validation

4. POTENTIAL CONFLICTS:
   - JWT payload structure disagreements
   - Token expiration time mismatches
   - Security scope definition conflicts

5. SUCCESS CRITERIA:
   - All services use consistent JWT format
   - OAuth flow works end-to-end
   - Security audit passes on all endpoints
   - Integration tests pass across all services
"""

    print(coordination_plan)

    # Demo 4: Agent Coordination Messages
    print("💬 PHASE 4: AGENT COORDINATION MESSAGES")
    print("-" * 40)

    messages = [
        {
            "timestamp": "14:32:01",
            "from": "walnut-12345 (api-gateway)",
            "type": "proposal",
            "content": "I propose using RS256 JWT tokens with 15min expiry. Standard claims: sub, iat, exp, scope."
        },
        {
            "timestamp": "14:32:45",
            "from": "acacia-67890 (user-service)",
            "type": "question",
            "content": "Should we store the OAuth provider info in the user table or separate table? Also need refresh token strategy."
        },
        {
            "timestamp": "14:33:20",
            "from": "ironwood-54321 (notification-service)",
            "type": "agreement",
            "content": "RS256 sounds good. For webhooks, I'll validate signature and check 'webhook' scope. Need the public key endpoint."
        },
        {
            "timestamp": "14:34:10",
            "from": "walnut-12345 (api-gateway)",
            "type": "response",
            "content": "Separate oauth_providers table is better for multiple providers. Public key at /.well-known/jwks.json"
        },
        {
            "timestamp": "14:34:55",
            "from": "acacia-67890 (user-service)",
            "type": "agreement",
            "content": "Agreed on separate table. I'll create migration script and share the schema. ETA: 2 hours."
        }
    ]

    for msg in messages:
        print(f"[{msg['timestamp']}] {msg['from']} ({msg['type']}):")
        print(f"   {msg['content']}")
        print()

    # Demo 5: Automatic Resolution Detection
    print("✅ PHASE 5: COORDINATION RESOLUTION")
    print("-" * 40)

    print("🔍 ANALYSIS: Consensus detected")
    print("   - All agents agreed on JWT format (RS256)")
    print("   - Database strategy decided (separate oauth_providers table)")
    print("   - Public key endpoint established (/.well-known/jwks.json)")
    print("   - Implementation order confirmed")
    print()
    print("📋 COORDINATION COMPLETE:")
    print("   - Session status: RESOLVED")
    print("   - Resolution: Consensus reached on OAuth implementation")
    print("   - Next steps: acacia-67890 starts schema migration")
    print("   - Dependencies: walnut-12345 waits for schema completion")
    print()

    # Demo 6: Alternative - Escalation Scenario
    print("🚨 ALTERNATIVE: ESCALATION SCENARIO")
    print("-" * 40)

    escalation_scenario = """
ESCALATION TRIGGERED: Security Implementation Conflict

Reason: Agents cannot agree on JWT token expiration time
- walnut-12345 wants 15 minutes (high security)
- acacia-67890 wants 4 hours (user experience)
- ironwood-54321 wants 1 hour (compromise)

Messages exceeded threshold: 12 messages without consensus
Human expert summoned via N8N webhook to deepblack.cloud

Escalation webhook payload:
{
  "session_id": "coord_oauth_1752401234",
  "conflict_type": "security_policy_disagreement",
  "agents_involved": ["walnut-12345", "acacia-67890", "ironwood-54321"],
  "repositories": ["api-gateway", "user-service", "notification-service"],
  "issue_summary": "JWT expiration time conflict preventing OAuth implementation",
  "requires_human_decision": true,
  "urgency": "medium"
}
"""

    print(escalation_scenario)

    # Demo 7: System Capabilities Summary
    print("🎯 ADVANCED META DISCUSSION CAPABILITIES")
    print("-" * 40)

    capabilities = [
        "✅ Cross-repository dependency detection",
        "✅ Intelligent task relationship analysis",
        "✅ AI-generated coordination plans",
        "✅ Multi-agent conversation management",
        "✅ Consensus detection and resolution",
        "✅ Automatic escalation to humans",
        "✅ Session lifecycle management",
        "✅ Hop-limited message propagation",
        "✅ Custom dependency rules",
        "✅ Project-aware coordination"
    ]

    for cap in capabilities:
        print(f"   {cap}")

    print()
    print("🚀 PRODUCTION READY:")
    print("   - P2P mesh infrastructure: ✅ Deployed")
    print("   - Antennae meta-discussion: ✅ Active")
    print("   - Dependency detection: ✅ Implemented")
    print("   - Coordination sessions: ✅ Functional")
    print("   - Human escalation: ✅ N8N integrated")
    print()
    print("🎯 Ready for real cross-repository coordination!")


if __name__ == "__main__":
    demo_cross_repository_coordination()

archived/2025-07-17/mock-hive-server.py (new executable file, 702 lines)

#!/usr/bin/env python3
"""
Mock Hive API Server for Bzzz Testing

This simulates what the real Hive API would provide to bzzz agents:
- Active repositories with bzzz-enabled tasks
- Fake GitHub issues with bzzz-task labels
- Task dependencies and coordination scenarios

The real bzzz agents will consume this fake data and do actual coordination.
"""

import json
import random
import time
from datetime import datetime, timedelta
from flask import Flask, jsonify, request
from threading import Thread

app = Flask(__name__)

# Mock data for repositories and tasks
MOCK_REPOSITORIES = [
    {
        "project_id": 1,
        "name": "hive-coordination-platform",
        "git_url": "https://github.com/mock/hive",
        "owner": "mock-org",
        "repository": "hive",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 2,
        "name": "bzzz-p2p-system",
        "git_url": "https://github.com/mock/bzzz",
        "owner": "mock-org",
        "repository": "bzzz",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 3,
        "name": "distributed-ai-development",
        "git_url": "https://github.com/mock/distributed-ai-dev",
        "owner": "mock-org",
        "repository": "distributed-ai-dev",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    },
    {
        "project_id": 4,
        "name": "infrastructure-automation",
        "git_url": "https://github.com/mock/infra-automation",
        "owner": "mock-org",
        "repository": "infra-automation",
        "branch": "main",
        "bzzz_enabled": True,
        "ready_to_claim": True,
        "private_repo": False,
        "github_token_required": False
    }
]

# Mock tasks with realistic coordination scenarios
MOCK_TASKS = {
    1: [  # hive tasks
        {
            "number": 15,
            "title": "Add WebSocket support for real-time coordination",
            "description": "Implement WebSocket endpoints for real-time agent coordination messages",
            "state": "open",
            "labels": ["bzzz-task", "feature", "realtime", "coordination"],
            "created_at": "2025-01-14T10:00:00Z",
            "updated_at": "2025-01-14T10:30:00Z",
            "html_url": "https://github.com/mock/hive/issues/15",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                }
            ]
        },
        {
            "number": 16,
            "title": "Implement agent authentication system",
            "description": "Add secure JWT-based authentication for bzzz agents accessing Hive APIs",
            "state": "open",
            "labels": ["bzzz-task", "security", "auth", "high-priority"],
            "created_at": "2025-01-14T09:30:00Z",
            "updated_at": "2025-01-14T10:45:00Z",
            "html_url": "https://github.com/mock/hive/issues/16",
            "is_claimed": False,
            "assignees": [],
            "task_type": "security",
            "dependencies": []
        },
        {
            "number": 17,
            "title": "Create coordination metrics dashboard",
            "description": "Build dashboard showing cross-repository coordination statistics",
            "state": "open",
            "labels": ["bzzz-task", "dashboard", "metrics", "ui"],
            "created_at": "2025-01-14T11:00:00Z",
            "updated_at": "2025-01-14T11:15:00Z",
            "html_url": "https://github.com/mock/hive/issues/17",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 24,
                    "dependency_type": "api_contract"
                }
            ]
        }
    ],
    2: [  # bzzz tasks
        {
            "number": 23,
            "title": "Define coordination API contract",
            "description": "Standardize API contract for cross-repository coordination messaging",
            "state": "open",
            "labels": ["bzzz-task", "api", "coordination", "blocker"],
            "created_at": "2025-01-14T09:00:00Z",
            "updated_at": "2025-01-14T10:00:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/23",
            "is_claimed": False,
            "assignees": [],
            "task_type": "api_design",
            "dependencies": []
        },
        {
            "number": 24,
            "title": "Implement dependency detection algorithm",
            "description": "Auto-detect task dependencies across repositories using graph analysis",
            "state": "open",
            "labels": ["bzzz-task", "algorithm", "coordination", "complex"],
            "created_at": "2025-01-14T10:15:00Z",
            "updated_at": "2025-01-14T10:30:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/24",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                }
            ]
        },
        {
            "number": 25,
            "title": "Add consensus algorithm for coordination",
            "description": "Implement distributed consensus for multi-agent task coordination",
            "state": "open",
            "labels": ["bzzz-task", "consensus", "distributed-systems", "hard"],
            "created_at": "2025-01-14T11:30:00Z",
            "updated_at": "2025-01-14T11:45:00Z",
            "html_url": "https://github.com/mock/bzzz/issues/25",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": []
        }
    ],
    3: [  # distributed-ai-dev tasks
        {
            "number": 8,
            "title": "Add support for bzzz coordination",
            "description": "Integrate with bzzz P2P coordination system for distributed AI development",
            "state": "open",
            "labels": ["bzzz-task", "integration", "p2p", "ai"],
            "created_at": "2025-01-14T10:45:00Z",
            "updated_at": "2025-01-14T11:00:00Z",
            "html_url": "https://github.com/mock/distributed-ai-dev/issues/8",
            "is_claimed": False,
            "assignees": [],
            "task_type": "integration",
            "dependencies": [
                {
                    "repository": "bzzz",
                    "task_number": 23,
                    "dependency_type": "api_contract"
                },
                {
                    "repository": "hive",
                    "task_number": 16,
                    "dependency_type": "security"
                }
            ]
        },
        {
            "number": 9,
            "title": "Implement AI model coordination",
            "description": "Enable coordination between AI models across different development environments",
            "state": "open",
            "labels": ["bzzz-task", "ai-coordination", "models", "complex"],
            "created_at": "2025-01-14T11:15:00Z",
            "updated_at": "2025-01-14T11:30:00Z",
            "html_url": "https://github.com/mock/distributed-ai-dev/issues/9",
            "is_claimed": False,
            "assignees": [],
            "task_type": "feature",
            "dependencies": [
                {
                    "repository": "distributed-ai-dev",
                    "task_number": 8,
                    "dependency_type": "integration"
                }
            ]
        }
    ],
    4: [  # infra-automation tasks
        {
            "number": 12,
            "title": "Automate bzzz deployment across cluster",
            "description": "Create automated deployment scripts for bzzz agents on all cluster nodes",
            "state": "open",
            "labels": ["bzzz-task", "deployment", "automation", "devops"],
            "created_at": "2025-01-14T12:00:00Z",
            "updated_at": "2025-01-14T12:15:00Z",
            "html_url": "https://github.com/mock/infra-automation/issues/12",
            "is_claimed": False,
            "assignees": [],
            "task_type": "infrastructure",
            "dependencies": [
                {
                    "repository": "hive",
                    "task_number": 16,
                    "dependency_type": "security"
                }
            ]
        }
    ]
}
|
||||
|
||||
# Track claimed tasks
claimed_tasks = {}


@app.route('/health', methods=['GET'])
def health():
    """Health check endpoint"""
    return jsonify({"status": "healthy", "service": "mock-hive-api", "timestamp": datetime.now().isoformat()})


@app.route('/api/bzzz/active-repos', methods=['GET'])
def get_active_repositories():
    """Return mock active repositories for bzzz consumption"""
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📡 Bzzz requested active repositories")

    # Randomly vary the number of available repos for more realistic testing
    available_repos = random.sample(MOCK_REPOSITORIES, k=random.randint(2, len(MOCK_REPOSITORIES)))

    return jsonify({"repositories": available_repos})


@app.route('/api/bzzz/projects/<int:project_id>/tasks', methods=['GET'])
def get_project_tasks(project_id):
    """Return mock bzzz-task labeled issues for a specific project"""
    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📋 Bzzz requested tasks for project {project_id}")

    if project_id not in MOCK_TASKS:
        return jsonify([])

    # Return tasks, updating claim status
    tasks = []
    for task in MOCK_TASKS[project_id]:
        task_copy = task.copy()
        claim_key = f"{project_id}-{task['number']}"

        # Check if the task is claimed; claims expire after 30 minutes unless refreshed
        if claim_key in claimed_tasks:
            claim_info = claimed_tasks[claim_key]
            if datetime.now() - claim_info['claimed_at'] < timedelta(minutes=30):
                task_copy['is_claimed'] = True
                task_copy['assignees'] = [claim_info['agent_id']]
            else:
                # Claim expired
                del claimed_tasks[claim_key]
                task_copy['is_claimed'] = False
                task_copy['assignees'] = []

        tasks.append(task_copy)

    return jsonify(tasks)


@app.route('/api/bzzz/projects/<int:project_id>/claim', methods=['POST'])
def claim_task(project_id):
    """Register task claim with mock Hive system"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎯 Agent {agent_id} claiming task {project_id}#{task_number}")

    if not task_number or not agent_id:
        return jsonify({"error": "task_number and agent_id are required"}), 400

    claim_key = f"{project_id}-{task_number}"

    # Check if already claimed
    if claim_key in claimed_tasks:
        existing_claim = claimed_tasks[claim_key]
        if datetime.now() - existing_claim['claimed_at'] < timedelta(minutes=30):
            return jsonify({
                "error": "Task already claimed",
                "claimed_by": existing_claim['agent_id'],
                "claimed_at": existing_claim['claimed_at'].isoformat()
            }), 409

    # Register the claim
    claim_id = f"{project_id}-{task_number}-{agent_id}-{int(time.time())}"
    claimed_tasks[claim_key] = {
        "agent_id": agent_id,
        "claimed_at": datetime.now(),
        "claim_id": claim_id
    }

    print(f"[{datetime.now().strftime('%H:%M:%S')}] ✅ Task {project_id}#{task_number} claimed by {agent_id}")

    return jsonify({"success": True, "claim_id": claim_id})


@app.route('/api/bzzz/projects/<int:project_id>/status', methods=['PUT'])
def update_task_status(project_id):
    """Update task status in mock Hive system"""
    data = request.get_json()
    task_number = data.get('task_number')
    status = data.get('status')
    metadata = data.get('metadata', {})

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📊 Task {project_id}#{task_number} status: {status}")

    if not task_number or not status:
        return jsonify({"error": "task_number and status are required"}), 400

    # Log status update
    if status == "completed":
        claim_key = f"{project_id}-{task_number}"
        if claim_key in claimed_tasks:
            agent_id = claimed_tasks[claim_key]['agent_id']
            print(f"[{datetime.now().strftime('%H:%M:%S')}] 🎉 Task {project_id}#{task_number} completed by {agent_id}")
            del claimed_tasks[claim_key]  # Remove claim
    elif status == "escalated":
        print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 Task {project_id}#{task_number} escalated: {metadata}")

    return jsonify({"success": True})


@app.route('/api/bzzz/coordination-log', methods=['POST'])
def log_coordination_activity():
    """Log coordination activity for monitoring"""
    data = request.get_json()
    activity_type = data.get('type', 'unknown')
    details = data.get('details', {})

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Coordination: {activity_type} - {details}")

    # Save coordination activity to file
    save_coordination_work(activity_type, details)

    return jsonify({"success": True, "logged": True})


@app.route('/api/bzzz/projects/<int:project_id>/submit-work', methods=['POST'])
def submit_work(project_id):
    """Endpoint for agents to submit their actual work/code/solutions"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    work_type = data.get('work_type', 'code')  # code, documentation, configuration, etc.
    content = data.get('content', '')
    files = data.get('files', {})  # Dictionary of filename -> content
    commit_message = data.get('commit_message', '')
    description = data.get('description', '')

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📝 Work submission: {agent_id} -> Project {project_id} Task {task_number}")
    print(f"    Type: {work_type}, Files: {len(files)}, Content length: {len(content)}")

    # Save the actual work content
    work_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "work_type": work_type,
        "content": content,
        "files": files,
        "commit_message": commit_message,
        "description": description,
        "submitted_at": datetime.now().isoformat()
    }

    save_agent_work(work_data)

    return jsonify({
        "success": True,
        "work_id": f"{project_id}-{task_number}-{int(time.time())}",
        "message": "Work submitted successfully to mock repository"
    })


@app.route('/api/bzzz/projects/<int:project_id>/create-pr', methods=['POST'])
def create_pull_request(project_id):
    """Endpoint for agents to submit pull request content"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    pr_title = data.get('title', '')
    pr_description = data.get('description', '')
    files_changed = data.get('files_changed', {})
    branch_name = data.get('branch_name', f"bzzz-task-{task_number}")

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🔀 Pull Request: {agent_id} -> Project {project_id}")
    print(f"    Title: {pr_title}")
    print(f"    Files changed: {len(files_changed)}")

    # Save the pull request content
    pr_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "title": pr_title,
        "description": pr_description,
        "files_changed": files_changed,
        "branch_name": branch_name,
        "created_at": datetime.now().isoformat(),
        "status": "open"
    }

    save_pull_request(pr_data)

    # Generate the mock PR number once so pr_number and pr_url agree
    pr_number = random.randint(100, 999)
    return jsonify({
        "success": True,
        "pr_number": pr_number,
        "pr_url": f"https://github.com/mock/{get_repo_name(project_id)}/pull/{pr_number}",
        "message": "Pull request created successfully in mock repository"
    })

@app.route('/api/bzzz/projects/<int:project_id>/coordination-discussion', methods=['POST'])
def log_coordination_discussion(project_id):
    """Endpoint for agents to log coordination discussions and decisions"""
    data = request.get_json()
    discussion_type = data.get('type', 'general')  # dependency_analysis, conflict_resolution, etc.
    participants = data.get('participants', [])
    messages = data.get('messages', [])
    decisions = data.get('decisions', [])
    context = data.get('context', {})

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 💬 Coordination Discussion: Project {project_id}")
    print(f"    Type: {discussion_type}, Participants: {len(participants)}, Messages: {len(messages)}")

    # Save coordination discussion
    discussion_data = {
        "project_id": project_id,
        "type": discussion_type,
        "participants": participants,
        "messages": messages,
        "decisions": decisions,
        "context": context,
        "timestamp": datetime.now().isoformat()
    }

    save_coordination_discussion(discussion_data)

    return jsonify({"success": True, "logged": True})


@app.route('/api/bzzz/projects/<int:project_id>/log-prompt', methods=['POST'])
def log_agent_prompt(project_id):
    """Endpoint for agents to log the prompts they are receiving/generating"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    prompt_type = data.get('prompt_type', 'task_analysis')  # task_analysis, coordination, meta_thinking
    prompt_content = data.get('prompt_content', '')
    context = data.get('context', {})
    model_used = data.get('model_used', 'unknown')

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Prompt Log: {agent_id} -> {prompt_type}")
    print(f"    Model: {model_used}, Task: {project_id}#{task_number}")
    print(f"    Prompt length: {len(prompt_content)} chars")

    # Save the prompt data
    prompt_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "prompt_type": prompt_type,
        "prompt_content": prompt_content,
        "context": context,
        "model_used": model_used,
        "timestamp": datetime.now().isoformat()
    }

    save_agent_prompt(prompt_data)

    return jsonify({"success": True, "logged": True})


def save_agent_prompt(prompt_data):
    """Save agent prompts to files for analysis"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_prompts"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = prompt_data["project_id"]
    task_number = prompt_data["task_number"]
    agent_id = prompt_data["agent_id"].replace("/", "_")  # Clean agent ID for filename
    prompt_type = prompt_data["prompt_type"]

    filename = f"prompt_{prompt_type}_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    prompt_file = os.path.join(work_dir, filename)

    with open(prompt_file, "w") as f:
        json.dump(prompt_data, f, indent=2)

    print(f"    💾 Saved prompt to: {prompt_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_prompts_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(prompt_data) + "\n")


def save_agent_work(work_data):
    """Save actual agent work submissions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = work_data["project_id"]
    task_number = work_data["task_number"]
    agent_id = work_data["agent_id"].replace("/", "_")  # Clean agent ID for filename

    filename = f"work_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    work_file = os.path.join(work_dir, filename)

    with open(work_file, "w") as f:
        json.dump(work_data, f, indent=2)

    print(f"    💾 Saved work to: {work_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_work_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_data) + "\n")


def save_pull_request(pr_data):
    """Save pull request content to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_pull_requests"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = pr_data["project_id"]
    task_number = pr_data["task_number"]
    agent_id = pr_data["agent_id"].replace("/", "_")  # Clean agent ID for filename

    filename = f"pr_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    pr_file = os.path.join(work_dir, filename)

    with open(pr_file, "w") as f:
        json.dump(pr_data, f, indent=2)

    print(f"    💾 Saved PR to: {pr_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"pull_requests_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(pr_data) + "\n")


def save_coordination_discussion(discussion_data):
    """Save coordination discussions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_discussions"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project and timestamp
    project_id = discussion_data["project_id"]
    discussion_type = discussion_data["type"]

    filename = f"discussion_{discussion_type}_p{project_id}_{timestamp.strftime('%H%M%S')}.json"
    discussion_file = os.path.join(work_dir, filename)

    with open(discussion_file, "w") as f:
        json.dump(discussion_data, f, indent=2)

    print(f"    💾 Saved discussion to: {discussion_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"coordination_discussions_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(discussion_data) + "\n")


def get_repo_name(project_id):
    """Get repository name from project ID"""
    repo_map = {
        1: "hive",
        2: "bzzz",
        3: "distributed-ai-dev",
        4: "infra-automation"
    }
    return repo_map.get(project_id, "unknown-repo")


def save_coordination_work(activity_type, details):
    """Save coordination work to files for analysis"""
    import os  # local import for consistency with the other save_* helpers
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create detailed log entry
    work_entry = {
        "timestamp": timestamp.isoformat(),
        "type": activity_type,
        "details": details,
        "session_id": details.get("session_id", "unknown")
    }

    # Save to daily log file
    log_file = os.path.join(work_dir, f"coordination_work_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_entry) + "\n")

    # Save individual work items to separate files
    if activity_type in ["code_generation", "task_solution", "pull_request_content"]:
        work_file = os.path.join(work_dir, f"{activity_type}_{timestamp.strftime('%H%M%S')}.json")
        with open(work_file, "w") as f:
            json.dump(work_entry, f, indent=2)

def start_background_task_updates():
    """Background thread to simulate changing task priorities and new tasks"""
    def background_updates():
        while True:
            time.sleep(random.randint(60, 180))  # Every 1-3 minutes

            # Occasionally add a new urgent task
            if random.random() < 0.3:  # 30% chance
                project_id = random.choice([1, 2, 3, 4])
                urgent_task = {
                    "number": random.randint(100, 999),
                    "title": f"URGENT: {random.choice(['Critical bug fix', 'Security patch', 'Production issue', 'Integration failure'])}",
                    "description": "High priority task requiring immediate attention",
                    "state": "open",
                    "labels": ["bzzz-task", "urgent", "critical"],
                    "created_at": datetime.now().isoformat(),
                    "updated_at": datetime.now().isoformat(),
                    "html_url": f"https://github.com/mock/repo/issues/{random.randint(100, 999)}",
                    "is_claimed": False,
                    "assignees": [],
                    "task_type": "bug",
                    "dependencies": []
                }

                if project_id not in MOCK_TASKS:
                    MOCK_TASKS[project_id] = []
                MOCK_TASKS[project_id].append(urgent_task)

                print(f"[{datetime.now().strftime('%H:%M:%S')}] 🚨 NEW URGENT TASK: Project {project_id} - {urgent_task['title']}")

    thread = Thread(target=background_updates, daemon=True)
    thread.start()


if __name__ == '__main__':
    print("🚀 Starting Mock Hive API Server for Bzzz Testing")
    print("=" * 50)
    print("This server provides fake projects and tasks to real bzzz agents")
    print("Real bzzz coordination will happen with this simulated data")
    print("")
    print("Available endpoints:")
    print("  GET  /health - Health check")
    print("  GET  /api/bzzz/active-repos - Active repositories")
    print("  GET  /api/bzzz/projects/<id>/tasks - Project tasks")
    print("  POST /api/bzzz/projects/<id>/claim - Claim task")
    print("  PUT  /api/bzzz/projects/<id>/status - Update task status")
    print("  POST /api/bzzz/projects/<id>/submit-work - Submit actual work/code")
    print("  POST /api/bzzz/projects/<id>/create-pr - Submit pull request content")
    print("  POST /api/bzzz/projects/<id>/coordination-discussion - Log coordination discussions")
    print("  POST /api/bzzz/projects/<id>/log-prompt - Log agent prompts and model usage")
    print("  POST /api/bzzz/coordination-log - Log coordination activity")
    print("")
    print("Starting background task updates...")
    start_background_task_updates()

    print("🌟 Mock Hive API running on http://localhost:5000")
    print("Configure bzzz to use: BZZZ_HIVE_API_URL=http://localhost:5000")
    print("")

    app.run(host='0.0.0.0', port=5000, debug=False)
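For quick manual testing of the mock server above, the claim/status flow can be exercised with a short `requests` script. This is a sketch only: the routes and payload fields come from the Flask handlers above, while the base URL, project/task numbers, and agent ID are example values.

import requests

BASE = "http://localhost:5000"  # the mock Hive API started by the script above

# Claim mock task #25 in project 2; the server answers 409 if another agent got there first
claim = requests.post(f"{BASE}/api/bzzz/projects/2/claim",
                      json={"task_number": 25, "agent_id": "example-agent"},
                      timeout=10)
print(claim.status_code, claim.json())

# Mark the task completed, which releases the claim server-side
done = requests.put(f"{BASE}/api/bzzz/projects/2/status",
                    json={"task_number": 25, "status": "completed"},
                    timeout=10)
print(done.status_code, done.json())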
21
archived/2025-07-17/test-config.yaml
Normal file
@@ -0,0 +1,21 @@
hive_api:
  base_url: "https://hive.home.deepblack.cloud"
  api_key: ""
  timeout: "30s"

agent:
  id: "test-agent"
  capabilities: ["task-coordination", "meta-discussion", "general"]
  models: ["phi3"]
  specialization: "general_developer"
  poll_interval: "60s"
  max_tasks: 1

github:
  token_file: ""

p2p:
  escalation_webhook: "https://n8n.home.deepblack.cloud/webhook-test/human-escalation"

logging:
  level: "debug"
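A minimal sketch for reading this config from Python with PyYAML (assumes `pyyaml` is installed; the bzzz agent itself may parse the file differently):

import yaml

with open("archived/2025-07-17/test-config.yaml") as f:
    cfg = yaml.safe_load(f)

# Field names as defined in the file above
print(cfg["hive_api"]["base_url"])
print(cfg["agent"]["capabilities"])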
94
archived/2025-07-17/test_hive_api.py
Normal file
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Test script for Bzzz-Hive API integration.
Tests the newly created API endpoints for dynamic repository discovery.
"""

import sys
import os
sys.path.append('/home/tony/AI/projects/hive/backend')

from app.services.project_service import ProjectService
import json

def test_project_service():
    """Test the ProjectService with Bzzz integration methods."""
    print("🧪 Testing ProjectService with Bzzz integration...")

    service = ProjectService()

    # Test 1: Get all projects
    print("\n📁 Testing get_all_projects()...")
    projects = service.get_all_projects()
    print(f"Found {len(projects)} total projects")

    # Find projects with GitHub repos
    github_projects = [p for p in projects if p.get('github_repo')]
    print(f"Found {len(github_projects)} projects with GitHub repositories:")
    for project in github_projects:
        print(f"  - {project['name']}: {project['github_repo']}")

    # Test 2: Get active repositories for Bzzz
    print("\n🐝 Testing get_bzzz_active_repositories()...")
    try:
        active_repos = service.get_bzzz_active_repositories()
        print(f"Found {len(active_repos)} repositories ready for Bzzz coordination:")

        for repo in active_repos:
            print(f"\n  📦 Repository: {repo['name']}")
            print(f"     Owner: {repo['owner']}")
            print(f"     Repository: {repo['repository']}")
            print(f"     Git URL: {repo['git_url']}")
            print(f"     Ready to claim: {repo['ready_to_claim']}")
            print(f"     Project ID: {repo['project_id']}")

    except Exception as e:
        print(f"❌ Error testing active repositories: {e}")

    # Test 3: Get bzzz-task issues for the hive project specifically
    print("\n🎯 Testing get_bzzz_project_tasks() for 'hive' project...")
    try:
        hive_tasks = service.get_bzzz_project_tasks('hive')
        print(f"Found {len(hive_tasks)} bzzz-task issues in hive project:")

        for task in hive_tasks:
            print(f"\n  🎫 Issue #{task['number']}: {task['title']}")
            print(f"     State: {task['state']}")
            print(f"     Labels: {task['labels']}")
            print(f"     Task Type: {task['task_type']}")
            print(f"     Claimed: {task['is_claimed']}")
            if task['assignees']:
                print(f"     Assignees: {', '.join(task['assignees'])}")
            print(f"     URL: {task['html_url']}")

    except Exception as e:
        print(f"❌ Error testing hive project tasks: {e}")

    # Test 4: Simulate API endpoint response format
    print("\n📡 Testing API endpoint response format...")
    try:
        active_repos = service.get_bzzz_active_repositories()
        api_response = {"repositories": active_repos}

        print("API Response Preview (first 500 chars):")
        response_json = json.dumps(api_response, indent=2)
        print(response_json[:500] + "..." if len(response_json) > 500 else response_json)

    except Exception as e:
        print(f"❌ Error formatting API response: {e}")

def main():
    print("🚀 Starting Bzzz-Hive API Integration Test")
    print("="*50)

    try:
        test_project_service()
        print("\n✅ Test completed successfully!")

    except Exception as e:
        print(f"\n❌ Test failed with error: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    main()
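The same repository list should also be reachable over HTTP, assuming the Hive backend exposes the /api/bzzz/active-repos route that the mock server mirrors. A hedged sketch (base URL as in test-config.yaml):

import requests

resp = requests.get("https://hive.home.deepblack.cloud/api/bzzz/active-repos", timeout=30)
resp.raise_for_status()
# Response shape as returned by the mock API: {"repositories": [...]}
for repo in resp.json().get("repositories", []):
    print(repo["name"], repo.get("ready_to_claim"))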
98
archived/2025-07-17/test_meta_discussion.py
Normal file
@@ -0,0 +1,98 @@
#!/usr/bin/env python3
"""
Test script to trigger and observe bzzz meta discussion
"""

import json
import time
import requests
from datetime import datetime

def test_meta_discussion():
    """Test the Antennae meta discussion by simulating a complex task"""

    print("🎯 Testing Bzzz Antennae Meta Discussion")
    print("=" * 50)

    # Test 1: Check if the P2P mesh is active
    print("1. Checking P2P mesh status...")

    # We can't directly inject into the P2P mesh from here, but we can:
    # - Check the bzzz service logs for meta discussion activity
    # - Create a mock scenario description

    mock_scenario = {
        "task_type": "complex_architecture_design",
        "description": "Design a microservices architecture for a distributed AI system with P2P coordination",
        "complexity": "high",
        "requires_collaboration": True,
        "estimated_agents_needed": 3
    }

    print("📋 Mock Complex Task:")
    print(f"   Type: {mock_scenario['task_type']}")
    print(f"   Description: {mock_scenario['description']}")
    print(f"   Complexity: {mock_scenario['complexity']}")
    print(f"   Collaboration Required: {mock_scenario['requires_collaboration']}")

    # Test 2: Demonstrate what would happen in meta discussion
    print("\n2. Simulating Antennae Meta Discussion Flow:")
    print("   🤖 Agent A (walnut): 'I'll handle the API gateway design'")
    print("   🤖 Agent B (acacia): 'I can work on the data layer architecture'")
    print("   🤖 Agent C (ironwood): 'I'll focus on the P2P coordination logic'")
    print("   🎯 Meta Discussion: Agents coordinate task splitting and dependencies")

    # Test 3: Show escalation scenario
    print("\n3. Human Escalation Scenario:")
    print("   ⚠️ Agents detect conflicting approaches to distributed consensus")
    print("   🚨 Automatic escalation triggered after 3 rounds of discussion")
    print("   👤 Human expert summoned via N8N webhook")

    # Test 4: Check current bzzz logs for any meta discussion activity
    print("\n4. Checking recent bzzz activity...")

    try:
        # This would show any recent meta discussion logs
        import subprocess
        result = subprocess.run([
            'journalctl', '-u', 'bzzz.service', '--no-pager', '-l', '-n', '20'
        ], capture_output=True, text=True, timeout=10)

        if result.returncode == 0:
            logs = result.stdout
            if 'meta' in logs.lower() or 'antennae' in logs.lower():
                print("   ✅ Found meta discussion activity in logs!")
                # Show relevant lines
                for line in logs.split('\n'):
                    if 'meta' in line.lower() or 'antennae' in line.lower():
                        print(f"   📝 {line}")
            else:
                print("   ℹ️ No recent meta discussion activity (expected - no active tasks)")
        else:
            print("   ⚠️ Could not access bzzz logs")

    except Exception as e:
        print(f"   ⚠️ Error checking logs: {e}")

    # Test 5: Show what capabilities support meta discussion
    print("\n5. Meta Discussion Capabilities:")
    capabilities = [
        "meta-discussion",
        "task-coordination",
        "collaborative-reasoning",
        "human-escalation",
        "cross-repository-coordination"
    ]

    for cap in capabilities:
        print(f"   ✅ {cap}")

    print("\n🎯 Meta Discussion Test Complete!")
    print("\nTo see meta discussion in action:")
    print("1. Configure repositories in Hive with 'bzzz_enabled: true'")
    print("2. Create complex GitHub issues labeled 'bzzz-task'")
    print("3. Watch agents coordinate via Antennae P2P channel")
    print("4. Monitor logs: journalctl -u bzzz.service -f | grep -i meta")

if __name__ == "__main__":
    test_meta_discussion()
95
archived/2025-07-17/test_simple_github.py
Normal file
@@ -0,0 +1,95 @@
#!/usr/bin/env python3
"""
Simple test to check GitHub API access for bzzz-task issues.
"""

import requests
from pathlib import Path

def get_github_token():
    """Get GitHub token from secrets file."""
    try:
        # Try gh-token first
        gh_token_path = Path("/home/tony/AI/secrets/passwords_and_tokens/gh-token")
        if gh_token_path.exists():
            return gh_token_path.read_text().strip()

        # Try GitHub token
        github_token_path = Path("/home/tony/AI/secrets/passwords_and_tokens/github-token")
        if github_token_path.exists():
            return github_token_path.read_text().strip()

        # Fall back to the GitLab token if no GitHub token exists
        gitlab_token_path = Path("/home/tony/AI/secrets/passwords_and_tokens/claude-gitlab-token")
        if gitlab_token_path.exists():
            return gitlab_token_path.read_text().strip()
    except Exception:
        pass
    return None

def test_github_bzzz_tasks():
    """Test fetching bzzz-task issues from GitHub."""
    token = get_github_token()
    if not token:
        print("❌ No GitHub token found")
        return

    print("🐙 Testing GitHub API access for bzzz-task issues...")

    # Test with the hive repository
    repo = "anthonyrawlins/hive"
    url = f"https://api.github.com/repos/{repo}/issues"

    headers = {
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json"
    }

    # First, get all open issues
    print(f"\n📊 Fetching all open issues from {repo}...")
    response = requests.get(url, headers=headers, params={"state": "open"}, timeout=10)

    if response.status_code == 200:
        all_issues = response.json()
        print(f"Found {len(all_issues)} total open issues")

        # Show all labels used in the repository
        all_labels = set()
        for issue in all_issues:
            for label in issue.get('labels', []):
                all_labels.add(label['name'])

        print(f"All labels in use: {sorted(all_labels)}")

    else:
        print(f"❌ Failed to fetch issues: {response.status_code} - {response.text}")
        return

    # Now test for bzzz-task labeled issues
    print(f"\n🐝 Fetching bzzz-task labeled issues from {repo}...")
    response = requests.get(url, headers=headers, params={"labels": "bzzz-task", "state": "open"}, timeout=10)

    if response.status_code == 200:
        bzzz_issues = response.json()
        print(f"Found {len(bzzz_issues)} issues with 'bzzz-task' label")

        if not bzzz_issues:
            print("ℹ️ No issues found with 'bzzz-task' label")
            print("   You can create test issues with this label for testing")

        for issue in bzzz_issues:
            print(f"\n  🎫 Issue #{issue['number']}: {issue['title']}")
            print(f"     State: {issue['state']}")
            print(f"     Labels: {[label['name'] for label in issue.get('labels', [])]}")
            print(f"     Assignees: {[assignee['login'] for assignee in issue.get('assignees', [])]}")
            print(f"     URL: {issue['html_url']}")
    else:
        print(f"❌ Failed to fetch bzzz-task issues: {response.status_code} - {response.text}")

def main():
    print("🚀 Simple GitHub API Test for Bzzz Integration")
    print("="*50)
    test_github_bzzz_tasks()

if __name__ == "__main__":
    main()
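If the repository has no 'bzzz-task' issues yet, one can be created through the GitHub REST API for testing. A sketch reusing get_github_token() from the script above; note that it opens a real issue in the target repo, and the title/body strings are example values:

import requests

def create_test_issue(token, repo="anthonyrawlins/hive"):
    """Open a scratch issue labeled 'bzzz-task' so the discovery test finds something."""
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={"Authorization": f"token {token}",
                 "Accept": "application/vnd.github.v3+json"},
        json={"title": "Test task for bzzz coordination",
              "body": "Scratch issue for exercising bzzz-task discovery.",
              "labels": ["bzzz-task"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]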