Add enhanced mock API with work capture endpoints and comprehensive project documentation
- Enhanced mock-hive-server.py with work submission endpoints
- Added PROGRESS_REPORT.md documenting system accomplishments
- Added PROJECT_TODOS.md with comprehensive task breakdown
- Added trigger_mock_coordination.sh test script

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

PROGRESS_REPORT.md (new file, +138 lines)
@@ -0,0 +1,138 @@
# Bzzz P2P Coordination System - Progress Report

## Overview
This report documents the implementation and testing progress of the Bzzz P2P mesh coordination system with meta-thinking capabilities (Antennae framework).

## Major Accomplishments

### 1. High-Priority Feature Implementation ✅
- **Fixed stub function implementations** in `github/integration.go`
  - Implemented proper task filtering based on agent capabilities
  - Added task announcement logic for P2P coordination
  - Enhanced capability-based task matching with keyword analysis

- **Completed Hive API client integration**
  - Extended PostgreSQL database schema for bzzz integration
  - Updated ProjectService to use database instead of filesystem scanning
  - Implemented secure Docker secrets for GitHub token access

- **Removed hardcoded repository configuration**
  - Dynamic repository discovery via Hive API
  - Database-driven project management

### 2. Security Enhancements ✅
- **Docker Secrets Implementation**
  - Replaced filesystem-based GitHub token access with Docker secrets
  - Updated docker-compose.swarm.yml with proper secrets configuration
  - Enhanced security posture for credential management

### 3. Database Integration ✅
- **Extended Hive Database Schema**
  - Added bzzz-specific fields to projects table
  - Inserted Hive repository as test project with 9 bzzz-task labeled issues
  - Successful GitHub API integration showing real issue discovery

### 4. Independent Testing Infrastructure ✅
- **Mock Hive API Server** (`mock-hive-server.py`)
  - Provides fake projects and tasks for real bzzz coordination
  - Comprehensive task simulation with realistic coordination scenarios
  - Background task generation for dynamic testing
  - Enhanced with work capture endpoints (example request shown after this section):
    - `/api/bzzz/projects/<id>/submit-work` - Capture actual agent work/code
    - `/api/bzzz/projects/<id>/create-pr` - Capture pull request content
    - `/api/bzzz/projects/<id>/coordination-discussion` - Log coordination discussions
    - `/api/bzzz/projects/<id>/log-prompt` - Log agent prompts and model usage

- **Real-Time Monitoring Dashboard** (`cmd/bzzz-monitor.py`)
  - btop/nvtop-style console interface for coordination monitoring
  - Real coordination channel metrics and message rate tracking
  - Compact timestamp display and efficient space utilization
  - Live agent activity and P2P network status monitoring
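
To exercise the work capture endpoints independently of the agents, a minimal client sketch is shown below. It assumes the mock server is running on `http://localhost:5000` (the same base URL used by `trigger_mock_coordination.sh`); the payload fields mirror what `submit_work()` in `mock-hive-server.py` reads, but the payload values themselves are purely illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit fake agent work to the mock Hive API (illustrative only)."""
import requests

MOCK_API = "http://localhost:5000"  # same base URL the trigger script uses

def submit_example_work(project_id: int, task_number: int, agent_id: str) -> dict:
    """POST a small work item to the submit-work capture endpoint."""
    payload = {
        "task_number": task_number,
        "agent_id": agent_id,
        "work_type": "code",  # code, documentation, configuration, ...
        "content": "print('hello from a bzzz agent')",
        "files": {"hello.py": "print('hello from a bzzz agent')"},  # illustrative file map
        "commit_message": "Add example hello script",
        "description": "Illustrative payload for the work capture endpoint",
    }
    resp = requests.post(
        f"{MOCK_API}/api/bzzz/projects/{project_id}/submit-work",
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # expected to contain success, work_id, message

if __name__ == "__main__":
    print(submit_example_work(project_id=1, task_number=15, agent_id="test-agent-1"))
```

A successful submission should be written under `/tmp/bzzz_agent_work/`, as listed in the Configuration section below.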

### 5. P2P Network Verification ✅
- **Confirmed Multi-Node Operation**
  - WALNUT, ACACIA, IRONWOOD nodes running as systemd services
  - 2 connected peers with regular availability broadcasts
  - P2P mesh discovery and communication functioning correctly

### 6. Cross-Repository Coordination Framework ✅
- **Antennae Meta-Discussion System**
  - Advanced cross-repository coordination capabilities
  - Dependency detection and conflict resolution
  - AI-powered coordination plan generation
  - Consensus detection algorithms

## Current System Status

### Working Components
1. ✅ P2P mesh networking (libp2p + mDNS)
2. ✅ Agent availability broadcasting
3. ✅ Database-driven repository discovery
4. ✅ Secure credential management
5. ✅ Real-time monitoring infrastructure
6. ✅ Mock API testing framework
7. ✅ Work capture endpoints (ready for use)

### Identified Issues
1. ❌ **GitHub Repository Verification Failures**
   - Mock repositories (e.g., `mock-org/hive`) return 404 errors
   - Prevents agents from proceeding with task discovery
   - Need local Git hosting solution

2. ❌ **Task Claim Logic Incomplete**
   - Agents broadcast availability but don't actively claim tasks
   - Missing integration between P2P discovery and task claiming
   - Need to enhance bzzz binary task claim workflow

3. ❌ **Docker Overlay Network Issues**
   - Some connectivity issues between services
   - May impact agent coordination in containerized environments

## File Locations and Key Components

### Core Implementation Files
- `/home/tony/AI/projects/Bzzz/github/integration.go` - Enhanced task filtering and P2P coordination
- `/home/tony/AI/projects/hive/backend/app/services/project_service.py` - Database-driven project service
- `/home/tony/AI/projects/hive/docker-compose.swarm.yml` - Docker secrets configuration

### Testing and Monitoring
- `/home/tony/AI/projects/Bzzz/mock-hive-server.py` - Mock API with work capture
- `/home/tony/AI/projects/Bzzz/cmd/bzzz-monitor.py` - Real-time coordination dashboard
- `/home/tony/AI/projects/Bzzz/scripts/trigger_mock_coordination.sh` - Coordination test script

### Configuration
- `/etc/systemd/system/bzzz.service.d/mock-api.conf` - Systemd override for mock API testing
- `/tmp/bzzz_agent_work/` - Directory for captured agent work (when functioning)
- `/tmp/bzzz_pull_requests/` - Directory for captured pull requests
- `/tmp/bzzz_agent_prompts/` - Directory for captured agent prompts and model usage
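
For quick inspection of what agents (or the trigger script) actually produced, the daily `.jsonl` logs in these directories can be read with a few lines of Python. The sketch below is illustrative only and assumes the filename pattern used by `save_agent_work()` (`agent_work_log_YYYYMMDD.jsonl`).

```python
# Minimal sketch: print today's captured work submissions from /tmp/bzzz_agent_work/.
import json
import os
from datetime import datetime

def read_todays_work(work_dir: str = "/tmp/bzzz_agent_work"):
    """Yield each work submission logged today as a dict."""
    log_file = os.path.join(work_dir, f"agent_work_log_{datetime.now().strftime('%Y%m%d')}.jsonl")
    if not os.path.exists(log_file):
        return  # nothing captured yet
    with open(log_file) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    for entry in read_todays_work():
        print(entry["agent_id"], entry["work_type"], entry["commit_message"])
```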

## Technical Achievements

### Database Schema Extensions
```sql
-- Extended projects table with bzzz integration fields
ALTER TABLE projects ADD COLUMN bzzz_enabled BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN ready_to_claim BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN private_repo BOOLEAN DEFAULT false;
ALTER TABLE projects ADD COLUMN github_token_required BOOLEAN DEFAULT false;
```
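
As an illustration of how these fields might be consumed, the sketch below queries for bzzz-enabled projects that are ready to claim. It is not the actual ProjectService code: the connection DSN is a placeholder, psycopg2 is assumed to be available in the Hive backend environment, and the `id`/`name` columns are assumed to exist alongside the four added above.

```python
# Minimal sketch (not the real ProjectService implementation):
# list projects that bzzz agents could pick up, using the new columns.
import psycopg2

def fetch_claimable_projects(dsn: str = "dbname=hive user=hive host=localhost"):
    """Return rows for projects flagged as bzzz-enabled and ready to claim."""
    with psycopg2.connect(dsn) as conn:  # DSN is a placeholder, not the real config
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, name, private_repo, github_token_required
                FROM projects
                WHERE bzzz_enabled = true
                  AND ready_to_claim = true
                """
            )
            return cur.fetchall()
```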

### Docker Secrets Integration
```yaml
secrets:
  - github_token
environment:
  - GITHUB_TOKEN_FILE=/run/secrets/github_token
```
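
With this configuration, a service resolves the token at runtime by reading the file named in `GITHUB_TOKEN_FILE` rather than receiving the secret directly in an environment variable. A minimal Python sketch of that pattern is shown below; the helper name is illustrative and not code from the Hive backend.

```python
import os

def read_github_token() -> str:
    """Read the GitHub token from the Docker secret file (illustrative helper)."""
    token_file = os.environ.get("GITHUB_TOKEN_FILE", "/run/secrets/github_token")
    with open(token_file, "r", encoding="utf-8") as f:
        return f.read().strip()
```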

### P2P Network Statistics
- **Active Nodes**: 3 (WALNUT, ACACIA, IRONWOOD)
- **Connected Peers**: 2 per node
- **Network Protocol**: libp2p with mDNS discovery
- **Message Broadcasting**: Availability, capability, coordination

## Next Steps Required
See PROJECT_TODOS.md for the comprehensive task list.

## Summary
The Bzzz P2P coordination system has a solid foundation with working P2P networking, database integration, secure credential management, and comprehensive testing infrastructure. The main blockers are the need for a local Git hosting solution and completion of the task claim logic in the bzzz binary.

PROJECT_TODOS.md (new file, +169 lines)
@@ -0,0 +1,169 @@
# Bzzz P2P Coordination System - TODO List

## High Priority - Immediate Blockers

### 1. Local Git Hosting Solution
**Priority: Critical**
- [ ] **Deploy Local GitLab Instance**
  - [ ] Configure GitLab Community Edition on Docker Swarm
  - [ ] Set up domain/subdomain (e.g., `gitlab.bzzz.local` or `git.home.deepblack.cloud`)
  - [ ] Configure SSL certificates via Traefik/Let's Encrypt
  - [ ] Create test organization and repositories
  - [ ] Import/create realistic project structures

- [ ] **Alternative: Deploy Gitea Instance**
  - [ ] Evaluate Gitea as a lighter alternative to GitLab
  - [ ] Docker Swarm deployment configuration
  - [ ] Domain and SSL setup
  - [ ] Test repository creation and API access

- [ ] **Local Repository Setup**
  - [ ] Create mock repositories that actually exist:
    - `bzzz-coordination-platform` (simulating Hive)
    - `bzzz-p2p-system` (actual Bzzz codebase)
    - `distributed-ai-development`
    - `infrastructure-automation`
  - [ ] Add realistic issues with `bzzz-task` labels
  - [ ] Configure repository access tokens
  - [ ] Test GitHub API compatibility

### 2. Task Claim Logic Enhancement
**Priority: Critical**
- [ ] **Analyze Current Bzzz Binary Workflow**
  - [ ] Map current task discovery process in bzzz binary
  - [ ] Identify where task claiming should occur
  - [ ] Document current P2P message flow

- [ ] **Implement Active Task Discovery**
  - [ ] Add periodic repository polling in bzzz agents
  - [ ] Implement task evaluation and filtering logic
  - [ ] Add task claiming attempts with conflict resolution

- [ ] **Enhance Task Claim Logic in Go Code**
  - [ ] Modify `github/integration.go` to actively claim suitable tasks
  - [ ] Add retry logic for failed claims
  - [ ] Implement task priority evaluation
  - [ ] Add coordination messaging for task claims

- [ ] **P2P Coordination for Task Claims** (a rough claim-loop sketch follows this section)
  - [ ] Implement distributed task claiming protocol
  - [ ] Add conflict resolution when multiple agents claim the same task
  - [ ] Enhance availability broadcasting with claimed task status
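
As a rough illustration of the intended claim flow (not the Go implementation), the sketch below polls the mock API's task list and attempts a claim, backing off when another agent has already won the task. The endpoint paths and request fields match those exercised by `trigger_mock_coordination.sh`; the response shape and task field names are assumptions.

```python
"""Hypothetical claim loop against the mock Hive API (illustrative sketch only)."""
import time
import requests

MOCK_API = "http://localhost:5000"
AGENT_ID = "test-agent-1"  # placeholder agent identity

def try_claim(project_id: int, task_number: int) -> bool:
    """Attempt to claim one task; treat any non-success response as 'someone else got it'."""
    resp = requests.post(
        f"{MOCK_API}/api/bzzz/projects/{project_id}/claim",
        json={"task_number": task_number, "agent_id": AGENT_ID},
        timeout=10,
    )
    # The real response schema is not pinned down here; assume a JSON body
    # with a boolean "success" field for the purpose of this sketch.
    return resp.ok and resp.json().get("success", False)

def claim_loop(project_id: int, poll_seconds: int = 30) -> None:
    """Poll the project's open tasks and claim the first one we win."""
    while True:
        tasks = requests.get(
            f"{MOCK_API}/api/bzzz/projects/{project_id}/tasks", timeout=10
        ).json()
        for task in tasks if isinstance(tasks, list) else []:
            number = task.get("number") or task.get("task_number")  # field name assumed
            if number is not None and try_claim(project_id, number):
                print(f"claimed task {number}; starting work")
                return
        time.sleep(poll_seconds)  # back off before the next polling round
```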

## Medium Priority - Core Functionality

### 3. Agent Work Execution
- [ ] **Complete Work Capture Integration**
  - [ ] Modify bzzz agents to actually submit work to the mock API endpoints
  - [ ] Test prompt logging with Ollama models (see the prompt-logging sketch after this section)
  - [ ] Verify meta-thinking tool utilization
  - [ ] Capture actual code generation and pull request content

- [ ] **Ollama Model Integration Testing**
  - [ ] Verify agent prompts are reaching Ollama endpoints
  - [ ] Test meta-thinking capabilities with local models
  - [ ] Document model performance with coordination tasks
  - [ ] Optimize prompt engineering for coordination scenarios
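
A minimal sketch of what that prompt logging could look like from the agent side is shown below. The field names match what `log_agent_prompt()` in `mock-hive-server.py` reads; the model name and prompt text are placeholders.

```python
"""Illustrative sketch: log a prompt to the mock API's log-prompt endpoint."""
import requests

MOCK_API = "http://localhost:5000"

def log_prompt(project_id: int, task_number: int, agent_id: str,
               prompt_content: str, model_used: str = "example-model") -> None:
    """POST prompt text and model metadata so it lands in /tmp/bzzz_agent_prompts/."""
    requests.post(
        f"{MOCK_API}/api/bzzz/projects/{project_id}/log-prompt",
        json={
            "task_number": task_number,
            "agent_id": agent_id,
            "prompt_type": "task_analysis",  # task_analysis, coordination, meta_thinking
            "prompt_content": prompt_content,
            "context": {"source": "example"},
            "model_used": model_used,  # placeholder model name
        },
        timeout=10,
    ).raise_for_status()
```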

### 4. Real Coordination Scenarios
- [ ] **Cross-Repository Dependency Testing**
  - [ ] Create realistic dependency scenarios between repositories
  - [ ] Test antennae framework with actual dependency conflicts
  - [ ] Verify coordination session creation and resolution

- [ ] **Multi-Agent Task Coordination**
  - [ ] Test scenarios with multiple agents working on related tasks
  - [ ] Verify conflict detection and resolution
  - [ ] Test consensus mechanisms

### 5. Infrastructure Improvements
- [ ] **Docker Overlay Network Issues**
  - [ ] Debug connectivity issues between services
  - [ ] Optimize network performance for coordination messages
  - [ ] Ensure proper service discovery in swarm environment

- [ ] **Enhanced Monitoring**
  - [ ] Add metrics collection for coordination performance
  - [ ] Implement alerting for coordination failures
  - [ ] Create historical coordination analytics

## Low Priority - Nice to Have

### 6. User Interface Enhancements
- [ ] **Web-Based Coordination Dashboard**
  - [ ] Create web interface for monitoring coordination activity
  - [ ] Add visual representation of P2P network topology
  - [ ] Show task dependencies and coordination sessions

- [ ] **Enhanced CLI Tools**
  - [ ] Add bzzz CLI commands for manual task management
  - [ ] Create debugging tools for coordination issues
  - [ ] Add configuration management utilities

### 7. Documentation and Testing
- [ ] **Comprehensive Documentation**
  - [ ] Document P2P coordination protocols
  - [ ] Create deployment guides for new environments
  - [ ] Add troubleshooting documentation

- [ ] **Automated Testing Suite**
  - [ ] Create integration tests for coordination scenarios
  - [ ] Add performance benchmarks
  - [ ] Implement continuous testing pipeline

### 8. Advanced Features
- [ ] **Dynamic Agent Capabilities**
  - [ ] Allow agents to learn and adapt capabilities
  - [ ] Implement capability evolution based on task history
  - [ ] Add skill-based task routing

- [ ] **Advanced Coordination Algorithms**
  - [ ] Implement more sophisticated consensus mechanisms
  - [ ] Add economic models for task allocation
  - [ ] Create coordination learning from historical data

## Technical Debt and Maintenance

### 9. Code Quality Improvements
- [ ] **Error Handling Enhancement**
  - [ ] Improve error reporting in coordination failures
  - [ ] Add graceful degradation for network issues
  - [ ] Implement proper logging throughout the system

- [ ] **Performance Optimization**
  - [ ] Profile P2P message overhead
  - [ ] Optimize database queries for task discovery
  - [ ] Improve coordination session efficiency

### 10. Security Enhancements
- [ ] **Agent Authentication**
  - [ ] Implement proper agent identity verification
  - [ ] Add authorization for task claims
  - [ ] Secure coordination message exchange

- [ ] **Repository Access Security**
  - [ ] Audit GitHub/Git access patterns
  - [ ] Implement least-privilege access principles
  - [ ] Add credential rotation mechanisms

## Immediate Next Steps (This Week)

1. **Deploy Local GitLab/Gitea** - Resolve repository access issues
2. **Enhance Task Claim Logic** - Make agents actively discover and claim tasks
3. **Test Real Coordination** - Verify agents actually perform work on local repositories
4. **Debug Network Issues** - Ensure all components communicate properly

## Dependencies and Blockers

- **Local Git Hosting**: Blocks real task testing and agent work verification
- **Task Claim Logic**: Blocks agent activation and coordination testing
- **Network Issues**: May impact agent communication and coordination

## Success Metrics

- [ ] Agents successfully discover and claim tasks from local repositories
- [ ] Real code generation and pull request creation captured
- [ ] Cross-repository coordination sessions functioning
- [ ] Multiple agents coordinating on dependent tasks
- [ ] Ollama models successfully utilized for meta-thinking
- [ ] Performance metrics showing sub-second coordination response times

mock-hive-server.py (modified)
@@ -369,8 +369,278 @@ def log_coordination_activity():

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Coordination: {activity_type} - {details}")

    # Save coordination activity to file
    save_coordination_work(activity_type, details)

    return jsonify({"success": True, "logged": True})

@app.route('/api/bzzz/projects/<int:project_id>/submit-work', methods=['POST'])
def submit_work(project_id):
    """Endpoint for agents to submit their actual work/code/solutions"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    work_type = data.get('work_type', 'code')  # code, documentation, configuration, etc.
    content = data.get('content', '')
    files = data.get('files', {})  # Dictionary of filename -> content
    commit_message = data.get('commit_message', '')
    description = data.get('description', '')

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 📝 Work submission: {agent_id} -> Project {project_id} Task {task_number}")
    print(f" Type: {work_type}, Files: {len(files)}, Content length: {len(content)}")

    # Save the actual work content
    work_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "work_type": work_type,
        "content": content,
        "files": files,
        "commit_message": commit_message,
        "description": description,
        "submitted_at": datetime.now().isoformat()
    }

    save_agent_work(work_data)

    return jsonify({
        "success": True,
        "work_id": f"{project_id}-{task_number}-{int(time.time())}",
        "message": "Work submitted successfully to mock repository"
    })

@app.route('/api/bzzz/projects/<int:project_id>/create-pr', methods=['POST'])
def create_pull_request(project_id):
    """Endpoint for agents to submit pull request content"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    pr_title = data.get('title', '')
    pr_description = data.get('description', '')
    files_changed = data.get('files_changed', {})
    branch_name = data.get('branch_name', f"bzzz-task-{task_number}")

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🔀 Pull Request: {agent_id} -> Project {project_id}")
    print(f" Title: {pr_title}")
    print(f" Files changed: {len(files_changed)}")

    # Save the pull request content
    pr_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "title": pr_title,
        "description": pr_description,
        "files_changed": files_changed,
        "branch_name": branch_name,
        "created_at": datetime.now().isoformat(),
        "status": "open"
    }

    save_pull_request(pr_data)

    # Generate one PR number so pr_number and pr_url refer to the same mock PR
    pr_number = random.randint(100, 999)

    return jsonify({
        "success": True,
        "pr_number": pr_number,
        "pr_url": f"https://github.com/mock/{get_repo_name(project_id)}/pull/{pr_number}",
        "message": "Pull request created successfully in mock repository"
    })

@app.route('/api/bzzz/projects/<int:project_id>/coordination-discussion', methods=['POST'])
def log_coordination_discussion(project_id):
    """Endpoint for agents to log coordination discussions and decisions"""
    data = request.get_json()
    discussion_type = data.get('type', 'general')  # dependency_analysis, conflict_resolution, etc.
    participants = data.get('participants', [])
    messages = data.get('messages', [])
    decisions = data.get('decisions', [])
    context = data.get('context', {})

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 💬 Coordination Discussion: Project {project_id}")
    print(f" Type: {discussion_type}, Participants: {len(participants)}, Messages: {len(messages)}")

    # Save coordination discussion
    discussion_data = {
        "project_id": project_id,
        "type": discussion_type,
        "participants": participants,
        "messages": messages,
        "decisions": decisions,
        "context": context,
        "timestamp": datetime.now().isoformat()
    }

    save_coordination_discussion(discussion_data)

    return jsonify({"success": True, "logged": True})

@app.route('/api/bzzz/projects/<int:project_id>/log-prompt', methods=['POST'])
def log_agent_prompt(project_id):
    """Endpoint for agents to log the prompts they are receiving/generating"""
    data = request.get_json()
    task_number = data.get('task_number')
    agent_id = data.get('agent_id')
    prompt_type = data.get('prompt_type', 'task_analysis')  # task_analysis, coordination, meta_thinking
    prompt_content = data.get('prompt_content', '')
    context = data.get('context', {})
    model_used = data.get('model_used', 'unknown')

    print(f"[{datetime.now().strftime('%H:%M:%S')}] 🧠 Prompt Log: {agent_id} -> {prompt_type}")
    print(f" Model: {model_used}, Task: {project_id}#{task_number}")
    print(f" Prompt length: {len(prompt_content)} chars")

    # Save the prompt data
    prompt_data = {
        "project_id": project_id,
        "task_number": task_number,
        "agent_id": agent_id,
        "prompt_type": prompt_type,
        "prompt_content": prompt_content,
        "context": context,
        "model_used": model_used,
        "timestamp": datetime.now().isoformat()
    }

    save_agent_prompt(prompt_data)

    return jsonify({"success": True, "logged": True})

def save_agent_prompt(prompt_data):
    """Save agent prompts to files for analysis"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_prompts"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = prompt_data["project_id"]
    task_number = prompt_data["task_number"]
    agent_id = prompt_data["agent_id"].replace("/", "_")  # Clean agent ID for filename
    prompt_type = prompt_data["prompt_type"]

    filename = f"prompt_{prompt_type}_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    prompt_file = os.path.join(work_dir, filename)

    with open(prompt_file, "w") as f:
        json.dump(prompt_data, f, indent=2)

    print(f" 💾 Saved prompt to: {prompt_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_prompts_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(prompt_data) + "\n")

def save_agent_work(work_data):
    """Save actual agent work submissions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_agent_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = work_data["project_id"]
    task_number = work_data["task_number"]
    agent_id = work_data["agent_id"].replace("/", "_")  # Clean agent ID for filename

    filename = f"work_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    work_file = os.path.join(work_dir, filename)

    with open(work_file, "w") as f:
        json.dump(work_data, f, indent=2)

    print(f" 💾 Saved work to: {work_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"agent_work_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_data) + "\n")

def save_pull_request(pr_data):
    """Save pull request content to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_pull_requests"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project, task, and timestamp
    project_id = pr_data["project_id"]
    task_number = pr_data["task_number"]
    agent_id = pr_data["agent_id"].replace("/", "_")  # Clean agent ID for filename

    filename = f"pr_p{project_id}_t{task_number}_{agent_id}_{timestamp.strftime('%H%M%S')}.json"
    pr_file = os.path.join(work_dir, filename)

    with open(pr_file, "w") as f:
        json.dump(pr_data, f, indent=2)

    print(f" 💾 Saved PR to: {pr_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"pull_requests_log_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(pr_data) + "\n")

def save_coordination_discussion(discussion_data):
    """Save coordination discussions to files"""
    import os
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_discussions"
    os.makedirs(work_dir, exist_ok=True)

    # Create filename with project and timestamp
    project_id = discussion_data["project_id"]
    discussion_type = discussion_data["type"]

    filename = f"discussion_{discussion_type}_p{project_id}_{timestamp.strftime('%H%M%S')}.json"
    discussion_file = os.path.join(work_dir, filename)

    with open(discussion_file, "w") as f:
        json.dump(discussion_data, f, indent=2)

    print(f" 💾 Saved discussion to: {discussion_file}")

    # Also save to daily log
    log_file = os.path.join(work_dir, f"coordination_discussions_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(discussion_data) + "\n")

def get_repo_name(project_id):
    """Get repository name from project ID"""
    repo_map = {
        1: "hive",
        2: "bzzz",
        3: "distributed-ai-dev",
        4: "infra-automation"
    }
    return repo_map.get(project_id, "unknown-repo")

def save_coordination_work(activity_type, details):
    """Save coordination work to files for analysis"""
    import os  # local import, mirroring the other save_* helpers
    timestamp = datetime.now()
    work_dir = "/tmp/bzzz_coordination_work"
    os.makedirs(work_dir, exist_ok=True)

    # Create detailed log entry
    work_entry = {
        "timestamp": timestamp.isoformat(),
        "type": activity_type,
        "details": details,
        "session_id": details.get("session_id", "unknown")
    }

    # Save to daily log file
    log_file = os.path.join(work_dir, f"coordination_work_{timestamp.strftime('%Y%m%d')}.jsonl")
    with open(log_file, "a") as f:
        f.write(json.dumps(work_entry) + "\n")

    # Save individual work items to separate files
    if activity_type in ["code_generation", "task_solution", "pull_request_content"]:
        work_file = os.path.join(work_dir, f"{activity_type}_{timestamp.strftime('%H%M%S')}.json")
        with open(work_file, "w") as f:
            json.dump(work_entry, f, indent=2)

def start_background_task_updates():
    """Background thread to simulate changing task priorities and new tasks"""
    def background_updates():
@@ -416,6 +686,11 @@ if __name__ == '__main__':
    print(" GET /api/bzzz/projects/<id>/tasks - Project tasks")
    print(" POST /api/bzzz/projects/<id>/claim - Claim task")
    print(" PUT /api/bzzz/projects/<id>/status - Update task status")
    print(" POST /api/bzzz/projects/<id>/submit-work - Submit actual work/code")
    print(" POST /api/bzzz/projects/<id>/create-pr - Submit pull request content")
    print(" POST /api/bzzz/projects/<id>/coordination-discussion - Log coordination discussions")
    print(" POST /api/bzzz/projects/<id>/log-prompt - Log agent prompts and model usage")
    print(" POST /api/bzzz/coordination-log - Log coordination activity")
    print("")
    print("Starting background task updates...")
    start_background_task_updates()


scripts/trigger_mock_coordination.sh (new executable file, +118 lines)
@@ -0,0 +1,118 @@
#!/bin/bash

# Script to trigger coordination activity with mock API data
# This simulates task updates to cause real bzzz coordination

MOCK_API="http://localhost:5000"

echo "🎯 Triggering Mock Coordination Test"
echo "===================================="
echo "This will cause real bzzz agents to coordinate on fake tasks"
echo ""

# Function to simulate task claim attempts
simulate_task_claims() {
    echo "📋 Simulating task claim attempts..."

    # Try to claim tasks from different projects
    for project_id in 1 2 3; do
        for task_num in 15 23 8; do
            echo "🎯 Agent attempting to claim project $project_id task $task_num"

            curl -s -X POST "$MOCK_API/api/bzzz/projects/$project_id/claim" \
                -H "Content-Type: application/json" \
                -d "{\"task_number\": $task_num, \"agent_id\": \"test-agent-$project_id\"}" | jq .

            sleep 2
        done
    done
}

# Function to simulate task status updates
simulate_task_updates() {
    echo ""
    echo "📊 Simulating task status updates..."

    # Update task statuses to trigger coordination
    curl -s -X PUT "$MOCK_API/api/bzzz/projects/1/status" \
        -H "Content-Type: application/json" \
        -d '{"task_number": 15, "status": "in_progress", "metadata": {"progress": 25}}' | jq .

    sleep 3

    curl -s -X PUT "$MOCK_API/api/bzzz/projects/2/status" \
        -H "Content-Type: application/json" \
        -d '{"task_number": 23, "status": "completed", "metadata": {"completion_time": "2025-01-14T12:00:00Z"}}' | jq .

    sleep 3

    curl -s -X PUT "$MOCK_API/api/bzzz/projects/3/status" \
        -H "Content-Type: application/json" \
        -d '{"task_number": 8, "status": "escalated", "metadata": {"reason": "dependency_conflict"}}' | jq .
}

# Function to add urgent tasks
add_urgent_tasks() {
    echo ""
    echo "🚨 Adding urgent tasks to trigger immediate coordination..."

    # The mock API has background task generation, but we can trigger it manually
    # by checking repositories multiple times rapidly
    for i in {1..5}; do
        echo "🔄 Repository refresh $i/5"
        curl -s "$MOCK_API/api/bzzz/active-repos" > /dev/null
        curl -s "$MOCK_API/api/bzzz/projects/1/tasks" > /dev/null
        curl -s "$MOCK_API/api/bzzz/projects/2/tasks" > /dev/null
        sleep 1
    done
}

# Function to check bzzz response
check_bzzz_activity() {
    echo ""
    echo "📡 Checking recent bzzz activity..."

    # Check last 30 seconds of bzzz logs for API calls
    echo "Recent bzzz log entries:"
    journalctl -u bzzz.service --since "30 seconds ago" -n 10 | grep -E "(API|repository|task|coordination)" || echo "No recent coordination activity"
}

# Main execution
main() {
    echo "🔍 Testing mock API connectivity..."
    curl -s "$MOCK_API/health" | jq .

    echo ""
    echo "📋 Current active repositories:"
    curl -s "$MOCK_API/api/bzzz/active-repos" | jq '.repositories[].name'

    echo ""
    echo "🎯 Phase 1: Task Claims"
    simulate_task_claims

    echo ""
    echo "📊 Phase 2: Status Updates"
    simulate_task_updates

    echo ""
    echo "🚨 Phase 3: Urgent Tasks"
    add_urgent_tasks

    echo ""
    echo "📡 Phase 4: Check Results"
    check_bzzz_activity

    echo ""
    echo "✅ Mock coordination test complete!"
    echo ""
    echo "🎯 Watch your monitoring dashboard for:"
    echo " - Task claim attempts"
    echo " - Status update processing"
    echo " - Coordination session activity"
    echo " - Agent availability changes"
    echo ""
    echo "📝 Check mock API server output for request logs"
}

# Run the test
main