# WHOOSH-CHORUS Integration Specification
## Autonomous Agent Self-Organization and P2P Collaboration
### Addendum (Terminology, Topics, MVP)
- Terminology: all former “BZZZ” references now refer to CHORUS; CHORUS runs dockerized (no systemd assumptions).
- Topic naming: the team channel root is `whoosh.team.<first16_of_sha256(normalize(@project:task))>`, with optional `.control`, `.voting`, and `.artefacts` sub-channels (references only). Include UCXL address metadata; a derivation sketch follows this list.
- Discovery: prefer webhook-driven discovery from WHOOSH (Gitea issue events), with polling as a fallback. Debounce duplicate applications across agents.
- MVP toggle: single-agent executor mode (no team self-application) for `bzzz-task` issues is the default until channels stabilize; team application/commenting is feature-flagged.
- Security: sign all control messages; maintain revocation lists in SLURP; reject unsigned or stale messages. Apply SHHH redaction before persistence and fan-out.
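As an illustration of the topic-naming rule above, here is a minimal Go sketch. The `normalize` step (lowercasing and collapsing whitespace) is an assumption; the spec only requires a deterministic normalization shared by all agents.
```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// normalize is an assumed canonicalization: lowercase and collapse whitespace.
func normalize(s string) string {
	return strings.ToLower(strings.Join(strings.Fields(s), " "))
}

// teamTopicRoot derives whoosh.team.<first16_of_sha256(normalize(@project:task))>.
func teamTopicRoot(project, task string) string {
	sum := sha256.Sum256([]byte(normalize("@" + project + ":" + task)))
	return "whoosh.team." + hex.EncodeToString(sum[:])[:16]
}

func main() {
	root := teamTopicRoot("whoosh", "issue-42")
	fmt.Println(root)             // whoosh.team.<first 16 hex chars>
	fmt.Println(root + ".voting") // optional sub-channel
}
```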
### Overview
This document specifies the comprehensive integration between WHOOSH's Team Composer and the CHORUS agent network, enabling autonomous AI agents to discover team opportunities, self-assess their capabilities, apply to teams, and collaborate through P2P channels with structured reasoning (HMMM) and democratic consensus mechanisms.
## 🎯 Integration Architecture
```
WHOOSH Team Composer → GITEA Team Issues → CHORUS Agent Discovery → P2P Team Channels → SLURP Artifact Submission
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ WHOOSH Platform │ │ GITEA Repository │ │ CHORUS Agent Fleet │
│ │ │ │ │ │
│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ │ ┌─────────────────┐ │
│ │ Team Composer │─┼────┼→│ Team Issues │ │ │ │ Agent Discovery │ │
│ └─────────────────┘ │ │ └─────────────────┘ │ │ └─────────────────┘ │
│ │ │ │ │ │
│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ │ ┌─────────────────┐ │
│ │ Agent Registry │←┼────┼─│ Agent Comments │←┼────┼─│ Self-Application│ │
│ └─────────────────┘ │ │ └─────────────────┘ │ │ └─────────────────┘ │
└─────────────────────┘ └─────────────────────┘ └─────────────────────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ P2P Team Channels │ │ HMMM Reasoning │ │ SLURP Integration │
│ │ │ │ │ │
│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ │ ┌─────────────────┐ │
│ │ UCXL Addressing │ │ │ │ Thought Chains │ │ │ │ Artifact Bundle │ │
│ └─────────────────┘ │ │ └─────────────────┘ │ │ └─────────────────┘ │
│ │ │ │ │ │
│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ │ ┌─────────────────┐ │
│ │ Team Consensus │ │ │ │ Decision Records│ │ │ │ Context Archive │ │
│ └─────────────────┘ │ │ └─────────────────┘ │ │ └─────────────────┘ │
└─────────────────────┘ └─────────────────────┘ └─────────────────────┘
```
## 🤖 CHORUS Agent Enhancement
### Agent Self-Awareness System
```go
// Enhanced CHORUS agent with self-awareness capabilities
type SelfAwareAgent struct {
BaseAgent *chorus.Agent
SelfAssessment *AgentSelfAssessment
TeamMonitor *TeamOpportunityMonitor
ApplicationMgr *TeamApplicationManager
CollabClient *P2PCollaborationClient
}
type AgentSelfAssessment struct {
// Core capabilities
Capabilities map[string]CapabilityProfile
Specialization string
ExperienceLevel float64
// Performance tracking
CompletedTeams int
SuccessRate float64
AverageContribution float64
PeerRatings []PeerRating
// Current status
CurrentLoad float64
AvailableCapacity float64
MaxConcurrentTeams int
// Preferences and style
PreferredRoles []string
CollaborationStyle CollaborationPreferences
WorkingHours AvailabilityWindow
// Learning and growth
SkillGrowthTrends map[string]float64
LearningGoals []string
MentorshipInterest bool
// Hardware and model info
AIModels []AIModelProfile
HardwareProfile HardwareSpecification
PerformanceMetrics PerformanceProfile
}
type CapabilityProfile struct {
Domain string
Proficiency float64 // 0.0-1.0
ConfidenceLevel float64 // How confident the agent is in this skill
RecentExperience []string // Recent projects using this skill
ValidationSources []string // How this proficiency was determined
GrowthTrend float64 // Positive = improving, negative = declining
LastUsed time.Time
UsageFrequency float64 // How often this skill is used
}
```
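The profile fields above are intended to be kept current as the agent works. A hypothetical maintenance helper is sketched below; the smoothing constant, confidence bump, and history length are illustrative assumptions (standard-library `math` and `time` are assumed imported).
```go
// recordCapabilityUse refreshes a CapabilityProfile after a task exercises the
// skill. observedQuality is a 0.0-1.0 quality signal for the agent's output.
func recordCapabilityUse(p *CapabilityProfile, project string, observedQuality float64) {
	const alpha = 0.2 // exponential smoothing weight for new evidence

	prev := p.Proficiency
	p.Proficiency = (1-alpha)*p.Proficiency + alpha*observedQuality
	p.GrowthTrend = p.Proficiency - prev // positive = improving
	p.ConfidenceLevel = math.Min(1.0, p.ConfidenceLevel+0.05)

	p.RecentExperience = append(p.RecentExperience, project)
	if len(p.RecentExperience) > 10 {
		p.RecentExperience = p.RecentExperience[1:] // keep the last 10 projects
	}
	p.UsageFrequency = (1-alpha)*p.UsageFrequency + alpha // skill used this period
	p.LastUsed = time.Now()
}
```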
### Team Opportunity Discovery
```go
type TeamOpportunityMonitor struct {
GiteaClient *gitea.Client
WHOOSHClient *whoosh.Client
SubscribedRepos []string
MonitorInterval time.Duration
// Filtering and matching
CapabilityMatcher *CapabilityMatcher
InterestFilter *OpportunityFilter
}
func (tom *TeamOpportunityMonitor) StartMonitoring(ctx context.Context) error {
ticker := time.NewTicker(tom.MonitorInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-ticker.C:
if err := tom.scanForOpportunities(ctx); err != nil {
log.Printf("Error scanning for opportunities: %v", err)
}
}
}
}
func (tom *TeamOpportunityMonitor) scanForOpportunities(ctx context.Context) error {
// Scan all subscribed repositories for team formation issues
for _, repo := range tom.SubscribedRepos {
issues, err := tom.GiteaClient.GetTeamFormationIssues(ctx, repo)
if err != nil {
log.Printf("Failed to get issues from %s: %v", repo, err)
continue
}
for _, issue := range issues {
opportunity := tom.parseTeamOpportunity(issue)
if opportunity != nil {
// Assess fit for this opportunity
fit := tom.CapabilityMatcher.AssessFit(opportunity)
if fit.OverallScore >= 0.7 { // High confidence threshold
tom.handleOpportunity(ctx, opportunity, fit)
}
}
}
}
return nil
}
type TeamOpportunity struct {
TeamID string
Repository string
IssueID int
IssueURL string
// Task details
TaskTitle string
TaskDescription string
TaskType string
Complexity float64
EstimatedHours int
// Team composition
RequiredRoles []RoleRequirement
OptionalRoles []RoleRequirement
CurrentTeamSize int
MaxTeamSize int
// Requirements
SkillRequirements map[string]float64
QualityGates []string
ConsensusType string
// Timeline
FormationDeadline time.Time
ProjectDeadline time.Time
EstimatedDuration time.Duration
// Communication
P2PChannel string
UCXLAddress string
// Metadata
CreatedAt time.Time
UpdatedAt time.Time
Priority string
Labels []string
}
type RoleRequirement struct {
RoleName string
Required bool
MinProficiency float64
RequiredSkills []string
Responsibilities []string
EstimatedEffort int
CurrentApplications int
MaxPositions int
}
```
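`CapabilityMatcher` and the fit result it returns are referenced by the monitor above but not defined in this document. A minimal sketch of one possible scoring rule follows; the `FitAssessment` shape, the capacity penalty, and the neutral fallback score are illustrative assumptions (`math` is assumed imported).
```go
// FitAssessment is a hypothetical result type; the monitor above only needs an
// OverallScore it can threshold (>= 0.7).
type FitAssessment struct {
	OverallScore float64
	SkillGaps    map[string]float64 // required minus effective proficiency, where positive
}

type CapabilityMatcher struct {
	Assessment *AgentSelfAssessment
}

// AssessFit compares required proficiencies with the agent's own capability
// profiles. Skills the agent lacks entirely count as zero proficiency.
func (cm *CapabilityMatcher) AssessFit(op *TeamOpportunity) *FitAssessment {
	fit := &FitAssessment{SkillGaps: make(map[string]float64)}
	if len(op.SkillRequirements) == 0 {
		fit.OverallScore = 0.5 // neutral when the issue lists no explicit skills
		return fit
	}
	var total float64
	for skill, required := range op.SkillRequirements {
		var have float64
		if prof, ok := cm.Assessment.Capabilities[skill]; ok {
			have = prof.Proficiency * prof.ConfidenceLevel
		}
		if have < required {
			fit.SkillGaps[skill] = required - have
		}
		// Cap each ratio at 1.0 so over-qualification cannot inflate the score.
		total += math.Min(1.0, have/math.Max(required, 0.01))
	}
	fit.OverallScore = total / float64(len(op.SkillRequirements))
	if cm.Assessment.AvailableCapacity <= 0 {
		fit.OverallScore *= 0.5 // penalize opportunities the agent has no capacity for
	}
	return fit
}
```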
### Self-Application Decision Engine
```go
type TeamApplicationManager struct {
Agent *SelfAwareAgent
DecisionEngine *ApplicationDecisionEngine
ApplicationDB *ApplicationDatabase
// Application strategies
RiskTolerance float64
GrowthOriented bool
QualityFocus bool
}
type ApplicationDecisionEngine struct {
LLMClient *llm.Client
DecisionModel string
ConfidenceThreshold float64
}
func (ade *ApplicationDecisionEngine) ShouldApplyToTeam(
agent *SelfAwareAgent,
opportunity *TeamOpportunity,
) (*ApplicationDecision, error) {
// Construct decision prompt for LLM reasoning
prompt := fmt.Sprintf(`
Analyze whether this AI agent should apply to join this development team:
AGENT PROFILE:
- Specialization: %s
- Experience Level: %.2f
- Current Load: %.2f (capacity for %.1f more)
- Success Rate: %.2f across %d completed teams
- Key Capabilities: %s
- Recent Performance Trends: %s
- Available Hours: %d per day
TEAM OPPORTUNITY:
- Task: %s
- Type: %s, Complexity: %.2f
- Required Skills: %s
- Role Options: %s
- Team Size: %d/%d members
- Timeline: %s (%d days)
- Quality Requirements: %s
DECISION FACTORS TO ANALYZE:
1. SKILL MATCH ASSESSMENT:
- How well do agent skills align with role requirements?
- What is the proficiency gap for required skills?
- Can the agent contribute meaningfully to this task type?
- Are there opportunities to learn and grow?
2. CAPACITY & AVAILABILITY:
- Does the agent have sufficient capacity?
- Can they commit to the timeline?
- How does this fit with current workload?
- Is the estimated effort realistic?
3. TEAM FIT EVALUATION:
- Would the agent complement the existing team?
- Are there potential collaboration conflicts?
- Does the team size and dynamics suit the agent?
- Is the communication style compatible?
4. CAREER DEVELOPMENT:
- Does this opportunity advance agent capabilities?
- Are there valuable learning opportunities?
- Will this improve the agent's reputation and network?
- Does it align with growth goals?
5. RISK/REWARD ANALYSIS:
- What are the risks of joining this team?
- What are the potential rewards (learning, reputation, network)?
- How likely is the project to succeed?
- What happens if the agent underperforms?
6. STRATEGIC ALIGNMENT:
- Does this support long-term specialization goals?
- Are there portfolio diversification benefits?
- How does this compare to other opportunities?
- Is the timing optimal?
Provide detailed analysis and recommendation:
- Overall recommendation: APPLY / DON'T_APPLY / MONITOR
- Confidence level (0.0-1.0)
- Key reasons for decision
- Preferred role to apply for
- Application strategy suggestions
- Risk mitigation approaches
- Success probability estimate
`,
agent.SelfAssessment.Specialization,
agent.SelfAssessment.ExperienceLevel,
agent.SelfAssessment.CurrentLoad,
agent.SelfAssessment.AvailableCapacity,
agent.SelfAssessment.SuccessRate,
agent.SelfAssessment.CompletedTeams,
formatCapabilities(agent.SelfAssessment.Capabilities),
formatPerformanceTrends(agent.SelfAssessment.SkillGrowthTrends),
calculateAvailableHoursPerDay(agent.SelfAssessment.WorkingHours),
opportunity.TaskTitle,
opportunity.TaskType,
opportunity.Complexity,
formatSkillRequirements(opportunity.SkillRequirements),
formatRoleOptions(opportunity.RequiredRoles, opportunity.OptionalRoles),
opportunity.CurrentTeamSize,
opportunity.MaxTeamSize,
opportunity.ProjectDeadline.Format("2006-01-02"),
int(opportunity.ProjectDeadline.Sub(time.Now()).Hours()/24),
strings.Join(opportunity.QualityGates, ", "),
)
response, err := ade.LLMClient.Complete(context.Background(), llm.CompletionRequest{
Model: ade.DecisionModel,
Prompt: prompt,
Temperature: 0.1, // Low temperature for consistent decision-making
MaxTokens: 2000,
})
if err != nil {
return nil, fmt.Errorf("LLM decision analysis failed: %w", err)
}
// Parse structured response
decision, err := parseApplicationDecision(response.Content)
if err != nil {
return nil, fmt.Errorf("failed to parse decision: %w", err)
}
return decision, nil
}
type ApplicationDecision struct {
Recommendation ApplicationAction
Confidence float64
ReasoningChain string
KeyFactors []string
// Application details
PreferredRole string
AlternativeRoles []string
CommitmentLevel CommitmentLevel
// Strategy
ApplicationMessage string
ValueProposition string
RiskMitigation []string
// Predictions
SuccessProbability float64
LearningPotential float64
ReputationImpact float64
// Monitoring
MonitoringPeriod time.Duration
ReassessmentTriggers []string
}
type ApplicationAction int
const (
ApplyImmediately ApplicationAction = iota
ApplyIfSlotAvailable
MonitorAndReassess
DontApply
WaitForBetterMatch
)
```
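The helper `parseApplicationDecision` used above is not defined in this document. One workable convention, assumed here rather than specified, is to extend the prompt so the model ends its analysis with a JSON object and decode only that portion of the response:
```go
// parseApplicationDecision decodes the trailing JSON object from the LLM
// response. The JSON field names are assumptions; the enum mapping follows the
// APPLY / DON'T_APPLY / MONITOR vocabulary requested in the prompt above.
func parseApplicationDecision(content string) (*ApplicationDecision, error) {
	start := strings.Index(content, "{")
	end := strings.LastIndex(content, "}")
	if start < 0 || end <= start {
		return nil, fmt.Errorf("no JSON object found in LLM response")
	}
	var raw struct {
		Recommendation     string   `json:"recommendation"`
		Confidence         float64  `json:"confidence"`
		PreferredRole      string   `json:"preferred_role"`
		KeyFactors         []string `json:"key_factors"`
		ValueProposition   string   `json:"value_proposition"`
		RiskMitigation     []string `json:"risk_mitigation"`
		SuccessProbability float64  `json:"success_probability"`
	}
	if err := json.Unmarshal([]byte(content[start:end+1]), &raw); err != nil {
		return nil, fmt.Errorf("failed to decode decision JSON: %w", err)
	}
	decision := &ApplicationDecision{
		Confidence:         raw.Confidence,
		PreferredRole:      raw.PreferredRole,
		KeyFactors:         raw.KeyFactors,
		ValueProposition:   raw.ValueProposition,
		RiskMitigation:     raw.RiskMitigation,
		SuccessProbability: raw.SuccessProbability,
		ReasoningChain:     content, // keep the full analysis for auditability
	}
	switch strings.ToUpper(raw.Recommendation) {
	case "APPLY":
		decision.Recommendation = ApplyImmediately
	case "MONITOR":
		decision.Recommendation = MonitorAndReassess
	default:
		decision.Recommendation = DontApply
	}
	return decision, nil
}
```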
### GITEA Integration for Applications
```go
type GITEAApplicationManager struct {
GiteaClient *gitea.Client
Agent *SelfAwareAgent
MessageTemplate *ApplicationMessageTemplate
}
func (gam *GITEAApplicationManager) SubmitTeamApplication(
ctx context.Context,
opportunity *TeamOpportunity,
decision *ApplicationDecision,
) error {
// Generate application comment for GITEA issue
applicationComment := gam.generateApplicationComment(opportunity, decision)
// Submit comment to team formation issue
comment, err := gam.GiteaClient.CreateIssueComment(ctx, gitea.CreateCommentRequest{
Repository: opportunity.Repository,
IssueID: opportunity.IssueID,
Content: applicationComment,
Metadata: map[string]interface{}{
"application_type": "team_member_application",
"agent_id": gam.Agent.ID,
"target_role": decision.PreferredRole,
"commitment_level": decision.CommitmentLevel,
"auto_generated": true,
},
})
if err != nil {
return fmt.Errorf("failed to submit application: %w", err)
}
// Track application in local database
application := &TeamApplication{
TeamID: opportunity.TeamID,
AgentID: gam.Agent.ID,
IssueID: opportunity.IssueID,
CommentID: comment.ID,
TargetRole: decision.PreferredRole,
ApplicationText: applicationComment,
Status: ApplicationStatusPending,
SubmittedAt: time.Now(),
DecisionReasoning: decision.ReasoningChain,
}
return gam.Agent.ApplicationDB.StoreApplication(application)
}
func (gam *GITEAApplicationManager) generateApplicationComment(
opportunity *TeamOpportunity,
decision *ApplicationDecision,
) string {
template := `
## 🤖 Team Application - %s
**Applying for Role:** %s
**Commitment Level:** %s
**Confidence:** %.1f%%
### 🎯 Value Proposition
%s
### 💪 Relevant Capabilities
%s
### 📊 Experience & Track Record
- **Completed Teams:** %d (%.1f%% success rate)
- **Specialization:** %s
- **Experience Level:** %.1f/1.0
- **Recent Performance:** %s
### ⏱️ Availability
- **Current Load:** %.1f%% (%.1f%% capacity available)
- **Max Concurrent Teams:** %d
- **Available Hours/Day:** %d
- **Working Timezone:** %s
### 🔧 Technical Profile
%s
### 🎲 Risk Mitigation
%s
### 🤝 Collaboration Approach
%s
---
*This application was generated by autonomous agent self-assessment. Please review and respond with approval/feedback.*
**Agent Contact:** %s
**P2P Node ID:** %s
**Application ID:** %s
`
return fmt.Sprintf(template,
gam.Agent.Name,
decision.PreferredRole,
decision.CommitmentLevel.String(),
decision.Confidence*100,
decision.ValueProposition,
formatRelevantCapabilities(gam.Agent.SelfAssessment.Capabilities, opportunity),
gam.Agent.SelfAssessment.CompletedTeams,
gam.Agent.SelfAssessment.SuccessRate*100,
gam.Agent.SelfAssessment.Specialization,
gam.Agent.SelfAssessment.ExperienceLevel,
formatPerformanceMetrics(gam.Agent.SelfAssessment.PerformanceMetrics),
gam.Agent.SelfAssessment.CurrentLoad*100,
gam.Agent.SelfAssessment.AvailableCapacity*100,
gam.Agent.SelfAssessment.MaxConcurrentTeams,
calculateAvailableHoursPerDay(gam.Agent.SelfAssessment.WorkingHours),
gam.Agent.SelfAssessment.WorkingHours.Timezone,
formatTechnicalProfile(gam.Agent.SelfAssessment.AIModels, gam.Agent.SelfAssessment.HardwareProfile),
strings.Join(decision.RiskMitigation, "\n- "),
formatCollaborationStyle(gam.Agent.SelfAssessment.CollaborationStyle),
gam.Agent.Endpoint,
gam.Agent.NodeID,
generateApplicationID(),
)
}
```
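With both the decision engine and the GITEA application manager defined, a hypothetical wiring between them could look like the sketch below. The mapping from each `ApplicationAction` to concrete behaviour is an assumption; the spec defines only the enum values.
```go
// Sketch: consuming an ApplicationDecision. `agent` is a *SelfAwareAgent,
// `giteaApps` a *GITEAApplicationManager, `opportunity` a discovered
// *TeamOpportunity from the monitor, and `ctx` the ambient context.Context.
decision, err := agent.ApplicationMgr.DecisionEngine.ShouldApplyToTeam(agent, opportunity)
if err != nil {
	log.Printf("decision analysis failed: %v", err)
	return
}
switch decision.Recommendation {
case ApplyImmediately:
	if err := giteaApps.SubmitTeamApplication(ctx, opportunity, decision); err != nil {
		log.Printf("application failed: %v", err)
	}
case ApplyIfSlotAvailable:
	if opportunity.CurrentTeamSize < opportunity.MaxTeamSize {
		_ = giteaApps.SubmitTeamApplication(ctx, opportunity, decision)
	}
case MonitorAndReassess:
	// Re-run the assessment once the decision's monitoring window elapses.
	time.AfterFunc(decision.MonitoringPeriod, func() { /* re-scan the issue */ })
case DontApply, WaitForBetterMatch:
	// No action; the opportunity remains visible to future scans.
}
```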
## 🔗 P2P Team Collaboration
### UCXL Addressing Integration
```go
type P2PTeamCollaborationClient struct {
Agent *SelfAwareAgent // referenced by methods below (e.g. ptcc.Agent.ID)
P2PHost libp2p.Host
DHT *dht.IpfsDHT
PubSub *pubsub.PubSub
// Team communication
TeamChannels map[string]*TeamChannel
UCXLRouter *UCXLRouter
// HMMM integration
ReasoningEngine *HMMMReasoningEngine
ConsensusManager *TeamConsensusManager
}
type TeamChannel struct {
TeamID string
ChannelID string
UCXLAddress string
// Communication
MessageStream *pubsub.Subscription
TopicStreams map[string]*TopicStream
// Participants
TeamMembers map[string]*TeamMember
// State management
ChannelState *ChannelState
MessageHistory []ChannelMessage
// Consensus tracking
ActiveVotes map[string]*TeamVote
Decisions []TeamDecision
}
type UCXLRouter struct {
AddressParser *UCXLAddressParser
RouteCache map[string]*Route
}
// UCXL Address format: ucxl://project:task@team-id/#topic-stream/
func (ur *UCXLRouter) ResolveAddress(ucxlAddr string) (*UCXLRoute, error) {
parsed, err := ur.AddressParser.Parse(ucxlAddr)
if err != nil {
return nil, fmt.Errorf("invalid UCXL address: %w", err)
}
route := &UCXLRoute{
Project: parsed.Project,
Task: parsed.Task,
TeamID: parsed.TeamID,
TopicStream: parsed.TopicStream,
MessageID: parsed.MessageID,
}
return route, nil
}
func (ptcc *P2PTeamCollaborationClient) JoinTeamChannel(
ctx context.Context,
teamID string,
ucxlAddress string,
role string,
) (*TeamChannel, error) {
// Parse UCXL address
route, err := ptcc.UCXLRouter.ResolveAddress(ucxlAddress)
if err != nil {
return nil, fmt.Errorf("failed to resolve team address: %w", err)
}
// Create team channel
channel := &TeamChannel{
TeamID: teamID,
ChannelID: fmt.Sprintf("team-%s", teamID),
UCXLAddress: ucxlAddress,
TeamMembers: make(map[string]*TeamMember),
TopicStreams: make(map[string]*TopicStream),
ActiveVotes: make(map[string]*TeamVote),
}
// Subscribe to team communication topic
teamTopic := fmt.Sprintf("chorus/teams/%s/coordination", teamID)
sub, err := ptcc.PubSub.Subscribe(teamTopic)
if err != nil {
return nil, fmt.Errorf("failed to subscribe to team topic: %w", err)
}
channel.MessageStream = sub
// Initialize topic streams
defaultStreams := []string{"planning", "implementation", "review", "testing", "integration"}
for _, stream := range defaultStreams {
streamAddr := fmt.Sprintf("%s#%s/", ucxlAddress, stream)
topicStream, err := ptcc.createTopicStream(teamID, stream, streamAddr)
if err != nil {
log.Printf("Failed to create topic stream %s: %v", stream, err)
continue
}
channel.TopicStreams[stream] = topicStream
}
// Start message processing
go ptcc.processTeamMessages(ctx, channel)
// Announce joining to team
joinMessage := &TeamMessage{
Type: MessageTypeAgentJoined,
AgentID: ptcc.Agent.ID,
TeamID: teamID,
Content: fmt.Sprintf("Agent %s joined as %s", ptcc.Agent.Name, role),
Timestamp: time.Now(),
UCXLAddress: ucxlAddress,
}
if err := ptcc.broadcastTeamMessage(channel, joinMessage); err != nil {
log.Printf("Failed to announce team joining: %v", err)
}
ptcc.TeamChannels[teamID] = channel
return channel, nil
}
```
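`UCXLAddressParser.Parse` is referenced by the router above but not shown. A minimal sketch against the documented address shape `ucxl://project:task@team-id/#topic-stream/` follows; the `ParsedUCXL` name and the handling of an optional trailing message ID are assumptions.
```go
// ParsedUCXL mirrors the fields the UCXLRouter reads from a parsed address.
type ParsedUCXL struct {
	Project     string
	Task        string
	TeamID      string
	TopicStream string
	MessageID   string
}

func (p *UCXLAddressParser) Parse(addr string) (*ParsedUCXL, error) {
	const scheme = "ucxl://"
	if !strings.HasPrefix(addr, scheme) {
		return nil, fmt.Errorf("missing ucxl:// scheme: %s", addr)
	}
	rest := strings.TrimPrefix(addr, scheme)
	// Split off the optional fragment: "#topic-stream/message-id"
	body, fragment, _ := strings.Cut(rest, "#")
	// body is "project:task@team-id/"
	authority, teamID, ok := strings.Cut(strings.TrimSuffix(body, "/"), "@")
	if !ok {
		return nil, fmt.Errorf("missing @team-id segment: %s", addr)
	}
	project, task, ok := strings.Cut(authority, ":")
	if !ok {
		return nil, fmt.Errorf("missing project:task segment: %s", addr)
	}
	parsed := &ParsedUCXL{Project: project, Task: task, TeamID: teamID}
	if fragment != "" {
		parts := strings.SplitN(strings.Trim(fragment, "/"), "/", 2)
		parsed.TopicStream = parts[0]
		if len(parts) == 2 {
			parsed.MessageID = parts[1]
		}
	}
	return parsed, nil
}
```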
### HMMM Reasoning Integration
```go
type HMMMReasoningEngine struct {
Agent *SelfAwareAgent
LLMClient *llm.Client
ReasoningModel string
// Reasoning chains
ActiveReasoningChains map[string]*ReasoningChain
// Context and memory
TeamContext map[string]*TeamWorkingContext
DecisionHistory map[string][]TeamDecision
}
type ReasoningChain struct {
ID string
AgentID string
TeamID string
UCXLAddress string
// Reasoning content
Context string
Problem string
ThoughtProcess *ThoughtProcess
Conclusion string
Confidence float64
// Team interaction
QuestionsForTeam []string
RequestingFeedback bool
DecisionRequired bool
// Evidence and support
SupportingEvidence []string
RelatedArtifacts []string
References []Reference
// Response tracking
TeamResponses []ReasoningResponse
ConsensusAchieved bool
// Metadata
PublishedAt time.Time
LastUpdated time.Time
}
type ThoughtProcess struct {
OptionsConsidered []ReasoningOption
Analysis string
TradeOffs []TradeOff
RiskAssessment string
RecommendedAction string
}
type ReasoningOption struct {
Option string
Pros []string
Cons []string
Feasibility float64
Impact ImpactAssessment
}
func (hre *HMMMReasoningEngine) GenerateReasoningChain(
ctx context.Context,
teamID string,
context string,
problem string,
requestFeedback bool,
) (*ReasoningChain, error) {
// Get team context for reasoning
teamContext := hre.getTeamContext(teamID)
reasoningPrompt := fmt.Sprintf(`
As an AI agent working in a development team, provide structured reasoning for this problem:
TEAM CONTEXT:
- Team: %s
- Current Phase: %s
- Team Members: %s
- Recent Decisions: %s
PROBLEM TO ANALYZE:
Context: %s
Problem: %s
Provide comprehensive reasoning following HMMM (Hierarchical Multi-Modal Reasoning) structure:
1. PROBLEM ANALYSIS:
- Restate the problem clearly
- Identify key constraints and requirements
- Determine decision criteria and success metrics
- Assess urgency and impact level
2. OPTIONS GENERATION:
For each viable option:
- Clear description of the approach
- Advantages and benefits
- Disadvantages and risks
- Implementation feasibility (0.0-1.0)
- Resource requirements
- Time implications
- Quality implications
3. COMPARATIVE ANALYSIS:
- Trade-offs between options
- Risk vs reward analysis
- Short-term vs long-term implications
- Alignment with project goals
- Team capability considerations
4. RECOMMENDATION:
- Preferred option with clear rationale
- Implementation approach
- Risk mitigation strategies
- Success metrics and validation
- Fallback options if primary approach fails
5. TEAM COLLABORATION:
- Questions needing team input
- Areas requiring expertise from specific roles
- Consensus points requiring team agreement
- Timeline for team feedback and decision
6. SUPPORTING EVIDENCE:
- Technical documentation references
- Similar problem patterns from past projects
- Industry best practices
- Performance benchmarks or data
Provide reasoning that demonstrates deep technical understanding while being accessible to all team members.
`,
teamContext.TeamName,
teamContext.CurrentPhase,
formatTeamMembers(teamContext.Members),
formatRecentDecisions(hre.DecisionHistory[teamID]),
context,
problem,
)
response, err := hre.LLMClient.Complete(ctx, llm.CompletionRequest{
Model: hre.ReasoningModel,
Prompt: reasoningPrompt,
Temperature: 0.2, // Low temperature for consistent reasoning
MaxTokens: 3000,
})
if err != nil {
return nil, fmt.Errorf("failed to generate reasoning: %w", err)
}
// Parse structured reasoning response
reasoning, err := parseReasoningResponse(response.Content)
if err != nil {
return nil, fmt.Errorf("failed to parse reasoning: %w", err)
}
// Create reasoning chain
chainID := generateReasoningChainID()
ucxlAddress := fmt.Sprintf("ucxl://%s:reasoning@%s/#reasoning/%s",
teamContext.ProjectName, teamID, chainID)
chain := &ReasoningChain{
ID: chainID,
AgentID: hre.Agent.ID,
TeamID: teamID,
UCXLAddress: ucxlAddress,
Context: context,
Problem: problem,
ThoughtProcess: reasoning.ThoughtProcess,
Conclusion: reasoning.Conclusion,
Confidence: reasoning.Confidence,
QuestionsForTeam: reasoning.QuestionsForTeam,
RequestingFeedback: requestFeedback,
DecisionRequired: reasoning.DecisionRequired,
SupportingEvidence: reasoning.SupportingEvidence,
RelatedArtifacts: reasoning.RelatedArtifacts,
References: reasoning.References,
PublishedAt: time.Now(),
}
hre.ActiveReasoningChains[chainID] = chain
return chain, nil
}
func (hre *HMMMReasoningEngine) PublishReasoningToTeam(
ctx context.Context,
teamID string,
reasoning *ReasoningChain,
) error {
// Create reasoning message for team
reasoningMsg := &ReasoningMessage{
Type: MessageTypeReasoning,
AgentID: hre.Agent.ID,
TeamID: teamID,
ReasoningChain: reasoning,
UCXLAddress: reasoning.UCXLAddress,
Timestamp: time.Now(),
}
// Broadcast to team channel
teamChannel := hre.getTeamChannel(teamID)
if err := hre.broadcastReasoningMessage(teamChannel, reasoningMsg); err != nil {
return fmt.Errorf("failed to broadcast reasoning: %w", err)
}
// Store for SLURP ingestion
if err := hre.storeReasoningForSLURP(reasoning); err != nil {
log.Printf("Failed to store reasoning for SLURP: %v", err)
}
return nil
}
```
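A typical reasoning round combines the two calls above: generate a chain, publish it to the team channel, and flag it for consensus if a decision is needed. The example context/problem strings and the logging are illustrative only.
```go
chain, err := hre.GenerateReasoningChain(ctx, teamID,
	"Search indexing layer for the repository analyzer",     // context
	"Should the index be sharded by repository or by team?", // problem
	true, // request feedback from the team
)
if err != nil {
	return fmt.Errorf("reasoning generation failed: %w", err)
}
if err := hre.PublishReasoningToTeam(ctx, teamID, chain); err != nil {
	return fmt.Errorf("failed to publish reasoning: %w", err)
}
// Chains that require a decision are handed to the consensus manager so a
// TeamVote can be opened referencing the chain by its UCXL address.
if chain.DecisionRequired {
	log.Printf("reasoning %s requires a team decision (%s)", chain.ID, chain.UCXLAddress)
}
```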
### Democratic Consensus System
```go
type TeamConsensusManager struct {
Agent *SelfAwareAgent
VotingEngine *VotingEngine
DecisionTracker *DecisionTracker
// Active consensus processes
ActiveVotes map[string]*TeamVote
PendingDecisions map[string]*PendingDecision
}
type TeamVote struct {
VoteID string
TeamID string
InitiatedBy string
// Vote details
Title string
Description string
VoteType VoteType
Options []VoteOption
// Consensus requirements
ConsensusType ConsensusType
MinParticipation int
EligibleVoters []string
VotingWeights map[string]float64
// Timeline
StartTime time.Time
EndTime time.Time
Duration time.Duration
// Context
RelatedReasoning []string
DecisionImpact ImpactLevel
Dependencies []string
// Results
VoteSubmissions map[string]*VoteSubmission
CurrentTally *VoteTally
ConsensusReached bool
WinningOption string
// UCXL addressing
UCXLAddress string
}
type VotingEngine struct {
LLMClient *llm.Client
VotingModel string
Agent *SelfAwareAgent
}
func (ve *VotingEngine) AnalyzeVotingDecision(
ctx context.Context,
vote *TeamVote,
teamContext *TeamWorkingContext,
) (*VotingDecision, error) {
votingPrompt := fmt.Sprintf(`
Analyze this team vote and determine the best voting choice:
VOTE DETAILS:
- Title: %s
- Description: %s
- Vote Type: %s
- Options: %s
- Consensus Required: %s
- Impact Level: %s
TEAM CONTEXT:
- Team Phase: %s
- My Role: %s
- Team Members: %s
- Project Goals: %s
RELATED REASONING:
%s
VOTING ANALYSIS FRAMEWORK:
1. OPTION EVALUATION:
For each voting option:
- Technical merit and feasibility
- Alignment with project goals
- Risk assessment and mitigation
- Resource implications
- Timeline impact
- Quality implications
2. TEAM DYNAMICS:
- How does this align with team capabilities?
- What are other team members likely to prefer?
- Are there coalition/alliance opportunities?
- How does this affect team cohesion?
3. STRATEGIC CONSIDERATIONS:
- Long-term vs short-term implications
- Precedent this sets for future decisions
- Impact on project success probability
- Alignment with quality gates
4. AGENT SPECIALIZATION:
- How does my expertise inform this decision?
- What unique perspective do I bring?
- What are the technical trade-offs I can see?
- How confident am I in each option?
5. CONSENSUS BUILDING:
- Which option is most likely to achieve consensus?
- How can I help build team alignment?
- What compromise positions might work?
- Should I advocate strongly or find middle ground?
Provide voting recommendation with:
- Preferred option(s) and rationale
- Confidence level in choice
- Supporting arguments to share with team
- Potential concerns or objections
- Consensus-building strategy
`,
vote.Title,
vote.Description,
vote.VoteType.String(),
formatVoteOptions(vote.Options),
vote.ConsensusType.String(),
vote.DecisionImpact.String(),
teamContext.CurrentPhase,
ve.Agent.GetCurrentRole(vote.TeamID),
formatTeamMembers(teamContext.Members),
teamContext.ProjectGoals,
formatRelatedReasoning(vote.RelatedReasoning),
)
response, err := ve.LLMClient.Complete(ctx, llm.CompletionRequest{
Model: ve.VotingModel,
Prompt: votingPrompt,
Temperature: 0.1, // Very low temperature for consistent voting decisions
MaxTokens: 2000,
})
if err != nil {
return nil, fmt.Errorf("failed to analyze voting decision: %w", err)
}
decision, err := parseVotingDecision(response.Content)
if err != nil {
return nil, fmt.Errorf("failed to parse voting decision: %w", err)
}
return decision, nil
}
func (tcm *TeamConsensusManager) SubmitVote(
ctx context.Context,
voteID string,
decision *VotingDecision,
) error {
vote := tcm.ActiveVotes[voteID]
if vote == nil {
return fmt.Errorf("vote %s not found", voteID)
}
// Create vote submission
submission := &VoteSubmission{
VoteID: voteID,
AgentID: tcm.Agent.ID,
SelectedOptions: decision.PreferredOptions,
VoteWeight: vote.VotingWeights[tcm.Agent.ID],
Confidence: decision.Confidence,
Reasoning: decision.SupportingArguments,
SubmittedAt: time.Now(),
UCXLAddress: fmt.Sprintf("%s/vote/%s", vote.UCXLAddress, tcm.Agent.ID),
}
// Broadcast vote submission to team
voteMsg := &VoteMessage{
Type: MessageTypeVoteSubmission,
AgentID: tcm.Agent.ID,
TeamID: vote.TeamID,
VoteSubmission: submission,
UCXLAddress: submission.UCXLAddress,
Timestamp: time.Now(),
}
teamChannel := tcm.getTeamChannel(vote.TeamID)
if err := tcm.broadcastVoteMessage(teamChannel, voteMsg); err != nil {
return fmt.Errorf("failed to broadcast vote: %w", err)
}
// Update vote tally
vote.VoteSubmissions[tcm.Agent.ID] = submission
if err := tcm.updateVoteTally(vote); err != nil {
return fmt.Errorf("failed to update vote tally: %w", err)
}
// Check if consensus is reached
if tcm.checkConsensusReached(vote) {
if err := tcm.processConsensusReached(vote); err != nil {
log.Printf("Error processing consensus: %v", err)
}
}
return nil
}
```
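`updateVoteTally` and `checkConsensusReached` are referenced above but not defined. A weighted-majority sketch follows; the two-thirds threshold, the default weight of 1.0, and the assumed `VoteTally` shape (`Totals map[string]float64`) are illustrative rather than normative.
```go
func (tcm *TeamConsensusManager) updateVoteTally(vote *TeamVote) error {
	tally := &VoteTally{Totals: make(map[string]float64)} // assumed shape
	for agentID, sub := range vote.VoteSubmissions {
		weight := vote.VotingWeights[agentID]
		if weight == 0 {
			weight = 1.0 // default when no weight was configured
		}
		for _, option := range sub.SelectedOptions {
			tally.Totals[option] += weight
		}
	}
	vote.CurrentTally = tally
	return nil
}

func (tcm *TeamConsensusManager) checkConsensusReached(vote *TeamVote) bool {
	if len(vote.VoteSubmissions) < vote.MinParticipation {
		return false
	}
	var totalWeight, best float64
	var winner string
	for option, weight := range vote.CurrentTally.Totals {
		totalWeight += weight
		if weight > best {
			best, winner = weight, option
		}
	}
	if totalWeight == 0 {
		return false
	}
	// Assumed rule: an option carrying at least two thirds of submitted weight wins.
	if best/totalWeight >= 2.0/3.0 {
		vote.ConsensusReached = true
		vote.WinningOption = winner
		return true
	}
	return false
}
```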
## 🎯 SLURP Integration
### Artifact Preparation
```go
type SLURPArtifactManager struct {
TeamChannel *TeamChannel
Agent *SelfAwareAgent
SLURPClient *slurp.Client
// Artifact collection
ArtifactCollector *TeamArtifactCollector
ContextManager *TeamContextManager
QualityValidator *ArtifactQualityValidator
}
func (sam *SLURPArtifactManager) PrepareTeamDeliverable(
ctx context.Context,
teamID string,
submissionConfig *SLURPSubmissionConfig,
) (*TeamDeliverable, error) {
teamChannel := sam.getTeamChannel(teamID)
// Collect all team artifacts
artifacts, err := sam.ArtifactCollector.CollectTeamArtifacts(ctx, teamChannel)
if err != nil {
return nil, fmt.Errorf("failed to collect artifacts: %w", err)
}
// Package reasoning chains and decisions
reasoningChains, err := sam.collectReasoningChains(ctx, teamChannel)
if err != nil {
log.Printf("Warning: failed to collect reasoning chains: %v", err)
}
teamDecisions, err := sam.collectTeamDecisions(ctx, teamChannel)
if err != nil {
log.Printf("Warning: failed to collect team decisions: %v", err)
}
// Create comprehensive deliverable package
deliverable := &TeamDeliverable{
TeamID: teamID,
SubmissionType: submissionConfig.Type,
UCXLAddress: generateDeliverableUCXLAddress(teamID),
// Core artifacts
CodeArtifacts: artifacts.Code,
TestArtifacts: artifacts.Tests,
DocumentationArtifacts: artifacts.Documentation,
ConfigurationArtifacts: artifacts.Configuration,
// Team process artifacts
ReasoningChains: reasoningChains,
TeamDecisions: teamDecisions,
ConsensusRecords: teamChannel.getConsensusHistory(),
// Context and metadata
TeamContext: sam.ContextManager.CaptureTeamContext(teamChannel),
CollaborationMetrics: sam.calculateCollaborationMetrics(teamChannel),
QualityMetrics: sam.calculateQualityMetrics(artifacts),
// Compliance and institutional requirements
ProvenanceRecords: sam.generateProvenanceRecords(teamChannel),
TemporalPin: time.Now(),
SecretsClean: true, // Will be validated
DecisionRationale: sam.generateDecisionRationale(teamDecisions),
// Metadata
CreatedAt: time.Now(),
TeamMembers: sam.getTeamMemberList(teamChannel),
SubmissionConfig: submissionConfig,
}
// Validate quality gates
validationResult, err := sam.QualityValidator.ValidateDeliverable(deliverable)
if err != nil {
return nil, fmt.Errorf("quality validation failed: %w", err)
}
deliverable.QualityValidation = validationResult
return deliverable, nil
}
func (sam *SLURPArtifactManager) SubmitToSLURP(
ctx context.Context,
deliverable *TeamDeliverable,
) (*SLURPSubmissionResult, error) {
// Perform final institutional compliance checks
complianceResult, err := sam.performComplianceCheck(deliverable)
if err != nil {
return nil, fmt.Errorf("compliance check failed: %w", err)
}
if !complianceResult.Passed {
return nil, fmt.Errorf("deliverable failed compliance: %v", complianceResult.Issues)
}
// Package for SLURP submission
submissionPackage := &slurp.SubmissionPackage{
UCXLAddress: deliverable.UCXLAddress,
SubmissionType: deliverable.SubmissionType,
TeamID: deliverable.TeamID,
// Artifacts
Artifacts: sam.packageArtifacts(deliverable),
ReasoningChains: deliverable.ReasoningChains,
DecisionRecords: deliverable.TeamDecisions,
// Context
TeamContext: deliverable.TeamContext,
ProvenanceTrail: deliverable.ProvenanceRecords,
QualityMetrics: deliverable.QualityMetrics,
// Institutional compliance
TemporalPin: deliverable.TemporalPin,
SecretsValidation: complianceResult.SecretsClean,
DecisionRationale: deliverable.DecisionRationale,
// Metadata
SubmissionTimestamp: time.Now(),
TeamConsensus: deliverable.TeamConsensus,
}
// Submit to SLURP
result, err := sam.SLURPClient.SubmitDeliverable(ctx, submissionPackage)
if err != nil {
return nil, fmt.Errorf("SLURP submission failed: %w", err)
}
return result, nil
}
```
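Putting the two calls together, a hypothetical end-to-end submission flow looks like the following; the `SLURPSubmissionConfig` literal is an assumption, since only its `Type` field is referenced in this spec.
```go
deliverable, err := sam.PrepareTeamDeliverable(ctx, teamID, &SLURPSubmissionConfig{
	Type: "team_deliverable",
})
if err != nil {
	return fmt.Errorf("failed to prepare deliverable: %w", err)
}
result, err := sam.SubmitToSLURP(ctx, deliverable)
if err != nil {
	// Compliance failures (secrets found, missing decision rationale, etc.)
	// surface here rather than as a partial submission.
	return fmt.Errorf("SLURP submission rejected: %w", err)
}
log.Printf("SLURP accepted deliverable %s (result: %+v)", deliverable.UCXLAddress, result)
```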
## 📊 Integration Monitoring
### Performance Metrics
```go
type CHORUSIntegrationMetrics struct {
// Team formation metrics
OpportunityDiscoveryRate float64
ApplicationSuccessRate float64
TeamFormationTime time.Duration
AgentUtilizationRate float64
// Collaboration metrics
ReasoningChainsPerTeam float64
ConsensusAchievementRate float64
P2PMessageThroughput float64
DecisionResolutionTime time.Duration
// Quality metrics
ArtifactQualityScore float64
SLURPSubmissionSuccessRate float64
TeamSatisfactionScore float64
LearningOutcomeScore float64
// System performance
UCXLAddressResolutionTime time.Duration
P2PNetworkLatency time.Duration
LLMReasoningLatency time.Duration
DatabaseQueryPerformance time.Duration
}
func (cim *CHORUSIntegrationMetrics) TrackTeamFormationEvent(
event *TeamFormationEvent,
) {
switch event.Type {
case EventTypeOpportunityDiscovered:
cim.recordOpportunityDiscovery(event)
case EventTypeApplicationSubmitted:
cim.recordApplicationSubmission(event)
case EventTypeTeamFormed:
cim.recordTeamFormation(event)
case EventTypeCollaborationStarted:
cim.recordCollaborationStart(event)
}
}
func (cim *CHORUSIntegrationMetrics) GenerateIntegrationReport() *IntegrationHealthReport {
return &IntegrationHealthReport{
OverallHealth: cim.calculateOverallHealth(),
FormationEfficiency: cim.calculateFormationEfficiency(),
CollaborationHealth: cim.calculateCollaborationHealth(),
QualityMetrics: cim.calculateQualityMetrics(),
PerformanceMetrics: cim.calculatePerformanceMetrics(),
Recommendations: cim.generateRecommendations(),
GeneratedAt: time.Now(),
}
}
```
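A usage sketch for the metrics collector is shown below; the `TeamFormationEvent` fields beyond `Type`, and the numeric types of the report fields, are assumptions.
```go
metrics.TrackTeamFormationEvent(&TeamFormationEvent{
	Type:      EventTypeApplicationSubmitted,
	TeamID:    opportunity.TeamID,
	AgentID:   agent.ID,
	Timestamp: time.Now(),
})

report := metrics.GenerateIntegrationReport()
log.Printf("integration health: %.2f (formation efficiency %.2f)",
	report.OverallHealth, report.FormationEfficiency)
```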
This specification enables autonomous AI agents to discover team opportunities, apply intelligently, collaborate through P2P channels with structured reasoning, and deliver high-quality artifacts through democratic consensus within the WHOOSH ecosystem.