feat: implement LLM integration for team composition engine

Resolves WHOOSH-LLM-002: Replace stubbed LLM functions with full Ollama API integration

## New Features
- Full Ollama API integration with automatic endpoint discovery
- LLM-powered task classification using configurable models
- LLM-powered skill requirement analysis
- Graceful fallback to heuristics on LLM failures
- Feature flag support for LLM vs heuristic execution
- Performance optimization with smaller, faster models (llama3.2:latest)
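The flag-gated LLM path with heuristic fallback can be sketched roughly as below. All names (`classify`, `classifyWithLLM`, `classifyHeuristic`) are illustrative stand-ins, not the actual WHOOSH functions:

```go
package main

import (
	"errors"
	"fmt"
)

// llmAvailable simulates whether the Ollama backend is reachable.
var llmAvailable = false

// classifyWithLLM stands in for the real Ollama-backed classifier.
func classifyWithLLM(task string) (string, error) {
	if !llmAvailable {
		return "", errors.New("ollama endpoint unreachable")
	}
	return "feature_development", nil
}

// classifyHeuristic is the deterministic fallback path.
func classifyHeuristic(task string) string {
	return "general"
}

// classify tries the LLM first (when the feature flag is on) and
// falls back to heuristics on any error, so a failing LLM never
// blocks team composition.
func classify(task string, llmEnabled bool) string {
	if llmEnabled {
		if label, err := classifyWithLLM(task); err == nil {
			return label
		}
	}
	return classifyHeuristic(task)
}

func main() {
	// LLM flag on, but the backend is down: the heuristic answer wins.
	fmt.Println(classify("add login page", true))
}
```

The key property is that the flag only gates the *attempt*; the fallback runs on any error, which is what makes the heuristic path a reliability floor rather than a separate mode.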

## Implementation Details
- Created OllamaClient with connection pooling and timeout handling
- Structured prompt engineering for consistent JSON responses
- Robust error handling with automatic failover to heuristics
- Comprehensive integration tests validating functionality
- Support for multiple Ollama endpoints with health checking

## Performance & Reliability
- Timeout configuration prevents hanging requests
- Fallback mechanism ensures system reliability
- Uses 3.2B parameter model for balance of speed vs accuracy
- Graceful degradation when LLM services are unavailable

## Files Added
- internal/composer/ollama.go: Core Ollama API integration
- internal/composer/llm_test.go: Comprehensive integration tests

## Files Modified
- internal/composer/service.go: Implemented LLM functions
- internal/composer/models.go: Updated config for performance

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Claude Code
Date: 2025-09-21 21:57:16 +10:00
Parent: 9f57e48cef
Commit: 55dd5951ea
4 changed files with 698 additions and 30 deletions


```diff
@@ -215,14 +215,14 @@ type FeatureFlags struct {
 // DefaultComposerConfig returns sensible defaults for MVP
 func DefaultComposerConfig() *ComposerConfig {
 	return &ComposerConfig{
-		ClassificationModel: "llama3.1:8b",
-		SkillAnalysisModel:  "llama3.1:8b",
-		MatchingModel:       "llama3.1:8b",
+		ClassificationModel: "llama3.2:latest", // Smaller 3.2B model for faster response
+		SkillAnalysisModel:  "llama3.2:latest", // Smaller 3.2B model for faster response
+		MatchingModel:       "llama3.2:latest", // Smaller 3.2B model for faster response
 		DefaultStrategy:     "minimal_viable",
 		MinTeamSize:         1,
 		MaxTeamSize:         3,
 		SkillMatchThreshold: 0.6,
-		AnalysisTimeoutSecs: 60,
+		AnalysisTimeoutSecs: 30, // Reduced timeout for faster failover
 		EnableCaching:       true,
 		CacheTTLMins:        30,
 		FeatureFlags:        DefaultFeatureFlags(),
```