Resolves WHOOSH-LLM-002: Replace stubbed LLM functions with full Ollama API integration

## New Features

- Full Ollama API integration with automatic endpoint discovery
- LLM-powered task classification using configurable models
- LLM-powered skill requirement analysis
- Graceful fallback to heuristics on LLM failures
- Feature flag support for LLM vs heuristic execution
- Performance optimization with a smaller, faster model (llama3.2:latest)

## Implementation Details

- Created OllamaClient with connection pooling and timeout handling
- Structured prompt engineering for consistent JSON responses
- Robust error handling with automatic failover to heuristics
- Comprehensive integration tests validating functionality
- Support for multiple Ollama endpoints with health checking

## Performance & Reliability

- Timeout configuration prevents hanging requests
- Fallback mechanism ensures system reliability
- Uses a 3.2B-parameter model to balance speed and accuracy
- Graceful degradation when LLM services are unavailable

## Files Added

- `internal/composer/ollama.go`: Core Ollama API integration
- `internal/composer/llm_test.go`: Comprehensive integration tests

## Files Modified

- `internal/composer/service.go`: Implemented LLM functions
- `internal/composer/models.go`: Updated config for performance
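The fallback behaviour described above can be sketched roughly as follows. The function and field names here are illustrative, not the actual WHOOSH code; the only assumed external shape is Ollama's `/api/generate` reply, which carries the model output as a `response` string plus a `done` flag:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ollamaResponse mirrors the relevant fields of Ollama's /api/generate reply.
type ollamaResponse struct {
	Response string `json:"response"`
	Done     bool   `json:"done"`
}

// classifyWithFallback parses a raw LLM reply; on any parse failure it falls
// back to a crude keyword heuristic, mirroring the graceful-degradation design.
// The "task_type" field and the heuristic rules are hypothetical examples.
func classifyWithFallback(raw []byte, issueTitle string) string {
	var resp ollamaResponse
	if err := json.Unmarshal(raw, &resp); err == nil && resp.Done {
		var out struct {
			TaskType string `json:"task_type"`
		}
		// The prompt asks the model for JSON, so the response body is parsed again.
		if err := json.Unmarshal([]byte(resp.Response), &out); err == nil && out.TaskType != "" {
			return out.TaskType
		}
	}
	// Heuristic fallback: keyword match on the issue title.
	if strings.Contains(strings.ToLower(issueTitle), "bug") {
		return "bug_fix"
	}
	return "feature"
}

func main() {
	good := []byte(`{"response":"{\"task_type\":\"refactor\"}","done":true}`)
	fmt.Println(classifyWithFallback(good, "Tidy up composer"))      // refactor
	fmt.Println(classifyWithFallback([]byte("garbage"), "Fix bug")) // bug_fix
}
```

Because every LLM path degrades to the heuristic, a timeout or malformed reply never blocks council composition.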
# WHOOSH – Council & Team Orchestration (Beta)
WHOOSH assembles kickoff councils from Design Brief issues and is evolving toward autonomous team orchestration across the CHORUS stack. Council formation/deployment works today, but persistence, telemetry, and self-organising teams are still under construction.
## Current Capabilities
- ✅ Gitea Design Brief detection + council composition (`internal/monitor`, `internal/composer`).
- ✅ Docker Swarm agent deployment with role-specific env vars (`internal/orchestrator`).
- ✅ JWT authentication, rate limiting, OpenTelemetry hooks.
- 🚧 API persistence: REST handlers still return placeholder data while Postgres wiring is finished (`internal/server/server.go`).
- 🚧 Analysis ingestion: composer relies on heuristic classification; LLM/analysis ingestion is logged but unimplemented (`internal/composer/service.go`).
- 🚧 Deployment telemetry: results aren't persisted yet; monitoring includes TODOs for task details (`internal/monitor/monitor.go`).
- 🚧 Autonomous teams: joining/role balancing planned but not live.
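The role-specific env vars mentioned above amount to assembling a per-agent environment before handing the container spec to Swarm. A minimal sketch, with variable names that are illustrative rather than WHOOSH's actual keys:

```go
package main

import "fmt"

// roleEnv assembles the environment a council agent container would receive.
// COUNCIL_ID, AGENT_ROLE, and GITEA_BASE_URL are hypothetical variable names,
// not necessarily the ones internal/orchestrator uses.
func roleEnv(councilID, role string) []string {
	return []string{
		"COUNCIL_ID=" + councilID,
		"AGENT_ROLE=" + role,
		"GITEA_BASE_URL=https://gitea.chorus.services",
	}
}

func main() {
	// Each role in the council gets its own environment slice.
	for _, role := range []string{"architect", "reviewer"} {
		fmt.Println(roleEnv("c-42", role))
	}
}
```

In the real orchestrator this slice would be attached to the Swarm container spec, one service per council role.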
The full plan and sequencing live in:

- `docs/progress/WHOOSH-roadmap.md`
- `docs/DEVELOPMENT_PLAN.md`
## Quick Start

```bash
git clone https://gitea.chorus.services/tony/WHOOSH.git
cd WHOOSH
cp .env.example .env
# Update DB, JWT, Gitea tokens
make migrate
go run ./cmd/whoosh
```
By default the API listens on `:8080` and expects Postgres and Docker Swarm to be available in the environment. Until persistence lands, project/council endpoints return mock payloads to keep the UI working.
## Roadmap Snapshot
- **Data path hardening** – replace mock handlers with real Postgres reads/writes.
- **Telemetry** – persist deployment outcomes, emit KACHING events, build dashboards.
- **Autonomous loop** – drive team formation/joining from composer outputs, tighten HMMM collaboration.
- **UX & governance** – admin dashboards, compliance hooks, Decision Records.
Refer to the roadmap for sprint-by-sprint targets and exit criteria.
## Working With Councils
- Monitor issues via the API (`GET /api/v1/councils`).
- Inspect generated artifacts (`GET /api/v1/councils/{id}/artifacts`).
- Use Swarm to watch agent containers spin up/down during council execution.
## Contributing
Before landing features, align with roadmap tickets (WSH-API, WSH-ANALYSIS, WSH-OBS, WSH-AUTO, WSH-UX). Include Decision Records (UCXL addresses) for architectural/security changes so SLURP/BUBBLE can ingest them later.