- Fixed module path from github.com/deepblackcloud/bzzz to github.com/anthonyrawlins/bzzz
- Added dynamic Ollama model detection via the /api/tags endpoint
- Implemented intelligent model selection through an N8N webhook integration (a sketch of the detection and selection flow follows this list)
- Added BZZZ_MODEL_SELECTION_WEBHOOK environment variable support
- Fixed GitHub assignee issue by using a valid username instead of the peer ID
- Added comprehensive model fallback mechanisms
- Updated all import statements across the codebase
- Removed duplicate systemd service file
- Added sandbox execution environment and type definitions
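
For reference, a minimal sketch of how the model detection, webhook-based selection, and fallback described above could fit together in Go. The package name, webhook payload shape, and response fields are assumptions for illustration, not the actual bzzz implementation; only the Ollama /api/tags endpoint and the BZZZ_MODEL_SELECTION_WEBHOOK variable come from the changes listed above.

```go
package reasoning // hypothetical package name

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// tagsResponse mirrors the relevant part of Ollama's /api/tags reply.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

// detectModels asks the local Ollama instance which models are installed.
func detectModels(ollamaURL string) ([]string, error) {
	resp, err := http.Get(ollamaURL + "/api/tags")
	if err != nil {
		return nil, fmt.Errorf("query ollama tags: %w", err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		return nil, fmt.Errorf("decode tags response: %w", err)
	}

	names := make([]string, 0, len(tags.Models))
	for _, m := range tags.Models {
		names = append(names, m.Name)
	}
	return names, nil
}

// selectModel asks the N8N webhook to pick a model for the task, falling
// back to the first detected model if the webhook is unset or fails.
func selectModel(models []string, taskTitle string) string {
	if len(models) == 0 {
		return ""
	}
	fallback := models[0]

	webhook := os.Getenv("BZZZ_MODEL_SELECTION_WEBHOOK")
	if webhook == "" {
		return fallback
	}

	payload, _ := json.Marshal(map[string]interface{}{
		"available_models": models,    // hypothetical payload shape
		"task_title":       taskTitle, // hypothetical payload shape
	})
	resp, err := http.Post(webhook, "application/json", bytes.NewReader(payload))
	if err != nil {
		return fallback
	}
	defer resp.Body.Close()

	var choice struct {
		Model string `json:"model"` // hypothetical response field
	}
	if err := json.NewDecoder(resp.Body).Decode(&choice); err != nil || choice.Model == "" {
		return fallback
	}
	return choice.Model
}
```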
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit introduces the new 'reasoning' module, which allows the bzzz-agent to connect to a local Ollama instance.
It also modifies the 'github' integration module to use this new capability. When a task is claimed, the agent now generates a step-by-step execution plan using an Ollama model and publishes this plan to the Antennae meta-discussion channel for peer review before proceeding.
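
A rough sketch of the plan-generation and peer-review publish step described above, assuming Ollama's standard /api/generate endpoint. The message shape, topic name, and publish hook are hypothetical stand-ins rather than the actual bzzz or Antennae APIs.

```go
package reasoning // hypothetical package name, continuing the sketch above

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generatePlan asks the local Ollama instance for a step-by-step execution
// plan for the claimed task, using the non-streaming /api/generate endpoint.
func generatePlan(ollamaURL, model, taskDescription string) (string, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"model":  model,
		"prompt": "Produce a step-by-step execution plan for this task:\n" + taskDescription,
		"stream": false,
	})
	resp, err := http.Post(ollamaURL+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", fmt.Errorf("ollama generate: %w", err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", fmt.Errorf("decode generate response: %w", err)
	}
	return out.Response, nil
}

// publishPlanForReview pushes the generated plan onto the Antennae
// meta-discussion channel so peers can review it before execution begins.
// The publish function is a stand-in for whatever pubsub API bzzz uses.
func publishPlanForReview(publish func(topic string, msg []byte) error, plan string) error {
	msg, _ := json.Marshal(map[string]string{
		"type": "execution_plan", // hypothetical message type
		"plan": plan,
	})
	return publish("antennae/meta-discussion", msg) // hypothetical topic name
}
```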
This completes Phase 1 of the Ollama integration plan.