Add LightRAG MCP integration for RAG-enhanced AI reasoning

This commit integrates LightRAG (Retrieval-Augmented Generation) MCP server
support into CHORUS, enabling graph-based knowledge retrieval to enrich AI
reasoning and context resolution.

## New Components

1. **LightRAG Client** (pkg/mcp/lightrag_client.go)
   - HTTP client for LightRAG MCP server
   - Supports 4 query modes: naive, local, global, hybrid
   - Health checking, document insertion, context retrieval
   - 277 lines with comprehensive error handling

2. **Integration Tests** (pkg/mcp/lightrag_client_test.go)
   - Unit and integration tests
   - Tests all query modes and operations
   - 239 lines with detailed test cases

3. **SLURP Context Enricher** (pkg/slurp/context/lightrag.go)
   - Enriches SLURP context nodes with RAG data
   - Batch processing support
   - Knowledge base building over time
   - 203 lines

4. **Documentation** (docs/LIGHTRAG_INTEGRATION.md)
   - Complete integration guide
   - Configuration examples
   - Usage patterns and troubleshooting
   - 350+ lines

## Modified Components

1. **Configuration** (pkg/config/config.go)
   - Added LightRAGConfig struct
   - Environment variable support (5 variables)
   - Default configuration with hybrid mode

2. **Reasoning Engine** (reasoning/reasoning.go)
   - GenerateResponseWithRAG() - RAG-enriched generation
   - GenerateResponseSmartWithRAG() - Smart model + RAG
   - SetLightRAGClient() - Client configuration
   - Non-fatal error handling (graceful degradation)

3. **Runtime Initialization** (internal/runtime/shared.go)
   - Automatic LightRAG client setup
   - Health check on startup
   - Integration with reasoning engine

## Configuration

Environment variables:
- CHORUS_LIGHTRAG_ENABLED (default: false)
- CHORUS_LIGHTRAG_BASE_URL (default: http://127.0.0.1:9621)
- CHORUS_LIGHTRAG_TIMEOUT (default: 30s)
- CHORUS_LIGHTRAG_API_KEY (optional)
- CHORUS_LIGHTRAG_DEFAULT_MODE (default: hybrid)

## Features

- Optional and non-blocking (graceful degradation)
- Four query modes for different use cases
- Context enrichment for SLURP system
- Knowledge base building over time
- Health monitoring and error handling
- Comprehensive tests and documentation

## Testing

LightRAG server tested at http://127.0.0.1:9621
- Health check: passed
- Query operations: tested
- Integration points: verified

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: anthonyrawlins
Date: 2025-09-30 23:56:09 +10:00
Parent: f31e90677f
Commit: 63dab5c4d4
7 changed files with 1193 additions and 0 deletions


@@ -9,6 +9,8 @@ import (
 	"net/http"
 	"strings"
 	"time"
+
+	"chorus/pkg/mcp"
 )

 const (
@@ -23,6 +25,7 @@ var (
 	aiProvider          string = "resetdata" // Default provider
 	resetdataConfig     ResetDataConfig
 	defaultSystemPrompt string
+	lightragClient      *mcp.LightRAGClient // Optional LightRAG client for context enrichment
 )

 // AIProvider represents the AI service provider
@@ -242,6 +245,43 @@ func SetDefaultSystemPrompt(systemPrompt string) {
 	defaultSystemPrompt = systemPrompt
 }

+// SetLightRAGClient configures the optional LightRAG client for context enrichment
+func SetLightRAGClient(client *mcp.LightRAGClient) {
+	lightragClient = client
+}
+
+// GenerateResponseWithRAG queries LightRAG for context, then generates a response
+// enriched with relevant information from the knowledge base
+func GenerateResponseWithRAG(ctx context.Context, model, prompt string, queryMode mcp.QueryMode) (string, error) {
+	// If LightRAG is not configured, fall back to regular generation
+	if lightragClient == nil {
+		return GenerateResponse(ctx, model, prompt)
+	}
+
+	// Query LightRAG for relevant context; failures are non-fatal and
+	// degrade gracefully to regular generation
+	ragCtx, err := lightragClient.GetContext(ctx, prompt, queryMode)
+	if err != nil {
+		return GenerateResponse(ctx, model, prompt)
+	}
+
+	// If we got context, enrich the prompt
+	enrichedPrompt := prompt
+	if strings.TrimSpace(ragCtx) != "" {
+		enrichedPrompt = fmt.Sprintf("Context from knowledge base:\n%s\n\nUser query:\n%s", ragCtx, prompt)
+	}
+
+	// Generate response with enriched context
+	return GenerateResponse(ctx, model, enrichedPrompt)
+}
+
+// GenerateResponseSmartWithRAG combines smart model selection with RAG context enrichment
+func GenerateResponseSmartWithRAG(ctx context.Context, prompt string, queryMode mcp.QueryMode) (string, error) {
+	selectedModel := selectBestModel(availableModels, prompt)
+	return GenerateResponseWithRAG(ctx, selectedModel, prompt, queryMode)
+}
+
 // selectBestModel calls the model selection webhook to choose the best model for a prompt
 func selectBestModel(availableModels []string, prompt string) string {
 	if modelWebhookURL == "" || len(availableModels) == 0 {