CHORUS/pkg/slurp/storage/doc.go
anthonyrawlins 9bdcbe0447 Integrate BACKBEAT SDK and resolve KACHING license validation
Major integrations and fixes:
- Added BACKBEAT SDK integration for P2P operation timing
- Implemented beat-aware status tracking for distributed operations
- Added Docker secrets support for secure license management
- Resolved KACHING license validation via HTTPS/TLS
- Updated docker-compose configuration for clean stack deployment
- Disabled rollback policies to prevent deployment failures
- Added license credential storage (CHORUS-DEV-MULTI-001)

Technical improvements:
- BACKBEAT P2P operation tracking with phase management
- Enhanced configuration system with file-based secrets
- Improved error handling for license validation
- Clean separation of KACHING and CHORUS deployment stacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-06 07:56:26 +10:00


// Package storage provides context persistence and retrieval capabilities for the SLURP system.
//
// This package implements the storage layer for context data, providing both local
// and distributed storage capabilities with encryption, caching, and efficient
// retrieval mechanisms. It integrates with the CHORUS DHT for distributed context
// sharing while maintaining role-based access control.
//
// Key Features:
// - Local context storage with efficient indexing and retrieval
// - Distributed context storage using CHORUS DHT infrastructure
// - Role-based encryption for secure context sharing
// - Multi-level caching for performance optimization
// - Backup and recovery capabilities for data durability
// - Transaction support for consistent updates
// - Search and indexing for efficient context discovery
//
// Core Components:
// - ContextStore: Main interface for context storage operations
// - LocalStorage: Local filesystem-based storage implementation
// - DistributedStorage: DHT-based distributed storage
// - CacheManager: Multi-level caching system
// - IndexManager: Search and indexing capabilities
// - BackupManager: Backup and recovery operations
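//
// As an illustrative sketch of how these components fit together, the
// ContextStore interface can be pictured as follows (the method names are
// taken from the usage example below; exact signatures are assumptions,
// not the authoritative API):
//
//	type ContextStore interface {
//		// StoreContext persists a context node, encrypted for the given roles.
//		StoreContext(ctx context.Context, node *ContextNode, roles []string) error
//		// RetrieveContext fetches a context node by UCXL address for a role.
//		RetrieveContext(ctx context.Context, address string, role string) (*ContextNode, error)
//		// SearchContexts queries indexed contexts by tags, technologies, etc.
//		SearchContexts(ctx context.Context, query *SearchQuery) ([]*ContextNode, error)
//	}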
//
// Integration Points:
// - pkg/dht: Distributed Hash Table for network storage
// - pkg/crypto: Role-based encryption and access control
// - pkg/slurp/context: Context types and validation
// - pkg/election: Leader coordination for storage operations
// - Local filesystem: Persistent local storage
//
// Example Usage:
//
//	store := storage.NewContextStore(config, dht, crypto)
//	ctx := context.Background()
//
//	// Store a context node
//	err := store.StoreContext(ctx, contextNode, []string{"developer", "architect"})
//	if err != nil {
//		log.Fatal(err)
//	}
//
//	// Retrieve context for a role
//	retrieved, err := store.RetrieveContext(ctx, "ucxl://project/src/main.go", "developer")
//	if err != nil {
//		log.Fatal(err)
//	}
//
//	// Search contexts by criteria
//	results, err := store.SearchContexts(ctx, &SearchQuery{
//		Tags:         []string{"backend", "api"},
//		Technologies: []string{"go"},
//	})
//
// Storage Architecture:
// The storage system uses a layered approach with local caching, distributed
// replication, and role-based encryption. Context data is stored locally for
// fast access and replicated across the CHORUS cluster for availability and
// collaboration. Encryption ensures that only authorized roles can access
// sensitive context information.
//
// Performance Considerations:
// - Multi-level caching reduces latency for frequently accessed contexts
// - Background synchronization minimizes impact on user operations
// - Batched operations optimize network usage for bulk operations
// - Index optimization provides fast search capabilities
// - Compression reduces storage overhead and network transfer costs
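//
// Batched Write Sketch:
// The batching point above can be sketched as follows (illustrative only;
// BatchStore and ContextStoreItem are assumed names, not necessarily the
// real API):
//
//	items := []*ContextStoreItem{
//		{Node: nodeA, Roles: []string{"developer"}},
//		{Node: nodeB, Roles: []string{"architect"}},
//	}
//	// A single batched call amortizes DHT round trips across many contexts
//	// instead of paying one network round trip per store operation.
//	if err := store.BatchStore(ctx, items); err != nil {
//		log.Fatal(err)
//	}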
//
// Consistency Model:
// The storage system provides eventual consistency across the distributed
// cluster with conflict resolution for concurrent updates. Local storage
// provides strong consistency for single-node operations, while distributed
// storage uses optimistic concurrency control with vector clocks for
// conflict detection and resolution.
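//
// Vector Clock Sketch:
// Conflict detection with vector clocks, as described above, can be
// sketched as follows (illustrative only; the type and method names are
// assumptions, not the package's actual implementation):
//
//	type VectorClock map[string]uint64
//
//	// Descends reports whether clock a has observed every event in b,
//	// i.e. b's update is causally ordered before (or equal to) a's.
//	func (a VectorClock) Descends(b VectorClock) bool {
//		for node, tick := range b {
//			if a[node] < tick {
//				return false
//			}
//		}
//		return true
//	}
//
// Two replicas conflict when neither clock descends from the other; the
// store must then merge or otherwise resolve the divergent updates.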
//
// Data Durability:
// Multiple levels of data protection ensure context durability including
// local backups, distributed replication, and periodic snapshots. The
// system can recover from node failures and network partitions while
// maintaining data integrity and availability.
package storage