// Package distribution provides context network distribution capabilities via DHT integration.
//
// This package implements distributed context sharing across the CHORUS cluster using
// the existing Distributed Hash Table (DHT) infrastructure. It provides role-based
// encrypted distribution, conflict resolution, and eventual consistency for context
// data synchronization across multiple nodes.
//
// Key Features:
//   - DHT-based distributed context storage and retrieval
//   - Role-based encryption for secure context sharing
//   - Conflict resolution for concurrent context updates
//   - Eventual consistency with vector clock synchronization
//   - Replication factor management for fault tolerance
//   - Network partition resilience and recovery
//   - Efficient gossip protocols for metadata synchronization
//
// Core Components:
//   - ContextDistributor: Main interface for distributed context operations
//   - DHTStorage: DHT integration for context storage and retrieval
//   - ConflictResolver: Handles conflicts during concurrent updates
//   - ReplicationManager: Manages context replication across nodes
//   - GossipProtocol: Efficient metadata synchronization
//   - NetworkManager: Network topology and partition handling
//
// Integration Points:
//   - pkg/dht: Existing CHORUS DHT infrastructure
//   - pkg/crypto: Role-based encryption and decryption
//   - pkg/election: Leader coordination for conflict resolution
//   - pkg/slurp/context: Context types and validation
//   - pkg/slurp/storage: Storage interfaces and operations
//
// Example Usage:
//
//	distributor := distribution.NewContextDistributor(dht, crypto, election)
//	ctx := context.Background()
//
//	// Distribute context to cluster with role-based encryption
//	err := distributor.DistributeContext(ctx, contextNode, []string{"developer", "architect"})
//	if err != nil {
//		log.Fatal(err)
//	}
//
//	// Retrieve distributed context for a role
//	resolved, err := distributor.RetrieveContext(ctx, address, "developer")
//	if err != nil {
//		log.Fatal(err)
//	}
//
//	// Synchronize with other nodes
//	err = distributor.Sync(ctx)
//	if err != nil {
//		log.Printf("Sync failed: %v", err)
//	}
//
// Distribution Architecture:
// The distribution system uses a layered approach: the DHT provides the
// underlying storage substrate, role-based encryption enforces access control,
// and gossip protocols handle efficient metadata synchronization. Context
// data is partitioned across the cluster based on UCXL address hashing, with
// configurable replication factors for fault tolerance.
//
// Consistency Model:
// The system provides eventual consistency with conflict resolution based on
// vector clocks and last-writer-wins semantics. Leader nodes coordinate
// complex conflict resolution scenarios and ensure cluster-wide consistency
// convergence within bounded time periods.
//
// Security Model:
// All context data is encrypted before distribution using role-specific keys
// from the CHORUS crypto system. Only nodes with appropriate role permissions
// can decrypt and access context information, ensuring secure collaborative
// development while maintaining access control boundaries.
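//
// Interface Sketch:
// The usage example above implies a distributor interface roughly like the
// following. This is an illustrative sketch rather than the authoritative
// definition; the exact signatures and the ContextNode and ResolvedContext
// type names (expected to come from pkg/slurp/context) are assumptions.
//
//	// ContextDistributor sketches the main distribution interface.
//	type ContextDistributor interface {
//		// DistributeContext encrypts the context node for each listed
//		// role and stores the resulting replicas in the DHT.
//		DistributeContext(ctx context.Context, node *ContextNode, roles []string) error
//
//		// RetrieveContext fetches the context stored at a UCXL address
//		// and decrypts it using the caller's role key.
//		RetrieveContext(ctx context.Context, address string, role string) (*ResolvedContext, error)
//
//		// Sync runs a gossip round with peers and reconciles divergent
//		// replicas using vector clock comparison.
//		Sync(ctx context.Context) error
//	}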
//
// Performance Characteristics:
//   - O(log N) lookup time for context retrieval
//   - Configurable replication factors (typically 3-5 nodes)
//   - Gossip synchronization in O(log N) rounds
//   - Automatic load balancing based on node capacity
//   - Background optimization and compaction processes
//
// Fault Tolerance:
// The system handles node failures, network partitions, and data corruption
// through multiple mechanisms including replication, checksums, repair
// protocols, and automatic failover. Recovery time is typically proportional
// to the size of affected data and available network bandwidth.
package distribution