chore: align slurp config and scaffolding

This commit is contained in:
anthonyrawlins
2025-09-27 21:03:12 +10:00
parent acc4361463
commit 4a77862289
47 changed files with 5133 additions and 4274 deletions

View File

@@ -0,0 +1,94 @@
# SEC-SLURP UCXL Beacon & Pin Steward Design Notes
## Purpose
- Establish the authoritative UCXL context beacon that bridges SLURP persistence with WHOOSH/role-aware agents.
- Define the Pin Steward responsibilities so DHT replication, healing, and telemetry satisfy SEC-SLURP 1.1a acceptance criteria.
- Provide an incremental execution plan aligned with the Persistence Wiring Report and DHT Resilience Supplement.
## UCXL Beacon Data Model
- **manifest_id** (`string`): deterministic hash of `project:task:address:version`.
- **ucxl_address** (`ucxl.Address`): canonical address that produced the manifest.
- **context_version** (`int`): monotonic version from SLURP temporal graph.
- **source_hash** (`string`): content hash emitted by `persistContext` (LevelDB) for change detection.
- **generated_by** (`string`): CHORUS agent id / role bundle that wrote the context.
- **generated_at** (`time.Time`): timestamp from SLURP persistence event.
- **replica_targets** (`[]string`): desired replica node ids (Pin Steward enforces `replication_factor`).
- **replica_state** (`[]ReplicaInfo`): health snapshot (`node_id`, `provider_id`, `status`, `last_checked`, `latency_ms`).
- **encryption** (`EncryptionMetadata`):
- `dek_fingerprint` (`string`)
- `kek_policy` (`string`): BACKBEAT rotation policy identifier.
- `rotation_due` (`time.Time`)
- **compliance_tags** (`[]string`): SHHH/WHOOSH governance hooks (e.g. `sec-high`, `audit-required`).
- **beacon_metrics** (`BeaconMetrics`): summarized counters for cache hits, DHT retrieves, validation errors.
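For concreteness, the fields above can be sketched as a Go struct. The names and the `NewManifestID` derivation below mirror the bullet list but are illustrative, not the final SLURP schema.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// ReplicaInfo is a per-replica health snapshot.
type ReplicaInfo struct {
	NodeID      string    `json:"node_id"`
	ProviderID  string    `json:"provider_id"`
	Status      string    `json:"status"`
	LastChecked time.Time `json:"last_checked"`
	LatencyMS   int64     `json:"latency_ms"`
}

// EncryptionMetadata carries envelope-encryption bookkeeping.
type EncryptionMetadata struct {
	DEKFingerprint string    `json:"dek_fingerprint"`
	KEKPolicy      string    `json:"kek_policy"` // BACKBEAT rotation policy id
	RotationDue    time.Time `json:"rotation_due"`
}

// Manifest is the beacon record persisted per context version.
type Manifest struct {
	ManifestID     string             `json:"manifest_id"`
	UCXLAddress    string             `json:"ucxl_address"`
	ContextVersion int                `json:"context_version"`
	SourceHash     string             `json:"source_hash"`
	GeneratedBy    string             `json:"generated_by"`
	GeneratedAt    time.Time          `json:"generated_at"`
	ReplicaTargets []string           `json:"replica_targets"`
	ReplicaState   []ReplicaInfo      `json:"replica_state"`
	Encryption     EncryptionMetadata `json:"encryption"`
	ComplianceTags []string           `json:"compliance_tags"`
}

// NewManifestID hashes project:task:address:version deterministically.
func NewManifestID(project, task, address string, version int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%s:%d", project, task, address, version)))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(NewManifestID("chorus", "sec-slurp", "ucxl://demo", 3))
}
```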
### Storage Strategy
- Primary persistence in LevelDB (`pkg/slurp/slurp.go`) using key prefix `beacon::<manifest_id>`.
- Secondary replication to DHT under `dht://beacon/<manifest_id>` enabling WHOOSH agents to read via Pin Steward API.
- Optional export to UCXL Decision Record envelope for historical traceability.
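A minimal sketch of the two key layouts above, assuming plain string keys (the actual LevelDB and DHT bindings may differ):

```go
package main

import "fmt"

// beaconLevelDBKey builds the primary LevelDB key for a manifest.
func beaconLevelDBKey(manifestID string) []byte {
	return []byte("beacon::" + manifestID)
}

// beaconDHTKey builds the replicated DHT address for the same manifest.
func beaconDHTKey(manifestID string) string {
	return "dht://beacon/" + manifestID
}

func main() {
	fmt.Println(string(beaconLevelDBKey("abc123")), beaconDHTKey("abc123"))
}
```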
## Beacon APIs
| Endpoint | Purpose | Notes |
|----------|---------|-------|
| `Beacon.Upsert(manifest)` | Persist/update manifest | Called by SLURP after `persistContext` success. |
| `Beacon.Get(ucxlAddress)` | Resolve latest manifest | Used by WHOOSH/agents to locate canonical context. |
| `Beacon.List(filter)` | Query manifests by tags/roles/time | Backs dashboards and Pin Steward audits. |
| `Beacon.StreamChanges(since)` | Provide change feed for Pin Steward anti-entropy jobs | Implements backpressure and bookmark tokens. |
All APIs return an envelope carrying a UCXL citation and checksum so the SLURP⇄WHOOSH handoff is auditable.
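One way the four endpoints could land as a Go interface; `Envelope`, `Filter`, and the checksum helper are assumptions for illustration, and `Manifest` is reduced to two fields here.

```go
package main

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Manifest is abbreviated here; see the data-model section for the full shape.
type Manifest struct {
	ManifestID  string `json:"manifest_id"`
	UCXLAddress string `json:"ucxl_address"`
}

// Envelope wraps a manifest with its UCXL citation and checksum so the
// SLURP⇄WHOOSH handoff is auditable.
type Envelope struct {
	Manifest Manifest `json:"manifest"`
	Citation string   `json:"citation"` // UCXL address of the producing decision
	Checksum string   `json:"checksum"` // sha256 over the canonical manifest JSON
}

// Filter narrows List results by tags, roles, or a time window.
type Filter struct {
	Tags  []string
	Roles []string
}

// BeaconStore mirrors the endpoint table above.
type BeaconStore interface {
	Upsert(ctx context.Context, m Manifest) error
	Get(ctx context.Context, ucxlAddress string) (Envelope, error)
	List(ctx context.Context, f Filter) ([]Envelope, error)
	// StreamChanges resumes from a bookmark token and must apply backpressure.
	StreamChanges(ctx context.Context, since string) (<-chan Envelope, error)
}

// checksum computes the envelope checksum over canonical manifest JSON.
func checksum(m Manifest) string {
	raw, _ := json.Marshal(m)
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(checksum(Manifest{ManifestID: "abc"}))
}
```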
## Pin Steward Responsibilities
1. **Replication Planning**
- Read manifests via `Beacon.StreamChanges`.
- Evaluate the current `replica_state` against the configured `replication_factor`.
- Produce queue of DHT store/refresh tasks (`storeAsync`, `storeSync`, `storeQuorum`).
2. **Healing & Anti-Entropy**
- Schedule `heal_under_replicated` jobs every `anti_entropy_interval`.
- Re-announce providers on Pulse/Reverb when TTL < threshold.
- Record outcomes back into manifest (`replica_state`).
3. **Envelope Encryption Enforcement**
- Request KEK material from KACHING/SHHH as described in SEC-SLURP 1.1a.
- Ensure DEK fingerprints match `encryption` metadata; trigger rotation if stale.
4. **Telemetry Export**
- Emit Prometheus counters: `pin_steward_replica_heal_total`, `pin_steward_replica_unhealthy`, `pin_steward_encryption_rotations_total`.
- Surface aggregated health to WHOOSH dashboards for council visibility.
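The planning step in (1) reduces to comparing healthy replicas against the configured factor. A sketch, with the store operations represented as task strings rather than real `storeAsync`/`storeQuorum` calls:

```go
package main

import "fmt"

// Replica is the subset of replica_state the planner needs.
type Replica struct {
	NodeID string
	Status string // "healthy", "stale", "missing", ...
}

// planReplication returns repair tasks for one manifest: refresh any
// unhealthy replicas, then create new replicas up to the target factor.
func planReplication(manifestID string, state []Replica, factor int) []string {
	var tasks []string
	healthy := 0
	for _, r := range state {
		if r.Status == "healthy" {
			healthy++
		} else {
			tasks = append(tasks, fmt.Sprintf("storeAsync refresh %s on %s", manifestID, r.NodeID))
		}
	}
	for i := healthy; i < factor; i++ {
		tasks = append(tasks, fmt.Sprintf("storeQuorum new replica of %s", manifestID))
	}
	return tasks
}

func main() {
	state := []Replica{{NodeID: "n1", Status: "healthy"}, {NodeID: "n2", Status: "stale"}}
	for _, t := range planReplication("m1", state, 3) {
		fmt.Println(t)
	}
}
```

With one healthy and one stale replica against a factor of three, the planner emits one refresh task and two new-replica tasks.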
## Interaction Flow
1. **SLURP Persistence**
- `UpsertContext` → LevelDB write → manifests assembled (`persistContext`).
- Beacon `Upsert` called with manifest + context hash.
2. **Pin Steward Intake**
- `StreamChanges` yields manifest → steward verifies encryption metadata and schedules replication tasks.
3. **DHT Coordination**
- `ReplicationManager.EnsureReplication` invoked with target factor.
- The temporary `defaultVectorClockManager` is to be replaced with a libp2p-aware implementation for provider TTL tracking.
4. **WHOOSH Consumption**
- WHOOSH SLURP proxy fetches manifest via `Beacon.Get`, caches in WHOOSH DB, attaches to deliverable artifacts.
- Council UI surfaces replication state + encryption posture for operator decisions.
## Incremental Delivery Plan
1. **Sprint A (Persistence parity)**
- Finalize LevelDB manifest schema + tests (extend `slurp_persistence_test.go`).
- Implement Beacon interfaces within SLURP service (in-memory + LevelDB).
- Add Prometheus metrics for persistence reads/misses.
2. **Sprint B (Pin Steward MVP)**
- Build steward worker with configurable reconciliation loop.
- Wire to existing `DistributedStorage` stubs (`StoreAsync/Sync/Quorum`).
- Emit health logs; integrate with CLI diagnostics.
3. **Sprint C (DHT Resilience)**
- Swap `defaultVectorClockManager` with libp2p implementation; add provider TTL probes.
- Implement envelope encryption path leveraging KACHING/SHHH interfaces (replace stubs in `pkg/crypto`).
- Add CI checks: replica factor assertions, provider refresh tests, beacon schema validation.
4. **Sprint D (WHOOSH Integration)**
- Expose REST/gRPC endpoint for WHOOSH to query manifests.
- Update WHOOSH SLURPArtifactManager to require beacon confirmation before submission.
- Surface Pin Steward alerts in WHOOSH admin UI.
## Open Questions
- Confirm whether Beacon manifests should include DER signatures or rely on UCXL envelope hash.
- Determine storage for historical manifests (append-only log vs. latest-only) to support temporal rewind.
- Align Pin Steward job scheduling with existing BACKBEAT cadence to avoid conflicting rotations.
## Next Actions
- Prototype `BeaconStore` interface + LevelDB implementation in SLURP package.
- Document Pin Steward anti-entropy algorithm with pseudocode and integrate into SEC-SLURP test plan.
- Sync with WHOOSH team on manifest query contract (REST vs. gRPC; pagination semantics).

View File

@@ -0,0 +1,52 @@
# WHOOSH ↔ CHORUS Integration Demo Plan (SEC-SLURP Track)
## Demo Objectives
- Showcase end-to-end persistence → UCXL beacon → Pin Steward → WHOOSH artifact submission flow.
- Validate role-based agent interactions with SLURP contexts (resolver + temporal graph) prior to DHT hardening.
- Capture metrics/telemetry needed for SEC-SLURP exit criteria and WHOOSH Phase 1 sign-off.
## Sequenced Milestones
1. **Persistence Validation Session**
- Run `GOWORK=off go test ./pkg/slurp/...` with stubs patched; demo LevelDB warm/load using `slurp_persistence_test.go`.
- Inspect beacon manifests via CLI (`slurpctl beacon list`).
- Deliverable: test log + manifest sample archived in UCXL.
2. **Beacon → Pin Steward Dry Run**
- Replay stored manifests through Pin Steward worker with mock DHT backend.
- Show replication planner queue + telemetry counters (`pin_steward_replica_heal_total`).
- Deliverable: decision record linking manifest to replication outcome.
3. **WHOOSH SLURP Proxy Alignment**
- Point WHOOSH dev stack (`npm run dev`) at local SLURP with beacon API enabled.
- Walk through council formation, capture SLURP artifact submission with beacon confirmation modal.
- Deliverable: screen recording + WHOOSH DB entry referencing beacon manifest id.
4. **DHT Resilience Checkpoint**
- Switch Pin Steward to libp2p DHT (once wired) and run replication + provider TTL check.
- Fail one node intentionally, demonstrate heal path + alert surfaced in WHOOSH UI.
- Deliverable: telemetry dump + alert screenshot.
5. **Governance & Telemetry Wrap-Up**
- Export Prometheus metrics (cache hit/miss, beacon writes, replication heals) into KACHING dashboard.
- Publish Decision Record documenting UCXL address flow, referencing SEC-SLURP docs.
## Roles & Responsibilities
- **SLURP Team:** finalize persistence build, implement beacon APIs, own Pin Steward worker.
- **WHOOSH Team:** wire beacon client, expose replication/encryption status in UI, capture council telemetry.
- **KACHING/SHHH Stakeholders:** validate telemetry ingestion and encryption custody notes.
- **Program Management:** schedule demo rehearsal, ensure Decision Records and UCXL addresses recorded.
## Tooling & Environments
- Local cluster via `docker compose up slurp whoosh pin-steward` (to be scripted in `commands/`).
- Use `make demo-sec-slurp` target to run integration harness (to be added).
- Prometheus/Grafana docker compose for metrics validation.
## Success Criteria
- Beacon manifests are accessible from the WHOOSH UI with average latency under 2s.
- Pin Steward resolves under-replicated manifest within demo timeline (<30s) and records healing event.
- All demo steps logged with UCXL references and SHHH redaction checks passing.
## Open Items
- Need sample repo/issues to feed WHOOSH analyzer (consider `project-queues/active/WHOOSH/demo-data`).
- Determine minimal DHT cluster footprint for the demo (3 vs 5 nodes).
- Align on telemetry retention window for demo (24h?).

View File

@@ -0,0 +1,32 @@
# SEC-SLURP 1.1a DHT Resilience Supplement
## Requirements (derived from `docs/Modules/DHT.md`)
1. **Real DHT state & persistence**
- Replace mock DHT usage with libp2p-based storage or equivalent real implementation.
- Store DHT/blockstore data on persistent volumes (named volumes/ZFS/NFS) with node placement constraints.
- Ensure bootstrap nodes are stateful and survive container churn.
2. **Pin Steward + replication policy**
- Introduce a Pin Steward service that tracks UCXL CID manifests and enforces the replication factor (e.g. 3-5 replicas).
- Re-announce providers on Pulse/Reverb and heal under-replicated content.
- Schedule anti-entropy jobs to verify and repair replicas.
3. **Envelope encryption & shared key custody**
- Implement envelope encryption (DEK+KEK) with threshold/organizational custody rather than per-role ownership.
- Store KEK metadata with UCXL manifests; rotate via BACKBEAT.
- Update crypto/key-manager stubs to real implementations once available.
4. **Shared UCXL Beacon index**
- Maintain an authoritative CID registry (DR/UCXL) replicated outside individual agents.
- Ensure metadata updates are durable and role-agnostic to prevent stranded CIDs.
5. **CI/SLO validation**
- Add automated tests/health checks covering provider refresh, replication factor, and persistent-storage guarantees.
- Gate releases on DHT resilience checks (provider TTLs, replica counts).
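Requirement 3 (DEK+KEK) can be sketched with stdlib AES-GCM: a random DEK encrypts the payload, and the KEK wraps the DEK. Key custody, thresholding, and BACKBEAT rotation are out of scope for this sketch.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with a fresh random nonce under an AES-256 key.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

// envelopeEncrypt seals the payload with a random DEK, then wraps the DEK
// with the long-lived KEK; both outputs are stored alongside the manifest.
func envelopeEncrypt(kek, payload []byte) (ciphertext, wrappedDEK []byte, err error) {
	dek := make([]byte, 32)
	if _, err = rand.Read(dek); err != nil {
		return nil, nil, err
	}
	if ciphertext, err = seal(dek, payload); err != nil {
		return nil, nil, err
	}
	if wrappedDEK, err = seal(kek, dek); err != nil {
		return nil, nil, err
	}
	return ciphertext, wrappedDEK, nil
}

// envelopeDecrypt unwraps the DEK with the KEK, then opens the payload.
func envelopeDecrypt(kek, ciphertext, wrappedDEK []byte) ([]byte, error) {
	dek, err := open(kek, wrappedDEK)
	if err != nil {
		return nil, err
	}
	return open(dek, ciphertext)
}

func main() {
	kek := make([]byte, 32)
	rand.Read(kek)
	ct, wrapped, _ := envelopeEncrypt(kek, []byte("ucxl manifest payload"))
	pt, _ := envelopeDecrypt(kek, ct, wrapped)
	fmt.Println(string(pt))
}
```

Rotating the KEK then only requires re-wrapping each stored DEK, not re-encrypting the payloads.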
## Integration Path for SEC-SLURP 1.1
- Incorporate the above requirements as acceptance criteria alongside LevelDB persistence.
- Sequence work to: migrate DHT interactions, introduce Pin Steward, implement envelope crypto, and wire CI validation.
- Attach artifacts (Pin Steward design, envelope crypto spec, CI scripts) to the Phase 1 deliverable checklist.

View File

@@ -5,10 +5,14 @@
- Upgraded SLURP's lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
- Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
- Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
- Implemented lightweight crypto/key-management stubs (`pkg/crypto/role_crypto_stub.go`, `pkg/crypto/key_manager_stub.go`) so SLURP modules compile while the production stack is ported.
- Updated DHT distribution and encrypted storage layers (`pkg/slurp/distribution/dht_impl.go`, `pkg/slurp/storage/encrypted_storage.go`) to use the crypto stubs, adding per-role fingerprints and durable decoding logic.
- Expanded storage metadata models (`pkg/slurp/storage/types.go`, `pkg/slurp/storage/backup_manager.go`) with fields referenced by backup/replication flows (progress, error messages, retention, data size).
- Incrementally stubbed/simplified distributed storage helpers to inch toward a compilable SLURP package.
- Attempted `GOWORK=off go test ./pkg/slurp`; the original authority-level blocker is resolved, but builds still fail in storage/index code due to remaining stub work (e.g., Bleve queries, DHT helpers).
## Recommended Next Steps
- Stub the remaining storage/index dependencies (Bleve query scaffolding, UCXL helpers, `errorCh` queues, cache regex usage) or neutralize the heavy modules so that `GOWORK=off go test ./pkg/slurp` compiles and runs.
- Feed the durable store into the resolver and temporal graph implementations to finish the SEC-SLURP 1.1 milestone once the package builds cleanly.
- Extend Prometheus metrics/logging to track cache hit/miss ratios plus persistence errors for observability alignment.
- Review unrelated changes still tracked on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert for focus.

View File

@@ -130,7 +130,27 @@ type ResolutionConfig struct {
// SlurpConfig defines SLURP settings
type SlurpConfig struct {
Enabled bool `yaml:"enabled"`
BaseURL string `yaml:"base_url"`
APIKey string `yaml:"api_key"`
Timeout time.Duration `yaml:"timeout"`
RetryCount int `yaml:"retry_count"`
RetryDelay time.Duration `yaml:"retry_delay"`
TemporalAnalysis SlurpTemporalAnalysisConfig `yaml:"temporal_analysis"`
Performance SlurpPerformanceConfig `yaml:"performance"`
}
// SlurpTemporalAnalysisConfig captures temporal behaviour tuning for SLURP.
type SlurpTemporalAnalysisConfig struct {
MaxDecisionHops int `yaml:"max_decision_hops"`
StalenessCheckInterval time.Duration `yaml:"staleness_check_interval"`
StalenessThreshold float64 `yaml:"staleness_threshold"`
}
// SlurpPerformanceConfig exposes performance related tunables for SLURP.
type SlurpPerformanceConfig struct {
MaxConcurrentResolutions int `yaml:"max_concurrent_resolutions"`
MetricsCollectionInterval time.Duration `yaml:"metrics_collection_interval"`
}
// WHOOSHAPIConfig defines WHOOSH API integration settings
@@ -211,7 +231,21 @@ func LoadFromEnvironment() (*Config, error) {
},
},
Slurp: SlurpConfig{
Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
BaseURL: getEnvOrDefault("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090"),
APIKey: getEnvOrFileContent("CHORUS_SLURP_API_KEY", "CHORUS_SLURP_API_KEY_FILE"),
Timeout: getEnvDurationOrDefault("CHORUS_SLURP_API_TIMEOUT", 15*time.Second),
RetryCount: getEnvIntOrDefault("CHORUS_SLURP_API_RETRY_COUNT", 3),
RetryDelay: getEnvDurationOrDefault("CHORUS_SLURP_API_RETRY_DELAY", 2*time.Second),
TemporalAnalysis: SlurpTemporalAnalysisConfig{
MaxDecisionHops: getEnvIntOrDefault("CHORUS_SLURP_MAX_DECISION_HOPS", 5),
StalenessCheckInterval: getEnvDurationOrDefault("CHORUS_SLURP_STALENESS_CHECK_INTERVAL", 5*time.Minute),
StalenessThreshold: 0.2,
},
Performance: SlurpPerformanceConfig{
MaxConcurrentResolutions: getEnvIntOrDefault("CHORUS_SLURP_MAX_CONCURRENT_RESOLUTIONS", 4),
MetricsCollectionInterval: getEnvDurationOrDefault("CHORUS_SLURP_METRICS_COLLECTION_INTERVAL", time.Minute),
},
},
Security: SecurityConfig{
KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),

View File

@@ -0,0 +1,23 @@
package crypto
import "time"
// GenerateKey returns a deterministic placeholder key identifier for the given role.
func (km *KeyManager) GenerateKey(role string) (string, error) {
return "stub-key-" + role, nil
}
// DeprecateKey is a no-op in the stub implementation.
func (km *KeyManager) DeprecateKey(keyID string) error {
return nil
}
// GetKeysForRotation mirrors SEC-SLURP-1.1 key rotation discovery while remaining inert.
func (km *KeyManager) GetKeysForRotation(maxAge time.Duration) ([]*KeyInfo, error) {
return nil, nil
}
// ValidateKeyFingerprint accepts all fingerprints in the stubbed environment.
func (km *KeyManager) ValidateKeyFingerprint(role, fingerprint string) bool {
return true
}

View File

@@ -0,0 +1,75 @@
package crypto
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"chorus/pkg/config"
)
type RoleCrypto struct {
config *config.Config
}
func NewRoleCrypto(cfg *config.Config, _ interface{}, _ interface{}, _ interface{}) (*RoleCrypto, error) {
if cfg == nil {
return nil, fmt.Errorf("config cannot be nil")
}
return &RoleCrypto{config: cfg}, nil
}
func (rc *RoleCrypto) EncryptForRole(data []byte, role string) ([]byte, string, error) {
if len(data) == 0 {
return []byte{}, rc.fingerprint(data), nil
}
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(encoded, data)
return encoded, rc.fingerprint(data), nil
}
func (rc *RoleCrypto) DecryptForRole(data []byte, role string, _ string) ([]byte, error) {
if len(data) == 0 {
return []byte{}, nil
}
decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(decoded, data)
if err != nil {
return nil, err
}
return decoded[:n], nil
}
func (rc *RoleCrypto) EncryptContextForRoles(payload interface{}, roles []string, _ []string) ([]byte, error) {
raw, err := json.Marshal(payload)
if err != nil {
return nil, err
}
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(raw)))
base64.StdEncoding.Encode(encoded, raw)
return encoded, nil
}
func (rc *RoleCrypto) fingerprint(data []byte) string {
sum := sha256.Sum256(data)
return base64.StdEncoding.EncodeToString(sum[:])
}
type StorageAccessController interface {
CanStore(role, key string) bool
CanRetrieve(role, key string) bool
}
type StorageAuditLogger interface {
LogEncryptionOperation(role, key, operation string, success bool)
LogDecryptionOperation(role, key, operation string, success bool)
LogKeyRotation(role, keyID string, success bool, message string)
LogError(message string)
LogAccessDenial(role, key, operation string)
}
type KeyInfo struct {
Role string
KeyID string
}

View File

@@ -0,0 +1,284 @@
package alignment
import "time"
// GoalStatistics summarizes goal management metrics.
type GoalStatistics struct {
TotalGoals int
ActiveGoals int
Completed int
Archived int
LastUpdated time.Time
}
// AlignmentGapAnalysis captures detected misalignments that require follow-up.
type AlignmentGapAnalysis struct {
Address string
Severity string
Findings []string
DetectedAt time.Time
}
// AlignmentComparison provides a simple comparison view between two contexts.
type AlignmentComparison struct {
PrimaryScore float64
SecondaryScore float64
Differences []string
}
// AlignmentStatistics aggregates assessment metrics across contexts.
type AlignmentStatistics struct {
TotalAssessments int
AverageScore float64
SuccessRate float64
FailureRate float64
LastUpdated time.Time
}
// ProgressHistory captures historical progress samples for a goal.
type ProgressHistory struct {
GoalID string
Samples []ProgressSample
}
// ProgressSample represents a single progress measurement.
type ProgressSample struct {
Timestamp time.Time
Percentage float64
}
// CompletionPrediction represents a simple completion forecast for a goal.
type CompletionPrediction struct {
GoalID string
EstimatedFinish time.Time
Confidence float64
}
// ProgressStatistics aggregates goal progress metrics.
type ProgressStatistics struct {
AverageCompletion float64
OpenGoals int
OnTrackGoals int
AtRiskGoals int
}
// DriftHistory tracks historical drift events.
type DriftHistory struct {
Address string
Events []DriftEvent
}
// DriftEvent captures a single drift occurrence.
type DriftEvent struct {
Timestamp time.Time
Severity DriftSeverity
Details string
}
// DriftThresholds defines sensitivity thresholds for drift detection.
type DriftThresholds struct {
SeverityThreshold DriftSeverity
ScoreDelta float64
ObservationWindow time.Duration
}
// DriftPatternAnalysis summarizes detected drift patterns.
type DriftPatternAnalysis struct {
Patterns []string
Summary string
}
// DriftPrediction provides a lightweight stub for future drift forecasting.
type DriftPrediction struct {
Address string
Horizon time.Duration
Severity DriftSeverity
Confidence float64
}
// DriftAlert represents an alert emitted when drift exceeds thresholds.
type DriftAlert struct {
ID string
Address string
Severity DriftSeverity
CreatedAt time.Time
Message string
}
// GoalRecommendation summarises next actions for a specific goal.
type GoalRecommendation struct {
GoalID string
Title string
Description string
Priority int
}
// StrategicRecommendation captures higher-level alignment guidance.
type StrategicRecommendation struct {
Theme string
Summary string
Impact string
RecommendedBy string
}
// PrioritizedRecommendation wraps a recommendation with ranking metadata.
type PrioritizedRecommendation struct {
Recommendation *AlignmentRecommendation
Score float64
Rank int
}
// RecommendationHistory tracks lifecycle updates for a recommendation.
type RecommendationHistory struct {
RecommendationID string
Entries []RecommendationHistoryEntry
}
// RecommendationHistoryEntry represents a single change entry.
type RecommendationHistoryEntry struct {
Timestamp time.Time
Status ImplementationStatus
Notes string
}
// ImplementationStatus reflects execution state for recommendations.
type ImplementationStatus string
const (
ImplementationPending ImplementationStatus = "pending"
ImplementationActive ImplementationStatus = "active"
ImplementationBlocked ImplementationStatus = "blocked"
ImplementationDone ImplementationStatus = "completed"
)
// RecommendationEffectiveness offers coarse metrics on outcome quality.
type RecommendationEffectiveness struct {
SuccessRate float64
AverageTime time.Duration
Feedback []string
}
// RecommendationStatistics aggregates recommendation issuance metrics.
type RecommendationStatistics struct {
TotalCreated int
TotalCompleted int
AveragePriority float64
LastUpdated time.Time
}
// AlignmentMetrics is a lightweight placeholder exported for engine integration.
type AlignmentMetrics struct {
Assessments int
SuccessRate float64
FailureRate float64
AverageScore float64
}
// GoalMetrics is a stub summarising per-goal metrics.
type GoalMetrics struct {
GoalID string
AverageScore float64
SuccessRate float64
LastUpdated time.Time
}
// ProgressMetrics is a stub capturing aggregate progress data.
type ProgressMetrics struct {
OverallCompletion float64
ActiveGoals int
CompletedGoals int
UpdatedAt time.Time
}
// MetricsTrends wraps high-level trend information.
type MetricsTrends struct {
Metric string
TrendLine []float64
Timestamp time.Time
}
// MetricsReport represents a generated metrics report placeholder.
type MetricsReport struct {
ID string
Generated time.Time
Summary string
}
// MetricsConfiguration reflects configuration for metrics collection.
type MetricsConfiguration struct {
Enabled bool
Interval time.Duration
}
// SyncResult summarises a synchronisation run.
type SyncResult struct {
SyncedItems int
Errors []string
}
// ImportResult summarises the outcome of an import operation.
type ImportResult struct {
Imported int
Skipped int
Errors []string
}
// SyncSettings captures synchronisation preferences.
type SyncSettings struct {
Enabled bool
Interval time.Duration
}
// SyncStatus provides health information about sync processes.
type SyncStatus struct {
LastSync time.Time
Healthy bool
Message string
}
// AssessmentValidation provides validation results for assessments.
type AssessmentValidation struct {
Valid bool
Issues []string
CheckedAt time.Time
}
// ConfigurationValidation summarises configuration validation status.
type ConfigurationValidation struct {
Valid bool
Messages []string
}
// WeightsValidation describes validation for weighting schemes.
type WeightsValidation struct {
Normalized bool
Adjustments map[string]float64
}
// ConsistencyIssue represents a detected consistency issue.
type ConsistencyIssue struct {
Description string
Severity DriftSeverity
DetectedAt time.Time
}
// AlignmentHealthCheck is a stub for health check outputs.
type AlignmentHealthCheck struct {
Status string
Details string
CheckedAt time.Time
}
// NotificationRules captures notification configuration stubs.
type NotificationRules struct {
Enabled bool
Channels []string
}
// NotificationRecord represents a delivered notification.
type NotificationRecord struct {
ID string
Timestamp time.Time
Recipient string
Status string
}

View File

@@ -4,176 +4,175 @@ import (
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
) )
// ProjectGoal represents a high-level project objective // ProjectGoal represents a high-level project objective
type ProjectGoal struct { type ProjectGoal struct {
ID string `json:"id"` // Unique identifier ID string `json:"id"` // Unique identifier
Name string `json:"name"` // Goal name Name string `json:"name"` // Goal name
Description string `json:"description"` // Detailed description Description string `json:"description"` // Detailed description
Keywords []string `json:"keywords"` // Associated keywords Keywords []string `json:"keywords"` // Associated keywords
Priority int `json:"priority"` // Priority level (1=highest) Priority int `json:"priority"` // Priority level (1=highest)
Phase string `json:"phase"` // Project phase Phase string `json:"phase"` // Project phase
Category string `json:"category"` // Goal category Category string `json:"category"` // Goal category
Owner string `json:"owner"` // Goal owner Owner string `json:"owner"` // Goal owner
Status GoalStatus `json:"status"` // Current status Status GoalStatus `json:"status"` // Current status
// Success criteria // Success criteria
Metrics []string `json:"metrics"` // Success metrics Metrics []string `json:"metrics"` // Success metrics
SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria
AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria
// Timeline // Timeline
StartDate *time.Time `json:"start_date,omitempty"` // Goal start date StartDate *time.Time `json:"start_date,omitempty"` // Goal start date
TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
// Relationships // Relationships
ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal
ChildGoalIDs []string `json:"child_goal_ids"` // Child goals ChildGoalIDs []string `json:"child_goal_ids"` // Child goals
Dependencies []string `json:"dependencies"` // Goal dependencies Dependencies []string `json:"dependencies"` // Goal dependencies
// Configuration // Configuration
Weights *GoalWeights `json:"weights"` // Assessment weights Weights *GoalWeights `json:"weights"` // Assessment weights
ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score
// Metadata // Metadata
CreatedAt time.Time `json:"created_at"` // When created CreatedAt time.Time `json:"created_at"` // When created
UpdatedAt time.Time `json:"updated_at"` // When last updated UpdatedAt time.Time `json:"updated_at"` // When last updated
CreatedBy string `json:"created_by"` // Who created it CreatedBy string `json:"created_by"` // Who created it
Tags []string `json:"tags"` // Goal tags Tags []string `json:"tags"` // Goal tags
Metadata map[string]interface{} `json:"metadata"` // Additional metadata Metadata map[string]interface{} `json:"metadata"` // Additional metadata
} }
// GoalStatus represents the current status of a goal // GoalStatus represents the current status of a goal
type GoalStatus string type GoalStatus string
const ( const (
GoalStatusDraft GoalStatus = "draft" // Goal is in draft state GoalStatusDraft GoalStatus = "draft" // Goal is in draft state
GoalStatusActive GoalStatus = "active" // Goal is active GoalStatusActive GoalStatus = "active" // Goal is active
GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold
GoalStatusCompleted GoalStatus = "completed" // Goal is completed GoalStatusCompleted GoalStatus = "completed" // Goal is completed
GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
GoalStatusArchived GoalStatus = "archived" // Goal is archived GoalStatusArchived GoalStatus = "archived" // Goal is archived
) )
// SuccessCriterion represents a specific success criterion for a goal // SuccessCriterion represents a specific success criterion for a goal
type SuccessCriterion struct { type SuccessCriterion struct {
ID string `json:"id"` // Criterion ID ID string `json:"id"` // Criterion ID
Description string `json:"description"` // Criterion description Description string `json:"description"` // Criterion description
MetricName string `json:"metric_name"` // Associated metric MetricName string `json:"metric_name"` // Associated metric
TargetValue interface{} `json:"target_value"` // Target value TargetValue interface{} `json:"target_value"` // Target value
CurrentValue interface{} `json:"current_value"` // Current value CurrentValue interface{} `json:"current_value"` // Current value
Unit string `json:"unit"` // Value unit Unit string `json:"unit"` // Value unit
ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.) ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.)
Weight float64 `json:"weight"` // Criterion weight Weight float64 `json:"weight"` // Criterion weight
Achieved bool `json:"achieved"` // Whether achieved Achieved bool `json:"achieved"` // Whether achieved
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
} }
// GoalWeights represents weights for different aspects of goal alignment assessment
type GoalWeights struct {
	KeywordMatch      float64 `json:"keyword_match"`      // Weight for keyword matching
	SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
	PurposeAlignment  float64 `json:"purpose_alignment"`  // Weight for purpose alignment
	TechnologyMatch   float64 `json:"technology_match"`   // Weight for technology matching
	QualityScore      float64 `json:"quality_score"`      // Weight for context quality
	RecentActivity    float64 `json:"recent_activity"`    // Weight for recent activity
	ImportanceScore   float64 `json:"importance_score"`   // Weight for component importance
}

// AlignmentAssessment represents overall alignment assessment for a context
type AlignmentAssessment struct {
	Address           ucxl.Address               `json:"address"`            // Context address
	OverallScore      float64                    `json:"overall_score"`      // Overall alignment score (0-1)
	GoalAlignments    []*GoalAlignment           `json:"goal_alignments"`    // Individual goal alignments
	StrengthAreas     []string                   `json:"strength_areas"`     // Areas of strong alignment
	WeaknessAreas     []string                   `json:"weakness_areas"`     // Areas of weak alignment
	Recommendations   []*AlignmentRecommendation `json:"recommendations"`    // Improvement recommendations
	AssessedAt        time.Time                  `json:"assessed_at"`        // When assessment was performed
	AssessmentVersion string                     `json:"assessment_version"` // Assessment algorithm version
	Confidence        float64                    `json:"confidence"`         // Assessment confidence (0-1)
	Metadata          map[string]interface{}     `json:"metadata"`           // Additional metadata
}
// GoalAlignment represents alignment assessment for a specific goal
type GoalAlignment struct {
	GoalID           string           `json:"goal_id"`           // Goal identifier
	GoalName         string           `json:"goal_name"`         // Goal name
	AlignmentScore   float64          `json:"alignment_score"`   // Alignment score (0-1)
	ComponentScores  *AlignmentScores `json:"component_scores"`  // Component-wise scores
	MatchedKeywords  []string         `json:"matched_keywords"`  // Keywords that matched
	MatchedCriteria  []string         `json:"matched_criteria"`  // Criteria that matched
	Explanation      string           `json:"explanation"`       // Alignment explanation
	ConfidenceLevel  float64          `json:"confidence_level"`  // Confidence in assessment
	ImprovementAreas []string         `json:"improvement_areas"` // Areas for improvement
	Strengths        []string         `json:"strengths"`         // Alignment strengths
}

// AlignmentScores represents component scores for alignment assessment
type AlignmentScores struct {
	KeywordScore    float64 `json:"keyword_score"`    // Keyword matching score
	SemanticScore   float64 `json:"semantic_score"`   // Semantic alignment score
	PurposeScore    float64 `json:"purpose_score"`    // Purpose alignment score
	TechnologyScore float64 `json:"technology_score"` // Technology alignment score
	QualityScore    float64 `json:"quality_score"`    // Context quality score
	ActivityScore   float64 `json:"activity_score"`   // Recent activity score
	ImportanceScore float64 `json:"importance_score"` // Component importance score
}
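A sketch of how component scores like those in AlignmentScores might be folded into a single overall score using weights like those in GoalWeights. The map-based representation and the normalization by the weight sum are illustrative assumptions, not the repository's actual algorithm.

```go
package main

import "fmt"

// weightedOverall computes a weighted average of named component scores.
// Components missing from scores contribute 0.
func weightedOverall(scores, weights map[string]float64) float64 {
	var sum, wsum float64
	for name, w := range weights {
		sum += w * scores[name]
		wsum += w
	}
	if wsum == 0 {
		return 0 // no weights configured
	}
	return sum / wsum
}

func main() {
	scores := map[string]float64{"keyword": 0.9, "semantic": 0.6, "purpose": 0.8}
	weights := map[string]float64{"keyword": 1, "semantic": 2, "purpose": 1}
	fmt.Printf("%.3f\n", weightedOverall(scores, weights)) // (0.9 + 1.2 + 0.8) / 4
}
```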
// AlignmentRecommendation represents a recommendation for improving alignment
type AlignmentRecommendation struct {
	ID          string             `json:"id"`                // Recommendation ID
	Type        RecommendationType `json:"type"`              // Recommendation type
	Priority    int                `json:"priority"`          // Priority (1=highest)
	Title       string             `json:"title"`             // Recommendation title
	Description string             `json:"description"`       // Detailed description
	GoalID      *string            `json:"goal_id,omitempty"` // Related goal
	Address     ucxl.Address       `json:"address"`           // Context address

	// Implementation details
	ActionItems     []string    `json:"action_items"`     // Specific actions
	EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
	ExpectedImpact  ImpactLevel `json:"expected_impact"`  // Expected impact
	RequiredRoles   []string    `json:"required_roles"`   // Required roles
	Prerequisites   []string    `json:"prerequisites"`    // Prerequisites

	// Status tracking
	Status      RecommendationStatus `json:"status"`                 // Implementation status
	AssignedTo  []string             `json:"assigned_to"`            // Assigned team members
	CreatedAt   time.Time            `json:"created_at"`             // When created
	DueDate     *time.Time           `json:"due_date,omitempty"`     // Implementation due date
	CompletedAt *time.Time           `json:"completed_at,omitempty"` // When completed

	// Metadata
	Tags     []string               `json:"tags"`     // Recommendation tags
	Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}

// RecommendationType represents types of alignment recommendations
type RecommendationType string

const (
	RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
	RecommendationPurposeAlignment   RecommendationType = "purpose_alignment"   // Align purpose better
	RecommendationTechnologyUpdate   RecommendationType = "technology_update"   // Update technology usage
	RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
	RecommendationDocumentation      RecommendationType = "documentation"       // Add/improve documentation
	RecommendationRefactoring        RecommendationType = "refactoring"         // Code refactoring
	RecommendationArchitectural      RecommendationType = "architectural"       // Architectural changes
	RecommendationTesting            RecommendationType = "testing"             // Testing improvements
	RecommendationPerformance        RecommendationType = "performance"         // Performance optimization
	RecommendationSecurity           RecommendationType = "security"            // Security enhancements
)

// EffortLevel represents estimated effort levels
type EffortLevel string

const (
	EffortLow      EffortLevel = "low"       // Low effort (1-2 hours)
	EffortMedium   EffortLevel = "medium"    // Medium effort (1-2 days)
	EffortHigh     EffortLevel = "high"      // High effort (1-2 weeks)
	EffortVeryHigh EffortLevel = "very_high" // Very high effort (>2 weeks)
)
// ImpactLevel represents expected impact levels
type ImpactLevel string

const (
	ImpactLow      ImpactLevel = "low"      // Low impact
	ImpactMedium   ImpactLevel = "medium"   // Medium impact
	ImpactHigh     ImpactLevel = "high"     // High impact
	ImpactCritical ImpactLevel = "critical" // Critical impact
)
// GoalProgress represents progress toward goal achievement
type GoalProgress struct {
	GoalID               string               `json:"goal_id"`                        // Goal identifier
	CompletionPercentage float64              `json:"completion_percentage"`          // Completion percentage (0-100)
	CriteriaProgress     []*CriterionProgress `json:"criteria_progress"`              // Progress for each criterion
	Milestones           []*MilestoneProgress `json:"milestones"`                     // Milestone progress
	Velocity             float64              `json:"velocity"`                       // Progress velocity (% per day)
	EstimatedCompletion  *time.Time           `json:"estimated_completion,omitempty"` // Estimated completion date
	RiskFactors          []string             `json:"risk_factors"`                   // Identified risk factors
	Blockers             []string             `json:"blockers"`                       // Current blockers
	LastUpdated          time.Time            `json:"last_updated"`                   // When last updated
	UpdatedBy            string               `json:"updated_by"`                     // Who last updated
}
// CriterionProgress represents progress for a specific success criterion
type CriterionProgress struct {
	CriterionID        string      `json:"criterion_id"`          // Criterion ID
	CurrentValue       interface{} `json:"current_value"`         // Current value
	TargetValue        interface{} `json:"target_value"`          // Target value
	ProgressPercentage float64     `json:"progress_percentage"`   // Progress percentage
	Achieved           bool        `json:"achieved"`              // Whether achieved
	AchievedAt         *time.Time  `json:"achieved_at,omitempty"` // When achieved
	Notes              string      `json:"notes"`                 // Progress notes
}

// MilestoneProgress represents progress for a goal milestone
type MilestoneProgress struct {
	MilestoneID          string          `json:"milestone_id"`          // Milestone ID
	Name                 string          `json:"name"`                  // Milestone name
	Status               MilestoneStatus `json:"status"`                // Current status
	CompletionPercentage float64         `json:"completion_percentage"` // Completion percentage
	PlannedDate          time.Time       `json:"planned_date"`          // Planned completion date
	ActualDate           *time.Time      `json:"actual_date,omitempty"` // Actual completion date
	DelayReason          string          `json:"delay_reason"`          // Reason for delay if applicable
}

// MilestoneStatus represents status of a milestone
// AlignmentDrift represents detected alignment drift
type AlignmentDrift struct {
	Address            ucxl.Address  `json:"address"`             // Context address
	DriftType          DriftType     `json:"drift_type"`          // Type of drift
	Severity           DriftSeverity `json:"severity"`            // Drift severity
	CurrentScore       float64       `json:"current_score"`       // Current alignment score
	PreviousScore      float64       `json:"previous_score"`      // Previous alignment score
	ScoreDelta         float64       `json:"score_delta"`         // Change in score
	AffectedGoals      []string      `json:"affected_goals"`      // Goals affected by drift
	DetectedAt         time.Time     `json:"detected_at"`         // When drift was detected
	DriftReason        []string      `json:"drift_reason"`        // Reasons for drift
	RecommendedActions []string      `json:"recommended_actions"` // Recommended actions
	Priority           DriftPriority `json:"priority"`            // Priority for addressing
}
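One way a drift `Severity` could be assigned from `ScoreDelta` (current minus previous score). The thresholds below are illustrative assumptions only; the source does not specify the banding.

```go
package main

import "fmt"

// classifyDrift maps a change in alignment score to a severity label.
// Non-negative deltas are not drift; larger drops escalate the severity.
func classifyDrift(delta float64) string {
	switch {
	case delta >= 0:
		return "none"
	case delta > -0.05:
		return "low"
	case delta > -0.15:
		return "medium"
	case delta > -0.30:
		return "high"
	default:
		return "critical"
	}
}

func main() {
	// Score fell from 0.91 to 0.72: a 0.19 drop.
	fmt.Println(classifyDrift(0.72 - 0.91)) // high
}
```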
// DriftType represents types of alignment drift
type DriftType string

const (
	DriftTypeGradual       DriftType = "gradual"        // Gradual drift over time
	DriftTypeSudden        DriftType = "sudden"         // Sudden drift
	DriftTypeOscillating   DriftType = "oscillating"    // Oscillating drift pattern
	DriftTypeGoalChange    DriftType = "goal_change"    // Due to goal changes
	DriftTypeContextChange DriftType = "context_change" // Due to context changes
)
// DriftPriority represents priority for addressing detected drift
type DriftPriority string

const (
	DriftPriorityLow    DriftPriority = "low"    // Low priority
	DriftPriorityMedium DriftPriority = "medium" // Medium priority
	DriftPriorityHigh   DriftPriority = "high"   // High priority
	DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
)

// AlignmentTrends represents alignment trends over time
type AlignmentTrends struct {
	Address          ucxl.Address       `json:"address"`           // Context address
	TimeRange        time.Duration      `json:"time_range"`        // Analyzed time range
	DataPoints       []*TrendDataPoint  `json:"data_points"`       // Trend data points
	OverallTrend     TrendDirection     `json:"overall_trend"`     // Overall trend direction
	TrendStrength    float64            `json:"trend_strength"`    // Trend strength (0-1)
	Volatility       float64            `json:"volatility"`        // Score volatility
	SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
	AnomalousPoints  []*AnomalousPoint  `json:"anomalous_points"`  // Anomalous data points
	Predictions      []*TrendPrediction `json:"predictions"`       // Future trend predictions
	AnalyzedAt       time.Time          `json:"analyzed_at"`       // When analysis was performed
}

// TrendDataPoint represents a single data point in alignment trends
type TrendDataPoint struct {
	Timestamp      time.Time          `json:"timestamp"`       // Data point timestamp
	AlignmentScore float64            `json:"alignment_score"` // Alignment score at this time
	GoalScores     map[string]float64 `json:"goal_scores"`     // Individual goal scores
	Events         []string           `json:"events"`          // Events that occurred around this time
}

// TrendDirection represents direction of alignment trends
type TrendDirection string

const (
	TrendDirectionImproving TrendDirection = "improving" // Improving trend
	TrendDirectionDeclining TrendDirection = "declining" // Declining trend
	TrendDirectionStable    TrendDirection = "stable"    // Stable trend
	TrendDirectionVolatile  TrendDirection = "volatile"  // Volatile trend
)
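A sketch of deriving an `OverallTrend` from a series of alignment scores: a least-squares slope over the data points picks improving, declining, or stable. The slope threshold and the omission of the "volatile" case (which would need a volatility measure) are assumptions.

```go
package main

import "fmt"

// trendDirection fits a least-squares line to the score series (x = index)
// and classifies the slope.
func trendDirection(scores []float64) string {
	n := float64(len(scores))
	if n < 2 {
		return "stable"
	}
	var sumX, sumY, sumXY, sumXX float64
	for i, y := range scores {
		x := float64(i)
		sumX += x
		sumY += y
		sumXY += x * y
		sumXX += x * x
	}
	slope := (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
	switch {
	case slope > 0.01:
		return "improving"
	case slope < -0.01:
		return "declining"
	default:
		return "stable"
	}
}

func main() {
	fmt.Println(trendDirection([]float64{0.50, 0.55, 0.61, 0.66})) // improving
}
```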
// SeasonalPattern represents a detected seasonal pattern in alignment
type SeasonalPattern struct {
	PatternType string        `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
	Period      time.Duration `json:"period"`       // Pattern period
	Amplitude   float64       `json:"amplitude"`    // Pattern amplitude
	Confidence  float64       `json:"confidence"`   // Pattern confidence
	Description string        `json:"description"`  // Pattern description
}

// AnomalousPoint represents an anomalous data point
type AnomalousPoint struct {
	Timestamp      time.Time `json:"timestamp"`       // When anomaly occurred
	ExpectedScore  float64   `json:"expected_score"`  // Expected alignment score
	ActualScore    float64   `json:"actual_score"`    // Actual alignment score
	AnomalyScore   float64   `json:"anomaly_score"`   // Anomaly score
	PossibleCauses []string  `json:"possible_causes"` // Possible causes
}

// TrendPrediction represents a prediction of future alignment trends
type TrendPrediction struct {
	Timestamp          time.Time           `json:"timestamp"`           // Predicted timestamp
	PredictedScore     float64             `json:"predicted_score"`     // Predicted alignment score
	ConfidenceInterval *ConfidenceInterval `json:"confidence_interval"` // Confidence interval
	Probability        float64             `json:"probability"`         // Prediction probability
}

// ConfidenceInterval represents a confidence interval for predictions
// AlignmentWeights represents weights for alignment calculation
type AlignmentWeights struct {
	GoalWeights      map[string]float64 `json:"goal_weights"`      // Weights by goal ID
	CategoryWeights  map[string]float64 `json:"category_weights"`  // Weights by goal category
	PriorityWeights  map[int]float64    `json:"priority_weights"`  // Weights by priority level
	PhaseWeights     map[string]float64 `json:"phase_weights"`     // Weights by project phase
	RoleWeights      map[string]float64 `json:"role_weights"`      // Weights by role
	ComponentWeights *AlignmentScores   `json:"component_weights"` // Weights for score components
	TemporalWeights  *TemporalWeights   `json:"temporal_weights"`  // Temporal weighting factors
}

// TemporalWeights represents temporal weighting factors
type TemporalWeights struct {
	RecentWeight     float64       `json:"recent_weight"`     // Weight for recent changes
	DecayFactor      float64       `json:"decay_factor"`      // Score decay factor over time
	RecencyWindow    time.Duration `json:"recency_window"`    // Window for considering recent activity
	HistoricalWeight float64       `json:"historical_weight"` // Weight for historical alignment
}

// GoalFilter represents filtering criteria for goal listing
// GoalHierarchy represents the hierarchical structure of goals
type GoalHierarchy struct {
	RootGoals   []*GoalNode `json:"root_goals"`   // Root level goals
	MaxDepth    int         `json:"max_depth"`    // Maximum hierarchy depth
	TotalGoals  int         `json:"total_goals"`  // Total number of goals
	GeneratedAt time.Time   `json:"generated_at"` // When hierarchy was generated
}

// GoalNode represents a node in the goal hierarchy
type GoalNode struct {
	Goal     *ProjectGoal `json:"goal"`     // Goal information
	Children []*GoalNode  `json:"children"` // Child goals
	Depth    int          `json:"depth"`    // Depth in hierarchy
	Path     []string     `json:"path"`     // Path from root
}
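A sketch of how `Depth` and `Path` could be filled in by a pre-order walk over the hierarchy's root goals. The `node` type below is a pared-down stand-in for GoalNode (keyed by name rather than a full ProjectGoal), and `annotate` is a hypothetical helper.

```go
package main

import "fmt"

// node is a minimal stand-in for GoalNode.
type node struct {
	name     string
	children []*node
	depth    int
	path     []string
}

// annotate walks the tree, setting each node's depth and root-to-node path.
func annotate(n *node, depth int, path []string) {
	n.depth = depth
	// Copy the parent path so siblings do not share backing arrays.
	n.path = append(append([]string{}, path...), n.name)
	for _, c := range n.children {
		annotate(c, depth+1, n.path)
	}
}

func main() {
	leaf := &node{name: "reduce-latency"}
	root := &node{name: "performance", children: []*node{leaf}}
	annotate(root, 0, nil)
	fmt.Println(leaf.depth, leaf.path) // 1 [performance reduce-latency]
}
```

The defensive copy in `annotate` matters: appending to a shared parent slice can silently overwrite a sibling's path.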
// GoalValidation represents validation results for a goal
type GoalValidation struct {
	Valid       bool                 `json:"valid"`        // Whether goal is valid
	Issues      []*ValidationIssue   `json:"issues"`       // Validation issues
	Warnings    []*ValidationWarning `json:"warnings"`     // Validation warnings
	ValidatedAt time.Time            `json:"validated_at"` // When validated
}

// ValidationIssue represents a validation issue
type ValidationIssue struct {
	Field      string `json:"field"`      // Affected field
	Code       string `json:"code"`       // Issue code
	Message    string `json:"message"`    // Issue message
	Severity   string `json:"severity"`   // Issue severity
	Suggestion string `json:"suggestion"` // Suggested fix
}

// ValidationWarning represents a validation warning
type ValidationWarning struct {
	Field      string `json:"field"`      // Affected field
	Code       string `json:"code"`       // Warning code
	Message    string `json:"message"`    // Warning message
	Suggestion string `json:"suggestion"` // Suggested improvement
}

// GoalMilestone represents a milestone for goal tracking
type GoalMilestone struct {
	ID           string    `json:"id"`           // Milestone ID
	Name         string    `json:"name"`         // Milestone name
	Description  string    `json:"description"`  // Milestone description
	PlannedDate  time.Time `json:"planned_date"` // Planned completion date
	Weight       float64   `json:"weight"`       // Milestone weight
	Criteria     []string  `json:"criteria"`     // Completion criteria
	Dependencies []string  `json:"dependencies"` // Milestone dependencies
	CreatedAt    time.Time `json:"created_at"`   // When created
}

// MilestoneStatus represents status of a milestone (duplicate removed)
// ProgressUpdate represents an update to goal progress
type ProgressUpdate struct {
	UpdateType       ProgressUpdateType `json:"update_type"`       // Type of update
	CompletionDelta  float64            `json:"completion_delta"`  // Change in completion percentage
	CriteriaUpdates  []*CriterionUpdate `json:"criteria_updates"`  // Updates to criteria
	MilestoneUpdates []*MilestoneUpdate `json:"milestone_updates"` // Updates to milestones
	Notes            string             `json:"notes"`             // Update notes
	UpdatedBy        string             `json:"updated_by"`        // Who made the update
	Evidence         []string           `json:"evidence"`          // Evidence for progress
	RiskFactors      []string           `json:"risk_factors"`      // New risk factors
	Blockers         []string           `json:"blockers"`          // New blockers
}

// ProgressUpdateType represents types of progress updates
type ProgressUpdateType string

const (
	ProgressUpdateTypeIncrement ProgressUpdateType = "increment" // Incremental progress
	ProgressUpdateTypeAbsolute  ProgressUpdateType = "absolute"  // Absolute progress value
	ProgressUpdateTypeMilestone ProgressUpdateType = "milestone" // Milestone completion
	ProgressUpdateTypeCriterion ProgressUpdateType = "criterion" // Criterion achievement
)
// CriterionUpdate represents an update to a success criterion // CriterionUpdate represents an update to a success criterion
type CriterionUpdate struct { type CriterionUpdate struct {
CriterionID string `json:"criterion_id"` // Criterion ID CriterionID string `json:"criterion_id"` // Criterion ID
NewValue interface{} `json:"new_value"` // New current value NewValue interface{} `json:"new_value"` // New current value
Achieved bool `json:"achieved"` // Whether now achieved Achieved bool `json:"achieved"` // Whether now achieved
Notes string `json:"notes"` // Update notes Notes string `json:"notes"` // Update notes
} }
// MilestoneUpdate represents an update to a milestone // MilestoneUpdate represents an update to a milestone
type MilestoneUpdate struct { type MilestoneUpdate struct {
MilestoneID string `json:"milestone_id"` // Milestone ID MilestoneID string `json:"milestone_id"` // Milestone ID
NewStatus MilestoneStatus `json:"new_status"` // New status NewStatus MilestoneStatus `json:"new_status"` // New status
CompletedDate *time.Time `json:"completed_date,omitempty"` // Completion date if completed CompletedDate *time.Time `json:"completed_date,omitempty"` // Completion date if completed
Notes string `json:"notes"` // Update notes Notes string `json:"notes"` // Update notes
} }

View File

@@ -26,12 +26,25 @@ type ContextNode struct {
	Insights []string `json:"insights"` // Analytical insights

	// Hierarchy control
	OverridesParent    bool         `json:"overrides_parent"`    // Whether this overrides parent context
	ContextSpecificity int          `json:"context_specificity"` // Specificity level (higher = more specific)
	AppliesToChildren  bool         `json:"applies_to_children"` // Whether this applies to child directories
	AppliesTo          ContextScope `json:"applies_to"`          // Scope of application within hierarchy
	Parent             *string      `json:"parent,omitempty"`    // Parent context path
	Children           []string     `json:"children,omitempty"`  // Child context paths

	// File metadata
	FileType     string     `json:"file_type"`               // File extension or type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size in bytes
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification timestamp
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash for change detection

	// Temporal metadata
	GeneratedAt   time.Time `json:"generated_at"`   // When context was generated
	UpdatedAt     time.Time `json:"updated_at"`     // Last update timestamp
	CreatedBy     string    `json:"created_by"`     // Who created the context
	WhoUpdated    string    `json:"who_updated"`    // Who performed the last update
	RAGConfidence float64   `json:"rag_confidence"` // RAG system confidence (0-1)

	// Access control

View File

@@ -40,7 +40,7 @@ func (ch *ConsistentHashingImpl) AddNode(nodeID string) error {
	for i := 0; i < ch.virtualNodes; i++ {
		virtualNodeKey := fmt.Sprintf("%s:%d", nodeID, i)
		hash := ch.hashKey(virtualNodeKey)
		ch.ring[hash] = nodeID
		ch.sortedHashes = append(ch.sortedHashes, hash)
	}
@@ -88,7 +88,7 @@ func (ch *ConsistentHashingImpl) GetNode(key string) (string, error) {
	}
	hash := ch.hashKey(key)
	// Find the first node with hash >= key hash
	idx := sort.Search(len(ch.sortedHashes), func(i int) bool {
		return ch.sortedHashes[i] >= hash
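The `sort.Search` call above finds the first virtual-node hash at or clockwise of the key's hash; when the key hashes past the largest entry, the lookup wraps back to index 0. A standalone sketch of that successor lookup (hypothetical `successor` helper, assuming the hash slice is kept sorted ascending as in `AddNode`):

```go
package main

import (
	"fmt"
	"sort"
)

// successor returns the first ring hash >= keyHash, wrapping to the
// start of the ring when the key hashes past the largest virtual node.
func successor(sortedHashes []uint32, keyHash uint32) uint32 {
	idx := sort.Search(len(sortedHashes), func(i int) bool {
		return sortedHashes[i] >= keyHash
	})
	if idx == len(sortedHashes) {
		idx = 0 // wrapped past the end of the ring
	}
	return sortedHashes[idx]
}

func main() {
	ring := []uint32{10, 20, 30}
	fmt.Println(successor(ring, 15)) // next hash clockwise: 20
	fmt.Println(successor(ring, 35)) // wraps to the first hash: 10
}
```

The wrap at `idx == len(sortedHashes)` is what makes the hash space behave as a ring rather than a line.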
@@ -175,7 +175,7 @@ func (ch *ConsistentHashingImpl) GetNodeDistribution() map[string]float64 {
	// Calculate the range each node is responsible for
	for i, hash := range ch.sortedHashes {
		nodeID := ch.ring[hash]
		var rangeSize uint64
		if i == len(ch.sortedHashes)-1 {
			// Last hash wraps around to first
@@ -230,7 +230,7 @@ func (ch *ConsistentHashingImpl) calculateLoadBalance() float64 {
	}
	avgVariance := totalVariance / float64(len(distribution))
	// Convert to a balance score (higher is better, 1.0 is perfect)
	// Use 1/(1+variance) to map variance to [0,1] range
	return 1.0 / (1.0 + avgVariance/100.0)
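The `1/(1 + variance/100)` mapping gives a perfectly even distribution a score of exactly 1.0 and decays toward 0 as node shares diverge from the ideal. A self-contained sketch of the same formula (hypothetical `loadBalanceScore` helper operating on the percentage shares returned by `GetNodeDistribution`):

```go
package main

import "fmt"

// loadBalanceScore maps the average squared deviation of each node's
// percentage share from the ideal share into (0, 1], where 1.0 means
// perfectly balanced. Mirrors the formula in calculateLoadBalance.
func loadBalanceScore(distribution map[string]float64) float64 {
	if len(distribution) == 0 {
		return 1.0
	}
	ideal := 100.0 / float64(len(distribution))
	totalVariance := 0.0
	for _, share := range distribution {
		d := share - ideal
		totalVariance += d * d
	}
	avgVariance := totalVariance / float64(len(distribution))
	return 1.0 / (1.0 + avgVariance/100.0)
}

func main() {
	fmt.Println(loadBalanceScore(map[string]float64{"a": 50, "b": 50})) // perfectly balanced: 1
	fmt.Println(loadBalanceScore(map[string]float64{"a": 90, "b": 10})) // heavily skewed, near 0
}
```

Dividing the variance by 100 before the reciprocal keeps typical percentage-scale deviations from collapsing every score to near zero.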
@@ -261,11 +261,11 @@ func (ch *ConsistentHashingImpl) GetMetrics() *ConsistentHashMetrics {
	defer ch.mu.RUnlock()
	return &ConsistentHashMetrics{
		TotalKeys:         0, // Would be maintained by usage tracking
		NodeUtilization:   ch.GetNodeDistribution(),
		RebalanceEvents:   0, // Would be maintained by event tracking
		AverageSeekTime:   0.1, // Placeholder - would be measured
		LoadBalanceScore:  ch.calculateLoadBalance(),
		LastRebalanceTime: 0, // Would be maintained by event tracking
	}
}
@@ -306,7 +306,7 @@ func (ch *ConsistentHashingImpl) addNodeUnsafe(nodeID string) error {
	for i := 0; i < ch.virtualNodes; i++ {
		virtualNodeKey := fmt.Sprintf("%s:%d", nodeID, i)
		hash := ch.hashKey(virtualNodeKey)
		ch.ring[hash] = nodeID
		ch.sortedHashes = append(ch.sortedHashes, hash)
	}
@@ -333,7 +333,7 @@ func (ch *ConsistentHashingImpl) SetVirtualNodeCount(count int) error {
	defer ch.mu.Unlock()
	ch.virtualNodes = count
	// Rehash with new virtual node count
	return ch.Rehash()
}
@@ -364,8 +364,8 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
		if hash >= keyHash {
			distance = hash - keyHash
		} else {
			// Wrap around distance without overflowing 32-bit space
			distance = uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
		}
		distances = append(distances, struct {
@@ -397,4 +397,4 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
	}
	return nodes, hashes, nil
}
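The overflow fix above widens the arithmetic to `uint64` because `1<<32 - keyHash` does not fit in `uint32` when `keyHash` is small. A standalone sketch of the corrected clockwise-distance calculation (hypothetical `ringDistance` helper matching the hunk's expression):

```go
package main

import "fmt"

// ringDistance returns the clockwise distance from keyHash to hash on a
// 32-bit hash ring, widening to uint64 so the wrap-around term
// (1<<32 - keyHash) cannot overflow before the final truncation.
func ringDistance(keyHash, hash uint32) uint32 {
	if hash >= keyHash {
		return hash - keyHash
	}
	return uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
}

func main() {
	fmt.Println(ringDistance(10, 30))         // simple forward distance: 20
	fmt.Println(ringDistance(0xFFFFFFF0, 16)) // wraps past zero: 32
}
```

The result still fits in `uint32` because the clockwise distance on a 2^32 ring is at most 2^32 - 1; only the intermediate sum needs the wider type.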

View File

@@ -7,39 +7,39 @@ import (
"sync" "sync"
"time" "time"
"chorus/pkg/dht"
"chorus/pkg/crypto"
"chorus/pkg/election"
"chorus/pkg/config" "chorus/pkg/config"
"chorus/pkg/ucxl" "chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/election"
slurpContext "chorus/pkg/slurp/context" slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
) )
// DistributionCoordinator orchestrates distributed context operations across the cluster
type DistributionCoordinator struct {
	mu     sync.RWMutex
	config *config.Config

	dht              dht.DHT
	roleCrypto       *crypto.RoleCrypto
	election         election.Election
	distributor      ContextDistributor
	replicationMgr   ReplicationManager
	conflictResolver ConflictResolver
	gossipProtocol   GossipProtocol
	networkMgr       NetworkManager

	// Coordination state
	isLeader          bool
	leaderID          string
	coordinationTasks chan *CoordinationTask
	distributionQueue chan *DistributionRequest
	roleFilters       map[string]*RoleFilter
	healthMonitors    map[string]*HealthMonitor

	// Statistics and metrics
	stats              *CoordinationStatistics
	performanceMetrics *PerformanceMetrics

	// Configuration
	maxConcurrentTasks  int
	healthCheckInterval time.Duration
@@ -49,14 +49,14 @@ type DistributionCoordinator struct {
// CoordinationTask represents a task for the coordinator
type CoordinationTask struct {
	TaskID      string               `json:"task_id"`
	TaskType    CoordinationTaskType `json:"task_type"`
	Priority    Priority             `json:"priority"`
	CreatedAt   time.Time            `json:"created_at"`
	RequestedBy string               `json:"requested_by"`
	Payload     interface{}          `json:"payload"`
	Context     context.Context      `json:"-"`
	Callback    func(error)          `json:"-"`
}

// CoordinationTaskType represents different types of coordination tasks
@@ -74,55 +74,55 @@ const (
// DistributionRequest represents a request for context distribution
type DistributionRequest struct {
	RequestID   string                           `json:"request_id"`
	ContextNode *slurpContext.ContextNode        `json:"context_node"`
	TargetRoles []string                         `json:"target_roles"`
	Priority    Priority                         `json:"priority"`
	RequesterID string                           `json:"requester_id"`
	CreatedAt   time.Time                        `json:"created_at"`
	Options     *DistributionOptions             `json:"options"`
	Callback    func(*DistributionResult, error) `json:"-"`
}

// DistributionOptions contains options for context distribution
type DistributionOptions struct {
	ReplicationFactor  int                `json:"replication_factor"`
	ConsistencyLevel   ConsistencyLevel   `json:"consistency_level"`
	EncryptionLevel    crypto.AccessLevel `json:"encryption_level"`
	TTL                *time.Duration     `json:"ttl,omitempty"`
	PreferredZones     []string           `json:"preferred_zones"`
	ExcludedNodes      []string           `json:"excluded_nodes"`
	ConflictResolution ResolutionType     `json:"conflict_resolution"`
}

// DistributionResult represents the result of a distribution operation
type DistributionResult struct {
	RequestID         string              `json:"request_id"`
	Success           bool                `json:"success"`
	DistributedNodes  []string            `json:"distributed_nodes"`
	ReplicationFactor int                 `json:"replication_factor"`
	ProcessingTime    time.Duration       `json:"processing_time"`
	Errors            []string            `json:"errors"`
	ConflictResolved  *ConflictResolution `json:"conflict_resolved,omitempty"`
	CompletedAt       time.Time           `json:"completed_at"`
}

// RoleFilter manages role-based filtering for context access
type RoleFilter struct {
	RoleID              string             `json:"role_id"`
	AccessLevel         crypto.AccessLevel `json:"access_level"`
	AllowedCompartments []string           `json:"allowed_compartments"`
	FilterRules         []*FilterRule      `json:"filter_rules"`
	LastUpdated         time.Time          `json:"last_updated"`
}

// FilterRule represents a single filtering rule
type FilterRule struct {
	RuleID   string                 `json:"rule_id"`
	RuleType FilterRuleType         `json:"rule_type"`
	Pattern  string                 `json:"pattern"`
	Action   FilterAction           `json:"action"`
	Metadata map[string]interface{} `json:"metadata"`
}

// FilterRuleType represents different types of filter rules
@@ -139,10 +139,10 @@ const (
type FilterAction string

const (
	FilterActionAllow  FilterAction = "allow"
	FilterActionDeny   FilterAction = "deny"
	FilterActionModify FilterAction = "modify"
	FilterActionAudit  FilterAction = "audit"
)

// HealthMonitor monitors the health of a specific component
@@ -160,10 +160,10 @@ type HealthMonitor struct {
type ComponentType string

const (
	ComponentTypeDHT              ComponentType = "dht"
	ComponentTypeReplication      ComponentType = "replication"
	ComponentTypeGossip           ComponentType = "gossip"
	ComponentTypeNetwork          ComponentType = "network"
	ComponentTypeConflictResolver ComponentType = "conflict_resolver"
)
@@ -190,13 +190,13 @@ type CoordinationStatistics struct {
// PerformanceMetrics tracks detailed performance metrics
type PerformanceMetrics struct {
	ThroughputPerSecond float64            `json:"throughput_per_second"`
	LatencyPercentiles  map[string]float64 `json:"latency_percentiles"`
	ErrorRateByType     map[string]float64 `json:"error_rate_by_type"`
	ResourceUtilization map[string]float64 `json:"resource_utilization"`
	NetworkMetrics      *NetworkMetrics    `json:"network_metrics"`
	StorageMetrics      *StorageMetrics    `json:"storage_metrics"`
	LastCalculated      time.Time          `json:"last_calculated"`
}

// NetworkMetrics tracks network-related performance
@@ -210,24 +210,24 @@ type NetworkMetrics struct {
// StorageMetrics tracks storage-related performance
type StorageMetrics struct {
	TotalContexts         int64   `json:"total_contexts"`
	StorageUtilization    float64 `json:"storage_utilization"`
	CompressionRatio      float64 `json:"compression_ratio"`
	ReplicationEfficiency float64 `json:"replication_efficiency"`
	CacheHitRate          float64 `json:"cache_hit_rate"`
}
// NewDistributionCoordinator creates a new distribution coordinator
func NewDistributionCoordinator(
	config *config.Config,
	dhtInstance dht.DHT,
	roleCrypto *crypto.RoleCrypto,
	election election.Election,
) (*DistributionCoordinator, error) {
	if config == nil {
		return nil, fmt.Errorf("config is required")
	}
	if dhtInstance == nil {
		return nil, fmt.Errorf("DHT instance is required")
	}
	if roleCrypto == nil {
@@ -238,14 +238,14 @@ func NewDistributionCoordinator(
	}

	// Create distributor
	distributor, err := NewDHTContextDistributor(dhtInstance, roleCrypto, election, config)
	if err != nil {
		return nil, fmt.Errorf("failed to create context distributor: %w", err)
	}

	coord := &DistributionCoordinator{
		config:      config,
		dht:         dhtInstance,
		roleCrypto:  roleCrypto,
		election:    election,
		distributor: distributor,
@@ -264,9 +264,9 @@ func NewDistributionCoordinator(
			LatencyPercentiles:  make(map[string]float64),
			ErrorRateByType:     make(map[string]float64),
			ResourceUtilization: make(map[string]float64),
			NetworkMetrics:      &NetworkMetrics{},
			StorageMetrics:      &StorageMetrics{},
			LastCalculated:      time.Now(),
		},
	}
@@ -356,7 +356,7 @@ func (dc *DistributionCoordinator) CoordinateReplication(
		CreatedAt:   time.Now(),
		RequestedBy: dc.config.Agent.ID,
		Payload: map[string]interface{}{
			"address":       address,
			"target_factor": targetFactor,
		},
		Context: ctx,
@@ -398,14 +398,14 @@ func (dc *DistributionCoordinator) GetClusterHealth() (*ClusterHealth, error) {
	defer dc.mu.RUnlock()

	health := &ClusterHealth{
		OverallStatus:   dc.calculateOverallHealth(),
		NodeCount:       len(dc.healthMonitors) + 1, // Placeholder count including current node
		HealthyNodes:    0,
		UnhealthyNodes:  0,
		ComponentHealth: make(map[string]*ComponentHealth),
		LastUpdated:     time.Now(),
		Alerts:          []string{},
		Recommendations: []string{},
	}

	// Calculate component health
@@ -582,7 +582,7 @@ func (dc *DistributionCoordinator) initializeComponents() error {
func (dc *DistributionCoordinator) initializeRoleFilters() {
	// Initialize role filters based on configuration
	roles := []string{"senior_architect", "project_manager", "devops_engineer", "backend_developer", "frontend_developer"}
	for _, role := range roles {
		dc.roleFilters[role] = &RoleFilter{
			RoleID: role,
@@ -598,8 +598,8 @@ func (dc *DistributionCoordinator) initializeHealthMonitors() {
	components := map[string]ComponentType{
		"dht":               ComponentTypeDHT,
		"replication":       ComponentTypeReplication,
		"gossip":            ComponentTypeGossip,
		"network":           ComponentTypeNetwork,
		"conflict_resolver": ComponentTypeConflictResolver,
	}
@@ -682,8 +682,8 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
		Success:          false,
		DistributedNodes: []string{},
		ProcessingTime:   0,
		Errors:           []string{},
		CompletedAt:      time.Now(),
	}

	// Execute distribution via distributor
@@ -703,14 +703,14 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
// ClusterHealth represents overall cluster health
type ClusterHealth struct {
	OverallStatus   HealthStatus                `json:"overall_status"`
	NodeCount       int                         `json:"node_count"`
	HealthyNodes    int                         `json:"healthy_nodes"`
	UnhealthyNodes  int                         `json:"unhealthy_nodes"`
	ComponentHealth map[string]*ComponentHealth `json:"component_health"`
	LastUpdated     time.Time                   `json:"last_updated"`
	Alerts          []string                    `json:"alerts"`
	Recommendations []string                    `json:"recommendations"`
}

// ComponentHealth represents individual component health
@@ -736,14 +736,14 @@ func (dc *DistributionCoordinator) getDefaultDistributionOptions() *Distribution
	return &DistributionOptions{
		ReplicationFactor:  3,
		ConsistencyLevel:   ConsistencyEventual,
		EncryptionLevel:    crypto.AccessLevel(slurpContext.AccessMedium),
		ConflictResolution: ResolutionMerged,
	}
}

func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
	// Placeholder implementation
	return crypto.AccessLevel(slurpContext.AccessMedium)
}

func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {
@@ -796,13 +796,13 @@ func (dc *DistributionCoordinator) updatePerformanceMetrics() {
func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
	switch severity {
	case ConflictSeverityCritical:
		return PriorityCritical
	case ConflictSeverityHigh:
		return PriorityHigh
	case ConflictSeverityMedium:
		return PriorityNormal
	default:
		return PriorityLow
	}
}

View File

@@ -9,12 +9,12 @@ import (
"sync" "sync"
"time" "time"
"chorus/pkg/dht"
"chorus/pkg/crypto"
"chorus/pkg/election"
"chorus/pkg/ucxl"
"chorus/pkg/config" "chorus/pkg/config"
"chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/election"
slurpContext "chorus/pkg/slurp/context" slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
) )
// ContextDistributor handles distributed context operations via DHT
@@ -27,62 +27,68 @@ type ContextDistributor interface {
	// The context is encrypted for each specified role and distributed across
	// the cluster with the configured replication factor
	DistributeContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error

	// RetrieveContext gets context from DHT and decrypts for the requesting role
	// Automatically handles role-based decryption and returns the resolved context
	RetrieveContext(ctx context.Context, address ucxl.Address, role string) (*slurpContext.ResolvedContext, error)

	// UpdateContext updates existing distributed context with conflict resolution
	// Uses vector clocks and leader coordination for consistent updates
	UpdateContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) (*ConflictResolution, error)

	// DeleteContext removes context from distributed storage
	// Handles distributed deletion across all replicas
	DeleteContext(ctx context.Context, address ucxl.Address) error

	// ListDistributedContexts lists contexts available in the DHT for a role
	// Provides efficient enumeration with role-based filtering
	ListDistributedContexts(ctx context.Context, role string, criteria *DistributionCriteria) ([]*DistributedContextInfo, error)

	// Sync synchronizes local state with distributed DHT
	// Ensures eventual consistency by exchanging metadata with peers
	Sync(ctx context.Context) (*SyncResult, error)

	// Replicate ensures context has the desired replication factor
	// Manages replica placement and health across cluster nodes
	Replicate(ctx context.Context, address ucxl.Address, replicationFactor int) error

	// GetReplicaHealth returns health status of context replicas
	// Provides visibility into replication status and node health
	GetReplicaHealth(ctx context.Context, address ucxl.Address) (*ReplicaHealth, error)

	// GetDistributionStats returns distribution performance statistics
	GetDistributionStats() (*DistributionStatistics, error)

	// SetReplicationPolicy configures replication behavior
	SetReplicationPolicy(policy *ReplicationPolicy) error

	// Start initializes background distribution routines
	Start(ctx context.Context) error

	// Stop releases distribution resources
	Stop(ctx context.Context) error
}
// DHTStorage provides direct DHT storage operations for context data
type DHTStorage interface {
	// Put stores encrypted context data in the DHT
	Put(ctx context.Context, key string, data []byte, options *DHTStoreOptions) error

	// Get retrieves encrypted context data from the DHT
	Get(ctx context.Context, key string) ([]byte, *DHTMetadata, error)

	// Delete removes data from the DHT
	Delete(ctx context.Context, key string) error

	// Exists checks if data exists in the DHT
	Exists(ctx context.Context, key string) (bool, error)

	// FindProviders finds nodes that have the specified data
	FindProviders(ctx context.Context, key string) ([]string, error)

	// ListKeys lists all keys matching a pattern
	ListKeys(ctx context.Context, pattern string) ([]string, error)

	// GetStats returns DHT operation statistics
	GetStats() (*DHTStatistics, error)
}
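The `DHTStorage` surface above is straightforward to stand in for during tests. Below is a minimal in-memory sketch covering only a Put/Get/Exists subset; the trimmed signatures (no `context.Context`, options, or metadata types) are a simplification for illustration, not the package's real API.

```go
package main

import (
	"fmt"
	"sync"
)

// memDHT is a process-local stand-in for DHT storage, usable as a test
// double. It covers only Put/Get/Exists and omits the DHTStoreOptions,
// DHTMetadata, and DHTStatistics types from the full interface.
type memDHT struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func newMemDHT() *memDHT { return &memDHT{data: map[string][]byte{}} }

func (m *memDHT) Put(key string, value []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = append([]byte(nil), value...) // copy to isolate callers
	return nil
}

func (m *memDHT) Get(key string) ([]byte, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	v, ok := m.data[key]
	if !ok {
		return nil, fmt.Errorf("key not found: %s", key)
	}
	return v, nil
}

func (m *memDHT) Exists(key string) bool {
	m.mu.RLock()
	defer m.mu.RUnlock()
	_, ok := m.data[key]
	return ok
}

func main() {
	d := newMemDHT()
	d.Put("beacon::abc", []byte(`{"v":1}`))
	v, _ := d.Get("beacon::abc")
	fmt.Println(string(v), d.Exists("missing"))
}
```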
@@ -92,18 +98,18 @@ type ConflictResolver interface {
	// ResolveConflict resolves conflicts between concurrent context updates
	// Uses vector clocks and semantic merging rules for resolution
	ResolveConflict(ctx context.Context, local *slurpContext.ContextNode, remote *slurpContext.ContextNode) (*ConflictResolution, error)

	// DetectConflicts detects potential conflicts before they occur
	// Provides early warning for conflicting operations
	DetectConflicts(ctx context.Context, update *slurpContext.ContextNode) ([]*PotentialConflict, error)

	// MergeContexts merges multiple context versions semantically
	// Combines changes from different sources intelligently
	MergeContexts(ctx context.Context, contexts []*slurpContext.ContextNode) (*slurpContext.ContextNode, error)

	// GetConflictHistory returns history of resolved conflicts
	GetConflictHistory(ctx context.Context, address ucxl.Address) ([]*ConflictResolution, error)

	// SetResolutionStrategy configures conflict resolution strategy
	SetResolutionStrategy(strategy *ResolutionStrategy) error
}
@@ -112,19 +118,19 @@ type ConflictResolver interface {
type ReplicationManager interface {
	// EnsureReplication ensures context meets replication requirements
	EnsureReplication(ctx context.Context, address ucxl.Address, factor int) error

	// RepairReplicas repairs missing or corrupted replicas
	RepairReplicas(ctx context.Context, address ucxl.Address) (*RepairResult, error)

	// BalanceReplicas rebalances replicas across cluster nodes
	BalanceReplicas(ctx context.Context) (*RebalanceResult, error)

	// GetReplicationStatus returns current replication status
	GetReplicationStatus(ctx context.Context, address ucxl.Address) (*ReplicationStatus, error)

	// SetReplicationFactor sets the desired replication factor
	SetReplicationFactor(factor int) error

	// GetReplicationStats returns replication statistics
	GetReplicationStats() (*ReplicationStatistics, error)
}
@@ -133,19 +139,19 @@ type ReplicationManager interface {
type GossipProtocol interface {
	// StartGossip begins gossip protocol for metadata synchronization
	StartGossip(ctx context.Context) error

	// StopGossip stops gossip protocol
	StopGossip(ctx context.Context) error

	// GossipMetadata exchanges metadata with peer nodes
	GossipMetadata(ctx context.Context, peer string) error

	// GetGossipState returns current gossip protocol state
	GetGossipState() (*GossipState, error)

	// SetGossipInterval configures gossip frequency
	SetGossipInterval(interval time.Duration) error

	// GetGossipStats returns gossip protocol statistics
	GetGossipStats() (*GossipStatistics, error)
}
@@ -154,19 +160,19 @@ type GossipProtocol interface {
type NetworkManager interface {
	// DetectPartition detects network partitions in the cluster
	DetectPartition(ctx context.Context) (*PartitionInfo, error)

	// GetTopology returns current network topology
	GetTopology(ctx context.Context) (*NetworkTopology, error)

	// GetPeers returns list of available peer nodes
	GetPeers(ctx context.Context) ([]*PeerInfo, error)

	// CheckConnectivity checks connectivity to peer nodes
	CheckConnectivity(ctx context.Context, peers []string) (*ConnectivityReport, error)

	// RecoverFromPartition attempts to recover from network partition
	RecoverFromPartition(ctx context.Context) (*RecoveryResult, error)

	// GetNetworkStats returns network performance statistics
	GetNetworkStats() (*NetworkStatistics, error)
}
@@ -175,59 +181,59 @@ type NetworkManager interface {
// DistributionCriteria represents criteria for listing distributed contexts
type DistributionCriteria struct {
	Tags         []string       `json:"tags"`         // Required tags
	Technologies []string       `json:"technologies"` // Required technologies
	MinReplicas  int            `json:"min_replicas"` // Minimum replica count
	MaxAge       *time.Duration `json:"max_age"`      // Maximum age
	HealthyOnly  bool           `json:"healthy_only"` // Only healthy replicas
	Limit        int            `json:"limit"`        // Maximum results
	Offset       int            `json:"offset"`       // Result offset
}

// DistributedContextInfo represents information about distributed context
type DistributedContextInfo struct {
	Address         ucxl.Address `json:"address"`          // Context address
	Roles           []string     `json:"roles"`            // Accessible roles
	ReplicaCount    int          `json:"replica_count"`    // Number of replicas
	HealthyReplicas int          `json:"healthy_replicas"` // Healthy replica count
	LastUpdated     time.Time    `json:"last_updated"`     // Last update time
	Version         int64        `json:"version"`          // Version number
	Size            int64        `json:"size"`             // Data size
	Checksum        string       `json:"checksum"`         // Data checksum
}

// ConflictResolution represents the result of conflict resolution
type ConflictResolution struct {
	Address            ucxl.Address              `json:"address"`             // Context address
	ResolutionType     ResolutionType            `json:"resolution_type"`     // How conflict was resolved
	MergedContext      *slurpContext.ContextNode `json:"merged_context"`      // Resulting merged context
	ConflictingSources []string                  `json:"conflicting_sources"` // Sources of conflict
	ResolutionTime     time.Duration             `json:"resolution_time"`     // Time taken to resolve
	ResolvedAt         time.Time                 `json:"resolved_at"`         // When resolved
	Confidence         float64                   `json:"confidence"`          // Confidence in resolution
	ManualReview       bool                      `json:"manual_review"`       // Whether manual review needed
}

// ResolutionType represents different types of conflict resolution
type ResolutionType string

const (
	ResolutionMerged         ResolutionType = "merged"          // Contexts were merged
	ResolutionLastWriter     ResolutionType = "last_writer"     // Last writer wins
	ResolutionLeaderDecision ResolutionType = "leader_decision" // Leader made decision
	ResolutionManual         ResolutionType = "manual"          // Manual resolution required
	ResolutionFailed         ResolutionType = "failed"          // Resolution failed
)

// PotentialConflict represents a detected potential conflict
type PotentialConflict struct {
	Address        ucxl.Address     `json:"address"`         // Context address
	ConflictType   ConflictType     `json:"conflict_type"`   // Type of conflict
	Description    string           `json:"description"`     // Conflict description
	Severity       ConflictSeverity `json:"severity"`        // Conflict severity
	AffectedFields []string         `json:"affected_fields"` // Fields in conflict
	Suggestions    []string         `json:"suggestions"`     // Resolution suggestions
	DetectedAt     time.Time        `json:"detected_at"`     // When detected
}
// ConflictType represents different types of conflicts
@@ -245,88 +251,88 @@ const (
type ConflictSeverity string

const (
	ConflictSeverityLow      ConflictSeverity = "low"      // Low severity - auto-resolvable
	ConflictSeverityMedium   ConflictSeverity = "medium"   // Medium severity - may need review
	ConflictSeverityHigh     ConflictSeverity = "high"     // High severity - needs attention
	ConflictSeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
)

// ResolutionStrategy represents conflict resolution strategy configuration
type ResolutionStrategy struct {
	DefaultResolution ResolutionType `json:"default_resolution"` // Default resolution method
	FieldPriorities   map[string]int `json:"field_priorities"`   // Field priority mapping
	AutoMergeEnabled  bool           `json:"auto_merge_enabled"` // Enable automatic merging
	RequireConsensus  bool           `json:"require_consensus"`  // Require node consensus
	LeaderBreaksTies  bool           `json:"leader_breaks_ties"` // Leader resolves ties
	MaxConflictAge    time.Duration  `json:"max_conflict_age"`   // Max age before escalation
	EscalationRoles   []string       `json:"escalation_roles"`   // Roles for manual escalation
}
// SyncResult represents the result of synchronization operation
type SyncResult struct {
	SyncedContexts    int           `json:"synced_contexts"`    // Contexts synchronized
	ConflictsResolved int           `json:"conflicts_resolved"` // Conflicts resolved
	Errors            []string      `json:"errors"`             // Synchronization errors
	SyncTime          time.Duration `json:"sync_time"`          // Total sync time
	PeersContacted    int           `json:"peers_contacted"`    // Number of peers contacted
	DataTransferred   int64         `json:"data_transferred"`   // Bytes transferred
	SyncedAt          time.Time     `json:"synced_at"`          // When sync completed
}

// ReplicaHealth represents health status of context replicas
type ReplicaHealth struct {
	Address         ucxl.Address   `json:"address"`          // Context address
	TotalReplicas   int            `json:"total_replicas"`   // Total replica count
	HealthyReplicas int            `json:"healthy_replicas"` // Healthy replica count
	FailedReplicas  int            `json:"failed_replicas"`  // Failed replica count
	ReplicaNodes    []*ReplicaNode `json:"replica_nodes"`    // Individual replica status
	OverallHealth   HealthStatus   `json:"overall_health"`   // Overall health status
	LastChecked     time.Time      `json:"last_checked"`     // When last checked
	RepairNeeded    bool           `json:"repair_needed"`    // Whether repair is needed
}

// ReplicaNode represents status of individual replica node
type ReplicaNode struct {
	NodeID         string        `json:"node_id"`         // Node identifier
	Status         ReplicaStatus `json:"status"`          // Replica status
	LastSeen       time.Time     `json:"last_seen"`       // When last seen
	Version        int64         `json:"version"`         // Context version
	Checksum       string        `json:"checksum"`        // Data checksum
	Latency        time.Duration `json:"latency"`         // Network latency
	NetworkAddress string        `json:"network_address"` // Network address
}

// ReplicaStatus represents status of individual replica
type ReplicaStatus string

const (
	ReplicaHealthy     ReplicaStatus = "healthy"     // Replica is healthy
	ReplicaStale       ReplicaStatus = "stale"       // Replica is stale
	ReplicaCorrupted   ReplicaStatus = "corrupted"   // Replica is corrupted
	ReplicaUnreachable ReplicaStatus = "unreachable" // Replica is unreachable
	ReplicaSyncing     ReplicaStatus = "syncing"     // Replica is syncing
)
// HealthStatus represents overall health status
type HealthStatus string

const (
	HealthHealthy  HealthStatus = "healthy"  // All replicas healthy
	HealthDegraded HealthStatus = "degraded" // Some replicas unhealthy
	HealthCritical HealthStatus = "critical" // Most replicas unhealthy
	HealthFailed   HealthStatus = "failed"   // All replicas failed
)
// ReplicationPolicy represents replication behavior configuration
type ReplicationPolicy struct {
	DefaultFactor     int              `json:"default_factor"`     // Default replication factor
	MinFactor         int              `json:"min_factor"`         // Minimum replication factor
	MaxFactor         int              `json:"max_factor"`         // Maximum replication factor
	PreferredZones    []string         `json:"preferred_zones"`    // Preferred availability zones
	AvoidSameNode     bool             `json:"avoid_same_node"`    // Avoid same physical node
	ConsistencyLevel  ConsistencyLevel `json:"consistency_level"`  // Consistency requirements
	RepairThreshold   float64          `json:"repair_threshold"`   // Health threshold for repair
	RebalanceInterval time.Duration    `json:"rebalance_interval"` // Rebalancing frequency
}
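The three factor fields imply an ordering that `SetReplicationPolicy` implementations would want to validate. A small sketch of that check; the invariant `MinFactor <= DefaultFactor <= MaxFactor` is implied by the field names rather than stated in the file:

```go
package main

import "fmt"

// checkReplicationBounds enforces the ordering MinFactor <= DefaultFactor
// <= MaxFactor (and a floor of one replica) implied by the
// ReplicationPolicy field names. The invariant is an assumption.
func checkReplicationBounds(min, def, max int) error {
	if min < 1 || min > def || def > max {
		return fmt.Errorf("invalid replication bounds: min=%d default=%d max=%d", min, def, max)
	}
	return nil
}

func main() {
	fmt.Println(checkReplicationBounds(2, 3, 5))
	fmt.Println(checkReplicationBounds(3, 2, 5))
}
```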
// ConsistencyLevel represents consistency requirements
@@ -340,12 +346,12 @@ const (
// DHTStoreOptions represents options for DHT storage operations
type DHTStoreOptions struct {
	ReplicationFactor int                    `json:"replication_factor"` // Number of replicas
	TTL               *time.Duration         `json:"ttl,omitempty"`      // Time to live
	Priority          Priority               `json:"priority"`           // Storage priority
	Compress          bool                   `json:"compress"`           // Whether to compress
	Checksum          bool                   `json:"checksum"`           // Whether to checksum
	Metadata          map[string]interface{} `json:"metadata"`           // Additional metadata
}
// Priority represents storage operation priority
@@ -360,12 +366,12 @@ const (
// DHTMetadata represents metadata for DHT stored data
type DHTMetadata struct {
	StoredAt          time.Time              `json:"stored_at"`          // When stored
	UpdatedAt         time.Time              `json:"updated_at"`         // When last updated
	Version           int64                  `json:"version"`            // Version number
	Size              int64                  `json:"size"`               // Data size
	Checksum          string                 `json:"checksum"`           // Data checksum
	ReplicationFactor int                    `json:"replication_factor"` // Number of replicas
	TTL               *time.Time             `json:"ttl,omitempty"`      // Time to live
	Metadata          map[string]interface{} `json:"metadata"`           // Additional metadata
}

View File

@@ -10,18 +10,18 @@ import (
	"sync"
	"time"

	"chorus/pkg/config"
	"chorus/pkg/crypto"
	"chorus/pkg/dht"
	"chorus/pkg/election"
	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/ucxl"
)
// DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
type DHTContextDistributor struct {
	mu         sync.RWMutex
	dht        dht.DHT
	roleCrypto *crypto.RoleCrypto
	election   election.Election
	config     *config.Config
@@ -37,7 +37,7 @@ type DHTContextDistributor struct {
// NewDHTContextDistributor creates a new DHT-based context distributor
func NewDHTContextDistributor(
	dht dht.DHT,
	roleCrypto *crypto.RoleCrypto,
	election election.Election,
	config *config.Config,
@@ -147,36 +147,43 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
		return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
	}

	// Prepare context payload for role encryption
	rawContext, err := json.Marshal(node)
	if err != nil {
		return d.recordError(fmt.Sprintf("failed to marshal context: %v", err))
	}

	// Create distribution metadata (checksum calculated per-role below)
	metadata := &DistributionMetadata{
		Address:           node.UCXLAddress,
		Roles:             roles,
		Version:           1,
		VectorClock:       clock,
		DistributedBy:     d.config.Agent.ID,
		DistributedAt:     time.Now(),
		ReplicationFactor: d.getReplicationFactor(),
	}

	// Store encrypted data in DHT for each role
	for _, role := range roles {
		key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)

		cipher, fingerprint, err := d.roleCrypto.EncryptForRole(rawContext, role)
		if err != nil {
			return d.recordError(fmt.Sprintf("failed to encrypt context for role %s: %v", role, err))
		}

		// Create role-specific storage package
		storagePackage := &ContextStoragePackage{
			EncryptedData:  cipher,
			KeyFingerprint: fingerprint,
			Metadata:       metadata,
			Role:           role,
			StoredAt:       time.Now(),
		}

		metadata.Checksum = d.calculateChecksum(cipher)

		// Serialize for storage
		storageBytes, err := json.Marshal(storagePackage)
		if err != nil {
@@ -252,25 +259,30 @@ func (d *DHTContextDistributor) RetrieveContext(ctx context.Context, address ucx
	}

	// Decrypt context for role
	plain, err := d.roleCrypto.DecryptForRole(storagePackage.EncryptedData, role, storagePackage.KeyFingerprint)
	if err != nil {
		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
	}

	var contextNode slurpContext.ContextNode
	if err := json.Unmarshal(plain, &contextNode); err != nil {
		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decode context: %v", err))
	}

	// Convert to resolved context
	resolvedContext := &slurpContext.ResolvedContext{
		UCXLAddress:           contextNode.UCXLAddress,
		Summary:               contextNode.Summary,
		Purpose:               contextNode.Purpose,
		Technologies:          contextNode.Technologies,
		Tags:                  contextNode.Tags,
		Insights:              contextNode.Insights,
		ContextSourcePath:     contextNode.Path,
		InheritanceChain:      []string{contextNode.Path},
		ResolutionConfidence:  contextNode.RAGConfidence,
		BoundedDepth:          1,
		GlobalContextsApplied: false,
		ResolvedAt:            time.Now(),
	}
	// Update statistics
@@ -304,15 +316,15 @@ func (d *DHTContextDistributor) UpdateContext(ctx context.Context, node *slurpCo
	// Convert existing resolved context back to context node for comparison
	existingNode := &slurpContext.ContextNode{
		Path:          existingContext.ContextSourcePath,
		UCXLAddress:   existingContext.UCXLAddress,
		Summary:       existingContext.Summary,
		Purpose:       existingContext.Purpose,
		Technologies:  existingContext.Technologies,
		Tags:          existingContext.Tags,
		Insights:      existingContext.Insights,
		RAGConfidence: existingContext.ResolutionConfidence,
		GeneratedAt:   existingContext.ResolvedAt,
	}
	// Use conflict resolver to handle the update
@@ -357,7 +369,7 @@ func (d *DHTContextDistributor) DeleteContext(ctx context.Context, address ucxl.
func (d *DHTContextDistributor) ListDistributedContexts(ctx context.Context, role string, criteria *DistributionCriteria) ([]*DistributedContextInfo, error) {
	// This is a simplified implementation
	// In production, we'd maintain proper indexes and filtering
	results := []*DistributedContextInfo{}

	limit := 100
	if criteria != nil && criteria.Limit > 0 {
@@ -380,13 +392,13 @@ func (d *DHTContextDistributor) Sync(ctx context.Context) (*SyncResult, error) {
	}

	result := &SyncResult{
		SyncedContexts:    0, // Would be populated in real implementation
		ConflictsResolved: 0,
		Errors:            []string{},
		SyncTime:          time.Since(start),
		PeersContacted:    len(d.dht.GetConnectedPeers()),
		DataTransferred:   0,
		SyncedAt:          time.Now(),
	}

	return result, nil
@@ -453,28 +465,13 @@ func (d *DHTContextDistributor) calculateChecksum(data interface{}) string {
	return hex.EncodeToString(hash[:])
}

// Start starts the distribution service
func (d *DHTContextDistributor) Start(ctx context.Context) error {
	if d.gossipProtocol != nil {
		if err := d.gossipProtocol.StartGossip(ctx); err != nil {
			return fmt.Errorf("failed to start gossip protocol: %w", err)
		}
	}
	return nil
}
@@ -488,22 +485,23 @@ func (d *DHTContextDistributor) Stop(ctx context.Context) error {
// ContextStoragePackage represents a complete package for DHT storage
type ContextStoragePackage struct {
	EncryptedData  []byte                `json:"encrypted_data"`
	KeyFingerprint string                `json:"key_fingerprint,omitempty"`
	Metadata       *DistributionMetadata `json:"metadata"`
	Role           string                `json:"role"`
	StoredAt       time.Time             `json:"stored_at"`
}

// DistributionMetadata contains metadata for distributed context
type DistributionMetadata struct {
	Address           ucxl.Address `json:"address"`
	Roles             []string     `json:"roles"`
	Version           int64        `json:"version"`
	VectorClock       *VectorClock `json:"vector_clock"`
	DistributedBy     string       `json:"distributed_by"`
	DistributedAt     time.Time    `json:"distributed_at"`
	ReplicationFactor int          `json:"replication_factor"`
	Checksum          string       `json:"checksum"`
}
// DHTKeyGenerator implements KeyGenerator interface
@@ -532,65 +530,124 @@ func (kg *DHTKeyGenerator) GenerateReplicationKey(address string) string {
// Component constructors - these would be implemented in separate files

// NewReplicationManager creates a new replication manager
func NewReplicationManager(dht dht.DHT, config *config.Config) (ReplicationManager, error) {
	impl, err := NewReplicationManagerImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewConflictResolver creates a new conflict resolver
func NewConflictResolver(dht dht.DHT, config *config.Config) (ConflictResolver, error) {
	// Placeholder implementation until full resolver is wired
	return &ConflictResolverImpl{}, nil
}

// NewGossipProtocol creates a new gossip protocol
func NewGossipProtocol(dht dht.DHT, config *config.Config) (GossipProtocol, error) {
	impl, err := NewGossipProtocolImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewNetworkManager creates a new network manager
func NewNetworkManager(dht dht.DHT, config *config.Config) (NetworkManager, error) {
	impl, err := NewNetworkManagerImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewVectorClockManager creates a new vector clock manager
func NewVectorClockManager(dht dht.DHT, nodeID string) (VectorClockManager, error) {
	return &defaultVectorClockManager{
		clocks: make(map[string]*VectorClock),
	}, nil
}
// ConflictResolverImpl is a temporary stub until the full resolver is implemented
type ConflictResolverImpl struct{}
func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
	return &ConflictResolution{
		Address:        local.UCXLAddress,
		ResolutionType: ResolutionMerged,
		MergedContext:  local,
		ResolutionTime: time.Millisecond,
		ResolvedAt:     time.Now(),
		Confidence:     0.95,
	}, nil
}
// defaultVectorClockManager provides a minimal vector clock store for SEC-SLURP scaffolding.
type defaultVectorClockManager struct {
	mu     sync.Mutex
	clocks map[string]*VectorClock
}

func (vcm *defaultVectorClockManager) GetClock(nodeID string) (*VectorClock, error) {
	vcm.mu.Lock()
	defer vcm.mu.Unlock()
	if clock, ok := vcm.clocks[nodeID]; ok {
		return clock, nil
	}
	clock := &VectorClock{
		Clock:     map[string]int64{nodeID: time.Now().Unix()},
		UpdatedAt: time.Now(),
	}
	vcm.clocks[nodeID] = clock
	return clock, nil
}
func (vcm *defaultVectorClockManager) UpdateClock(nodeID string, clock *VectorClock) error {
	vcm.mu.Lock()
	defer vcm.mu.Unlock()
	vcm.clocks[nodeID] = clock
	return nil
}

func (vcm *defaultVectorClockManager) CompareClock(clock1, clock2 *VectorClock) ClockRelation {
	if clock1 == nil || clock2 == nil {
		return ClockConcurrent
	}
	if clock1.UpdatedAt.Before(clock2.UpdatedAt) {
		return ClockBefore
	}
	if clock1.UpdatedAt.After(clock2.UpdatedAt) {
		return ClockAfter
	}
	return ClockEqual
}

func (vcm *defaultVectorClockManager) MergeClock(clocks []*VectorClock) *VectorClock {
	if len(clocks) == 0 {
		return &VectorClock{
			Clock:     map[string]int64{},
			UpdatedAt: time.Now(),
		}
	}
	merged := &VectorClock{
		Clock:     make(map[string]int64),
		UpdatedAt: clocks[0].UpdatedAt,
	}
	for _, clock := range clocks {
		if clock == nil {
			continue
		}
		if clock.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = clock.UpdatedAt
		}
		for node, value := range clock.Clock {
			if existing, ok := merged.Clock[node]; !ok || value > existing {
				merged.Clock[node] = value
			}
		}
	}
	return merged
}
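The `MergeClock` implementation above takes the per-node maximum across all input clocks, which is the standard vector-clock join. A self-contained sketch (the `VectorClock` type is mirrored here for illustration rather than imported from the package) demonstrates the expected behaviour:

```go
package main

import (
	"fmt"
	"time"
)

// VectorClock mirrors the shape used in the diff above, for illustration only.
type VectorClock struct {
	Clock     map[string]int64
	UpdatedAt time.Time
}

// mergeClocks takes the element-wise maximum of each node's counter,
// matching the MergeClock semantics above.
func mergeClocks(clocks []*VectorClock) *VectorClock {
	merged := &VectorClock{Clock: make(map[string]int64)}
	for _, c := range clocks {
		if c == nil {
			continue
		}
		if c.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = c.UpdatedAt
		}
		for node, v := range c.Clock {
			if existing, ok := merged.Clock[node]; !ok || v > existing {
				merged.Clock[node] = v
			}
		}
	}
	return merged
}

func main() {
	a := &VectorClock{Clock: map[string]int64{"n1": 3, "n2": 1}, UpdatedAt: time.Now()}
	b := &VectorClock{Clock: map[string]int64{"n1": 2, "n2": 5}, UpdatedAt: time.Now()}
	m := mergeClocks([]*VectorClock{a, b})
	fmt.Println(m.Clock["n1"], m.Clock["n2"]) // per-node maxima: 3 5
}
```

Note that `CompareClock` above orders clocks by `UpdatedAt` timestamps rather than comparing counters component-wise, so it cannot detect true concurrency between non-nil clocks; that is a known limitation of the scaffolding.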

View File

@@ -15,48 +15,48 @@ import (
// MonitoringSystem provides comprehensive monitoring for the distributed context system
type MonitoringSystem struct {
	mu           sync.RWMutex
	config       *config.Config
	metrics      *MetricsCollector
	healthChecks *HealthCheckManager
	alertManager *AlertManager
	dashboard    *DashboardServer
	logManager   *LogManager
	traceManager *TraceManager

	// State
	running         bool
	monitoringPort  int
	updateInterval  time.Duration
	retentionPeriod time.Duration
}

// MetricsCollector collects and aggregates system metrics
type MetricsCollector struct {
	mu              sync.RWMutex
	timeSeries      map[string]*TimeSeries
	counters        map[string]*Counter
	gauges          map[string]*Gauge
	histograms      map[string]*Histogram
	customMetrics   map[string]*CustomMetric
	aggregatedStats *AggregatedStatistics
	exporters       []MetricsExporter
	lastCollection  time.Time
}

// TimeSeries represents a time-series metric
type TimeSeries struct {
	Name         string             `json:"name"`
	Labels       map[string]string  `json:"labels"`
	DataPoints   []*TimeSeriesPoint `json:"data_points"`
	RetentionTTL time.Duration      `json:"retention_ttl"`
	LastUpdated  time.Time          `json:"last_updated"`
}

// TimeSeriesPoint represents a single data point in a time series
type TimeSeriesPoint struct {
	Timestamp time.Time         `json:"timestamp"`
	Value     float64           `json:"value"`
	Labels    map[string]string `json:"labels,omitempty"`
}
@@ -64,7 +64,7 @@ type TimeSeriesPoint struct {
type Counter struct {
	Name        string            `json:"name"`
	Value       int64             `json:"value"`
	Rate        float64           `json:"rate"` // per second
	Labels      map[string]string `json:"labels"`
	LastUpdated time.Time         `json:"last_updated"`
}
@@ -82,13 +82,13 @@ type Gauge struct {
// Histogram represents distribution of values
type Histogram struct {
	Name        string              `json:"name"`
	Buckets     map[float64]int64   `json:"buckets"`
	Count       int64               `json:"count"`
	Sum         float64             `json:"sum"`
	Labels      map[string]string   `json:"labels"`
	Percentiles map[float64]float64 `json:"percentiles"`
	LastUpdated time.Time           `json:"last_updated"`
}
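The `Percentiles` field implies quantiles are precomputed from `Buckets`. One common way to estimate a quantile from bucketed data, sketched here under the assumption that `Buckets` maps upper bounds to cumulative counts (Prometheus-style; the diff does not say which convention this package uses):

```go
package main

import (
	"fmt"
	"sort"
)

// percentileFromBuckets estimates the value at quantile q from
// upper-bound -> cumulative-count buckets. The cumulative convention
// is an assumption about the Histogram above.
func percentileFromBuckets(buckets map[float64]int64, count int64, q float64) float64 {
	bounds := make([]float64, 0, len(buckets))
	for b := range buckets {
		bounds = append(bounds, b)
	}
	sort.Float64s(bounds)
	target := int64(q * float64(count))
	for _, b := range bounds {
		if buckets[b] >= target {
			return b // first bucket whose cumulative count covers the quantile
		}
	}
	if len(bounds) == 0 {
		return 0
	}
	return bounds[len(bounds)-1]
}

func main() {
	// 100 observations; 99 of them fall at or below 0.5.
	buckets := map[float64]int64{0.05: 40, 0.1: 90, 0.5: 99, 1.0: 100}
	fmt.Println(percentileFromBuckets(buckets, 100, 0.95)) // 0.5
}
```

Returning the bucket's upper bound overestimates the true quantile by up to one bucket width; linear interpolation within the bucket would tighten that.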
// CustomMetric represents application-specific metrics
@@ -114,81 +114,81 @@ const (
// AggregatedStatistics provides high-level system statistics
type AggregatedStatistics struct {
	SystemOverview     *SystemOverview      `json:"system_overview"`
	PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
	HealthMetrics      *HealthOverview      `json:"health_metrics"`
	ErrorMetrics       *ErrorOverview       `json:"error_metrics"`
	ResourceMetrics    *ResourceOverview    `json:"resource_metrics"`
	NetworkMetrics     *NetworkOverview     `json:"network_metrics"`
	LastUpdated        time.Time            `json:"last_updated"`
}

// SystemOverview provides system-wide overview metrics
type SystemOverview struct {
	TotalNodes          int           `json:"total_nodes"`
	HealthyNodes        int           `json:"healthy_nodes"`
	TotalContexts       int64         `json:"total_contexts"`
	DistributedContexts int64         `json:"distributed_contexts"`
	ReplicationFactor   float64       `json:"average_replication_factor"`
	SystemUptime        time.Duration `json:"system_uptime"`
	ClusterVersion      string        `json:"cluster_version"`
	LastRestart         time.Time     `json:"last_restart"`
}

// PerformanceOverview provides performance metrics
type PerformanceOverview struct {
	RequestsPerSecond   float64       `json:"requests_per_second"`
	AverageResponseTime time.Duration `json:"average_response_time"`
	P95ResponseTime     time.Duration `json:"p95_response_time"`
	P99ResponseTime     time.Duration `json:"p99_response_time"`
	Throughput          float64       `json:"throughput_mbps"`
	CacheHitRate        float64       `json:"cache_hit_rate"`
	QueueDepth          int           `json:"queue_depth"`
	ActiveConnections   int           `json:"active_connections"`
}

// HealthOverview provides health-related metrics
type HealthOverview struct {
	OverallHealthScore float64            `json:"overall_health_score"`
	ComponentHealth    map[string]float64 `json:"component_health"`
	FailedHealthChecks int                `json:"failed_health_checks"`
	LastHealthCheck    time.Time          `json:"last_health_check"`
	HealthTrend        string             `json:"health_trend"` // improving, stable, degrading
	CriticalAlerts     int                `json:"critical_alerts"`
	WarningAlerts      int                `json:"warning_alerts"`
}

// ErrorOverview provides error-related metrics
type ErrorOverview struct {
	TotalErrors       int64            `json:"total_errors"`
	ErrorRate         float64          `json:"error_rate"`
	ErrorsByType      map[string]int64 `json:"errors_by_type"`
	ErrorsByComponent map[string]int64 `json:"errors_by_component"`
	LastError         *ErrorEvent      `json:"last_error"`
	ErrorTrend        string           `json:"error_trend"` // increasing, stable, decreasing
}

// ResourceOverview provides resource utilization metrics
type ResourceOverview struct {
	CPUUtilization     float64 `json:"cpu_utilization"`
	MemoryUtilization  float64 `json:"memory_utilization"`
	DiskUtilization    float64 `json:"disk_utilization"`
	NetworkUtilization float64 `json:"network_utilization"`
	StorageUsed        int64   `json:"storage_used_bytes"`
	StorageAvailable   int64   `json:"storage_available_bytes"`
	FileDescriptors    int     `json:"open_file_descriptors"`
	Goroutines         int     `json:"goroutines"`
}

// NetworkOverview provides network-related metrics
type NetworkOverview struct {
	TotalConnections     int           `json:"total_connections"`
	ActiveConnections    int           `json:"active_connections"`
	BandwidthUtilization float64       `json:"bandwidth_utilization"`
	PacketLossRate       float64       `json:"packet_loss_rate"`
	AverageLatency       time.Duration `json:"average_latency"`
	NetworkPartitions    int           `json:"network_partitions"`
	DataTransferred      int64         `json:"data_transferred_bytes"`
}

// MetricsExporter exports metrics to external systems
@@ -200,49 +200,49 @@ type MetricsExporter interface {
// HealthCheckManager manages system health checks
type HealthCheckManager struct {
	mu           sync.RWMutex
	healthChecks map[string]*HealthCheck
	checkResults map[string]*HealthCheckResult
	schedules    map[string]*HealthCheckSchedule
	running      bool
}

// HealthCheck represents a single health check
type HealthCheck struct {
	Name          string                 `json:"name"`
	Description   string                 `json:"description"`
	CheckType     HealthCheckType        `json:"check_type"`
	Target        string                 `json:"target"`
	Timeout       time.Duration          `json:"timeout"`
	Interval      time.Duration          `json:"interval"`
	Retries       int                    `json:"retries"`
	Metadata      map[string]interface{} `json:"metadata"`
	Enabled       bool                   `json:"enabled"`
	CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
}

// HealthCheckType represents different types of health checks
type HealthCheckType string

const (
	HealthCheckTypeHTTP      HealthCheckType = "http"
	HealthCheckTypeTCP       HealthCheckType = "tcp"
	HealthCheckTypeCustom    HealthCheckType = "custom"
	HealthCheckTypeComponent HealthCheckType = "component"
	HealthCheckTypeDatabase  HealthCheckType = "database"
	HealthCheckTypeService   HealthCheckType = "service"
)

// HealthCheckResult represents the result of a health check
type HealthCheckResult struct {
	CheckName    string                 `json:"check_name"`
	Status       HealthCheckStatus      `json:"status"`
	ResponseTime time.Duration          `json:"response_time"`
	Message      string                 `json:"message"`
	Details      map[string]interface{} `json:"details"`
	Error        string                 `json:"error,omitempty"`
	Timestamp    time.Time              `json:"timestamp"`
	Attempt      int                    `json:"attempt"`
}

// HealthCheckStatus represents the status of a health check
@@ -258,45 +258,45 @@ const (
// HealthCheckSchedule defines when health checks should run
type HealthCheckSchedule struct {
	CheckName    string        `json:"check_name"`
	Interval     time.Duration `json:"interval"`
	NextRun      time.Time     `json:"next_run"`
	LastRun      time.Time     `json:"last_run"`
	Enabled      bool          `json:"enabled"`
	FailureCount int           `json:"failure_count"`
}

// AlertManager manages system alerts and notifications
type AlertManager struct {
	mu           sync.RWMutex
	alertRules   map[string]*AlertRule
	activeAlerts map[string]*Alert
	alertHistory []*Alert
	notifiers    []AlertNotifier
	silences     map[string]*AlertSilence
	running      bool
}

// AlertRule defines conditions for triggering alerts
type AlertRule struct {
	Name          string            `json:"name"`
	Description   string            `json:"description"`
	Severity      AlertSeverity     `json:"severity"`
	Conditions    []*AlertCondition `json:"conditions"`
	Duration      time.Duration     `json:"duration"` // How long condition must persist
	Cooldown      time.Duration     `json:"cooldown"` // Minimum time between alerts
	Labels        map[string]string `json:"labels"`
	Annotations   map[string]string `json:"annotations"`
	Enabled       bool              `json:"enabled"`
	LastTriggered *time.Time        `json:"last_triggered,omitempty"`
}

// AlertCondition defines a single condition for an alert
type AlertCondition struct {
	MetricName string            `json:"metric_name"`
	Operator   ConditionOperator `json:"operator"`
	Threshold  float64           `json:"threshold"`
	Duration   time.Duration     `json:"duration"`
}
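An `AlertCondition` pairs a metric with an operator and threshold. A minimal evaluator sketch follows; the concrete `ConditionOperator` values are assumptions, since the const block for that type is elided from this hunk:

```go
package main

import "fmt"

// ConditionOperator mirrors the type above; the concrete values here
// are assumed, since the diff elides the real const block.
type ConditionOperator string

const (
	OpGreaterThan ConditionOperator = "gt"
	OpLessThan    ConditionOperator = "lt"
	OpEquals      ConditionOperator = "eq"
)

type AlertCondition struct {
	MetricName string
	Operator   ConditionOperator
	Threshold  float64
}

// evaluate reports whether the observed value breaches the condition.
func evaluate(c AlertCondition, value float64) bool {
	switch c.Operator {
	case OpGreaterThan:
		return value > c.Threshold
	case OpLessThan:
		return value < c.Threshold
	case OpEquals:
		return value == c.Threshold
	default:
		return false
	}
}

func main() {
	cond := AlertCondition{MetricName: "error_rate", Operator: OpGreaterThan, Threshold: 0.05}
	fmt.Println(evaluate(cond, 0.07), evaluate(cond, 0.01)) // true false
}
```

The rule's `Duration` field means a real evaluator would also require the condition to hold continuously for that window before firing, which this sketch omits.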
// ConditionOperator represents comparison operators for alert conditions
@@ -313,39 +313,39 @@ const (
// Alert represents an active alert
type Alert struct {
	ID          string                 `json:"id"`
	RuleName    string                 `json:"rule_name"`
	Severity    AlertSeverity          `json:"severity"`
	Status      AlertStatus            `json:"status"`
	Message     string                 `json:"message"`
	Details     map[string]interface{} `json:"details"`
	Labels      map[string]string      `json:"labels"`
	Annotations map[string]string      `json:"annotations"`
	StartsAt    time.Time              `json:"starts_at"`
	EndsAt      *time.Time             `json:"ends_at,omitempty"`
	LastUpdated time.Time              `json:"last_updated"`
	AckBy       string                 `json:"acknowledged_by,omitempty"`
	AckAt       *time.Time             `json:"acknowledged_at,omitempty"`
}
// AlertSeverity represents the severity level of an alert
type AlertSeverity string

const (
	AlertSeverityInfo     AlertSeverity = "info"
	AlertSeverityWarning  AlertSeverity = "warning"
	AlertSeverityError    AlertSeverity = "error"
	AlertSeverityCritical AlertSeverity = "critical"
)
// AlertStatus represents the current status of an alert
type AlertStatus string

const (
	AlertStatusFiring       AlertStatus = "firing"
	AlertStatusResolved     AlertStatus = "resolved"
	AlertStatusAcknowledged AlertStatus = "acknowledged"
	AlertStatusSilenced     AlertStatus = "silenced"
)

// AlertNotifier sends alert notifications
@@ -357,64 +357,64 @@ type AlertNotifier interface {
// AlertSilence represents a silenced alert
type AlertSilence struct {
	ID        string            `json:"id"`
	Matchers  map[string]string `json:"matchers"`
	StartTime time.Time         `json:"start_time"`
	EndTime   time.Time         `json:"end_time"`
	CreatedBy string            `json:"created_by"`
	Comment   string            `json:"comment"`
	Active    bool              `json:"active"`
}

// DashboardServer provides web-based monitoring dashboard
type DashboardServer struct {
	mu          sync.RWMutex
	server      *http.Server
	dashboards  map[string]*Dashboard
	widgets     map[string]*Widget
	customPages map[string]*CustomPage
	running     bool
	port        int
}

// Dashboard represents a monitoring dashboard
type Dashboard struct {
	ID          string             `json:"id"`
	Name        string             `json:"name"`
	Description string             `json:"description"`
	Widgets     []*Widget          `json:"widgets"`
	Layout      *DashboardLayout   `json:"layout"`
	Settings    *DashboardSettings `json:"settings"`
	CreatedBy   string             `json:"created_by"`
	CreatedAt   time.Time          `json:"created_at"`
	UpdatedAt   time.Time          `json:"updated_at"`
}

// Widget represents a dashboard widget
type Widget struct {
	ID          string                 `json:"id"`
	Type        WidgetType             `json:"type"`
	Title       string                 `json:"title"`
	DataSource  string                 `json:"data_source"`
	Query       string                 `json:"query"`
	Settings    map[string]interface{} `json:"settings"`
	Position    *WidgetPosition        `json:"position"`
	RefreshRate time.Duration          `json:"refresh_rate"`
	LastUpdated time.Time              `json:"last_updated"`
}

// WidgetType represents different types of dashboard widgets
type WidgetType string

const (
	WidgetTypeMetric   WidgetType = "metric"
	WidgetTypeChart    WidgetType = "chart"
	WidgetTypeTable    WidgetType = "table"
	WidgetTypeAlert    WidgetType = "alert"
	WidgetTypeHealth   WidgetType = "health"
	WidgetTypeTopology WidgetType = "topology"
	WidgetTypeLog      WidgetType = "log"
	WidgetTypeCustom   WidgetType = "custom"
)
// WidgetPosition defines widget position and size
@@ -427,11 +427,11 @@ type WidgetPosition struct {
// DashboardLayout defines dashboard layout settings
type DashboardLayout struct {
	Columns     int            `json:"columns"`
	RowHeight   int            `json:"row_height"`
	Margins     [2]int         `json:"margins"` // [x, y]
	Spacing     [2]int         `json:"spacing"` // [x, y]
	Breakpoints map[string]int `json:"breakpoints"`
}

// DashboardSettings contains dashboard configuration
@@ -446,43 +446,43 @@ type DashboardSettings struct {
// CustomPage represents a custom monitoring page
type CustomPage struct {
	Path        string           `json:"path"`
	Title       string           `json:"title"`
	Content     string           `json:"content"`
	ContentType string           `json:"content_type"`
	Handler     http.HandlerFunc `json:"-"`
}

// LogManager manages system logs and log analysis
type LogManager struct {
	mu              sync.RWMutex
	logSources      map[string]*LogSource
	logEntries      []*LogEntry
	logAnalyzers    []LogAnalyzer
	retentionPolicy *LogRetentionPolicy
	running         bool
}

// LogSource represents a source of log data
type LogSource struct {
	Name     string            `json:"name"`
	Type     LogSourceType     `json:"type"`
	Location string            `json:"location"`
	Format   LogFormat         `json:"format"`
	Labels   map[string]string `json:"labels"`
	Enabled  bool              `json:"enabled"`
	LastRead time.Time         `json:"last_read"`
}

// LogSourceType represents different types of log sources
type LogSourceType string

const (
	LogSourceTypeFile     LogSourceType = "file"
	LogSourceTypeHTTP     LogSourceType = "http"
	LogSourceTypeStream   LogSourceType = "stream"
	LogSourceTypeDatabase LogSourceType = "database"
	LogSourceTypeCustom   LogSourceType = "custom"
)

// LogFormat represents log entry format
@@ -497,14 +497,14 @@ const (
// LogEntry represents a single log entry
type LogEntry struct {
	Timestamp time.Time              `json:"timestamp"`
	Level     LogLevel               `json:"level"`
	Source    string                 `json:"source"`
	Message   string                 `json:"message"`
	Fields    map[string]interface{} `json:"fields"`
	Labels    map[string]string      `json:"labels"`
	TraceID   string                 `json:"trace_id,omitempty"`
	SpanID    string                 `json:"span_id,omitempty"`
}
// LogLevel represents log entry severity
@@ -527,22 +527,22 @@ type LogAnalyzer interface {
// LogAnalysisResult represents the result of log analysis
type LogAnalysisResult struct {
	AnalyzerName    string         `json:"analyzer_name"`
	Anomalies       []*LogAnomaly  `json:"anomalies"`
	Patterns        []*LogPattern  `json:"patterns"`
	Statistics      *LogStatistics `json:"statistics"`
	Recommendations []string       `json:"recommendations"`
	AnalyzedAt      time.Time      `json:"analyzed_at"`
}
// LogAnomaly represents detected log anomaly
type LogAnomaly struct {
	Type        AnomalyType   `json:"type"`
	Severity    AlertSeverity `json:"severity"`
	Description string        `json:"description"`
	Entries     []*LogEntry   `json:"entries"`
	Confidence  float64       `json:"confidence"`
	DetectedAt  time.Time     `json:"detected_at"`
}
// AnomalyType represents different types of log anomalies
@@ -558,38 +558,38 @@ const (
// LogPattern represents detected log pattern
type LogPattern struct {
	Pattern    string    `json:"pattern"`
	Frequency  int       `json:"frequency"`
	LastSeen   time.Time `json:"last_seen"`
	Sources    []string  `json:"sources"`
	Confidence float64   `json:"confidence"`
}
// LogStatistics provides log statistics
type LogStatistics struct {
	TotalEntries    int64              `json:"total_entries"`
	EntriesByLevel  map[LogLevel]int64 `json:"entries_by_level"`
	EntriesBySource map[string]int64   `json:"entries_by_source"`
	ErrorRate       float64            `json:"error_rate"`
	AverageRate     float64            `json:"average_rate"`
	TimeRange       [2]time.Time       `json:"time_range"`
}
// LogRetentionPolicy defines log retention rules
type LogRetentionPolicy struct {
	RetentionPeriod time.Duration    `json:"retention_period"`
	MaxEntries      int64            `json:"max_entries"`
	CompressionAge  time.Duration    `json:"compression_age"`
	ArchiveAge      time.Duration    `json:"archive_age"`
	Rules           []*RetentionRule `json:"rules"`
}
// RetentionRule defines specific retention rules
type RetentionRule struct {
	Name      string          `json:"name"`
	Condition string          `json:"condition"` // Query expression
	Retention time.Duration   `json:"retention"`
	Action    RetentionAction `json:"action"`
}
// RetentionAction represents retention actions
@@ -603,47 +603,47 @@ const (
// TraceManager manages distributed tracing
type TraceManager struct {
	mu        sync.RWMutex
	traces    map[string]*Trace
	spans     map[string]*Span
	samplers  []TraceSampler
	exporters []TraceExporter
	running   bool
}
// Trace represents a distributed trace
type Trace struct {
	TraceID    string            `json:"trace_id"`
	Spans      []*Span           `json:"spans"`
	Duration   time.Duration     `json:"duration"`
	StartTime  time.Time         `json:"start_time"`
	EndTime    time.Time         `json:"end_time"`
	Status     TraceStatus       `json:"status"`
	Tags       map[string]string `json:"tags"`
	Operations []string          `json:"operations"`
}
// Span represents a single span in a trace
type Span struct {
	SpanID    string            `json:"span_id"`
	TraceID   string            `json:"trace_id"`
	ParentID  string            `json:"parent_id,omitempty"`
	Operation string            `json:"operation"`
	Service   string            `json:"service"`
	StartTime time.Time         `json:"start_time"`
	EndTime   time.Time         `json:"end_time"`
	Duration  time.Duration     `json:"duration"`
	Status    SpanStatus        `json:"status"`
	Tags      map[string]string `json:"tags"`
	Logs      []*SpanLog        `json:"logs"`
}
// TraceStatus represents the status of a trace
type TraceStatus string
const (
	TraceStatusOK      TraceStatus = "ok"
	TraceStatusError   TraceStatus = "error"
	TraceStatusTimeout TraceStatus = "timeout"
)
@@ -675,18 +675,18 @@ type TraceExporter interface {
// ErrorEvent represents a system error event
type ErrorEvent struct {
	ID        string                 `json:"id"`
	Timestamp time.Time              `json:"timestamp"`
	Level     LogLevel               `json:"level"`
	Component string                 `json:"component"`
	Message   string                 `json:"message"`
	Error     string                 `json:"error"`
	Context   map[string]interface{} `json:"context"`
	TraceID   string                 `json:"trace_id,omitempty"`
	SpanID    string                 `json:"span_id,omitempty"`
	Count     int                    `json:"count"`
	FirstSeen time.Time              `json:"first_seen"`
	LastSeen  time.Time              `json:"last_seen"`
}
// NewMonitoringSystem creates a comprehensive monitoring system
@@ -722,7 +722,7 @@ func (ms *MonitoringSystem) initializeComponents() error {
		aggregatedStats: &AggregatedStatistics{
			LastUpdated: time.Now(),
		},
		exporters:      []MetricsExporter{},
		lastCollection: time.Now(),
	}
@@ -1134,15 +1134,15 @@ func (ms *MonitoringSystem) createDefaultDashboards() {
 func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
 	switch severity {
-	case SeverityCritical:
+	case AlertSeverityCritical:
 		return 4
-	case SeverityError:
+	case AlertSeverityError:
 		return 3
-	case SeverityWarning:
+	case AlertSeverityWarning:
 		return 2
-	case SeverityInfo:
+	case AlertSeverityInfo:
 		return 1
 	default:
 		return 0
 	}
 }
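The hunk above renames the severity constants to the `AlertSeverity`-prefixed form. A minimal standalone sketch of the corrected mapping, assuming string values for the constants (the diff does not show them):

```go
package main

import "fmt"

// AlertSeverity mirrors the renamed constants from the diff; the string
// values here are assumptions for illustration.
type AlertSeverity string

const (
	AlertSeverityCritical AlertSeverity = "critical"
	AlertSeverityError    AlertSeverity = "error"
	AlertSeverityWarning  AlertSeverity = "warning"
	AlertSeverityInfo     AlertSeverity = "info"
)

// severityWeight orders severities for sorting and threshold comparisons;
// unknown severities fall through to weight 0.
func severityWeight(severity AlertSeverity) int {
	switch severity {
	case AlertSeverityCritical:
		return 4
	case AlertSeverityError:
		return 3
	case AlertSeverityWarning:
		return 2
	case AlertSeverityInfo:
		return 1
	default:
		return 0
	}
}

func main() {
	fmt.Println(severityWeight(AlertSeverityCritical)) // prints 4
}
```

Because the old `SeverityCritical` names no longer exist after the rename, any case arm still using them would fail to compile, which is what this commit fixes.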


@@ -9,74 +9,74 @@ import (
 	"sync"
 	"time"
-	"chorus/pkg/dht"
 	"chorus/pkg/config"
+	"chorus/pkg/dht"
 	"github.com/libp2p/go-libp2p/core/peer"
 )
 // NetworkManagerImpl implements NetworkManager interface for network topology and partition management
 type NetworkManagerImpl struct {
 	mu                sync.RWMutex
 	dht               *dht.DHT
 	config            *config.Config
 	topology          *NetworkTopology
 	partitionInfo     *PartitionInfo
 	connectivity      *ConnectivityMatrix
 	stats             *NetworkStatistics
 	healthChecker     *NetworkHealthChecker
 	partitionDetector *PartitionDetector
 	recoveryManager   *RecoveryManager
 	// Configuration
 	healthCheckInterval    time.Duration
 	partitionCheckInterval time.Duration
 	connectivityTimeout    time.Duration
 	maxPartitionDuration   time.Duration
 	// State
 	lastTopologyUpdate time.Time
 	lastPartitionCheck time.Time
 	running            bool
 	recoveryInProgress bool
 }
 // ConnectivityMatrix tracks connectivity between all nodes
 type ConnectivityMatrix struct {
 	Matrix      map[string]map[string]*ConnectionInfo `json:"matrix"`
 	LastUpdated time.Time                             `json:"last_updated"`
 	mu          sync.RWMutex
 }
 // ConnectionInfo represents connectivity information between two nodes
 type ConnectionInfo struct {
 	Connected   bool          `json:"connected"`
 	Latency     time.Duration `json:"latency"`
 	PacketLoss  float64       `json:"packet_loss"`
 	Bandwidth   int64         `json:"bandwidth"`
 	LastChecked time.Time     `json:"last_checked"`
 	ErrorCount  int           `json:"error_count"`
 	LastError   string        `json:"last_error,omitempty"`
 }
 // NetworkHealthChecker performs network health checks
 type NetworkHealthChecker struct {
 	mu              sync.RWMutex
 	nodeHealth      map[string]*NodeHealth
-	healthHistory   map[string][]*HealthCheckResult
+	healthHistory   map[string][]*NetworkHealthCheckResult
 	alertThresholds *NetworkAlertThresholds
 }
// NodeHealth represents health status of a network node
type NodeHealth struct {
	NodeID         string        `json:"node_id"`
	Status         NodeStatus    `json:"status"`
	HealthScore    float64       `json:"health_score"`
	LastSeen       time.Time     `json:"last_seen"`
	ResponseTime   time.Duration `json:"response_time"`
	PacketLossRate float64       `json:"packet_loss_rate"`
	BandwidthUtil  float64       `json:"bandwidth_utilization"`
	Uptime         time.Duration `json:"uptime"`
	ErrorRate      float64       `json:"error_rate"`
}
// NodeStatus represents the status of a network node
@@ -91,23 +91,23 @@ const (
 )
 // HealthCheckResult represents the result of a health check
-type HealthCheckResult struct {
+type NetworkHealthCheckResult struct {
 	NodeID         string          `json:"node_id"`
 	Timestamp      time.Time       `json:"timestamp"`
 	Success        bool            `json:"success"`
 	ResponseTime   time.Duration   `json:"response_time"`
 	ErrorMessage   string          `json:"error_message,omitempty"`
 	NetworkMetrics *NetworkMetrics `json:"network_metrics"`
 }
 // NetworkAlertThresholds defines thresholds for network alerts
 type NetworkAlertThresholds struct {
 	LatencyWarning      time.Duration `json:"latency_warning"`
 	LatencyCritical     time.Duration `json:"latency_critical"`
 	PacketLossWarning   float64       `json:"packet_loss_warning"`
 	PacketLossCritical  float64       `json:"packet_loss_critical"`
 	HealthScoreWarning  float64       `json:"health_score_warning"`
 	HealthScoreCritical float64       `json:"health_score_critical"`
 }
 // PartitionDetector detects network partitions
@@ -131,14 +131,14 @@ const (
// PartitionEvent represents a partition detection event
type PartitionEvent struct {
	EventID          string                      `json:"event_id"`
	DetectedAt       time.Time                   `json:"detected_at"`
	Algorithm        PartitionDetectionAlgorithm `json:"algorithm"`
	PartitionedNodes []string                    `json:"partitioned_nodes"`
	Confidence       float64                     `json:"confidence"`
	Duration         time.Duration               `json:"duration"`
	Resolved         bool                        `json:"resolved"`
	ResolvedAt       *time.Time                  `json:"resolved_at,omitempty"`
}
// FalsePositiveFilter helps reduce false partition detections
@@ -159,10 +159,10 @@ type PartitionDetectorConfig struct {
// RecoveryManager manages network partition recovery
type RecoveryManager struct {
	mu                 sync.RWMutex
	recoveryStrategies map[RecoveryStrategy]*RecoveryStrategyConfig
	activeRecoveries   map[string]*RecoveryOperation
	recoveryHistory    []*RecoveryResult
}
// RecoveryStrategy represents different recovery strategies
@@ -177,25 +177,25 @@ const (
// RecoveryStrategyConfig configures a recovery strategy
type RecoveryStrategyConfig struct {
	Strategy         RecoveryStrategy `json:"strategy"`
	Timeout          time.Duration    `json:"timeout"`
	RetryAttempts    int              `json:"retry_attempts"`
	RetryInterval    time.Duration    `json:"retry_interval"`
	RequireConsensus bool             `json:"require_consensus"`
	ForcedThreshold  time.Duration    `json:"forced_threshold"`
}
// RecoveryOperation represents an active recovery operation
type RecoveryOperation struct {
	OperationID  string           `json:"operation_id"`
	Strategy     RecoveryStrategy `json:"strategy"`
	StartedAt    time.Time        `json:"started_at"`
	TargetNodes  []string         `json:"target_nodes"`
	Status       RecoveryStatus   `json:"status"`
	Progress     float64          `json:"progress"`
	CurrentPhase RecoveryPhase    `json:"current_phase"`
	Errors       []string         `json:"errors"`
	LastUpdate   time.Time        `json:"last_update"`
}
// RecoveryStatus represents the status of a recovery operation
@@ -213,12 +213,12 @@ const (
type RecoveryPhase string
const (
	RecoveryPhaseAssessment      RecoveryPhase = "assessment"
	RecoveryPhasePreparation     RecoveryPhase = "preparation"
	RecoveryPhaseReconnection    RecoveryPhase = "reconnection"
	RecoveryPhaseSynchronization RecoveryPhase = "synchronization"
	RecoveryPhaseValidation      RecoveryPhase = "validation"
	RecoveryPhaseCompletion      RecoveryPhase = "completion"
)
// NewNetworkManagerImpl creates a new network manager implementation
@@ -231,13 +231,13 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
	}
	nm := &NetworkManagerImpl{
		dht:                    dht,
		config:                 config,
		healthCheckInterval:    30 * time.Second,
		partitionCheckInterval: 60 * time.Second,
		connectivityTimeout:    10 * time.Second,
		maxPartitionDuration:   10 * time.Minute,
		connectivity:           &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
		stats: &NetworkStatistics{
			LastUpdated: time.Now(),
		},
@@ -255,33 +255,33 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
 func (nm *NetworkManagerImpl) initializeComponents() error {
 	// Initialize topology
 	nm.topology = &NetworkTopology{
 		TotalNodes:        0,
 		Connections:       make(map[string][]string),
 		Regions:           make(map[string][]string),
 		AvailabilityZones: make(map[string][]string),
 		UpdatedAt:         time.Now(),
 	}
 	// Initialize partition info
 	nm.partitionInfo = &PartitionInfo{
 		PartitionDetected:  false,
 		PartitionCount:     1,
 		IsolatedNodes:      []string{},
 		ConnectivityMatrix: make(map[string]map[string]bool),
 		DetectedAt:         time.Now(),
 	}
 	// Initialize health checker
 	nm.healthChecker = &NetworkHealthChecker{
 		nodeHealth:    make(map[string]*NodeHealth),
-		healthHistory: make(map[string][]*HealthCheckResult),
+		healthHistory: make(map[string][]*NetworkHealthCheckResult),
 		alertThresholds: &NetworkAlertThresholds{
 			LatencyWarning:      500 * time.Millisecond,
 			LatencyCritical:     2 * time.Second,
 			PacketLossWarning:   0.05, // 5%
 			PacketLossCritical:  0.15, // 15%
 			HealthScoreWarning:  0.7,
 			HealthScoreCritical: 0.4,
 		},
 	}
@@ -307,20 +307,20 @@ func (nm *NetworkManagerImpl) initializeComponents() error {
	nm.recoveryManager = &RecoveryManager{
		recoveryStrategies: map[RecoveryStrategy]*RecoveryStrategyConfig{
			RecoveryStrategyAutomatic: {
				Strategy:         RecoveryStrategyAutomatic,
				Timeout:          5 * time.Minute,
				RetryAttempts:    3,
				RetryInterval:    30 * time.Second,
				RequireConsensus: false,
				ForcedThreshold:  10 * time.Minute,
			},
			RecoveryStrategyGraceful: {
				Strategy:         RecoveryStrategyGraceful,
				Timeout:          10 * time.Minute,
				RetryAttempts:    5,
				RetryInterval:    60 * time.Second,
				RequireConsensus: true,
				ForcedThreshold:  20 * time.Minute,
			},
		},
		activeRecoveries: make(map[string]*RecoveryOperation),
@@ -628,10 +628,10 @@ func (nm *NetworkManagerImpl) connectivityChecker(ctx context.Context) {
func (nm *NetworkManagerImpl) updateTopology() {
	peers := nm.dht.GetConnectedPeers()
	nm.topology.TotalNodes = len(peers) + 1 // +1 for current node
	nm.topology.Connections = make(map[string][]string)
	// Build connection map
	currentNodeID := nm.config.Agent.ID
	peerConnections := make([]string, len(peers))
@@ -639,21 +639,21 @@ func (nm *NetworkManagerImpl) updateTopology() {
		peerConnections[i] = peer.String()
	}
	nm.topology.Connections[currentNodeID] = peerConnections
	// Calculate network metrics
	nm.topology.ClusterDiameter = nm.calculateClusterDiameter()
	nm.topology.ClusteringCoefficient = nm.calculateClusteringCoefficient()
	nm.topology.UpdatedAt = time.Now()
	nm.lastTopologyUpdate = time.Now()
}
func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
	peers := nm.dht.GetConnectedPeers()
	for _, peer := range peers {
		result := nm.performHealthCheck(ctx, peer.String())
		// Update node health
		nodeHealth := &NodeHealth{
			NodeID: peer.String(),
@@ -664,7 +664,7 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
			PacketLossRate: 0.0, // Would be measured in real implementation
			ErrorRate:      0.0, // Would be calculated from history
		}
		if result.Success {
			nodeHealth.Status = NodeStatusHealthy
			nodeHealth.HealthScore = 1.0
@@ -672,21 +672,21 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
 			nodeHealth.Status = NodeStatusUnreachable
 			nodeHealth.HealthScore = 0.0
 		}
 		nm.healthChecker.nodeHealth[peer.String()] = nodeHealth
 		// Store health check history
 		if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
-			nm.healthChecker.healthHistory[peer.String()] = []*HealthCheckResult{}
+			nm.healthChecker.healthHistory[peer.String()] = []*NetworkHealthCheckResult{}
 		}
 		nm.healthChecker.healthHistory[peer.String()] = append(
 			nm.healthChecker.healthHistory[peer.String()],
 			result,
 		)
 		// Keep only recent history (last 100 checks)
 		if len(nm.healthChecker.healthHistory[peer.String()]) > 100 {
 			nm.healthChecker.healthHistory[peer.String()] =
 				nm.healthChecker.healthHistory[peer.String()][1:]
 		}
 	}
@@ -694,31 +694,31 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
func (nm *NetworkManagerImpl) updateConnectivityMatrix(ctx context.Context) {
	peers := nm.dht.GetConnectedPeers()
	nm.connectivity.mu.Lock()
	defer nm.connectivity.mu.Unlock()
	// Initialize matrix if needed
	if nm.connectivity.Matrix == nil {
		nm.connectivity.Matrix = make(map[string]map[string]*ConnectionInfo)
	}
	currentNodeID := nm.config.Agent.ID
	// Ensure current node exists in matrix
	if nm.connectivity.Matrix[currentNodeID] == nil {
		nm.connectivity.Matrix[currentNodeID] = make(map[string]*ConnectionInfo)
	}
	// Test connectivity to all peers
	for _, peer := range peers {
		peerID := peer.String()
		// Test connection
		connInfo := nm.testConnection(ctx, peerID)
		nm.connectivity.Matrix[currentNodeID][peerID] = connInfo
	}
	nm.connectivity.LastUpdated = time.Now()
}
@@ -741,7 +741,7 @@ func (nm *NetworkManagerImpl) detectPartitionByConnectivity() (bool, []string, f
	// Simplified connectivity-based detection
	peers := nm.dht.GetConnectedPeers()
	knownPeers := nm.dht.GetKnownPeers()
	// If we know more peers than we're connected to, might be partitioned
	if len(knownPeers) > len(peers)+2 { // Allow some tolerance
		isolatedNodes := []string{}
@@ -759,7 +759,7 @@ func (nm *NetworkManagerImpl) detectPartitionByConnectivity() (bool, []string, f
		}
		return true, isolatedNodes, 0.8
	}
	return false, []string{}, 0.0
}
@@ -767,18 +767,18 @@ func (nm *NetworkManagerImpl) detectPartitionByHeartbeat() (bool, []string, floa
	// Simplified heartbeat-based detection
	nm.healthChecker.mu.RLock()
	defer nm.healthChecker.mu.RUnlock()
	isolatedNodes := []string{}
	for nodeID, health := range nm.healthChecker.nodeHealth {
		if health.Status == NodeStatusUnreachable {
			isolatedNodes = append(isolatedNodes, nodeID)
		}
	}
	if len(isolatedNodes) > 0 {
		return true, isolatedNodes, 0.7
	}
	return false, []string{}, 0.0
}
@@ -791,7 +791,7 @@ func (nm *NetworkManagerImpl) detectPartitionHybrid() (bool, []string, float64)
	// Combine multiple detection methods
	partitioned1, nodes1, conf1 := nm.detectPartitionByConnectivity()
	partitioned2, nodes2, conf2 := nm.detectPartitionByHeartbeat()
	if partitioned1 && partitioned2 {
		// Both methods agree
		combinedNodes := nm.combineNodeLists(nodes1, nodes2)
@@ -805,7 +805,7 @@ func (nm *NetworkManagerImpl) detectPartitionHybrid() (bool, []string, float64)
			return true, nodes2, conf2 * 0.7
		}
	}
	return false, []string{}, 0.0
}
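The hybrid strategy above combines two detectors, boosting confidence when they agree and discounting a single-source detection. A minimal standalone sketch of that combination logic (names are illustrative, not the actual CHORUS API):

```go
package main

import "fmt"

// combineDetections mirrors the hybrid partition check: when both detectors
// agree, take the stronger confidence; when only one fires, discount it.
func combineDetections(p1 bool, c1 float64, p2 bool, c2 float64) (bool, float64) {
	switch {
	case p1 && p2:
		// Both methods agree: report the higher confidence.
		if c1 > c2 {
			return true, c1
		}
		return true, c2
	case p1:
		return true, c1 * 0.7 // single-source detection is discounted
	case p2:
		return true, c2 * 0.7
	default:
		return false, 0.0
	}
}

func main() {
	partitioned, conf := combineDetections(true, 0.8, true, 0.7)
	fmt.Println(partitioned, conf) // agreement case reports the stronger signal
}
```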
@@ -878,11 +878,11 @@ func (nm *NetworkManagerImpl) completeRecovery(ctx context.Context, operation *R
func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID string) *ConnectivityResult {
	start := time.Now()
	// In a real implementation, this would test actual network connectivity
	// For now, we'll simulate based on DHT connectivity
	peers := nm.dht.GetConnectedPeers()
	for _, peer := range peers {
		if peer.String() == peerID {
			return &ConnectivityResult{
@@ -895,7 +895,7 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
			}
		}
	}
	return &ConnectivityResult{
		PeerID:    peerID,
		Reachable: false,
@@ -907,13 +907,13 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
	}
}
-func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *HealthCheckResult {
+func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *NetworkHealthCheckResult {
	start := time.Now()
	// In a real implementation, this would perform actual health checks
	// For now, simulate based on connectivity
	peers := nm.dht.GetConnectedPeers()
	for _, peer := range peers {
		if peer.String() == nodeID {
			return &HealthCheckResult{
@@ -924,7 +924,7 @@ func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID str
			}
		}
	}
	return &HealthCheckResult{
		NodeID:    nodeID,
		Timestamp: time.Now(),
@@ -938,7 +938,7 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
	// Test connection to specific peer
	connected := false
	latency := time.Duration(0)
	// Check if peer is in connected peers list
	peers := nm.dht.GetConnectedPeers()
	for _, peer := range peers {
@@ -948,28 +948,28 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
			break
		}
	}
	return &ConnectionInfo{
		Connected:   connected,
		Latency:     latency,
		PacketLoss:  0.0,
		Bandwidth:   1000000, // 1 Mbps placeholder
		LastChecked: time.Now(),
		ErrorCount:  0,
	}
}
func (nm *NetworkManagerImpl) updateNetworkStatistics() {
	peers := nm.dht.GetConnectedPeers()
	nm.stats.TotalNodes = len(peers) + 1
	nm.stats.ConnectedNodes = len(peers)
	nm.stats.DisconnectedNodes = nm.stats.TotalNodes - nm.stats.ConnectedNodes
	// Calculate average latency from connectivity matrix
	totalLatency := time.Duration(0)
	connectionCount := 0
	nm.connectivity.mu.RLock()
	for _, connections := range nm.connectivity.Matrix {
		for _, conn := range connections {
@@ -980,11 +980,11 @@ func (nm *NetworkManagerImpl) updateNetworkStatistics() {
		}
	}
	nm.connectivity.mu.RUnlock()
	if connectionCount > 0 {
		nm.stats.AverageLatency = totalLatency / time.Duration(connectionCount)
	}
	nm.stats.OverallHealth = nm.calculateOverallNetworkHealth()
	nm.stats.LastUpdated = time.Now()
}
@@ -1024,14 +1024,14 @@ func (nm *NetworkManagerImpl) calculateOverallNetworkHealth() float64 {
	return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
}
-func (nm *NetworkManagerImpl) determineNodeStatus(result *HealthCheckResult) NodeStatus {
+func (nm *NetworkManagerImpl) determineNodeStatus(result *NetworkHealthCheckResult) NodeStatus {
	if result.Success {
		return NodeStatusHealthy
	}
	return NodeStatusUnreachable
}
-func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) float64 {
+func (nm *NetworkManagerImpl) calculateHealthScore(result *NetworkHealthCheckResult) float64 {
	if result.Success {
		return 1.0
	}
@@ -1040,19 +1040,19 @@ func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) fl
func (nm *NetworkManagerImpl) combineNodeLists(list1, list2 []string) []string {
	nodeSet := make(map[string]bool)
	for _, node := range list1 {
		nodeSet[node] = true
	}
	for _, node := range list2 {
		nodeSet[node] = true
	}
	result := make([]string, 0, len(nodeSet))
	for node := range nodeSet {
		result = append(result, node)
	}
	sort.Strings(result)
	return result
}
@@ -1073,4 +1073,4 @@ func (nm *NetworkManagerImpl) generateEventID() string {
func (nm *NetworkManagerImpl) generateOperationID() string {
	return fmt.Sprintf("op-%d", time.Now().UnixNano())
}


@@ -7,39 +7,39 @@ import (
"sync" "sync"
"time" "time"
"chorus/pkg/dht"
"chorus/pkg/config" "chorus/pkg/config"
"chorus/pkg/dht"
"chorus/pkg/ucxl" "chorus/pkg/ucxl"
"github.com/libp2p/go-libp2p/core/peer" "github.com/libp2p/go-libp2p/core/peer"
) )
// ReplicationManagerImpl implements ReplicationManager interface
type ReplicationManagerImpl struct {
	mu             sync.RWMutex
	dht            *dht.DHT
	config         *config.Config
	replicationMap map[string]*ReplicationStatus
	repairQueue    chan *RepairRequest
	rebalanceQueue chan *RebalanceRequest
	consistentHash ConsistentHashing
	policy         *ReplicationPolicy
	stats          *ReplicationStatistics
	running        bool
}
// RepairRequest represents a repair request
type RepairRequest struct {
	Address     ucxl.Address
	RequestedBy string
	Priority    Priority
	RequestTime time.Time
}
// RebalanceRequest represents a rebalance request
type RebalanceRequest struct {
	Reason      string
	RequestedBy string
	RequestTime time.Time
}
// NewReplicationManagerImpl creates a new replication manager implementation
@@ -220,10 +220,10 @@ func (rm *ReplicationManagerImpl) BalanceReplicas(ctx context.Context) (*Rebalan
	start := time.Now()
	result := &RebalanceResult{
		RebalanceTime:       0,
		RebalanceSuccessful: false,
		Errors:              []string{},
		RebalancedAt:        time.Now(),
	}
	// Get current cluster topology
@@ -462,9 +462,9 @@ func (rm *ReplicationManagerImpl) discoverReplicas(ctx context.Context, address
	// For now, we'll simulate some replicas
	peers := rm.dht.GetConnectedPeers()
	if len(peers) > 0 {
-		status.CurrentReplicas = min(len(peers), rm.policy.DefaultFactor)
+		status.CurrentReplicas = minInt(len(peers), rm.policy.DefaultFactor)
		status.HealthyReplicas = status.CurrentReplicas
		for i, peer := range peers {
			if i >= status.CurrentReplicas {
				break
@@ -478,9 +478,9 @@ func (rm *ReplicationManagerImpl) determineOverallHealth(status *ReplicationStat
	if status.HealthyReplicas == 0 {
		return HealthFailed
	}
	healthRatio := float64(status.HealthyReplicas) / float64(status.DesiredReplicas)
	if healthRatio >= 1.0 {
		return HealthHealthy
	} else if healthRatio >= 0.7 {
@@ -579,7 +579,7 @@ func (rm *ReplicationManagerImpl) calculateIdealDistribution(peers []peer.ID) ma
func (rm *ReplicationManagerImpl) getCurrentDistribution(ctx context.Context) map[string]map[string]int {
	// Returns current distribution: address -> node -> replica count
	distribution := make(map[string]map[string]int)
	rm.mu.RLock()
	for addr, status := range rm.replicationMap {
		distribution[addr] = make(map[string]int)
@@ -588,7 +588,7 @@ func (rm *ReplicationManagerImpl) getCurrentDistribution(ctx context.Context) ma
		}
	}
	rm.mu.RUnlock()
	return distribution
}
@@ -630,17 +630,17 @@ func (rm *ReplicationManagerImpl) isNodeOverloaded(nodeID string) bool {
// RebalanceMove represents a replica move operation
type RebalanceMove struct {
	Address  ucxl.Address `json:"address"`
	FromNode string       `json:"from_node"`
	ToNode   string       `json:"to_node"`
	Priority Priority     `json:"priority"`
	Reason   string       `json:"reason"`
}
// Utility functions
-func min(a, b int) int {
+func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}
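The `min` → `minInt` rename is presumably about avoiding confusion with Go 1.21's built-in generic `min`, which a package-level helper of the same name would shadow (an assumption; the commit does not state its motivation). A minimal sketch of the renamed helper and its use in capping the replica count:

```go
package main

import "fmt"

// minInt matches the renamed helper in the diff; keeping it distinct from
// the built-in min avoids shadowing within the package.
func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	connectedPeers := 5
	defaultFactor := 3 // hypothetical replication factor, as in policy.DefaultFactor
	// Current replicas are capped at the smaller of peer count and policy factor.
	fmt.Println(minInt(connectedPeers, defaultFactor))
}
```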


@@ -20,22 +20,22 @@ import (
// SecurityManager handles all security aspects of the distributed system
type SecurityManager struct {
	mu              sync.RWMutex
	config          *config.Config
	tlsConfig       *TLSConfig
	authManager     *AuthenticationManager
	authzManager    *AuthorizationManager
	auditLogger     *SecurityAuditLogger
	nodeAuth        *NodeAuthentication
	encryption      *DistributionEncryption
	certificateAuth *CertificateAuthority
	// Security state
	trustedNodes     map[string]*TrustedNode
	activeSessions   map[string]*SecuritySession
	securityPolicies map[string]*SecurityPolicy
	threatDetector   *ThreatDetector
	// Configuration
	tlsEnabled       bool
	mutualTLSEnabled bool
@@ -45,28 +45,28 @@ type SecurityManager struct {
// TLSConfig manages TLS configuration for secure communications
type TLSConfig struct {
	ServerConfig     *tls.Config
	ClientConfig     *tls.Config
	CertificatePath  string
	PrivateKeyPath   string
	CAPath           string
	MinTLSVersion    uint16
	CipherSuites     []uint16
	CurvePreferences []tls.CurveID
	ClientAuth       tls.ClientAuthType
	VerifyConnection func(tls.ConnectionState) error
}
// AuthenticationManager handles node and user authentication
type AuthenticationManager struct {
	mu              sync.RWMutex
	providers       map[string]AuthProvider
	tokenValidator  TokenValidator
	sessionManager  *SessionManager
	multiFactorAuth *MultiFactorAuth
	credentialStore *CredentialStore
	loginAttempts   map[string]*LoginAttempts
	authPolicies    map[string]*AuthPolicy
}
// AuthProvider interface for different authentication methods
@@ -80,14 +80,14 @@ type AuthProvider interface {
// Credentials represents authentication credentials
type Credentials struct {
	Type        CredentialType         `json:"type"`
	Username    string                 `json:"username,omitempty"`
	Password    string                 `json:"password,omitempty"`
	Token       string                 `json:"token,omitempty"`
	Certificate *x509.Certificate      `json:"certificate,omitempty"`
	Signature   []byte                 `json:"signature,omitempty"`
	Challenge   string                 `json:"challenge,omitempty"`
	Metadata    map[string]interface{} `json:"metadata,omitempty"`
}
// CredentialType represents different types of credentials
@@ -104,15 +104,15 @@ const (
// AuthResult represents the result of authentication
type AuthResult struct {
	Success       bool                   `json:"success"`
	UserID        string                 `json:"user_id"`
	Roles         []string               `json:"roles"`
	Permissions   []string               `json:"permissions"`
	TokenPair     *TokenPair             `json:"token_pair"`
	SessionID     string                 `json:"session_id"`
	ExpiresAt     time.Time              `json:"expires_at"`
	Metadata      map[string]interface{} `json:"metadata"`
	FailureReason string                 `json:"failure_reason,omitempty"`
}
// TokenPair represents access and refresh tokens
@@ -140,13 +140,13 @@ type TokenClaims struct {
// AuthorizationManager handles authorization and access control
type AuthorizationManager struct {
	mu              sync.RWMutex
	policyEngine    PolicyEngine
	rbacManager     *RBACManager
	aclManager      *ACLManager
	resourceManager *ResourceManager
	permissionCache *PermissionCache
	authzPolicies   map[string]*AuthorizationPolicy
}
// PolicyEngine interface for policy evaluation
@@ -168,13 +168,13 @@ type AuthorizationRequest struct {
// AuthorizationResult represents the result of authorization
type AuthorizationResult struct {
	Decision       AuthorizationDecision  `json:"decision"`
	Reason         string                 `json:"reason"`
	Policies       []string               `json:"applied_policies"`
	Conditions     []string               `json:"conditions"`
	TTL            time.Duration          `json:"ttl"`
	Metadata       map[string]interface{} `json:"metadata"`
	EvaluationTime time.Duration          `json:"evaluation_time"`
}
// AuthorizationDecision represents authorization decisions
@@ -188,13 +188,13 @@ const (
// SecurityAuditLogger handles security event logging
type SecurityAuditLogger struct {
	mu           sync.RWMutex
	loggers      []SecurityLogger
	eventBuffer  []*SecurityEvent
	alertManager *SecurityAlertManager
	compliance   *ComplianceManager
	retention    *AuditRetentionPolicy
	enabled      bool
}
// SecurityLogger interface for security event logging
@@ -206,22 +206,22 @@ type SecurityLogger interface {
// SecurityEvent represents a security event
type SecurityEvent struct {
	EventID     string                 `json:"event_id"`
	EventType   SecurityEventType      `json:"event_type"`
	Severity    SecuritySeverity       `json:"severity"`
	Timestamp   time.Time              `json:"timestamp"`
	UserID      string                 `json:"user_id,omitempty"`
	NodeID      string                 `json:"node_id,omitempty"`
	Resource    string                 `json:"resource,omitempty"`
	Action      string                 `json:"action,omitempty"`
	Result      string                 `json:"result"`
	Message     string                 `json:"message"`
	Details     map[string]interface{} `json:"details"`
	IPAddress   string                 `json:"ip_address,omitempty"`
	UserAgent   string                 `json:"user_agent,omitempty"`
	SessionID   string                 `json:"session_id,omitempty"`
	RequestID   string                 `json:"request_id,omitempty"`
	Fingerprint string                 `json:"fingerprint"`
}
// SecurityEventType represents different types of security events
@@ -242,12 +242,12 @@ const (
type SecuritySeverity string
const (
-	SeverityDebug    SecuritySeverity = "debug"
-	SeverityInfo     SecuritySeverity = "info"
-	SeverityWarning  SecuritySeverity = "warning"
-	SeverityError    SecuritySeverity = "error"
-	SeverityCritical SecuritySeverity = "critical"
-	SeverityAlert    SecuritySeverity = "alert"
+	SecuritySeverityDebug    SecuritySeverity = "debug"
+	SecuritySeverityInfo     SecuritySeverity = "info"
+	SecuritySeverityWarning  SecuritySeverity = "warning"
+	SecuritySeverityError    SecuritySeverity = "error"
+	SecuritySeverityCritical SecuritySeverity = "critical"
+	SecuritySeverityAlert    SecuritySeverity = "alert"
)
// NodeAuthentication handles node-to-node authentication
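The severity constants above are renamed to carry the type name as a prefix (`SecuritySeverityInfo` rather than `SeverityInfo`), which keeps them unambiguous next to any other `Severity*` identifiers in the package. A self-contained reproduction of the renamed enum:

```go
package main

import "fmt"

// Illustrative copy of the renamed enum from the diff; the string values
// are unchanged, only the identifier prefix differs.
type SecuritySeverity string

const (
	SecuritySeverityDebug    SecuritySeverity = "debug"
	SecuritySeverityInfo     SecuritySeverity = "info"
	SecuritySeverityWarning  SecuritySeverity = "warning"
	SecuritySeverityError    SecuritySeverity = "error"
	SecuritySeverityCritical SecuritySeverity = "critical"
	SecuritySeverityAlert    SecuritySeverity = "alert"
)

func main() {
	// Call sites change their identifier but serialize identically,
	// so logged events and stored audit records are unaffected.
	fmt.Println(SecuritySeverityInfo)
}
```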
@@ -262,16 +262,16 @@ type NodeAuthentication struct {
// TrustedNode represents a trusted node in the network
type TrustedNode struct {
	NodeID       string                 `json:"node_id"`
	PublicKey    []byte                 `json:"public_key"`
	Certificate  *x509.Certificate      `json:"certificate"`
	Roles        []string               `json:"roles"`
	Capabilities []string               `json:"capabilities"`
	TrustLevel   TrustLevel             `json:"trust_level"`
	LastSeen     time.Time              `json:"last_seen"`
	VerifiedAt   time.Time              `json:"verified_at"`
	Metadata     map[string]interface{} `json:"metadata"`
	Status       NodeStatus             `json:"status"`
}
// TrustLevel represents the trust level of a node
@@ -287,18 +287,18 @@ const (
// SecuritySession represents an active security session
type SecuritySession struct {
	SessionID    string                 `json:"session_id"`
	UserID       string                 `json:"user_id"`
	NodeID       string                 `json:"node_id"`
	Roles        []string               `json:"roles"`
	Permissions  []string               `json:"permissions"`
	CreatedAt    time.Time              `json:"created_at"`
	ExpiresAt    time.Time              `json:"expires_at"`
	LastActivity time.Time              `json:"last_activity"`
	IPAddress    string                 `json:"ip_address"`
	UserAgent    string                 `json:"user_agent"`
	Metadata     map[string]interface{} `json:"metadata"`
	Status       SessionStatus          `json:"status"`
}
// SessionStatus represents session status
@@ -313,61 +313,61 @@ const (
// ThreatDetector detects security threats and anomalies
type ThreatDetector struct {
	mu                   sync.RWMutex
	detectionRules       []*ThreatDetectionRule
	behaviorAnalyzer     *BehaviorAnalyzer
	anomalyDetector      *AnomalyDetector
	threatIntelligence   *ThreatIntelligence
	activeThreats        map[string]*ThreatEvent
	mitigationStrategies map[ThreatType]*MitigationStrategy
}
// ThreatDetectionRule represents a threat detection rule
type ThreatDetectionRule struct {
	RuleID      string                 `json:"rule_id"`
	Name        string                 `json:"name"`
	Description string                 `json:"description"`
	ThreatType  ThreatType             `json:"threat_type"`
	Severity    SecuritySeverity       `json:"severity"`
	Conditions  []*ThreatCondition     `json:"conditions"`
	Actions     []*ThreatAction        `json:"actions"`
	Enabled     bool                   `json:"enabled"`
	CreatedAt   time.Time              `json:"created_at"`
	UpdatedAt   time.Time              `json:"updated_at"`
	Metadata    map[string]interface{} `json:"metadata"`
}
// ThreatType represents different types of threats
type ThreatType string
const (
	ThreatTypeBruteForce          ThreatType = "brute_force"
	ThreatTypeUnauthorized        ThreatType = "unauthorized_access"
	ThreatTypeDataExfiltration    ThreatType = "data_exfiltration"
	ThreatTypeDoS                 ThreatType = "denial_of_service"
	ThreatTypePrivilegeEscalation ThreatType = "privilege_escalation"
	ThreatTypeAnomalous           ThreatType = "anomalous_behavior"
	ThreatTypeMaliciousCode       ThreatType = "malicious_code"
	ThreatTypeInsiderThreat       ThreatType = "insider_threat"
)
// CertificateAuthority manages certificate generation and validation
type CertificateAuthority struct {
	mu              sync.RWMutex
	rootCA          *x509.Certificate
	rootKey         interface{}
	intermediateCA  *x509.Certificate
	intermediateKey interface{}
	certStore       *CertificateStore
	crlManager      *CRLManager
	ocspResponder   *OCSPResponder
}
// DistributionEncryption handles encryption for distributed communications
type DistributionEncryption struct {
	mu                sync.RWMutex
	keyManager        *DistributionKeyManager
	encryptionSuite   *EncryptionSuite
	keyRotationPolicy *KeyRotationPolicy
	encryptionMetrics *EncryptionMetrics
}
@@ -379,13 +379,13 @@ func NewSecurityManager(config *config.Config) (*SecurityManager, error) {
	}
	sm := &SecurityManager{
		config:            config,
		trustedNodes:      make(map[string]*TrustedNode),
		activeSessions:    make(map[string]*SecuritySession),
		securityPolicies:  make(map[string]*SecurityPolicy),
		tlsEnabled:        true,
		mutualTLSEnabled:  true,
		auditingEnabled:   true,
		encryptionEnabled: true,
	}
@@ -508,12 +508,12 @@ func (sm *SecurityManager) Authenticate(ctx context.Context, credentials *Creden
	// Log authentication attempt
	sm.logSecurityEvent(ctx, &SecurityEvent{
		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
		Action:    "authenticate",
		Message:   "Authentication attempt",
		Details: map[string]interface{}{
			"credential_type": credentials.Type,
			"username":        credentials.Username,
		},
	})
@@ -525,7 +525,7 @@ func (sm *SecurityManager) Authorize(ctx context.Context, request *Authorization
	// Log authorization attempt
	sm.logSecurityEvent(ctx, &SecurityEvent{
		EventType: EventTypeAuthorization,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
		UserID:    request.UserID,
		Resource:  request.Resource,
		Action:    request.Action,
@@ -554,7 +554,7 @@ func (sm *SecurityManager) ValidateNodeIdentity(ctx context.Context, nodeID stri
	// Log successful validation
	sm.logSecurityEvent(ctx, &SecurityEvent{
		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
		NodeID:    nodeID,
		Action:    "validate_node_identity",
		Result:    "success",
@@ -609,7 +609,7 @@ func (sm *SecurityManager) AddTrustedNode(ctx context.Context, node *TrustedNode
// Log node addition
sm.logSecurityEvent(ctx, &SecurityEvent{
    EventType: EventTypeConfiguration,
-   Severity:  SeverityInfo,
+   Severity:  SecuritySeverityInfo,
    NodeID:    node.NodeID,
    Action:    "add_trusted_node",
    Result:    "success",
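The recurring change in these hunks is a constant rename, `SeverityInfo` → `SecuritySeverityInfo`, which reads like disambiguation between two severity enums sharing an import graph. A minimal sketch of the prefixed-enum pattern; every name other than `SecuritySeverityInfo` is an assumption, not taken from the diff:

```go
package main

import "fmt"

// SecuritySeverity classifies security events; the Security prefix keeps these
// constants from colliding with any other Severity enum in the same packages.
type SecuritySeverity int

const (
	SecuritySeverityInfo SecuritySeverity = iota
	SecuritySeverityWarning
	SecuritySeverityCritical
)

// String makes the enum readable in logs and security events.
func (s SecuritySeverity) String() string {
	switch s {
	case SecuritySeverityInfo:
		return "info"
	case SecuritySeverityWarning:
		return "warning"
	case SecuritySeverityCritical:
		return "critical"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(SecuritySeverityInfo) // prints "info"
}
```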
@@ -649,7 +649,7 @@ func (sm *SecurityManager) loadOrGenerateCertificate() (*tls.Certificate, error)
func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, error) {
    // Generate a self-signed certificate for development/testing
    // In production, use proper CA-signed certificates
    template := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject: pkix.Name{
@@ -660,11 +660,11 @@ func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, erro
            StreetAddress: []string{""},
            PostalCode:    []string{""},
        },
        NotBefore:   time.Now(),
        NotAfter:    time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
    }
    // This is a simplified implementation
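The certificate template in this hunk is standard `crypto/x509` usage. As a runnable sketch of the same development-only flow; the ECDSA key choice and organization name are assumptions, not from the diff:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// generateSelfSigned builds a development-only self-signed certificate,
// mirroring the template fields shown in the diff.
func generateSelfSigned() (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	template := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"CHORUS Dev"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := generateSelfSigned()
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.SerialNumber, cert.IsCA) // 1 false
}
```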
@@ -765,8 +765,8 @@ func NewDistributionEncryption(config *config.Config) (*DistributionEncryption,
func NewThreatDetector(config *config.Config) (*ThreatDetector, error) {
    return &ThreatDetector{
        detectionRules:       []*ThreatDetectionRule{},
        activeThreats:        make(map[string]*ThreatEvent),
        mitigationStrategies: make(map[ThreatType]*MitigationStrategy),
    }, nil
}
@@ -831,4 +831,4 @@ type OCSPResponder struct{}
type DistributionKeyManager struct{}
type EncryptionSuite struct{}
type KeyRotationPolicy struct{}
type EncryptionMetrics struct{}

View File

@@ -11,8 +11,8 @@ import (
    "strings"
    "time"

-   "chorus/pkg/ucxl"
    slurpContext "chorus/pkg/slurp/context"
+   "chorus/pkg/ucxl"
)

// DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
@@ -268,11 +268,11 @@ func NewRelationshipAnalyzer() *RelationshipAnalyzer {
// AnalyzeStructure analyzes directory organization patterns
func (da *DefaultDirectoryAnalyzer) AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error) {
    structure := &DirectoryStructure{
        Path:         dirPath,
        FileTypes:    make(map[string]int),
        Languages:    make(map[string]int),
        Dependencies: []string{},
        AnalyzedAt:   time.Now(),
    }
    // Walk the directory tree
@@ -340,9 +340,9 @@ func (da *DefaultDirectoryAnalyzer) DetectConventions(ctx context.Context, dirPa
        OrganizationalPatterns: []*OrganizationalPattern{},
        Consistency:            0.0,
        Violations:             []*Violation{},
-       Recommendations:        []*Recommendation{},
+       Recommendations:        []*BasicRecommendation{},
        AppliedStandards:       []string{},
        AnalyzedAt:             time.Now(),
    }
    // Collect all files and directories
@@ -385,39 +385,39 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
        purpose    string
        confidence float64
    }{
        "src":          {"Source code repository", 0.9},
        "source":       {"Source code repository", 0.9},
        "lib":          {"Library code", 0.8},
        "libs":         {"Library code", 0.8},
        "vendor":       {"Third-party dependencies", 0.9},
        "node_modules": {"Node.js dependencies", 0.95},
        "build":        {"Build artifacts", 0.9},
        "dist":         {"Distribution files", 0.9},
        "bin":          {"Binary executables", 0.9},
        "test":         {"Test code", 0.9},
        "tests":        {"Test code", 0.9},
        "docs":         {"Documentation", 0.9},
        "doc":          {"Documentation", 0.9},
        "config":       {"Configuration files", 0.9},
        "configs":      {"Configuration files", 0.9},
        "scripts":      {"Utility scripts", 0.8},
        "tools":        {"Development tools", 0.8},
        "assets":       {"Static assets", 0.8},
        "public":       {"Public web assets", 0.8},
        "static":       {"Static files", 0.8},
        "templates":    {"Template files", 0.8},
        "migrations":   {"Database migrations", 0.9},
        "models":       {"Data models", 0.8},
        "views":        {"View layer", 0.8},
        "controllers":  {"Controller layer", 0.8},
        "services":     {"Service layer", 0.8},
        "components":   {"Reusable components", 0.8},
        "modules":      {"Modular components", 0.8},
        "packages":     {"Package organization", 0.7},
        "internal":     {"Internal implementation", 0.8},
        "cmd":          {"Command-line applications", 0.9},
        "api":          {"API implementation", 0.8},
        "pkg":          {"Go package directory", 0.8},
    }
    if p, exists := purposes[dirName]; exists {
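The purpose table is a plain lookup keyed on the directory's base name. A self-contained sketch of that lookup, trimmed to a few entries; the fallback value is an assumption:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// identifyPurpose mirrors the table-driven lookup in IdentifyPurpose:
// well-known directory names map to a purpose plus a confidence score.
func identifyPurpose(dirPath string) (string, float64) {
	purposes := map[string]struct {
		purpose    string
		confidence float64
	}{
		"src":  {"Source code repository", 0.9},
		"docs": {"Documentation", 0.9},
		"cmd":  {"Command-line applications", 0.9},
		"pkg":  {"Go package directory", 0.8},
	}
	name := strings.ToLower(filepath.Base(dirPath))
	if p, ok := purposes[name]; ok {
		return p.purpose, p.confidence
	}
	// Assumed fallback for directories the table does not recognize.
	return "General purpose directory", 0.3
}

func main() {
	purpose, conf := identifyPurpose("/repo/cmd")
	fmt.Println(purpose, conf) // Command-line applications 0.9
}
```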
@@ -459,12 +459,12 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
// AnalyzeRelationships analyzes relationships between subdirectories
func (da *DefaultDirectoryAnalyzer) AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error) {
    analysis := &RelationshipAnalysis{
        Dependencies:       []*DirectoryDependency{},
        Relationships:      []*DirectoryRelation{},
        CouplingMetrics:    &CouplingMetrics{},
        ModularityScore:    0.0,
        ArchitecturalStyle: "unknown",
        AnalyzedAt:         time.Now(),
    }
    // Find subdirectories
@@ -568,20 +568,20 @@ func (da *DefaultDirectoryAnalyzer) GenerateHierarchy(ctx context.Context, rootP
func (da *DefaultDirectoryAnalyzer) mapExtensionToLanguage(ext string) string {
    langMap := map[string]string{
        ".go":    "go",
        ".py":    "python",
        ".js":    "javascript",
        ".jsx":   "javascript",
        ".ts":    "typescript",
        ".tsx":   "typescript",
        ".java":  "java",
        ".c":     "c",
        ".cpp":   "cpp",
        ".cs":    "csharp",
        ".php":   "php",
        ".rb":    "ruby",
        ".rs":    "rust",
        ".kt":    "kotlin",
        ".swift": "swift",
    }
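Extension-to-language mapping follows the same table-driven shape. A trimmed, runnable sketch; the `unknown` fallback is an assumption, since the hunk cuts off before the return:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// mapExtensionToLanguage mirrors the lookup table in the hunk above,
// reduced to a handful of entries for brevity.
func mapExtensionToLanguage(ext string) string {
	langMap := map[string]string{
		".go": "go",
		".py": "python",
		".ts": "typescript",
		".rs": "rust",
	}
	if lang, ok := langMap[strings.ToLower(ext)]; ok {
		return lang
	}
	return "unknown" // assumed fallback
}

func main() {
	fmt.Println(mapExtensionToLanguage(filepath.Ext("beacon.go"))) // go
}
```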
@@ -604,7 +604,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeOrganization(dirPath string) (*Organi
    // Detect organizational pattern
    pattern := da.detectOrganizationalPattern(subdirs)
    // Calculate metrics
    fanOut := len(subdirs)
    consistency := da.calculateOrganizationalConsistency(subdirs)
@@ -672,7 +672,7 @@ func (da *DefaultDirectoryAnalyzer) allAreDomainLike(subdirs []string) bool {
    // Simple heuristic: if directories don't look like technical layers,
    // they might be domain/feature based
    technicalTerms := []string{"api", "service", "repository", "model", "dto", "util", "config", "test", "lib"}
    for _, subdir := range subdirs {
        lowerDir := strings.ToLower(subdir)
        for _, term := range technicalTerms {
@@ -733,7 +733,7 @@ func (da *DefaultDirectoryAnalyzer) isSnakeCase(s string) bool {
func (da *DefaultDirectoryAnalyzer) calculateMaxDepth(dirPath string) int {
    maxDepth := 0
    filepath.Walk(dirPath, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return nil
@@ -747,7 +747,7 @@ func (da *DefaultDirectoryAnalyzer) calculateMaxDepth(dirPath string) int {
        }
        return nil
    })
    return maxDepth
}
@@ -756,7 +756,7 @@ func (da *DefaultDirectoryAnalyzer) calculateModularity(subdirs []string) float6
    if len(subdirs) == 0 {
        return 0.0
    }
    // More subdirectories with clear separation indicates higher modularity
    if len(subdirs) > 5 {
        return 0.8
@@ -786,7 +786,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeConventions(ctx context.Context, dirP
    // Detect dominant naming style
    namingStyle := da.detectDominantNamingStyle(append(fileNames, dirNames...))
    // Calculate consistency
    consistency := da.calculateNamingConsistency(append(fileNames, dirNames...), namingStyle)
@@ -988,7 +988,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
    // Detect the dominant convention
    convention := da.detectDominantNamingStyle(names)
    return &NamingPattern{
        Pattern: Pattern{
            ID: fmt.Sprintf("%s_naming", scope),
@@ -996,7 +996,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
            Type:        "naming",
            Description: fmt.Sprintf("Naming convention for %ss", scope),
            Confidence:  da.calculateNamingConsistency(names, convention),
-           Examples:    names[:min(5, len(names))],
+           Examples:    names[:minInt(5, len(names))],
        },
        Convention: convention,
        Scope:      scope,
@@ -1100,12 +1100,12 @@ func (da *DefaultDirectoryAnalyzer) detectNamingStyle(name string) string {
    return "unknown"
}

-func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*Recommendation {
-   recommendations := []*Recommendation{}
+func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*BasicRecommendation {
+   recommendations := []*BasicRecommendation{}
    // Recommend consistency improvements
    if analysis.Consistency < 0.8 {
-       recommendations = append(recommendations, &Recommendation{
+       recommendations = append(recommendations, &BasicRecommendation{
            Type:        "consistency",
            Title:       "Improve naming consistency",
            Description: "Consider standardizing naming conventions across the project",
@@ -1118,7 +1118,7 @@ func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *
    // Recommend architectural improvements
    if len(analysis.OrganizationalPatterns) == 0 {
-       recommendations = append(recommendations, &Recommendation{
+       recommendations = append(recommendations, &BasicRecommendation{
            Type:        "architecture",
            Title:       "Consider architectural patterns",
            Description: "Project structure could benefit from established architectural patterns",
@@ -1185,7 +1185,7 @@ func (da *DefaultDirectoryAnalyzer) findDirectoryDependencies(ctx context.Contex
    if detector, exists := da.relationshipAnalyzer.dependencyDetectors[language]; exists {
        imports := da.extractImports(string(content), detector.importPatterns)
        // Check which imports refer to other directories
        for _, imp := range imports {
            for _, otherDir := range allDirs {
@@ -1210,7 +1210,7 @@ func (da *DefaultDirectoryAnalyzer) findDirectoryDependencies(ctx context.Contex
func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*regexp.Regexp) []string {
    imports := []string{}
    for _, pattern := range patterns {
        matches := pattern.FindAllStringSubmatch(content, -1)
        for _, match := range matches {
@@ -1225,12 +1225,11 @@ func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*r
func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
    // Simple heuristic: check if import path references the target directory
-   fromBase := filepath.Base(fromDir)
    toBase := filepath.Base(toDir)
    return strings.Contains(importPath, toBase) ||
        strings.Contains(importPath, "../"+toBase) ||
        strings.Contains(importPath, "./"+toBase)
}

func (da *DefaultDirectoryAnalyzer) analyzeDirectoryRelationships(subdirs []string, dependencies []*DirectoryDependency) []*DirectoryRelation {
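The deleted `fromBase` line was dead code: Go rejects unused local variables at compile time, and the heuristic only needs the target directory's base name. The surviving logic as a runnable sketch; the example paths are invented:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isLocalDependency checks whether an import path appears to reference the
// target directory. fromDir is kept for signature parity but goes unused,
// which is legal for parameters (unlike the removed fromBase local).
func isLocalDependency(importPath, fromDir, toDir string) bool {
	toBase := filepath.Base(toDir)
	return strings.Contains(importPath, toBase) ||
		strings.Contains(importPath, "../"+toBase) ||
		strings.Contains(importPath, "./"+toBase)
}

func main() {
	fmt.Println(isLocalDependency("chorus/pkg/slurp/context", "/repo/pkg/intelligence", "/repo/pkg/slurp/context")) // true
	fmt.Println(isLocalDependency("fmt", "/repo/pkg/intelligence", "/repo/pkg/slurp/context"))                      // false
}
```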
@@ -1399,7 +1398,7 @@ func (da *DefaultDirectoryAnalyzer) walkDirectoryHierarchy(rootPath string, curr
func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
    cleanPath := filepath.Clean(path)
-   addr, err := ucxl.ParseAddress(fmt.Sprintf("dir://%s", cleanPath))
+   addr, err := ucxl.Parse(fmt.Sprintf("dir://%s", cleanPath))
    if err != nil {
        return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
    }
@@ -1407,7 +1406,7 @@ func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Addr
}

func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *DirectoryStructure) string {
    summary := fmt.Sprintf("Directory with %d files and %d subdirectories",
        structure.FileCount, structure.DirectoryCount)
    // Add language information
@@ -1417,7 +1416,7 @@ func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *Director
            langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
        }
        sort.Strings(langs)
-       summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:min(3, len(langs))], ", "))
+       summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:minInt(3, len(langs))], ", "))
    }
    return summary
@@ -1497,9 +1496,9 @@ func (da *DefaultDirectoryAnalyzer) calculateDirectorySpecificity(structure *Dir
    return specificity
}

-func min(a, b int) int {
+func minInt(a, b int) int {
    if a < b {
        return a
    }
    return b
}

View File

@@ -2,9 +2,9 @@ package intelligence
import (
    "context"
-   "sync"
    "time"

-   "chorus/pkg/ucxl"
    slurpContext "chorus/pkg/slurp/context"
)
@@ -17,38 +17,38 @@ type IntelligenceEngine interface {
    // AnalyzeFile analyzes a single file and generates context
    // Performs content analysis, language detection, and pattern recognition
    AnalyzeFile(ctx context.Context, filePath string, role string) (*slurpContext.ContextNode, error)

    // AnalyzeDirectory analyzes directory structure for hierarchical patterns
    // Identifies organizational patterns, naming conventions, and structure insights
    AnalyzeDirectory(ctx context.Context, dirPath string) ([]*slurpContext.ContextNode, error)

    // GenerateRoleInsights generates role-specific insights for existing context
    // Provides specialized analysis based on role requirements and perspectives
    GenerateRoleInsights(ctx context.Context, baseContext *slurpContext.ContextNode, role string) ([]string, error)

    // AssessGoalAlignment assesses how well context aligns with project goals
    // Returns alignment score and specific alignment metrics
    AssessGoalAlignment(ctx context.Context, node *slurpContext.ContextNode) (float64, error)

    // AnalyzeBatch processes multiple files efficiently in parallel
    // Optimized for bulk analysis operations with resource management
    AnalyzeBatch(ctx context.Context, filePaths []string, role string) (map[string]*slurpContext.ContextNode, error)

    // DetectPatterns identifies recurring patterns across multiple contexts
    // Useful for template creation and standardization
    DetectPatterns(ctx context.Context, contexts []*slurpContext.ContextNode) ([]*Pattern, error)

    // EnhanceWithRAG enhances context using RAG system knowledge
    // Integrates external knowledge for richer context understanding
    EnhanceWithRAG(ctx context.Context, node *slurpContext.ContextNode) (*slurpContext.ContextNode, error)

    // ValidateContext validates generated context quality and consistency
    // Ensures context meets quality thresholds and consistency requirements
    ValidateContext(ctx context.Context, node *slurpContext.ContextNode) (*ValidationResult, error)

    // GetEngineStats returns engine performance and operational statistics
    GetEngineStats() (*EngineStatistics, error)

    // SetConfiguration updates engine configuration
    SetConfiguration(config *EngineConfig) error
}
@@ -57,22 +57,22 @@ type IntelligenceEngine interface {
type FileAnalyzer interface {
    // AnalyzeContent analyzes file content for context extraction
    AnalyzeContent(ctx context.Context, filePath string, content []byte) (*FileAnalysis, error)

    // DetectLanguage detects programming language from content
    DetectLanguage(ctx context.Context, filePath string, content []byte) (string, float64, error)

    // ExtractMetadata extracts file metadata and statistics
    ExtractMetadata(ctx context.Context, filePath string) (*FileMetadata, error)

    // AnalyzeStructure analyzes code structure and organization
    AnalyzeStructure(ctx context.Context, filePath string, content []byte) (*StructureAnalysis, error)

    // IdentifyPurpose identifies the primary purpose of the file
    IdentifyPurpose(ctx context.Context, analysis *FileAnalysis) (string, float64, error)

    // GenerateSummary generates a concise summary of file content
    GenerateSummary(ctx context.Context, analysis *FileAnalysis) (string, error)

    // ExtractTechnologies identifies technologies used in the file
    ExtractTechnologies(ctx context.Context, analysis *FileAnalysis) ([]string, error)
}
@@ -81,16 +81,16 @@ type FileAnalyzer interface {
type DirectoryAnalyzer interface {
    // AnalyzeStructure analyzes directory organization patterns
    AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error)

    // DetectConventions identifies naming and organizational conventions
    DetectConventions(ctx context.Context, dirPath string) (*ConventionAnalysis, error)

    // IdentifyPurpose determines the primary purpose of a directory
    IdentifyPurpose(ctx context.Context, structure *DirectoryStructure) (string, float64, error)

    // AnalyzeRelationships analyzes relationships between subdirectories
    AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error)

    // GenerateHierarchy generates context hierarchy for directory tree
    GenerateHierarchy(ctx context.Context, rootPath string, maxDepth int) ([]*slurpContext.ContextNode, error)
}
@@ -99,16 +99,16 @@ type DirectoryAnalyzer interface {
type PatternDetector interface {
    // DetectCodePatterns identifies code patterns and architectural styles
    DetectCodePatterns(ctx context.Context, filePath string, content []byte) ([]*CodePattern, error)

    // DetectNamingPatterns identifies naming conventions and patterns
    DetectNamingPatterns(ctx context.Context, contexts []*slurpContext.ContextNode) ([]*NamingPattern, error)

    // DetectOrganizationalPatterns identifies organizational patterns
    DetectOrganizationalPatterns(ctx context.Context, rootPath string) ([]*OrganizationalPattern, error)

    // MatchPatterns matches context against known patterns
    MatchPatterns(ctx context.Context, node *slurpContext.ContextNode, patterns []*Pattern) ([]*PatternMatch, error)

    // LearnPatterns learns new patterns from context examples
    LearnPatterns(ctx context.Context, examples []*slurpContext.ContextNode) ([]*Pattern, error)
}
@@ -117,19 +117,19 @@ type PatternDetector interface {
type RAGIntegration interface {
    // Query queries the RAG system for relevant information
    Query(ctx context.Context, query string, context map[string]interface{}) (*RAGResponse, error)

    // EnhanceContext enhances context using RAG knowledge
    EnhanceContext(ctx context.Context, node *slurpContext.ContextNode) (*slurpContext.ContextNode, error)

    // IndexContent indexes content for RAG retrieval
    IndexContent(ctx context.Context, content string, metadata map[string]interface{}) error

    // SearchSimilar searches for similar content in RAG system
    SearchSimilar(ctx context.Context, content string, limit int) ([]*RAGResult, error)

    // UpdateIndex updates RAG index with new content
    UpdateIndex(ctx context.Context, updates []*RAGUpdate) error

    // GetRAGStats returns RAG system statistics
    GetRAGStats(ctx context.Context) (*RAGStatistics, error)
}
@@ -138,26 +138,26 @@ type RAGIntegration interface {
// ProjectGoal represents a high-level project objective
type ProjectGoal struct {
    ID          string     `json:"id"`                 // Unique identifier
    Name        string     `json:"name"`               // Goal name
    Description string     `json:"description"`        // Detailed description
    Keywords    []string   `json:"keywords"`           // Associated keywords
    Priority    int        `json:"priority"`           // Priority level (1=highest)
    Phase       string     `json:"phase"`              // Project phase
    Metrics     []string   `json:"metrics"`            // Success metrics
    Owner       string     `json:"owner"`              // Goal owner
    Deadline    *time.Time `json:"deadline,omitempty"` // Target deadline
}

// RoleProfile defines context requirements for different roles
type RoleProfile struct {
    Role             string                       `json:"role"`              // Role identifier
    AccessLevel      slurpContext.RoleAccessLevel `json:"access_level"`      // Required access level
    RelevantTags     []string                     `json:"relevant_tags"`     // Relevant context tags
    ContextScope     []string                     `json:"context_scope"`     // Scope of interest
    InsightTypes     []string                     `json:"insight_types"`     // Types of insights needed
    QualityThreshold float64                      `json:"quality_threshold"` // Minimum quality threshold
    Preferences      map[string]interface{}       `json:"preferences"`       // Role-specific preferences
}

// EngineConfig represents configuration for the intelligence engine
@@ -166,61 +166,66 @@ type EngineConfig struct {
	MaxConcurrentAnalysis int           `json:"max_concurrent_analysis"` // Maximum concurrent analyses
	AnalysisTimeout       time.Duration `json:"analysis_timeout"`        // Analysis timeout
	MaxFileSize           int64         `json:"max_file_size"`           // Maximum file size to analyze

	// RAG integration settings
	RAGEndpoint string        `json:"rag_endpoint"` // RAG system endpoint
	RAGTimeout  time.Duration `json:"rag_timeout"`  // RAG query timeout
	RAGEnabled  bool          `json:"rag_enabled"`  // Whether RAG is enabled
+	EnableRAG   bool          `json:"enable_rag"`   // Legacy toggle for RAG enablement
+
+	// Feature toggles
+	EnableGoalAlignment    bool `json:"enable_goal_alignment"`
+	EnablePatternDetection bool `json:"enable_pattern_detection"`
+	EnableRoleAware        bool `json:"enable_role_aware"`

	// Quality settings
	MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
	RequireValidation      bool    `json:"require_validation"`       // Whether validation is required

	// Performance settings
	CacheEnabled bool          `json:"cache_enabled"` // Whether caching is enabled
	CacheTTL     time.Duration `json:"cache_ttl"`     // Cache TTL

	// Role profiles
	RoleProfiles map[string]*RoleProfile `json:"role_profiles"` // Role-specific profiles

	// Project goals
	ProjectGoals []*ProjectGoal `json:"project_goals"` // Active project goals
}
// EngineStatistics represents performance statistics for the engine
type EngineStatistics struct {
	TotalAnalyses       int64         `json:"total_analyses"`        // Total analyses performed
	SuccessfulAnalyses  int64         `json:"successful_analyses"`   // Successful analyses
	FailedAnalyses      int64         `json:"failed_analyses"`       // Failed analyses
	AverageAnalysisTime time.Duration `json:"average_analysis_time"` // Average analysis time
	CacheHitRate        float64       `json:"cache_hit_rate"`        // Cache hit rate
	RAGQueriesPerformed int64         `json:"rag_queries_performed"` // RAG queries made
	AverageConfidence   float64       `json:"average_confidence"`    // Average confidence score
	FilesAnalyzed       int64         `json:"files_analyzed"`        // Total files analyzed
	DirectoriesAnalyzed int64         `json:"directories_analyzed"`  // Total directories analyzed
	PatternsDetected    int64         `json:"patterns_detected"`     // Patterns detected
	LastResetAt         time.Time     `json:"last_reset_at"`         // When stats were last reset
}
// FileAnalysis represents the result of file analysis
type FileAnalysis struct {
	FilePath     string                 `json:"file_path"`     // Path to analyzed file
	Language     string                 `json:"language"`      // Detected language
	LanguageConf float64                `json:"language_conf"` // Language detection confidence
	FileType     string                 `json:"file_type"`     // File type classification
	Size         int64                  `json:"size"`          // File size in bytes
	LineCount    int                    `json:"line_count"`    // Number of lines
	Complexity   float64                `json:"complexity"`    // Code complexity score
	Dependencies []string               `json:"dependencies"`  // Identified dependencies
	Exports      []string               `json:"exports"`       // Exported symbols/functions
	Imports      []string               `json:"imports"`       // Import statements
	Functions    []string               `json:"functions"`     // Function/method names
	Classes      []string               `json:"classes"`       // Class names
	Variables    []string               `json:"variables"`     // Variable names
	Comments     []string               `json:"comments"`      // Extracted comments
	TODOs        []string               `json:"todos"`         // TODO comments
	Metadata     map[string]interface{} `json:"metadata"`      // Additional metadata
	AnalyzedAt   time.Time              `json:"analyzed_at"`   // When analysis was performed
}

// DefaultIntelligenceEngine provides a complete implementation of the IntelligenceEngine interface
@@ -250,6 +255,10 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
		config = DefaultEngineConfig()
	}

+	if config.EnableRAG {
+		config.RAGEnabled = true
+	}

	// Initialize file analyzer
	fileAnalyzer := NewDefaultFileAnalyzer(config)
@@ -273,13 +282,22 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
		directoryAnalyzer: dirAnalyzer,
		patternDetector:   patternDetector,
		ragIntegration:    ragIntegration,
		stats: &EngineStatistics{
			LastResetAt: time.Now(),
		},
		cache:        &sync.Map{},
		projectGoals: config.ProjectGoals,
		roleProfiles: config.RoleProfiles,
	}

	return engine, nil
}
+// NewIntelligenceEngine is a convenience wrapper expected by legacy callers.
+func NewIntelligenceEngine(config *EngineConfig) *DefaultIntelligenceEngine {
+	engine, err := NewDefaultIntelligenceEngine(config)
+	if err != nil {
+		panic(err)
+	}
+	return engine
+}

View File

@@ -4,14 +4,13 @@ import (
	"context"
	"fmt"
	"io/ioutil"
-	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"

-	"chorus/pkg/ucxl"
	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
)
// AnalyzeFile analyzes a single file and generates contextual understanding // AnalyzeFile analyzes a single file and generates contextual understanding
@@ -136,8 +135,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeDirectory(ctx context.Context, dirPat
	}()

	// Analyze directory structure
-	structure, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath)
-	if err != nil {
+	if _, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath); err != nil {
		e.updateStats("directory_analysis", time.Since(start), false)
		return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
	}
@@ -232,7 +230,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeBatch(ctx context.Context, filePaths
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			semaphore <- struct{}{}        // Acquire semaphore
			defer func() { <-semaphore }() // Release semaphore

			ctxNode, err := e.AnalyzeFile(ctx, path, role)
@@ -317,7 +315,7 @@ func (e *DefaultIntelligenceEngine) EnhanceWithRAG(ctx context.Context, node *sl
	if ragResponse.Confidence >= e.config.MinConfidenceThreshold {
		enhanced.Insights = append(enhanced.Insights, fmt.Sprintf("RAG: %s", ragResponse.Answer))
		enhanced.RAGConfidence = ragResponse.Confidence

		// Add source information to metadata
		if len(ragResponse.Sources) > 0 {
			sources := make([]string, len(ragResponse.Sources))
@@ -430,7 +428,7 @@ func (e *DefaultIntelligenceEngine) readFileContent(filePath string) ([]byte, er
func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
	// Simple implementation - in reality this would be more sophisticated
	cleanPath := filepath.Clean(filePath)
-	addr, err := ucxl.ParseAddress(fmt.Sprintf("file://%s", cleanPath))
+	addr, err := ucxl.Parse(fmt.Sprintf("file://%s", cleanPath))
	if err != nil {
		return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
	}
@@ -640,6 +638,10 @@ func DefaultEngineConfig() *EngineConfig {
		RAGEndpoint:            "",
		RAGTimeout:             10 * time.Second,
		RAGEnabled:             false,
+		EnableRAG:              false,
+		EnableGoalAlignment:    false,
+		EnablePatternDetection: false,
+		EnableRoleAware:        false,
		MinConfidenceThreshold: 0.6,
		RequireValidation:      true,
		CacheEnabled:           true,
@@ -647,4 +649,4 @@ func DefaultEngineConfig() *EngineConfig {
		RoleProfiles: make(map[string]*RoleProfile),
		ProjectGoals: []*ProjectGoal{},
	}
}

View File

@@ -1,3 +1,6 @@
+//go:build integration
+// +build integration
+
package intelligence

import (
@@ -13,12 +16,12 @@ import (
func TestIntelligenceEngine_Integration(t *testing.T) {
	// Create test configuration
	config := &EngineConfig{
		EnableRAG:              false, // Disable RAG for testing
		EnableGoalAlignment:    true,
		EnablePatternDetection: true,
		EnableRoleAware:        true,
		MaxConcurrentAnalysis:  2,
		AnalysisTimeout:        30 * time.Second,
		CacheTTL:               5 * time.Minute,
		MinConfidenceThreshold: 0.5,
	}
@@ -29,13 +32,13 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
	// Create test context node
	testNode := &slurpContext.ContextNode{
		Path:         "/test/example.go",
		Summary:      "A Go service implementing user authentication",
		Purpose:      "Handles user login and authentication for the web application",
		Technologies: []string{"go", "jwt", "bcrypt"},
		Tags:         []string{"authentication", "security", "web"},
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
		UpdatedAt:    time.Now(),
	}
// Create test project goal // Create test project goal
@@ -47,7 +50,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
		Priority:    1,
		Phase:       "development",
		Deadline:    nil,
-		CreatedAt:   time.Now(),
+		GeneratedAt: time.Now(),
	}

	t.Run("AnalyzeFile", func(t *testing.T) {
@@ -220,9 +223,9 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
	ctx := context.Background()

	tests := []struct {
		name            string
		filename        string
		content         []byte
		expectedPattern string
	}{
		{
@@ -244,7 +247,7 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
		},
		{
			name:     "Go Factory Pattern",
			filename: "factory.go",
			content: []byte(`
package main

func NewUser(name string) *User {
@@ -312,7 +315,7 @@ func TestGoalAlignment_DimensionCalculators(t *testing.T) {
	testNode := &slurpContext.ContextNode{
		Path:         "/test/auth.go",
		Summary:      "User authentication service with JWT tokens",
		Purpose:      "Handles user login and token generation",
		Technologies: []string{"go", "jwt", "bcrypt"},
		Tags:         []string{"authentication", "security"},
	}
@@ -470,7 +473,7 @@ func TestRoleAwareProcessor_AccessControl(t *testing.T) {
			hasAccess := err == nil
			if hasAccess != tc.expected {
				t.Errorf("Expected access %v for role %s, action %s, resource %s, got %v",
					tc.expected, tc.roleID, tc.action, tc.resource, hasAccess)
			}
		})
@@ -491,7 +494,7 @@ func TestDirectoryAnalyzer_StructureAnalysis(t *testing.T) {
	// Create test structure
	testDirs := []string{
		"src/main",
		"src/lib",
		"test/unit",
		"test/integration",
		"docs/api",
@@ -504,7 +507,7 @@ func TestDirectoryAnalyzer_StructureAnalysis(t *testing.T) {
		if err := os.MkdirAll(fullPath, 0755); err != nil {
			t.Fatalf("Failed to create directory %s: %v", fullPath, err)
		}

		// Create a dummy file in each directory
		testFile := filepath.Join(fullPath, "test.txt")
		if err := os.WriteFile(testFile, []byte("test content"), 0644); err != nil {
@@ -652,7 +655,7 @@ func createTestContextNode(path, summary, purpose string, technologies, tags []s
		Purpose:      purpose,
		Technologies: technologies,
		Tags:         tags,
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
		UpdatedAt:    time.Now(),
	}
}
@@ -665,7 +668,7 @@ func createTestProjectGoal(id, name, description string, keywords []string, prio
		Keywords:    keywords,
		Priority:    priority,
		Phase:       phase,
-		CreatedAt:   time.Now(),
+		GeneratedAt: time.Now(),
	}
}
@@ -697,4 +700,4 @@ func assertValidDimensionScore(t *testing.T, score *DimensionScore) {
	if score.Confidence <= 0 || score.Confidence > 1 {
		t.Errorf("Invalid confidence: %f", score.Confidence)
	}
}

View File

@@ -1,7 +1,6 @@
package intelligence

import (
-	"bufio"
	"bytes"
	"context"
	"fmt"
@@ -33,12 +32,12 @@ type CodeStructureAnalyzer struct {
// LanguagePatterns contains regex patterns for different language constructs
type LanguagePatterns struct {
	Functions []*regexp.Regexp
	Classes   []*regexp.Regexp
	Variables []*regexp.Regexp
	Imports   []*regexp.Regexp
	Comments  []*regexp.Regexp
	TODOs     []*regexp.Regexp
}

// MetadataExtractor extracts file system metadata
@@ -65,66 +64,66 @@ func NewLanguageDetector() *LanguageDetector {
	// Map file extensions to languages
	extensions := map[string]string{
		".go":           "go",
		".py":           "python",
		".js":           "javascript",
		".jsx":          "javascript",
		".ts":           "typescript",
		".tsx":          "typescript",
		".java":         "java",
		".c":            "c",
		".cpp":          "cpp",
		".cc":           "cpp",
		".cxx":          "cpp",
		".h":            "c",
		".hpp":          "cpp",
		".cs":           "csharp",
		".php":          "php",
		".rb":           "ruby",
		".rs":           "rust",
		".kt":           "kotlin",
		".swift":        "swift",
		".m":            "objective-c",
		".mm":           "objective-c",
		".scala":        "scala",
		".clj":          "clojure",
		".hs":           "haskell",
		".ex":           "elixir",
		".exs":          "elixir",
		".erl":          "erlang",
		".lua":          "lua",
		".pl":           "perl",
		".r":            "r",
		".sh":           "shell",
		".bash":         "shell",
		".zsh":          "shell",
		".fish":         "shell",
		".sql":          "sql",
		".html":         "html",
		".htm":          "html",
		".css":          "css",
		".scss":         "scss",
		".sass":         "sass",
		".less":         "less",
		".xml":          "xml",
		".json":         "json",
		".yaml":         "yaml",
		".yml":          "yaml",
		".toml":         "toml",
		".ini":          "ini",
		".cfg":          "ini",
		".conf":         "config",
		".md":           "markdown",
		".rst":          "rst",
		".tex":          "latex",
		".proto":        "protobuf",
		".tf":           "terraform",
		".hcl":          "hcl",
		".dockerfile":   "dockerfile",
		".dockerignore": "dockerignore",
		".gitignore":    "gitignore",
		".vim":          "vim",
		".emacs":        "emacs",
	}

	for ext, lang := range extensions {
@@ -383,11 +382,11 @@ func (fa *DefaultFileAnalyzer) AnalyzeContent(ctx context.Context, filePath stri
// DetectLanguage detects programming language from content and file extension
func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath string, content []byte) (string, float64, error) {
	ext := strings.ToLower(filepath.Ext(filePath))

	// First try extension-based detection
	if lang, exists := fa.languageDetector.extensionMap[ext]; exists {
		confidence := 0.8 // High confidence for extension-based detection

		// Verify with content signatures
		if signatures, hasSignatures := fa.languageDetector.signatureRegexs[lang]; hasSignatures {
			matches := 0
@@ -396,7 +395,7 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
					matches++
				}
			}

			// Adjust confidence based on signature matches
			if matches > 0 {
				confidence = 0.9 + float64(matches)/float64(len(signatures))*0.1
@@ -404,14 +403,14 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
				confidence = 0.6 // Lower confidence if no signatures match
			}
		}

		return lang, confidence, nil
	}

	// Fall back to content-based detection
	bestLang := "unknown"
	bestScore := 0

	for lang, signatures := range fa.languageDetector.signatureRegexs {
		score := 0
		for _, regex := range signatures {
@@ -419,7 +418,7 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
				score++
			}
		}

		if score > bestScore {
			bestScore = score
			bestLang = lang
@@ -499,9 +498,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	filenameUpper := strings.ToUpper(filename)

	// Configuration files
	if strings.Contains(filenameUpper, "CONFIG") ||
		strings.Contains(filenameUpper, "CONF") ||
		analysis.FileType == ".ini" || analysis.FileType == ".toml" {
		purpose = "Configuration management"
		confidence = 0.9
		return purpose, confidence, nil
@@ -509,9 +508,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// Test files
	if strings.Contains(filenameUpper, "TEST") ||
		strings.Contains(filenameUpper, "SPEC") ||
		strings.HasSuffix(filenameUpper, "_TEST.GO") ||
		strings.HasSuffix(filenameUpper, "_TEST.PY") {
		purpose = "Testing and quality assurance"
		confidence = 0.9
		return purpose, confidence, nil
@@ -519,8 +518,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// Documentation files
	if analysis.FileType == ".md" || analysis.FileType == ".rst" ||
		strings.Contains(filenameUpper, "README") ||
		strings.Contains(filenameUpper, "DOC") {
		purpose = "Documentation and guidance"
		confidence = 0.9
		return purpose, confidence, nil
@@ -528,8 +527,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// API files
	if strings.Contains(filenameUpper, "API") ||
		strings.Contains(filenameUpper, "ROUTER") ||
		strings.Contains(filenameUpper, "HANDLER") {
		purpose = "API endpoint management"
		confidence = 0.8
		return purpose, confidence, nil
@@ -537,9 +536,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// Database files
	if strings.Contains(filenameUpper, "DB") ||
		strings.Contains(filenameUpper, "DATABASE") ||
		strings.Contains(filenameUpper, "MODEL") ||
		strings.Contains(filenameUpper, "SCHEMA") {
		purpose = "Data storage and management"
		confidence = 0.8
		return purpose, confidence, nil
@@ -547,9 +546,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// UI/Frontend files
	if analysis.Language == "javascript" || analysis.Language == "typescript" ||
		strings.Contains(filenameUpper, "COMPONENT") ||
		strings.Contains(filenameUpper, "VIEW") ||
		strings.Contains(filenameUpper, "UI") {
		purpose = "User interface component"
		confidence = 0.7
		return purpose, confidence, nil
@@ -557,8 +556,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// Service/Business logic
	if strings.Contains(filenameUpper, "SERVICE") ||
		strings.Contains(filenameUpper, "BUSINESS") ||
		strings.Contains(filenameUpper, "LOGIC") {
		purpose = "Business logic implementation"
		confidence = 0.7
		return purpose, confidence, nil
@@ -566,8 +565,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
	// Utility files
	if strings.Contains(filenameUpper, "UTIL") ||
		strings.Contains(filenameUpper, "HELPER") ||
		strings.Contains(filenameUpper, "COMMON") {
		purpose = "Utility and helper functions"
		confidence = 0.7
		return purpose, confidence, nil
@@ -591,7 +590,7 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// GenerateSummary generates a concise summary of file content
func (fa *DefaultFileAnalyzer) GenerateSummary(ctx context.Context, analysis *FileAnalysis) (string, error) {
	summary := strings.Builder{}

	// Language and type
	if analysis.Language != "unknown" {
		summary.WriteString(fmt.Sprintf("%s", strings.Title(analysis.Language)))
@@ -643,23 +642,23 @@ func (fa *DefaultFileAnalyzer) ExtractTechnologies(ctx context.Context, analysis
	// Extract from file patterns
	filename := strings.ToLower(filepath.Base(analysis.FilePath))

	// Framework detection
	frameworks := map[string]string{
		"react":     "React",
		"vue":       "Vue.js",
		"angular":   "Angular",
		"express":   "Express.js",
		"django":    "Django",
		"flask":     "Flask",
		"spring":    "Spring",
		"gin":       "Gin",
		"echo":      "Echo",
		"fastapi":   "FastAPI",
		"bootstrap": "Bootstrap",
		"tailwind":  "Tailwind CSS",
		"material":  "Material UI",
		"antd":      "Ant Design",
	}

	for pattern, tech := range frameworks {
@@ -778,7 +777,7 @@ func (fa *DefaultFileAnalyzer) analyzeCodeStructure(analysis *FileAnalysis, cont
func (fa *DefaultFileAnalyzer) calculateComplexity(analysis *FileAnalysis) float64 {
	complexity := 0.0

	// Base complexity from structure
	complexity += float64(len(analysis.Functions)) * 1.5
	complexity += float64(len(analysis.Classes)) * 2.0
@@ -799,7 +798,7 @@ func (fa *DefaultFileAnalyzer) calculateComplexity(analysis *FileAnalysis) float
func (fa *DefaultFileAnalyzer) analyzeArchitecturalPatterns(analysis *StructureAnalysis, content []byte, patterns *LanguagePatterns, language string) {
	contentStr := string(content)

	// Detect common architectural patterns
	if strings.Contains(contentStr, "interface") && language == "go" {
		analysis.Patterns = append(analysis.Patterns, "Interface Segregation")
@@ -813,7 +812,7 @@ func (fa *DefaultFileAnalyzer) analyzeArchitecturalPatterns(analysis *StructureA
if strings.Contains(contentStr, "Observer") { if strings.Contains(contentStr, "Observer") {
analysis.Patterns = append(analysis.Patterns, "Observer Pattern") analysis.Patterns = append(analysis.Patterns, "Observer Pattern")
} }
// Architectural style detection // Architectural style detection
if strings.Contains(contentStr, "http.") || strings.Contains(contentStr, "router") { if strings.Contains(contentStr, "http.") || strings.Contains(contentStr, "router") {
analysis.Architecture = "REST API" analysis.Architecture = "REST API"
@@ -832,13 +831,13 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
// Technology mapping based on common imports // Technology mapping based on common imports
techMap := map[string]string{ techMap := map[string]string{
// Go // Go
"gin-gonic/gin": "Gin", "gin-gonic/gin": "Gin",
"labstack/echo": "Echo", "labstack/echo": "Echo",
"gorilla/mux": "Gorilla Mux", "gorilla/mux": "Gorilla Mux",
"gorm.io/gorm": "GORM", "gorm.io/gorm": "GORM",
"github.com/redis": "Redis", "github.com/redis": "Redis",
"go.mongodb.org": "MongoDB", "go.mongodb.org": "MongoDB",
// Python // Python
"django": "Django", "django": "Django",
"flask": "Flask", "flask": "Flask",
@@ -849,15 +848,15 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
"numpy": "NumPy", "numpy": "NumPy",
"tensorflow": "TensorFlow", "tensorflow": "TensorFlow",
"torch": "PyTorch", "torch": "PyTorch",
// JavaScript/TypeScript // JavaScript/TypeScript
"react": "React", "react": "React",
"vue": "Vue.js", "vue": "Vue.js",
"angular": "Angular", "angular": "Angular",
"express": "Express.js", "express": "Express.js",
"axios": "Axios", "axios": "Axios",
"lodash": "Lodash", "lodash": "Lodash",
"moment": "Moment.js", "moment": "Moment.js",
"socket.io": "Socket.IO", "socket.io": "Socket.IO",
} }
@@ -868,4 +867,4 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
} }
return "" return ""
} }
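The body between the table and the final `return ""` is elided by the hunks, but the shape they imply is a scan returning the first technology whose key occurs in the import path. A self-contained sketch under that assumption (`mapImportToTech` and its reduced table are illustrative, not the real method):

```go
package main

import (
	"fmt"
	"strings"
)

// mapImportToTech sketches the elided middle of mapImportToTechnology:
// scan the table and return the first technology whose key appears as a
// substring of the (lowercased) import path; "" means unrecognized.
func mapImportToTech(importPath string) string {
	techMap := map[string]string{
		"gin-gonic/gin": "Gin",
		"gorm.io/gorm":  "GORM",
		"django":        "Django",
		"react":         "React",
	}
	lowered := strings.ToLower(importPath)
	for pattern, tech := range techMap {
		if strings.Contains(lowered, pattern) {
			return tech
		}
	}
	return ""
}

func main() {
	fmt.Println(mapImportToTech("github.com/gin-gonic/gin"))
}
```

Because Go map iteration order is random, an import path matching two keys would return either technology; the real method may impose an ordering the diff does not show.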

View File

@@ -8,80 +8,79 @@ import (
    "sync"
    "time"

+   "chorus/pkg/crypto"
    slurpContext "chorus/pkg/slurp/context"
)

// RoleAwareProcessor provides role-based context processing and insight generation
type RoleAwareProcessor struct {
    mu               sync.RWMutex
    config           *EngineConfig
    roleManager      *RoleManager
    securityFilter   *SecurityFilter
    insightGenerator *InsightGenerator
    accessController *AccessController
    auditLogger      *AuditLogger
    permissions      *PermissionMatrix
-   roleProfiles     map[string]*RoleProfile
+   roleProfiles     map[string]*RoleBlueprint
}

// RoleManager manages role definitions and hierarchies
type RoleManager struct {
    roles        map[string]*Role
    hierarchies  map[string]*RoleHierarchy
    capabilities map[string]*RoleCapabilities
    restrictions map[string]*RoleRestrictions
}

// Role represents an AI agent role with specific permissions and capabilities
type Role struct {
    ID             string                 `json:"id"`
    Name           string                 `json:"name"`
    Description    string                 `json:"description"`
    SecurityLevel  int                    `json:"security_level"`
    Capabilities   []string               `json:"capabilities"`
    Restrictions   []string               `json:"restrictions"`
    AccessPatterns []string               `json:"access_patterns"`
    ContextFilters []string               `json:"context_filters"`
    Priority       int                    `json:"priority"`
    ParentRoles    []string               `json:"parent_roles"`
    ChildRoles     []string               `json:"child_roles"`
    Metadata       map[string]interface{} `json:"metadata"`
    CreatedAt      time.Time              `json:"created_at"`
    UpdatedAt      time.Time              `json:"updated_at"`
    IsActive       bool                   `json:"is_active"`
}

// RoleHierarchy defines role inheritance and relationships
type RoleHierarchy struct {
    ParentRole    string   `json:"parent_role"`
    ChildRoles    []string `json:"child_roles"`
    InheritLevel  int      `json:"inherit_level"`
    OverrideRules []string `json:"override_rules"`
}

// RoleCapabilities defines what a role can do
type RoleCapabilities struct {
    RoleID              string   `json:"role_id"`
    ReadAccess          []string `json:"read_access"`
    WriteAccess         []string `json:"write_access"`
    ExecuteAccess       []string `json:"execute_access"`
    AnalysisTypes       []string `json:"analysis_types"`
    InsightLevels       []string `json:"insight_levels"`
    SecurityScopes      []string `json:"security_scopes"`
    DataClassifications []string `json:"data_classifications"`
}

// RoleRestrictions defines what a role cannot do or access
type RoleRestrictions struct {
    RoleID            string     `json:"role_id"`
    ForbiddenPaths    []string   `json:"forbidden_paths"`
    ForbiddenTypes    []string   `json:"forbidden_types"`
    ForbiddenKeywords []string   `json:"forbidden_keywords"`
    TimeRestrictions  []string   `json:"time_restrictions"`
    RateLimit         *RateLimit `json:"rate_limit"`
    MaxContextSize    int        `json:"max_context_size"`
    MaxInsights       int        `json:"max_insights"`
}

// RateLimit defines rate limiting for role operations
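`RoleRestrictions` is declarative; the diff does not show the enforcement path. A hedged sketch of how a checker might consult it — `violates` is a hypothetical helper, and plain prefix matching stands in for whatever glob matching the real processor uses for patterns like `security/**`:

```go
package main

import (
	"fmt"
	"strings"
)

// RoleRestrictions trimmed to the fields this sketch consults.
type RoleRestrictions struct {
	ForbiddenPaths    []string
	ForbiddenKeywords []string
	MaxContextSize    int
}

// violates reports whether the given path/content trips any restriction.
// Prefix matching approximates glob patterns ending in "**".
func violates(r *RoleRestrictions, path, content string) bool {
	for _, p := range r.ForbiddenPaths {
		if strings.HasPrefix(path, strings.TrimSuffix(p, "**")) {
			return true
		}
	}
	lower := strings.ToLower(content)
	for _, kw := range r.ForbiddenKeywords {
		if strings.Contains(lower, kw) {
			return true
		}
	}
	return r.MaxContextSize > 0 && len(content) > r.MaxContextSize
}

func main() {
	r := &RoleRestrictions{
		ForbiddenPaths:    []string{"security/**"},
		ForbiddenKeywords: []string{"password"},
		MaxContextSize:    100000,
	}
	fmt.Println(violates(r, "security/keys.go", ""))
}
```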
@@ -111,9 +110,9 @@ type ContentFilter struct {
// AccessMatrix defines access control rules
type AccessMatrix struct {
    Rules       map[string]*AccessRule `json:"rules"`
    DefaultDeny bool                   `json:"default_deny"`
    LastUpdated time.Time              `json:"last_updated"`
}

// AccessRule defines a specific access control rule
@@ -144,14 +143,14 @@ type RoleInsightGenerator interface {
// InsightTemplate defines templates for generating insights
type InsightTemplate struct {
    TemplateID string                 `json:"template_id"`
    Name       string                 `json:"name"`
    Template   string                 `json:"template"`
    Variables  []string               `json:"variables"`
    Roles      []string               `json:"roles"`
    Category   string                 `json:"category"`
    Priority   int                    `json:"priority"`
    Metadata   map[string]interface{} `json:"metadata"`
}

// InsightFilter filters insights based on role permissions
@@ -179,39 +178,39 @@ type PermissionMatrix struct {
// RolePermissions defines permissions for a specific role
type RolePermissions struct {
    RoleID         string                 `json:"role_id"`
    ContextAccess  *ContextAccessRights   `json:"context_access"`
    AnalysisAccess *AnalysisAccessRights  `json:"analysis_access"`
    InsightAccess  *InsightAccessRights   `json:"insight_access"`
    SystemAccess   *SystemAccessRights    `json:"system_access"`
    CustomAccess   map[string]interface{} `json:"custom_access"`
}

// ContextAccessRights defines context-related access rights
type ContextAccessRights struct {
    ReadLevel        int      `json:"read_level"`
    WriteLevel       int      `json:"write_level"`
    AllowedTypes     []string `json:"allowed_types"`
    ForbiddenTypes   []string `json:"forbidden_types"`
    PathRestrictions []string `json:"path_restrictions"`
    SizeLimit        int      `json:"size_limit"`
}

// AnalysisAccessRights defines analysis-related access rights
type AnalysisAccessRights struct {
    AllowedAnalysisTypes []string      `json:"allowed_analysis_types"`
    MaxComplexity        int           `json:"max_complexity"`
    TimeoutLimit         time.Duration `json:"timeout_limit"`
    ResourceLimit        int           `json:"resource_limit"`
}

// InsightAccessRights defines insight-related access rights
type InsightAccessRights struct {
    GenerationLevel     int      `json:"generation_level"`
    AccessLevel         int      `json:"access_level"`
    CategoryFilters     []string `json:"category_filters"`
    ConfidenceThreshold float64  `json:"confidence_threshold"`
    MaxInsights         int      `json:"max_insights"`
}

// SystemAccessRights defines system-level access rights
@@ -254,15 +253,15 @@ type AuditLogger struct {
// AuditEntry represents an audit log entry
type AuditEntry struct {
    ID            string                 `json:"id"`
    Timestamp     time.Time              `json:"timestamp"`
    RoleID        string                 `json:"role_id"`
    Action        string                 `json:"action"`
    Resource      string                 `json:"resource"`
    Result        string                 `json:"result"` // success, denied, error
    Details       string                 `json:"details"`
    Context       map[string]interface{} `json:"context"`
    SecurityLevel int                    `json:"security_level"`
}

// AuditConfig defines audit logging configuration
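The `logAccess` calls later in the diff pass `(roleID, action, resource, result, details)`, which maps naturally onto `AuditEntry`. A sketch of a constructor matching that implied shape — `newAuditEntry` is hypothetical, and the struct is trimmed to the fields exercised here:

```go
package main

import (
	"fmt"
	"time"
)

// AuditEntry reduced to the fields used in this sketch. Result follows the
// success/denied/error convention noted on the full struct.
type AuditEntry struct {
	Timestamp time.Time
	RoleID    string
	Action    string
	Resource  string
	Result    string
	Details   string
}

// newAuditEntry builds an entry in the argument order logAccess implies.
func newAuditEntry(roleID, action, resource, result, details string) AuditEntry {
	return AuditEntry{
		Timestamp: time.Now(),
		RoleID:    roleID,
		Action:    action,
		Resource:  resource,
		Result:    result,
		Details:   details,
	}
}

func main() {
	e := newAuditEntry("developer", "context:process", "src/main.go", "success", "processed with 3 insights")
	fmt.Println(e.RoleID, e.Action, e.Result)
}
```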
@@ -276,49 +275,49 @@ type AuditConfig struct {
}

// RoleProfile contains comprehensive role configuration
-type RoleProfile struct {
+type RoleBlueprint struct {
    Role           *Role               `json:"role"`
    Capabilities   *RoleCapabilities   `json:"capabilities"`
    Restrictions   *RoleRestrictions   `json:"restrictions"`
    Permissions    *RolePermissions    `json:"permissions"`
    InsightConfig  *RoleInsightConfig  `json:"insight_config"`
    SecurityConfig *RoleSecurityConfig `json:"security_config"`
}

// RoleInsightConfig defines insight generation configuration for a role
type RoleInsightConfig struct {
    EnabledGenerators   []string           `json:"enabled_generators"`
    MaxInsights         int                `json:"max_insights"`
    ConfidenceThreshold float64            `json:"confidence_threshold"`
    CategoryWeights     map[string]float64 `json:"category_weights"`
    CustomFilters       []string           `json:"custom_filters"`
}

// RoleSecurityConfig defines security configuration for a role
type RoleSecurityConfig struct {
    EncryptionRequired bool       `json:"encryption_required"`
    AccessLogging      bool       `json:"access_logging"`
    RateLimit          *RateLimit `json:"rate_limit"`
    IPWhitelist        []string   `json:"ip_whitelist"`
    RequiredClaims     []string   `json:"required_claims"`
}

// RoleSpecificInsight represents an insight tailored to a specific role
type RoleSpecificInsight struct {
    ID            string                 `json:"id"`
    RoleID        string                 `json:"role_id"`
    Category      string                 `json:"category"`
    Title         string                 `json:"title"`
    Content       string                 `json:"content"`
    Confidence    float64                `json:"confidence"`
    Priority      int                    `json:"priority"`
    SecurityLevel int                    `json:"security_level"`
    Tags          []string               `json:"tags"`
    ActionItems   []string               `json:"action_items"`
    References    []string               `json:"references"`
    Metadata      map[string]interface{} `json:"metadata"`
    GeneratedAt   time.Time              `json:"generated_at"`
    ExpiresAt     *time.Time             `json:"expires_at,omitempty"`
}

// NewRoleAwareProcessor creates a new role-aware processor
@@ -331,7 +330,7 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
        accessController: NewAccessController(),
        auditLogger:      NewAuditLogger(),
        permissions:      NewPermissionMatrix(),
-       roleProfiles:     make(map[string]*RoleProfile),
+       roleProfiles:     make(map[string]*RoleBlueprint),
    }

    // Initialize default roles
@@ -342,10 +341,10 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
// NewRoleManager creates a role manager with default roles
func NewRoleManager() *RoleManager {
    rm := &RoleManager{
        roles:        make(map[string]*Role),
        hierarchies:  make(map[string]*RoleHierarchy),
        capabilities: make(map[string]*RoleCapabilities),
        restrictions: make(map[string]*RoleRestrictions),
    }

    // Initialize with default roles
@@ -383,12 +382,15 @@ func (rap *RoleAwareProcessor) ProcessContextForRole(ctx context.Context, node *
    // Apply insights to node
    if len(insights) > 0 {
-       filteredNode.RoleSpecificInsights = insights
-       filteredNode.ProcessedForRole = roleID
+       if filteredNode.Metadata == nil {
+           filteredNode.Metadata = make(map[string]interface{})
+       }
+       filteredNode.Metadata["role_specific_insights"] = insights
+       filteredNode.Metadata["processed_for_role"] = roleID
    }

    // Log successful processing
    rap.auditLogger.logAccess(roleID, "context:process", node.Path, "success",
        fmt.Sprintf("processed with %d insights", len(insights)))

    return filteredNode, nil
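The behavioral change in this hunk — moving role insights off dedicated struct fields and into the generic `Metadata` map, with a nil-map guard — can be reproduced in isolation. `ContextNode` is trimmed here to just the `Metadata` field; `attachInsights` is an illustrative helper, not the real method:

```go
package main

import "fmt"

// ContextNode trimmed to the Metadata field the diff writes to.
type ContextNode struct {
	Metadata map[string]interface{}
}

// attachInsights reproduces the pattern introduced above: guard against a
// nil map before storing role-scoped values under well-known keys.
func attachInsights(node *ContextNode, roleID string, insights []string) {
	if len(insights) == 0 {
		return
	}
	if node.Metadata == nil {
		node.Metadata = make(map[string]interface{})
	}
	node.Metadata["role_specific_insights"] = insights
	node.Metadata["processed_for_role"] = roleID
}

func main() {
	n := &ContextNode{} // Metadata starts nil; writing without the guard would panic
	attachInsights(n, "developer", []string{"uses GORM"})
	fmt.Println(n.Metadata["processed_for_role"])
}
```

The nil check matters because assigning into a nil Go map panics at runtime; nodes deserialized without metadata would otherwise crash the processor.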
@@ -413,7 +415,7 @@ func (rap *RoleAwareProcessor) GenerateRoleSpecificInsights(ctx context.Context,
return nil, err return nil, err
} }
rap.auditLogger.logAccess(roleID, "insight:generate", node.Path, "success", rap.auditLogger.logAccess(roleID, "insight:generate", node.Path, "success",
fmt.Sprintf("generated %d insights", len(insights))) fmt.Sprintf("generated %d insights", len(insights)))
return insights, nil return insights, nil
@@ -448,69 +450,69 @@ func (rap *RoleAwareProcessor) GetRoleCapabilities(roleID string) (*RoleCapabili
func (rap *RoleAwareProcessor) initializeDefaultRoles() { func (rap *RoleAwareProcessor) initializeDefaultRoles() {
defaultRoles := []*Role{ defaultRoles := []*Role{
{ {
ID: "architect", ID: "architect",
Name: "System Architect", Name: "System Architect",
Description: "High-level system design and architecture decisions", Description: "High-level system design and architecture decisions",
SecurityLevel: 8, SecurityLevel: 8,
Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"}, Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"},
Restrictions: []string{"no_implementation_details", "no_low_level_code"}, Restrictions: []string{"no_implementation_details", "no_low_level_code"},
AccessPatterns: []string{"architecture/**", "design/**", "docs/**"}, AccessPatterns: []string{"architecture/**", "design/**", "docs/**"},
Priority: 1, Priority: 1,
IsActive: true, IsActive: true,
CreatedAt: time.Now(), CreatedAt: time.Now(),
}, },
{ {
ID: "developer", ID: "developer",
Name: "Software Developer", Name: "Software Developer",
Description: "Code implementation and development tasks", Description: "Code implementation and development tasks",
SecurityLevel: 6, SecurityLevel: 6,
Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"}, Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"},
Restrictions: []string{"no_architecture_changes", "no_security_config"}, Restrictions: []string{"no_architecture_changes", "no_security_config"},
AccessPatterns: []string{"src/**", "lib/**", "test/**"}, AccessPatterns: []string{"src/**", "lib/**", "test/**"},
Priority: 2, Priority: 2,
IsActive: true, IsActive: true,
CreatedAt: time.Now(), CreatedAt: time.Now(),
}, },
{ {
ID: "security_analyst", ID: "security_analyst",
Name: "Security Analyst", Name: "Security Analyst",
Description: "Security analysis and vulnerability assessment", Description: "Security analysis and vulnerability assessment",
SecurityLevel: 9, SecurityLevel: 9,
Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"}, Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
Restrictions: []string{"no_code_modification"}, Restrictions: []string{"no_code_modification"},
AccessPatterns: []string{"**/*"}, AccessPatterns: []string{"**/*"},
Priority: 1, Priority: 1,
IsActive: true, IsActive: true,
CreatedAt: time.Now(), CreatedAt: time.Now(),
}, },
{ {
ID: "devops_engineer", ID: "devops_engineer",
Name: "DevOps Engineer", Name: "DevOps Engineer",
Description: "Infrastructure and deployment operations", Description: "Infrastructure and deployment operations",
SecurityLevel: 7, SecurityLevel: 7,
Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"}, Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
Restrictions: []string{"no_business_logic"}, Restrictions: []string{"no_business_logic"},
AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"}, AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"},
Priority: 2, Priority: 2,
IsActive: true, IsActive: true,
CreatedAt: time.Now(), CreatedAt: time.Now(),
}, },
{ {
ID: "qa_engineer", ID: "qa_engineer",
Name: "Quality Assurance Engineer", Name: "Quality Assurance Engineer",
Description: "Quality assurance and testing", Description: "Quality assurance and testing",
SecurityLevel: 5, SecurityLevel: 5,
Capabilities: []string{"quality_analysis", "testing", "test_planning"}, Capabilities: []string{"quality_analysis", "testing", "test_planning"},
Restrictions: []string{"no_production_access", "no_code_modification"}, Restrictions: []string{"no_production_access", "no_code_modification"},
AccessPatterns: []string{"test/**", "spec/**", "qa/**"}, AccessPatterns: []string{"test/**", "spec/**", "qa/**"},
Priority: 3, Priority: 3,
IsActive: true, IsActive: true,
CreatedAt: time.Now(), CreatedAt: time.Now(),
}, },
} }
for _, role := range defaultRoles { for _, role := range defaultRoles {
rap.roleProfiles[role.ID] = &RoleProfile{ rap.roleProfiles[role.ID] = &RoleBlueprint{
Role: role, Role: role,
Capabilities: rap.createDefaultCapabilities(role), Capabilities: rap.createDefaultCapabilities(role),
Restrictions: rap.createDefaultRestrictions(role), Restrictions: rap.createDefaultRestrictions(role),
@@ -540,23 +542,23 @@ func (rap *RoleAwareProcessor) createDefaultCapabilities(role *Role) *RoleCapabi
baseCapabilities.ExecuteAccess = []string{"design_tools", "modeling"} baseCapabilities.ExecuteAccess = []string{"design_tools", "modeling"}
baseCapabilities.InsightLevels = []string{"strategic", "architectural", "high_level"} baseCapabilities.InsightLevels = []string{"strategic", "architectural", "high_level"}
baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential"} baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential"}
case "developer": case "developer":
baseCapabilities.WriteAccess = []string{"src/**", "test/**"} baseCapabilities.WriteAccess = []string{"src/**", "test/**"}
baseCapabilities.ExecuteAccess = []string{"compile", "test", "debug"} baseCapabilities.ExecuteAccess = []string{"compile", "test", "debug"}
baseCapabilities.InsightLevels = []string{"implementation", "code_quality", "performance"} baseCapabilities.InsightLevels = []string{"implementation", "code_quality", "performance"}
case "security_analyst": case "security_analyst":
baseCapabilities.ReadAccess = []string{"**/*"} baseCapabilities.ReadAccess = []string{"**/*"}
baseCapabilities.InsightLevels = []string{"security", "vulnerability", "compliance"} baseCapabilities.InsightLevels = []string{"security", "vulnerability", "compliance"}
baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential", "secret"} baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential", "secret"}
baseCapabilities.DataClassifications = []string{"public", "internal", "confidential", "restricted"} baseCapabilities.DataClassifications = []string{"public", "internal", "confidential", "restricted"}
case "devops_engineer": case "devops_engineer":
baseCapabilities.WriteAccess = []string{"infra/**", "deploy/**", "config/**"} baseCapabilities.WriteAccess = []string{"infra/**", "deploy/**", "config/**"}
baseCapabilities.ExecuteAccess = []string{"deploy", "configure", "monitor"} baseCapabilities.ExecuteAccess = []string{"deploy", "configure", "monitor"}
baseCapabilities.InsightLevels = []string{"infrastructure", "deployment", "monitoring"} baseCapabilities.InsightLevels = []string{"infrastructure", "deployment", "monitoring"}
case "qa_engineer": case "qa_engineer":
baseCapabilities.WriteAccess = []string{"test/**", "qa/**"} baseCapabilities.WriteAccess = []string{"test/**", "qa/**"}
baseCapabilities.ExecuteAccess = []string{"test", "validate"} baseCapabilities.ExecuteAccess = []string{"test", "validate"}
@@ -587,21 +589,21 @@ func (rap *RoleAwareProcessor) createDefaultRestrictions(role *Role) *RoleRestri
// Architects have fewer restrictions // Architects have fewer restrictions
baseRestrictions.MaxContextSize = 50000 baseRestrictions.MaxContextSize = 50000
baseRestrictions.MaxInsights = 100 baseRestrictions.MaxInsights = 100
case "developer": case "developer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "architecture/**", "security/**") baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "architecture/**", "security/**")
baseRestrictions.ForbiddenTypes = []string{"security_config", "deployment_config"} baseRestrictions.ForbiddenTypes = []string{"security_config", "deployment_config"}
case "security_analyst": case "security_analyst":
// Security analysts have minimal path restrictions but keyword restrictions // Security analysts have minimal path restrictions but keyword restrictions
baseRestrictions.ForbiddenPaths = []string{"temp/**"} baseRestrictions.ForbiddenPaths = []string{"temp/**"}
baseRestrictions.ForbiddenKeywords = []string{"password", "secret", "key"} baseRestrictions.ForbiddenKeywords = []string{"password", "secret", "key"}
baseRestrictions.MaxContextSize = 100000 baseRestrictions.MaxContextSize = 100000
case "devops_engineer": case "devops_engineer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**") baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**")
baseRestrictions.ForbiddenTypes = []string{"business_logic", "user_data"} baseRestrictions.ForbiddenTypes = []string{"business_logic", "user_data"}
case "qa_engineer": case "qa_engineer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**", "infra/**") baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**", "infra/**")
baseRestrictions.ForbiddenTypes = []string{"production_config", "security_config"} baseRestrictions.ForbiddenTypes = []string{"production_config", "security_config"}
@@ -615,10 +617,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
return &RolePermissions{ return &RolePermissions{
RoleID: role.ID, RoleID: role.ID,
ContextAccess: &ContextAccessRights{ ContextAccess: &ContextAccessRights{
ReadLevel: role.SecurityLevel, ReadLevel: role.SecurityLevel,
WriteLevel: role.SecurityLevel - 2, WriteLevel: role.SecurityLevel - 2,
AllowedTypes: []string{"code", "documentation", "configuration"}, AllowedTypes: []string{"code", "documentation", "configuration"},
SizeLimit: 1000000, SizeLimit: 1000000,
}, },
AnalysisAccess: &AnalysisAccessRights{ AnalysisAccess: &AnalysisAccessRights{
AllowedAnalysisTypes: role.Capabilities, AllowedAnalysisTypes: role.Capabilities,
@@ -627,10 +629,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
ResourceLimit: 100, ResourceLimit: 100,
}, },
InsightAccess: &InsightAccessRights{ InsightAccess: &InsightAccessRights{
GenerationLevel: role.SecurityLevel, GenerationLevel: role.SecurityLevel,
AccessLevel: role.SecurityLevel, AccessLevel: role.SecurityLevel,
ConfidenceThreshold: 0.5, ConfidenceThreshold: 0.5,
MaxInsights: 50, MaxInsights: 50,
}, },
SystemAccess: &SystemAccessRights{ SystemAccess: &SystemAccessRights{
AdminAccess: role.SecurityLevel >= 8, AdminAccess: role.SecurityLevel >= 8,
@@ -660,26 +662,26 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
"scalability": 0.9, "scalability": 0.9,
} }
config.MaxInsights = 100 config.MaxInsights = 100
case "developer": case "developer":
config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"} config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"}
config.CategoryWeights = map[string]float64{ config.CategoryWeights = map[string]float64{
"code_quality": 1.0, "code_quality": 1.0,
"implementation": 0.9, "implementation": 0.9,
"bugs": 0.8, "bugs": 0.8,
"performance": 0.6, "performance": 0.6,
} }
case "security_analyst": case "security_analyst":
config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"} config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"}
config.CategoryWeights = map[string]float64{ config.CategoryWeights = map[string]float64{
"security": 1.0, "security": 1.0,
"vulnerabilities": 1.0, "vulnerabilities": 1.0,
"compliance": 0.9, "compliance": 0.9,
"privacy": 0.8, "privacy": 0.8,
} }
config.MaxInsights = 200 config.MaxInsights = 200
case "devops_engineer": case "devops_engineer":
config.EnabledGenerators = []string{"infrastructure_insights", "deployment_analysis", "monitoring_suggestions"} config.EnabledGenerators = []string{"infrastructure_insights", "deployment_analysis", "monitoring_suggestions"}
config.CategoryWeights = map[string]float64{ config.CategoryWeights = map[string]float64{
@@ -688,7 +690,7 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
"monitoring": 0.8, "monitoring": 0.8,
"automation": 0.7, "automation": 0.7,
} }
case "qa_engineer": case "qa_engineer":
config.EnabledGenerators = []string{"quality_insights", "test_suggestions", "validation_analysis"} config.EnabledGenerators = []string{"quality_insights", "test_suggestions", "validation_analysis"}
config.CategoryWeights = map[string]float64{ config.CategoryWeights = map[string]float64{
@@ -751,7 +753,7 @@ func NewSecurityFilter() *SecurityFilter {
"top_secret": 10,
},
contentFilters: make(map[string]*ContentFilter),
accessMatrix: &AccessMatrix{
Rules: make(map[string]*AccessRule),
DefaultDeny: true,
LastUpdated: time.Now(),
@@ -765,7 +767,7 @@ func (sf *SecurityFilter) filterForRole(node *slurpContext.ContextNode, role *Ro
// Apply content filtering based on role security level
filtered.Summary = sf.filterContent(node.Summary, role)
filtered.Purpose = sf.filterContent(node.Purpose, role)
// Filter insights based on role access level
filteredInsights := []string{}
for _, insight := range node.Insights {
@@ -816,7 +818,7 @@ func (sf *SecurityFilter) filterContent(content string, role *Role) string {
func (sf *SecurityFilter) canAccessInsight(insight string, role *Role) bool {
// Check if role can access this type of insight
lowerInsight := strings.ToLower(insight)
// Security analysts can see all insights
if role.ID == "security_analyst" {
return true
@@ -849,20 +851,20 @@ func (sf *SecurityFilter) canAccessInsight(insight string, role *Role) bool {
func (sf *SecurityFilter) filterTechnologies(technologies []string, role *Role) []string {
filtered := []string{}
for _, tech := range technologies {
if sf.canAccessTechnology(tech, role) {
filtered = append(filtered, tech)
}
}
return filtered
}
func (sf *SecurityFilter) canAccessTechnology(technology string, role *Role) bool {
// Role-specific technology access rules
lowerTech := strings.ToLower(technology)
switch role.ID {
case "qa_engineer":
// QA engineers shouldn't see infrastructure technologies
@@ -881,26 +883,26 @@ func (sf *SecurityFilter) canAccessTechnology(technology string, role *Role) boo
}
}
}
return true
}
func (sf *SecurityFilter) filterTags(tags []string, role *Role) []string {
filtered := []string{}
for _, tag := range tags {
if sf.canAccessTag(tag, role) {
filtered = append(filtered, tag)
}
}
return filtered
}
func (sf *SecurityFilter) canAccessTag(tag string, role *Role) bool {
// Simple tag filtering based on role
lowerTag := strings.ToLower(tag)
// Security-related tags only for security analysts and architects
securityTags := []string{"security", "vulnerability", "encryption", "authentication"}
for _, secTag := range securityTags {
@@ -908,7 +910,7 @@ func (sf *SecurityFilter) canAccessTag(tag string, role *Role) bool {
return false
}
}
return true
}
@@ -968,7 +970,7 @@ func (ig *InsightGenerator) generateForRole(ctx context.Context, node *slurpCont
func (ig *InsightGenerator) applyRoleFilters(insights []*RoleSpecificInsight, role *Role) []*RoleSpecificInsight {
filtered := []*RoleSpecificInsight{}
for _, insight := range insights {
// Check security level
if insight.SecurityLevel > role.SecurityLevel {
@@ -1174,6 +1176,7 @@ func (al *AuditLogger) GetAuditLog(limit int) []*AuditEntry {
// These would be fully implemented with sophisticated logic in production
type ArchitectInsightGenerator struct{}
func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1191,10 +1194,15 @@ func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node
}, nil
}
func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string {
return []string{"architecture", "design", "patterns"}
}
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type DeveloperInsightGenerator struct{}
func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1212,10 +1220,15 @@ func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node
}, nil
}
func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string {
return []string{"code_quality", "implementation", "bugs"}
}
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type SecurityInsightGenerator struct{}
func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1232,11 +1245,18 @@ func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node
},
}, nil
}
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string {
return []string{"security_analyst"}
}
func (sig *SecurityInsightGenerator) GetInsightTypes() []string {
return []string{"security", "vulnerability", "compliance"}
}
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type DevOpsInsightGenerator struct{}
func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1254,10 +1274,15 @@ func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *
}, nil
}
func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string {
return []string{"infrastructure", "deployment", "monitoring"}
}
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type QAInsightGenerator struct{}
func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1275,5 +1300,9 @@ func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slur
}, nil
}
func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
func (qaig *QAInsightGenerator) GetInsightTypes() []string {
return []string{"quality", "testing", "validation"}
}
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
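The tag-gating rule in `canAccessTag` above can be exercised as a standalone sketch. `Role` here is a hypothetical pared-down stand-in for the SLURP role type, and the security/architect allow-list mirrors the filter's comment; the real types live in the CHORUS SLURP packages.

```go
package main

import (
	"fmt"
	"strings"
)

// Role is a minimal stand-in for the SLURP role type (assumption).
type Role struct{ ID string }

// canAccessTag mirrors the filter above: tags containing a security-related
// keyword are visible only to security analysts and architects.
func canAccessTag(tag string, role *Role) bool {
	securityTags := []string{"security", "vulnerability", "encryption", "authentication"}
	lowerTag := strings.ToLower(tag)
	for _, secTag := range securityTags {
		if strings.Contains(lowerTag, secTag) {
			return role.ID == "security_analyst" || role.ID == "architect"
		}
	}
	return true
}

func main() {
	qa := &Role{ID: "qa_engineer"}
	sec := &Role{ID: "security_analyst"}
	fmt.Println(canAccessTag("Encryption-at-rest", qa))  // false
	fmt.Println(canAccessTag("Encryption-at-rest", sec)) // true
	fmt.Println(canAccessTag("performance", qa))         // true
}
```

Note the case-insensitive match via `strings.ToLower`, which is why a mixed-case tag like "Encryption-at-rest" is still gated.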

View File

@@ -6,236 +6,236 @@ import (
// FileMetadata represents metadata extracted from file system
type FileMetadata struct {
Path string `json:"path"` // File path
Size int64 `json:"size"` // File size in bytes
ModTime time.Time `json:"mod_time"` // Last modification time
Mode uint32 `json:"mode"` // File mode
IsDir bool `json:"is_dir"` // Whether it's a directory
Extension string `json:"extension"` // File extension
MimeType string `json:"mime_type"` // MIME type
Hash string `json:"hash"` // Content hash
Permissions string `json:"permissions"` // File permissions
}
// StructureAnalysis represents analysis of code structure
type StructureAnalysis struct {
Architecture string `json:"architecture"` // Architectural pattern
Patterns []string `json:"patterns"` // Design patterns used
Components []*Component `json:"components"` // Code components
Relationships []*Relationship `json:"relationships"` // Component relationships
Complexity *ComplexityMetrics `json:"complexity"` // Complexity metrics
QualityMetrics *QualityMetrics `json:"quality_metrics"` // Code quality metrics
TestCoverage float64 `json:"test_coverage"` // Test coverage percentage
Documentation *DocMetrics `json:"documentation"` // Documentation metrics
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// Component represents a code component
type Component struct {
Name string `json:"name"` // Component name
Type string `json:"type"` // Component type (class, function, etc.)
Purpose string `json:"purpose"` // Component purpose
Visibility string `json:"visibility"` // Visibility (public, private, etc.)
Lines int `json:"lines"` // Lines of code
Complexity int `json:"complexity"` // Cyclomatic complexity
Dependencies []string `json:"dependencies"` // Dependencies
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// Relationship represents a relationship between components
type Relationship struct {
From string `json:"from"` // Source component
To string `json:"to"` // Target component
Type string `json:"type"` // Relationship type
Strength float64 `json:"strength"` // Relationship strength (0-1)
Direction string `json:"direction"` // Direction (unidirectional, bidirectional)
Description string `json:"description"` // Relationship description
}
// ComplexityMetrics represents code complexity metrics
type ComplexityMetrics struct {
Cyclomatic float64 `json:"cyclomatic"` // Cyclomatic complexity
Cognitive float64 `json:"cognitive"` // Cognitive complexity
Halstead float64 `json:"halstead"` // Halstead complexity
Maintainability float64 `json:"maintainability"` // Maintainability index
TechnicalDebt float64 `json:"technical_debt"` // Technical debt estimate
}
// QualityMetrics represents code quality metrics
type QualityMetrics struct {
Readability float64 `json:"readability"` // Readability score
Testability float64 `json:"testability"` // Testability score
Reusability float64 `json:"reusability"` // Reusability score
Reliability float64 `json:"reliability"` // Reliability score
Security float64 `json:"security"` // Security score
Performance float64 `json:"performance"` // Performance score
Duplication float64 `json:"duplication"` // Code duplication percentage
Consistency float64 `json:"consistency"` // Code consistency score
}
// DocMetrics represents documentation metrics
type DocMetrics struct {
Coverage float64 `json:"coverage"` // Documentation coverage
Quality float64 `json:"quality"` // Documentation quality
CommentRatio float64 `json:"comment_ratio"` // Comment to code ratio
APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
ExampleCount int `json:"example_count"` // Number of examples
TODOCount int `json:"todo_count"` // Number of TODO comments
FIXMECount int `json:"fixme_count"` // Number of FIXME comments
}
// DirectoryStructure represents analysis of directory organization
type DirectoryStructure struct {
Path string `json:"path"` // Directory path
FileCount int `json:"file_count"` // Number of files
DirectoryCount int `json:"directory_count"` // Number of subdirectories
TotalSize int64 `json:"total_size"` // Total size in bytes
FileTypes map[string]int `json:"file_types"` // File type distribution
Languages map[string]int `json:"languages"` // Language distribution
Organization *OrganizationInfo `json:"organization"` // Organization information
Conventions *ConventionInfo `json:"conventions"` // Convention information
Dependencies []string `json:"dependencies"` // Directory dependencies
Purpose string `json:"purpose"` // Directory purpose
Architecture string `json:"architecture"` // Architectural pattern
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// OrganizationInfo represents directory organization information
type OrganizationInfo struct {
Pattern string `json:"pattern"` // Organization pattern
Consistency float64 `json:"consistency"` // Organization consistency
Depth int `json:"depth"` // Directory depth
FanOut int `json:"fan_out"` // Average fan-out
Modularity float64 `json:"modularity"` // Modularity score
Cohesion float64 `json:"cohesion"` // Cohesion score
Coupling float64 `json:"coupling"` // Coupling score
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// ConventionInfo represents naming and organizational conventions
type ConventionInfo struct {
NamingStyle string `json:"naming_style"` // Naming convention style
FileNaming string `json:"file_naming"` // File naming pattern
DirectoryNaming string `json:"directory_naming"` // Directory naming pattern
Consistency float64 `json:"consistency"` // Convention consistency
Violations []*Violation `json:"violations"` // Convention violations
Standards []string `json:"standards"` // Applied standards
}
// Violation represents a convention violation
type Violation struct {
Type string `json:"type"` // Violation type
Path string `json:"path"` // Violating path
Expected string `json:"expected"` // Expected format
Actual string `json:"actual"` // Actual format
Severity string `json:"severity"` // Violation severity
Suggestion string `json:"suggestion"` // Suggested fix
}
// ConventionAnalysis represents analysis of naming and organizational conventions
type ConventionAnalysis struct {
NamingPatterns []*NamingPattern `json:"naming_patterns"` // Detected naming patterns
OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
Consistency float64 `json:"consistency"` // Overall consistency score
Violations []*Violation `json:"violations"` // Convention violations
Recommendations []*BasicRecommendation `json:"recommendations"` // Improvement recommendations
AppliedStandards []string `json:"applied_standards"` // Applied coding standards
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// RelationshipAnalysis represents analysis of directory relationships
type RelationshipAnalysis struct {
Dependencies []*DirectoryDependency `json:"dependencies"` // Directory dependencies
Relationships []*DirectoryRelation `json:"relationships"` // Directory relationships
CouplingMetrics *CouplingMetrics `json:"coupling_metrics"` // Coupling metrics
ModularityScore float64 `json:"modularity_score"` // Modularity score
ArchitecturalStyle string `json:"architectural_style"` // Architectural style
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// DirectoryDependency represents a dependency between directories
type DirectoryDependency struct {
From string `json:"from"` // Source directory
To string `json:"to"` // Target directory
Type string `json:"type"` // Dependency type
Strength float64 `json:"strength"` // Dependency strength
Reason string `json:"reason"` // Reason for dependency
FileCount int `json:"file_count"` // Number of files involved
}
// DirectoryRelation represents a relationship between directories
type DirectoryRelation struct {
Directory1 string `json:"directory1"` // First directory
Directory2 string `json:"directory2"` // Second directory
Type string `json:"type"` // Relation type
Strength float64 `json:"strength"` // Relation strength
Description string `json:"description"` // Relation description
Bidirectional bool `json:"bidirectional"` // Whether relation is bidirectional
}
// CouplingMetrics represents coupling metrics between directories
type CouplingMetrics struct {
AfferentCoupling float64 `json:"afferent_coupling"` // Afferent coupling
EfferentCoupling float64 `json:"efferent_coupling"` // Efferent coupling
Instability float64 `json:"instability"` // Instability metric
Abstractness float64 `json:"abstractness"` // Abstractness metric
DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
}
// Pattern represents a detected pattern in code or organization
type Pattern struct {
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Pattern name
Type string `json:"type"` // Pattern type
Description string `json:"description"` // Pattern description
Confidence float64 `json:"confidence"` // Detection confidence
Frequency int `json:"frequency"` // Pattern frequency
Examples []string `json:"examples"` // Example instances
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
Benefits []string `json:"benefits"` // Pattern benefits
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
}
// CodePattern represents a code-specific pattern
type CodePattern struct {
Pattern // Embedded base pattern
Language string `json:"language"` // Programming language
Framework string `json:"framework"` // Framework context
Complexity float64 `json:"complexity"` // Pattern complexity
Usage *UsagePattern `json:"usage"` // Usage pattern
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
}
// NamingPattern represents a naming convention pattern
type NamingPattern struct {
Pattern // Embedded base pattern
Convention string `json:"convention"` // Naming convention
Scope string `json:"scope"` // Pattern scope
Regex string `json:"regex"` // Regex pattern
CaseStyle string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
Prefix string `json:"prefix"` // Common prefix
Suffix string `json:"suffix"` // Common suffix
}
// OrganizationalPattern represents an organizational pattern // OrganizationalPattern represents an organizational pattern
type OrganizationalPattern struct { type OrganizationalPattern struct {
Pattern // Embedded base pattern Pattern // Embedded base pattern
Structure string `json:"structure"` // Organizational structure Structure string `json:"structure"` // Organizational structure
Depth int `json:"depth"` // Typical depth Depth int `json:"depth"` // Typical depth
FanOut int `json:"fan_out"` // Typical fan-out FanOut int `json:"fan_out"` // Typical fan-out
Modularity float64 `json:"modularity"` // Modularity characteristics Modularity float64 `json:"modularity"` // Modularity characteristics
Scalability string `json:"scalability"` // Scalability characteristics Scalability string `json:"scalability"` // Scalability characteristics
} }
// UsagePattern represents how a pattern is typically used // UsagePattern represents how a pattern is typically used
type UsagePattern struct { type UsagePattern struct {
Frequency string `json:"frequency"` // Usage frequency Frequency string `json:"frequency"` // Usage frequency
Context []string `json:"context"` // Usage contexts Context []string `json:"context"` // Usage contexts
Prerequisites []string `json:"prerequisites"` // Prerequisites Prerequisites []string `json:"prerequisites"` // Prerequisites
Alternatives []string `json:"alternatives"` // Alternative patterns Alternatives []string `json:"alternatives"` // Alternative patterns
Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
} }
// PerformanceInfo represents performance characteristics of a pattern // PerformanceInfo represents performance characteristics of a pattern
@@ -249,12 +249,12 @@ type PerformanceInfo struct {
// PatternMatch represents a match between context and a pattern
type PatternMatch struct {
	PatternID     string   `json:"pattern_id"`     // Pattern identifier
	MatchScore    float64  `json:"match_score"`    // Match score (0-1)
	Confidence    float64  `json:"confidence"`     // Match confidence
	MatchedFields []string `json:"matched_fields"` // Fields that matched
	Explanation   string   `json:"explanation"`    // Match explanation
	Suggestions   []string `json:"suggestions"`    // Improvement suggestions
}

// ValidationResult represents context validation results
@@ -269,12 +269,12 @@ type ValidationResult struct {
// ValidationIssue represents a validation issue
type ValidationIssue struct {
	Type       string  `json:"type"`       // Issue type
	Severity   string  `json:"severity"`   // Issue severity
	Message    string  `json:"message"`    // Issue message
	Field      string  `json:"field"`      // Affected field
	Suggestion string  `json:"suggestion"` // Suggested fix
	Impact     float64 `json:"impact"`     // Impact score
}

// Suggestion represents an improvement suggestion
@@ -289,61 +289,61 @@ type Suggestion struct {
}

// Recommendation represents an improvement recommendation
-type Recommendation struct {
+type BasicRecommendation struct {
	Type        string                 `json:"type"`        // Recommendation type
	Title       string                 `json:"title"`       // Recommendation title
	Description string                 `json:"description"` // Detailed description
	Priority    int                    `json:"priority"`    // Priority level
	Effort      string                 `json:"effort"`      // Effort required
	Impact      string                 `json:"impact"`      // Expected impact
	Steps       []string               `json:"steps"`       // Implementation steps
	Resources   []string               `json:"resources"`   // Required resources
	Metadata    map[string]interface{} `json:"metadata"`    // Additional metadata
}

// RAGResponse represents a response from the RAG system
type RAGResponse struct {
	Query       string                 `json:"query"`        // Original query
	Answer      string                 `json:"answer"`       // Generated answer
	Sources     []*RAGSource           `json:"sources"`      // Source documents
	Confidence  float64                `json:"confidence"`   // Response confidence
	Context     map[string]interface{} `json:"context"`      // Additional context
	ProcessedAt time.Time              `json:"processed_at"` // When processed
}

// RAGSource represents a source document from RAG system
type RAGSource struct {
	ID       string                 `json:"id"`       // Source identifier
	Title    string                 `json:"title"`    // Source title
	Content  string                 `json:"content"`  // Source content excerpt
	Score    float64                `json:"score"`    // Relevance score
	Metadata map[string]interface{} `json:"metadata"` // Source metadata
	URL      string                 `json:"url"`      // Source URL if available
}

// RAGResult represents a result from RAG similarity search
type RAGResult struct {
	ID         string                 `json:"id"`         // Result identifier
	Content    string                 `json:"content"`    // Content
	Score      float64                `json:"score"`      // Similarity score
	Metadata   map[string]interface{} `json:"metadata"`   // Result metadata
	Highlights []string               `json:"highlights"` // Content highlights
}

// RAGUpdate represents an update to the RAG index
type RAGUpdate struct {
	ID        string                 `json:"id"`        // Document identifier
	Content   string                 `json:"content"`   // Document content
	Metadata  map[string]interface{} `json:"metadata"`  // Document metadata
	Operation string                 `json:"operation"` // Operation type (add, update, delete)
}

// RAGStatistics represents RAG system statistics
type RAGStatistics struct {
	TotalDocuments   int64         `json:"total_documents"`    // Total indexed documents
	TotalQueries     int64         `json:"total_queries"`      // Total queries processed
	AverageQueryTime time.Duration `json:"average_query_time"` // Average query time
	IndexSize        int64         `json:"index_size"`         // Index size in bytes
	LastIndexUpdate  time.Time     `json:"last_index_update"`  // When index was last updated
	ErrorRate        float64       `json:"error_rate"`         // Error rate
}

View File

@@ -227,7 +227,7 @@ func (cau *ContentAnalysisUtils) extractGenericIdentifiers(content string) (func
// CalculateComplexity calculates code complexity based on various metrics
func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) float64 {
	complexity := 0.0
	// Lines of code (basic metric)
	lines := strings.Split(content, "\n")
	nonEmptyLines := 0
@@ -236,26 +236,26 @@ func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) f
			nonEmptyLines++
		}
	}
	// Base complexity from lines of code
	complexity += float64(nonEmptyLines) * 0.1
	// Control flow complexity (if, for, while, switch, etc.)
	controlFlowPatterns := []*regexp.Regexp{
		regexp.MustCompile(`\b(?:if|for|while|switch|case)\b`),
		regexp.MustCompile(`\b(?:try|catch|finally)\b`),
		regexp.MustCompile(`\?\s*.*\s*:`), // ternary operator
	}
	for _, pattern := range controlFlowPatterns {
		matches := pattern.FindAllString(content, -1)
		complexity += float64(len(matches)) * 0.5
	}
	// Function complexity
	functions, _, _ := cau.ExtractIdentifiers(content, language)
	complexity += float64(len(functions)) * 0.3
	// Nesting level (simple approximation)
	maxNesting := 0
	currentNesting := 0
@@ -269,7 +269,7 @@ func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) f
		}
	}
	complexity += float64(maxNesting) * 0.2
	// Normalize to 0-10 scale
	return math.Min(10.0, complexity/10.0)
}
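The heuristic above can be sketched as a standalone program. This is a simplified, hypothetical re-implementation for illustration only: the function-count term is dropped (because `ExtractIdentifiers` is defined elsewhere) and nesting is approximated by brace depth, so the numbers will not match `CalculateComplexity` exactly.

```go
package main

import (
	"fmt"
	"math"
	"regexp"
	"strings"
)

// complexityScore is a simplified sketch of the heuristic: non-empty lines,
// control-flow keywords, and brace-nesting depth each add a weighted
// contribution, normalized to a 0-10 scale.
func complexityScore(content string) float64 {
	complexity := 0.0
	for _, line := range strings.Split(content, "\n") {
		if strings.TrimSpace(line) != "" {
			complexity += 0.1 // base cost per non-empty line
		}
	}
	controlFlow := regexp.MustCompile(`\b(?:if|for|while|switch|case)\b`)
	complexity += float64(len(controlFlow.FindAllString(content, -1))) * 0.5
	maxNesting, current := 0, 0
	for _, ch := range content {
		switch ch {
		case '{':
			current++
			if current > maxNesting {
				maxNesting = current
			}
		case '}':
			current--
		}
	}
	complexity += float64(maxNesting) * 0.2
	return math.Min(10.0, complexity/10.0)
}

func main() {
	snippet := "if x > 0 {\n\tfor i := 0; i < x; i++ {\n\t\ty += i\n\t}\n}"
	// 5 non-empty lines (0.5) + 2 keywords (1.0) + nesting 2 (0.4) = 1.9 → 0.19
	fmt.Printf("%.2f\n", complexityScore(snippet))
}
```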
@@ -279,66 +279,66 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
	technologies := []string{}
	lowerContent := strings.ToLower(content)
	ext := strings.ToLower(filepath.Ext(filename))
	// Language detection
	languageMap := map[string][]string{
		".go":    {"go", "golang"},
		".py":    {"python"},
		".js":    {"javascript", "node.js"},
		".jsx":   {"javascript", "react", "jsx"},
		".ts":    {"typescript"},
		".tsx":   {"typescript", "react", "jsx"},
		".java":  {"java"},
		".kt":    {"kotlin"},
		".rs":    {"rust"},
		".cpp":   {"c++"},
		".c":     {"c"},
		".cs":    {"c#", ".net"},
		".php":   {"php"},
		".rb":    {"ruby"},
		".swift": {"swift"},
		".scala": {"scala"},
		".clj":   {"clojure"},
		".hs":    {"haskell"},
		".ml":    {"ocaml"},
	}
	if langs, exists := languageMap[ext]; exists {
		technologies = append(technologies, langs...)
	}
	// Framework and library detection
	frameworkPatterns := map[string][]string{
		"react":         {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
		"vue":           {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
		"angular":       {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
		"express":       {"import.*express", "require.*express", "app.get", "app.post"},
		"django":        {"from django", "import django", "django.db", "models.model"},
		"flask":         {"from flask", "import flask", "@app.route", "flask.request"},
		"spring":        {"@springboot", "@controller", "@service", "@repository"},
		"hibernate":     {"@entity", "@table", "@column", "hibernate"},
		"jquery":        {"$\\(", "jquery"},
		"bootstrap":     {"bootstrap", "btn-", "col-", "row"},
		"docker":        {"dockerfile", "docker-compose", "from.*:", "run.*"},
		"kubernetes":    {"apiversion:", "kind:", "metadata:", "spec:"},
		"terraform":     {"\\.tf$", "resource \"", "provider \"", "terraform"},
		"ansible":       {"\\.yml$", "hosts:", "tasks:", "playbook"},
		"jenkins":       {"jenkinsfile", "pipeline", "stage", "steps"},
		"git":           {"\\.git", "git add", "git commit", "git push"},
		"mysql":         {"mysql", "select.*from", "insert into", "create table"},
		"postgresql":    {"postgresql", "postgres", "psql"},
		"mongodb":       {"mongodb", "mongo", "find\\(", "insert\\("},
		"redis":         {"redis", "set.*", "get.*", "rpush"},
		"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
		"graphql":       {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
		"grpc":          {"grpc", "proto", "service.*rpc", "\\.proto$"},
		"websocket":     {"websocket", "ws://", "wss://", "socket.io"},
		"jwt":           {"jwt", "jsonwebtoken", "bearer.*token"},
		"oauth":         {"oauth", "oauth2", "client_id", "client_secret"},
		"ssl":           {"ssl", "tls", "https", "certificate"},
		"encryption":    {"encrypt", "decrypt", "bcrypt", "sha256"},
	}
	for tech, patterns := range frameworkPatterns {
		for _, pattern := range patterns {
			if matched, _ := regexp.MatchString(pattern, lowerContent); matched {
@@ -347,7 +347,7 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
			}
		}
	}
	return removeDuplicates(technologies)
}
@@ -371,7 +371,7 @@ func (su *ScoreUtils) NormalizeScore(score, min, max float64) float64 {
func (su *ScoreUtils) CalculateWeightedScore(scores map[string]float64, weights map[string]float64) float64 {
	totalWeight := 0.0
	weightedSum := 0.0
	for dimension, score := range scores {
		weight := weights[dimension]
		if weight == 0 {
@@ -380,11 +380,11 @@ func (su *ScoreUtils) CalculateWeightedScore(scores map[string]float64, weights
		weightedSum += score * weight
		totalWeight += weight
	}
	if totalWeight == 0 {
		return 0.0
	}
	return weightedSum / totalWeight
}
@@ -393,31 +393,31 @@ func (su *ScoreUtils) CalculatePercentile(values []float64, percentile int) floa
	if len(values) == 0 {
		return 0.0
	}
	sorted := make([]float64, len(values))
	copy(sorted, values)
	sort.Float64s(sorted)
	if percentile <= 0 {
		return sorted[0]
	}
	if percentile >= 100 {
		return sorted[len(sorted)-1]
	}
	index := float64(percentile) / 100.0 * float64(len(sorted)-1)
	lower := int(math.Floor(index))
	upper := int(math.Ceil(index))
	if lower == upper {
		return sorted[lower]
	}
	// Linear interpolation
	lowerValue := sorted[lower]
	upperValue := sorted[upper]
	weight := index - float64(lower)
	return lowerValue + weight*(upperValue-lowerValue)
}
@@ -426,14 +426,14 @@ func (su *ScoreUtils) CalculateStandardDeviation(values []float64) float64 {
	if len(values) <= 1 {
		return 0.0
	}
	// Calculate mean
	sum := 0.0
	for _, value := range values {
		sum += value
	}
	mean := sum / float64(len(values))
	// Calculate variance
	variance := 0.0
	for _, value := range values {
@@ -441,7 +441,7 @@ func (su *ScoreUtils) CalculateStandardDeviation(values []float64) float64 {
		variance += diff * diff
	}
	variance /= float64(len(values) - 1)
	return math.Sqrt(variance)
}
@@ -510,41 +510,41 @@ func (su *StringUtils) Similarity(s1, s2 string) float64 {
	if s1 == s2 {
		return 1.0
	}
	words1 := strings.Fields(strings.ToLower(s1))
	words2 := strings.Fields(strings.ToLower(s2))
	if len(words1) == 0 && len(words2) == 0 {
		return 1.0
	}
	if len(words1) == 0 || len(words2) == 0 {
		return 0.0
	}
	set1 := make(map[string]bool)
	set2 := make(map[string]bool)
	for _, word := range words1 {
		set1[word] = true
	}
	for _, word := range words2 {
		set2[word] = true
	}
	intersection := 0
	for word := range set1 {
		if set2[word] {
			intersection++
		}
	}
	union := len(set1) + len(set2) - intersection
	if union == 0 {
		return 1.0
	}
	return float64(intersection) / float64(union)
}
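The similarity measure above is a word-level Jaccard index: lower-case both strings, split on whitespace, and divide the size of the word-set intersection by the size of the union. A self-contained sketch (the `wordJaccard` name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// wordJaccard computes |A ∩ B| / |A ∪ B| over the lower-cased word sets of
// the two inputs; identical strings short-circuit to 1.0.
func wordJaccard(s1, s2 string) float64 {
	if s1 == s2 {
		return 1.0
	}
	set1 := map[string]bool{}
	set2 := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(s1)) {
		set1[w] = true
	}
	for _, w := range strings.Fields(strings.ToLower(s2)) {
		set2[w] = true
	}
	if len(set1) == 0 || len(set2) == 0 {
		return 0.0
	}
	intersection := 0
	for w := range set1 {
		if set2[w] {
			intersection++
		}
	}
	union := len(set1) + len(set2) - intersection
	return float64(intersection) / float64(union)
}

func main() {
	// Sets {the, quick, brown, fox} and {the, quick, red, fox}: 3 shared of 5.
	fmt.Println(wordJaccard("the quick brown fox", "the quick red fox")) // 0.6
}
```

Because inputs are reduced to sets, word order and repetition do not affect the score; that is a deliberate trade-off for cheap fuzzy matching.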
@@ -565,35 +565,35 @@ func (su *StringUtils) ExtractKeywords(text string, minLength int) []string {
		"so": true, "than": true, "too": true, "very": true, "can": true, "could": true,
		"should": true, "would": true, "use": true, "used": true, "using": true,
	}
	// Extract words
	wordRegex := regexp.MustCompile(`\b[a-zA-Z]+\b`)
	words := wordRegex.FindAllString(strings.ToLower(text), -1)
	keywords := []string{}
	wordFreq := make(map[string]int)
	for _, word := range words {
		if len(word) >= minLength && !stopWords[word] {
			wordFreq[word]++
		}
	}
	// Sort by frequency and return top keywords
	type wordCount struct {
		word  string
		count int
	}
	var sortedWords []wordCount
	for word, count := range wordFreq {
		sortedWords = append(sortedWords, wordCount{word, count})
	}
	sort.Slice(sortedWords, func(i, j int) bool {
		return sortedWords[i].count > sortedWords[j].count
	})
	maxKeywords := 20
	for i, wc := range sortedWords {
		if i >= maxKeywords {
@@ -601,7 +601,7 @@ func (su *StringUtils) ExtractKeywords(text string, minLength int) []string {
		}
		keywords = append(keywords, wc.word)
	}
	return keywords
}
@@ -741,30 +741,58 @@ func CloneContextNode(node *slurpContext.ContextNode) *slurpContext.ContextNode
	}
	clone := &slurpContext.ContextNode{
		Path:               node.Path,
-		Summary:            node.Summary,
-		Purpose:            node.Purpose,
-		Technologies:       make([]string, len(node.Technologies)),
-		Tags:               make([]string, len(node.Tags)),
-		Insights:           make([]string, len(node.Insights)),
-		CreatedAt:          node.CreatedAt,
-		UpdatedAt:          node.UpdatedAt,
-		ContextSpecificity: node.ContextSpecificity,
-		RAGConfidence:      node.RAGConfidence,
-		ProcessedForRole:   node.ProcessedForRole,
+		UCXLAddress:        node.UCXLAddress,
+		Summary:            node.Summary,
+		Purpose:            node.Purpose,
+		Technologies:       make([]string, len(node.Technologies)),
+		Tags:               make([]string, len(node.Tags)),
+		Insights:           make([]string, len(node.Insights)),
+		OverridesParent:    node.OverridesParent,
+		ContextSpecificity: node.ContextSpecificity,
+		AppliesToChildren:  node.AppliesToChildren,
+		AppliesTo:          node.AppliesTo,
+		GeneratedAt:        node.GeneratedAt,
+		UpdatedAt:          node.UpdatedAt,
+		CreatedBy:          node.CreatedBy,
+		WhoUpdated:         node.WhoUpdated,
+		RAGConfidence:      node.RAGConfidence,
+		EncryptedFor:       make([]string, len(node.EncryptedFor)),
+		AccessLevel:        node.AccessLevel,
	}
	copy(clone.Technologies, node.Technologies)
	copy(clone.Tags, node.Tags)
	copy(clone.Insights, node.Insights)
+	copy(clone.EncryptedFor, node.EncryptedFor)
-	if node.RoleSpecificInsights != nil {
-		clone.RoleSpecificInsights = make([]*RoleSpecificInsight, len(node.RoleSpecificInsights))
-		copy(clone.RoleSpecificInsights, node.RoleSpecificInsights)
+	if node.Parent != nil {
+		parent := *node.Parent
+		clone.Parent = &parent
+	}
+	if len(node.Children) > 0 {
+		clone.Children = make([]string, len(node.Children))
+		copy(clone.Children, node.Children)
+	}
+	if node.Language != nil {
+		language := *node.Language
+		clone.Language = &language
+	}
+	if node.Size != nil {
+		sz := *node.Size
+		clone.Size = &sz
+	}
+	if node.LastModified != nil {
+		lm := *node.LastModified
+		clone.LastModified = &lm
+	}
+	if node.ContentHash != nil {
+		hash := *node.ContentHash
+		clone.ContentHash = &hash
	}
	if node.Metadata != nil {
-		clone.Metadata = make(map[string]interface{})
+		clone.Metadata = make(map[string]interface{}, len(node.Metadata))
		for k, v := range node.Metadata {
			clone.Metadata[k] = v
		}
@@ -783,7 +811,7 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
	}
	merged := CloneContextNode(nodes[0])
	for i := 1; i < len(nodes); i++ {
		node := nodes[i]
		if node == nil {
@@ -792,27 +820,29 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
		// Merge technologies
		merged.Technologies = mergeStringSlices(merged.Technologies, node.Technologies)
		// Merge tags
		merged.Tags = mergeStringSlices(merged.Tags, node.Tags)
		// Merge insights
		merged.Insights = mergeStringSlices(merged.Insights, node.Insights)
-		// Use most recent timestamps
-		if node.CreatedAt.Before(merged.CreatedAt) {
-			merged.CreatedAt = node.CreatedAt
+		// Use most relevant timestamps
+		if merged.GeneratedAt.IsZero() {
+			merged.GeneratedAt = node.GeneratedAt
+		} else if !node.GeneratedAt.IsZero() && node.GeneratedAt.Before(merged.GeneratedAt) {
+			merged.GeneratedAt = node.GeneratedAt
		}
		if node.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = node.UpdatedAt
		}
		// Average context specificity
		merged.ContextSpecificity = (merged.ContextSpecificity + node.ContextSpecificity) / 2
		// Average RAG confidence
		merged.RAGConfidence = (merged.RAGConfidence + node.RAGConfidence) / 2
		// Merge metadata
		if node.Metadata != nil {
			if merged.Metadata == nil {
@@ -844,7 +874,7 @@ func removeDuplicates(slice []string) []string {
func mergeStringSlices(slice1, slice2 []string) []string {
	merged := make([]string, len(slice1))
	copy(merged, slice1)
	for _, item := range slice2 {
		found := false
		for _, existing := range merged {
@@ -857,7 +887,7 @@ func mergeStringSlices(slice1, slice2 []string) []string {
			merged = append(merged, item)
		}
	}
	return merged
}
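The helper above performs an order-preserving deduplicating merge with a linear scan per item. A self-contained sketch of the same behaviour (the `mergeUnique` name is illustrative); the O(n·m) scan is fine for short tag and technology lists, though a set would be preferable for large inputs:

```go
package main

import "fmt"

// mergeUnique appends items from slice2 that are not already present in
// slice1, preserving first-occurrence order; neither input is mutated.
func mergeUnique(slice1, slice2 []string) []string {
	merged := make([]string, len(slice1))
	copy(merged, slice1)
	for _, item := range slice2 {
		found := false
		for _, existing := range merged {
			if existing == item {
				found = true
				break
			}
		}
		if !found {
			merged = append(merged, item)
		}
	}
	return merged
}

func main() {
	fmt.Println(mergeUnique([]string{"go", "dht"}, []string{"dht", "leveldb"}))
	// → [go dht leveldb]
}
```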
@@ -1034,4 +1064,4 @@ func (bu *ByteUtils) ReadFileWithLimit(filename string, maxSize int64) ([]byte,
	}
	return io.ReadAll(file)
}

View File

@@ -2,6 +2,9 @@ package slurp
import (
	"context"
+	"time"
+
+	"chorus/pkg/crypto"
)

// Core interfaces for the SLURP contextual intelligence system.
@@ -17,34 +20,34 @@ type ContextResolver interface {
// Resolve resolves context for a UCXL address using cascading inheritance. // Resolve resolves context for a UCXL address using cascading inheritance.
// This is the primary method for context resolution with default depth limits. // This is the primary method for context resolution with default depth limits.
Resolve(ctx context.Context, ucxlAddress string) (*ResolvedContext, error) Resolve(ctx context.Context, ucxlAddress string) (*ResolvedContext, error)
	// ResolveWithDepth resolves context with bounded depth limit.
	// Provides fine-grained control over hierarchy traversal depth for
	// performance optimization and resource management.
	ResolveWithDepth(ctx context.Context, ucxlAddress string, maxDepth int) (*ResolvedContext, error)

	// BatchResolve efficiently resolves multiple UCXL addresses.
	// Uses parallel processing, request deduplication, and shared caching
	// for optimal performance with bulk operations.
	BatchResolve(ctx context.Context, addresses []string) (map[string]*ResolvedContext, error)

	// InvalidateCache invalidates cached resolution for an address.
	// Used when underlying context changes to ensure fresh resolution.
	InvalidateCache(ucxlAddress string) error

	// InvalidatePattern invalidates cached resolutions matching a pattern.
	// Useful for bulk cache invalidation when hierarchies change.
	InvalidatePattern(pattern string) error

	// GetStatistics returns resolver performance and operational statistics.
	GetStatistics() *ResolverStatistics

	// SetDepthLimit sets the default depth limit for resolution operations.
	SetDepthLimit(maxDepth int) error

	// GetDepthLimit returns the current default depth limit.
	GetDepthLimit() int

	// ClearCache clears all cached resolutions.
	ClearCache() error
}
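// Illustrative usage sketch (not part of this interface file): combining
// BatchResolve with cache invalidation. `r` is a hypothetical concrete
// resolver implementation and the addresses are placeholders.
//
//	addrs := []string{"ucxl://project/core", "ucxl://project/api"}
//	results, err := r.BatchResolve(ctx, addrs)
//	if err != nil {
//		return err
//	}
//	// After an upstream context change, drop stale entries so the next
//	// resolution is recomputed rather than served from cache.
//	for _, addr := range addrs {
//		_ = r.InvalidateCache(addr)
//	}
//	_ = results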
@@ -57,46 +60,46 @@ type HierarchyManager interface {
	// LoadHierarchy loads the context hierarchy from storage.
	// Must be called before other operations to initialize the hierarchy.
	LoadHierarchy(ctx context.Context) error

	// AddNode adds a context node to the hierarchy.
	// Validates hierarchy constraints and updates relationships.
	AddNode(ctx context.Context, node *ContextNode) error

	// UpdateNode updates an existing context node.
	// Preserves hierarchy relationships while updating content.
	UpdateNode(ctx context.Context, node *ContextNode) error

	// RemoveNode removes a context node and handles children.
	// Provides options for handling orphaned children (promote, delete, reassign).
	RemoveNode(ctx context.Context, nodeID string) error

	// GetNode retrieves a context node by ID.
	GetNode(ctx context.Context, nodeID string) (*ContextNode, error)

	// TraverseUp traverses up the hierarchy with bounded depth.
	// Returns ancestor nodes within the specified depth limit.
	TraverseUp(ctx context.Context, startPath string, maxDepth int) ([]*ContextNode, error)

	// TraverseDown traverses down the hierarchy with bounded depth.
	// Returns descendant nodes within the specified depth limit.
	TraverseDown(ctx context.Context, startPath string, maxDepth int) ([]*ContextNode, error)

	// GetChildren gets immediate children of a node.
	GetChildren(ctx context.Context, nodeID string) ([]*ContextNode, error)

	// GetParent gets the immediate parent of a node.
	GetParent(ctx context.Context, nodeID string) (*ContextNode, error)

	// GetPath gets the full path from root to a node.
	GetPath(ctx context.Context, nodeID string) ([]*ContextNode, error)

	// ValidateHierarchy validates hierarchy integrity and constraints.
	// Checks for cycles, orphans, and consistency violations.
	ValidateHierarchy(ctx context.Context) error

	// RebuildIndex rebuilds internal indexes for hierarchy operations.
	RebuildIndex(ctx context.Context) error

	// GetHierarchyStats returns statistics about the hierarchy.
	GetHierarchyStats(ctx context.Context) (*HierarchyStats, error)
}
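// Illustrative usage sketch (assumes a hypothetical HierarchyManager
// implementation `hm`; the path and depth are placeholders):
//
//	// Bounded traversal keeps lookups cheap on deep hierarchies: walk at
//	// most three ancestors up from a leaf before falling back to globals.
//	ancestors, err := hm.TraverseUp(ctx, "/project/src/api/handler.go", 3)
//	if err != nil {
//		return err
//	}
//	for _, node := range ancestors {
//		_ = node // merge ancestor context, nearest first
//	}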
@@ -110,27 +113,27 @@ type GlobalContextManager interface {
	// AddGlobalContext adds a context that applies globally.
	// Global contexts are merged into all resolution results.
	AddGlobalContext(ctx context.Context, node *ContextNode) error

	// RemoveGlobalContext removes a global context.
	RemoveGlobalContext(ctx context.Context, contextID string) error

	// UpdateGlobalContext updates an existing global context.
	UpdateGlobalContext(ctx context.Context, node *ContextNode) error

	// ListGlobalContexts lists all global contexts.
	// Returns contexts ordered by priority/specificity.
	ListGlobalContexts(ctx context.Context) ([]*ContextNode, error)

	// GetGlobalContext retrieves a specific global context.
	GetGlobalContext(ctx context.Context, contextID string) (*ContextNode, error)

	// ApplyGlobalContexts applies global contexts to a resolution.
	// Called automatically during the resolution process.
	ApplyGlobalContexts(ctx context.Context, resolved *ResolvedContext) error

	// EnableGlobalContext enables/disables a global context.
	EnableGlobalContext(ctx context.Context, contextID string, enabled bool) error

	// SetGlobalContextPriority sets priority for global context application.
	SetGlobalContextPriority(ctx context.Context, contextID string, priority int) error
}
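// Illustrative usage sketch (assumes a hypothetical GlobalContextManager
// implementation `gcm`, an existing *ContextNode `secNode`, and that the
// node exposes an ID field; all of these are assumptions):
//
//	// Register a compliance context that should merge into every
//	// resolution, then raise its priority so it wins over defaults.
//	if err := gcm.AddGlobalContext(ctx, secNode); err != nil {
//		return err
//	}
//	if err := gcm.SetGlobalContextPriority(ctx, secNode.ID, 10); err != nil {
//		return err
//	}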
@@ -143,54 +146,54 @@ type GlobalContextManager interface {
type TemporalGraph interface {
	// CreateInitialContext creates the first version of context.
	// Establishes the starting point for temporal evolution tracking.
	CreateInitialContext(ctx context.Context, ucxlAddress string,
		contextData *ContextNode, creator string) (*TemporalNode, error)

	// EvolveContext creates a new temporal version due to a decision.
	// Records the decision that caused the change and updates the graph.
	EvolveContext(ctx context.Context, ucxlAddress string,
		newContext *ContextNode, reason ChangeReason,
		decision *DecisionMetadata) (*TemporalNode, error)

	// GetLatestVersion gets the most recent temporal node.
	GetLatestVersion(ctx context.Context, ucxlAddress string) (*TemporalNode, error)

	// GetVersionAtDecision gets context as it was at a specific decision point.
	// Navigation is based on decision hops, not chronological time.
	GetVersionAtDecision(ctx context.Context, ucxlAddress string,
		decisionHop int) (*TemporalNode, error)

	// GetEvolutionHistory gets the complete evolution history.
	// Returns all temporal versions ordered by decision sequence.
	GetEvolutionHistory(ctx context.Context, ucxlAddress string) ([]*TemporalNode, error)

	// AddInfluenceRelationship adds influence between contexts.
	// Establishes that decisions in one context affect another.
	AddInfluenceRelationship(ctx context.Context, influencer, influenced string) error

	// RemoveInfluenceRelationship removes an influence relationship.
	RemoveInfluenceRelationship(ctx context.Context, influencer, influenced string) error

	// GetInfluenceRelationships gets all influence relationships for a context.
	GetInfluenceRelationships(ctx context.Context, ucxlAddress string) ([]string, []string, error)

	// FindRelatedDecisions finds decisions within N decision hops.
	// Explores the decision graph by conceptual distance, not time.
	FindRelatedDecisions(ctx context.Context, ucxlAddress string,
		maxHops int) ([]*DecisionPath, error)

	// FindDecisionPath finds the shortest decision path between addresses.
	// Returns the path of decisions connecting two contexts.
	FindDecisionPath(ctx context.Context, from, to string) ([]*DecisionStep, error)

	// AnalyzeDecisionPatterns analyzes decision-making patterns.
	// Identifies patterns in how decisions are made and contexts evolve.
	AnalyzeDecisionPatterns(ctx context.Context) (*DecisionAnalysis, error)

	// ValidateTemporalIntegrity validates temporal graph integrity.
	// Checks for inconsistencies and corruption in temporal data.
	ValidateTemporalIntegrity(ctx context.Context) error

	// CompactHistory compacts old temporal data to save space.
	// Removes detailed history while preserving key decision points.
	CompactHistory(ctx context.Context, beforeTime *time.Time) error
@@ -204,25 +207,25 @@ type TemporalGraph interface {
type DecisionNavigator interface {
	// NavigateDecisionHops navigates by decision distance, not time.
	// Moves through the decision graph by the specified number of hops.
	NavigateDecisionHops(ctx context.Context, ucxlAddress string,
		hops int, direction NavigationDirection) (*TemporalNode, error)

	// GetDecisionTimeline gets a timeline ordered by decision sequence.
	// Returns decisions in the order they were made, not chronological order.
	GetDecisionTimeline(ctx context.Context, ucxlAddress string,
		includeRelated bool, maxHops int) (*DecisionTimeline, error)

	// FindStaleContexts finds contexts that may be outdated.
	// Identifies contexts that haven't been updated despite related changes.
	FindStaleContexts(ctx context.Context, stalenessThreshold float64) ([]*StaleContext, error)

	// ValidateDecisionPath validates that a decision path is reachable.
	// Verifies that a path exists and is traversable.
	ValidateDecisionPath(ctx context.Context, path []*DecisionStep) error

	// GetNavigationHistory gets navigation history for a session.
	GetNavigationHistory(ctx context.Context, sessionID string) ([]*DecisionStep, error)

	// ResetNavigation resets navigation state to latest versions.
	ResetNavigation(ctx context.Context, ucxlAddress string) error
}
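// Illustrative usage sketch (assumes a hypothetical DecisionNavigator
// implementation `nav`; the address and the NavigationDirection constant
// are placeholders):
//
//	// Step two decision hops back from the current version; navigation
//	// is by decision distance, not wall-clock time.
//	node, err := nav.NavigateDecisionHops(ctx, "ucxl://project/core", 2, DirectionBackward)
//	if err != nil {
//		return err
//	}
//	_ = node // context as it stood two decisions ago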
@@ -234,41 +237,41 @@ type DecisionNavigator interface {
type DistributedStorage interface {
	// Store stores context data in the DHT with encryption.
	// Data is encrypted based on access level and role requirements.
	Store(ctx context.Context, key string, data interface{},
		accessLevel crypto.AccessLevel) error

	// Retrieve retrieves and decrypts context data.
	// Automatically handles decryption based on current role permissions.
	Retrieve(ctx context.Context, key string) (interface{}, error)

	// Delete removes context data from storage.
	// Handles distributed deletion and cleanup.
	Delete(ctx context.Context, key string) error

	// Exists checks if a key exists in storage.
	Exists(ctx context.Context, key string) (bool, error)

	// List lists keys matching a pattern.
	List(ctx context.Context, pattern string) ([]string, error)

	// Index creates searchable indexes for context data.
	// Enables efficient searching and filtering operations.
	Index(ctx context.Context, key string, metadata *IndexMetadata) error

	// Search searches indexed context data.
	// Supports complex queries with role-based filtering.
	Search(ctx context.Context, query *SearchQuery) ([]*SearchResult, error)

	// Sync synchronizes with other nodes.
	// Ensures consistency across the distributed system.
	Sync(ctx context.Context) error

	// GetStorageStats returns storage statistics and health information.
	GetStorageStats(ctx context.Context) (*StorageStats, error)

	// Backup creates a backup of stored data.
	Backup(ctx context.Context, destination string) error

	// Restore restores data from backup.
	Restore(ctx context.Context, source string) error
}
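// Illustrative usage sketch (assumes a hypothetical DistributedStorage
// implementation `ds` and an assumed crypto.AccessLevel constant; the key
// follows the `beacon::<manifest_id>` convention from the design notes):
//
//	manifest := map[string]any{"manifest_id": "abc123", "context_version": 4}
//	if err := ds.Store(ctx, "beacon::abc123", manifest, crypto.AccessHigh); err != nil {
//		return err
//	}
//	raw, err := ds.Retrieve(ctx, "beacon::abc123")
//	if err != nil {
//		return err
//	}
//	_ = raw // decrypted according to the caller's current role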
@@ -280,31 +283,31 @@ type DistributedStorage interface {
type EncryptedStorage interface {
	// StoreEncrypted stores data encrypted for specific roles.
	// Supports multi-role encryption for shared access.
	StoreEncrypted(ctx context.Context, key string, data interface{},
		roles []string) error

	// RetrieveDecrypted retrieves and decrypts data using the current role.
	// Automatically selects the appropriate decryption key.
	RetrieveDecrypted(ctx context.Context, key string) (interface{}, error)

	// CanAccess checks if the current role can access data.
	// Validates access without retrieving the actual data.
	CanAccess(ctx context.Context, key string) (bool, error)

	// ListAccessibleKeys lists keys accessible to the current role.
	// Filters keys based on current role permissions.
	ListAccessibleKeys(ctx context.Context) ([]string, error)

	// ReEncryptForRoles re-encrypts data for different roles.
	// Useful for permission changes and access control updates.
	ReEncryptForRoles(ctx context.Context, key string, newRoles []string) error

	// GetAccessRoles gets roles that can access a specific key.
	GetAccessRoles(ctx context.Context, key string) ([]string, error)

	// RotateKeys rotates encryption keys for enhanced security.
	RotateKeys(ctx context.Context, keyAge time.Duration) error

	// ValidateEncryption validates encryption integrity.
	ValidateEncryption(ctx context.Context, key string) error
}
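// Illustrative usage sketch (assumes a hypothetical EncryptedStorage
// implementation `es` and a `payload` value; the role names and key are
// placeholders):
//
//	// Store a payload readable by two roles, then verify the current
//	// role can access it before attempting decryption.
//	if err := es.StoreEncrypted(ctx, "beacon::abc123", payload, []string{"admin", "auditor"}); err != nil {
//		return err
//	}
//	ok, err := es.CanAccess(ctx, "beacon::abc123")
//	if err != nil {
//		return err
//	}
//	if !ok {
//		// current role lacks a decryption key; escalate or fall back
//	}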
@@ -317,35 +320,35 @@ type EncryptedStorage interface {
type ContextGenerator interface {
	// GenerateContext generates context for a path (requires admin role).
	// Analyzes content, structure, and patterns to create comprehensive context.
	GenerateContext(ctx context.Context, path string,
		options *GenerationOptions) (*ContextNode, error)

	// RegenerateHierarchy regenerates the entire hierarchy (admin-only).
	// Rebuilds context hierarchy from scratch with improved analysis.
	RegenerateHierarchy(ctx context.Context, rootPath string,
		options *GenerationOptions) (*HierarchyStats, error)

	// ValidateGeneration validates generated context quality.
	// Ensures generated context meets quality and consistency standards.
	ValidateGeneration(ctx context.Context, node *ContextNode) (*ValidationResult, error)

	// EstimateGenerationCost estimates the resource cost of generation.
	// Helps with resource planning and operation scheduling.
	EstimateGenerationCost(ctx context.Context, scope string) (*CostEstimate, error)

	// GenerateBatch generates context for multiple paths efficiently.
	// Optimized for bulk generation operations.
	GenerateBatch(ctx context.Context, paths []string,
		options *GenerationOptions) (map[string]*ContextNode, error)

	// ScheduleGeneration schedules background context generation.
	// Queues generation tasks for processing during low-activity periods.
	ScheduleGeneration(ctx context.Context, paths []string,
		options *GenerationOptions, priority int) error

	// GetGenerationStatus gets the status of background generation tasks.
	GetGenerationStatus(ctx context.Context) (*GenerationStatus, error)

	// CancelGeneration cancels pending generation tasks.
	CancelGeneration(ctx context.Context, taskID string) error
}
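// Illustrative usage sketch (assumes a hypothetical ContextGenerator
// implementation `gen`, prepared `opts`, and an assumed CostEstimate
// field name; paths are placeholders):
//
//	// Estimate cost first so expensive regeneration can be scheduled
//	// for a low-activity window instead of running inline.
//	estimate, err := gen.EstimateGenerationCost(ctx, "/project/src")
//	if err != nil {
//		return err
//	}
//	if estimate.EstimatedDuration > time.Minute {
//		return gen.ScheduleGeneration(ctx, []string{"/project/src"}, opts, 1)
//	}
//	_, err = gen.GenerateContext(ctx, "/project/src", opts)
//	return err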
@@ -358,30 +361,30 @@ type ContextAnalyzer interface {
	// AnalyzeContext analyzes context quality and consistency.
	// Evaluates individual context nodes for quality and accuracy.
	AnalyzeContext(ctx context.Context, node *ContextNode) (*AnalysisResult, error)

	// DetectPatterns detects patterns across contexts.
	// Identifies recurring patterns that can improve context generation.
	DetectPatterns(ctx context.Context, contexts []*ContextNode) ([]*Pattern, error)

	// SuggestImprovements suggests context improvements.
	// Provides actionable recommendations for context enhancement.
	SuggestImprovements(ctx context.Context, node *ContextNode) ([]*Suggestion, error)

	// CalculateConfidence calculates a confidence score.
	// Assesses confidence in context accuracy and completeness.
	CalculateConfidence(ctx context.Context, node *ContextNode) (float64, error)

	// DetectInconsistencies detects inconsistencies in the hierarchy.
	// Identifies conflicts and inconsistencies across related contexts.
	DetectInconsistencies(ctx context.Context) ([]*Inconsistency, error)

	// AnalyzeTrends analyzes trends in context evolution.
	// Identifies patterns in how contexts change over time.
	AnalyzeTrends(ctx context.Context, timeRange time.Duration) (*TrendAnalysis, error)

	// CompareContexts compares contexts for similarity and differences.
	CompareContexts(ctx context.Context, context1, context2 *ContextNode) (*ComparisonResult, error)

	// ValidateConsistency validates consistency across the hierarchy.
	ValidateConsistency(ctx context.Context, rootPath string) ([]*ConsistencyIssue, error)
}
@@ -394,31 +397,31 @@ type PatternMatcher interface {
	// MatchPatterns matches context against known patterns.
	// Identifies which patterns apply to a given context.
	MatchPatterns(ctx context.Context, node *ContextNode) ([]*PatternMatch, error)

	// RegisterPattern registers a new context pattern.
	// Adds patterns that can be used for matching and generation.
	RegisterPattern(ctx context.Context, pattern *ContextPattern) error

	// UnregisterPattern removes a context pattern.
	UnregisterPattern(ctx context.Context, patternID string) error

	// UpdatePattern updates an existing pattern.
	UpdatePattern(ctx context.Context, pattern *ContextPattern) error

	// ListPatterns lists all registered patterns.
	// Returns patterns ordered by priority and usage frequency.
	ListPatterns(ctx context.Context) ([]*ContextPattern, error)

	// GetPattern retrieves a specific pattern.
	GetPattern(ctx context.Context, patternID string) (*ContextPattern, error)

	// ApplyPattern applies a pattern to a context.
	// Updates the context to match the pattern template.
	ApplyPattern(ctx context.Context, node *ContextNode, patternID string) (*ContextNode, error)

	// ValidatePattern validates a pattern definition.
	ValidatePattern(ctx context.Context, pattern *ContextPattern) (*ValidationResult, error)

	// GetPatternUsage gets usage statistics for patterns.
	GetPatternUsage(ctx context.Context) (map[string]int, error)
}
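// Illustrative usage sketch (assumes a hypothetical PatternMatcher
// implementation `pm`, an existing *ContextNode `node`, and an assumed
// PatternMatch field name):
//
//	matches, err := pm.MatchPatterns(ctx, node)
//	if err != nil {
//		return err
//	}
//	// Apply the strongest match to normalize the node toward its
//	// pattern template.
//	if len(matches) > 0 {
//		node, err = pm.ApplyPattern(ctx, node, matches[0].PatternID)
//	}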
@@ -431,41 +434,41 @@ type QueryEngine interface {
	// Query performs a general context query.
	// Supports complex queries with multiple criteria and filters.
	Query(ctx context.Context, query *SearchQuery) ([]*SearchResult, error)

	// SearchByTag finds contexts by tag.
	// Optimized search for tag-based filtering.
	SearchByTag(ctx context.Context, tags []string) ([]*SearchResult, error)

	// SearchByTechnology finds contexts using specific technologies.
	SearchByTechnology(ctx context.Context, technologies []string) ([]*SearchResult, error)

	// SearchByPath finds contexts by path pattern.
	// Supports glob patterns and regex for path matching.
	SearchByPath(ctx context.Context, pathPattern string) ([]*SearchResult, error)

	// TemporalQuery performs temporal-aware queries.
	// Queries context as it existed at specific decision points.
	TemporalQuery(ctx context.Context, query *SearchQuery,
		temporal *TemporalFilter) ([]*SearchResult, error)

	// FuzzySearch performs fuzzy text search.
	// Handles typos and approximate matching.
	FuzzySearch(ctx context.Context, text string, threshold float64) ([]*SearchResult, error)

	// GetSuggestions gets search suggestions and auto-complete.
	GetSuggestions(ctx context.Context, prefix string, limit int) ([]string, error)

	// GetFacets gets faceted search information.
	// Returns available filters and their counts.
	GetFacets(ctx context.Context, query *SearchQuery) (map[string]map[string]int, error)

	// BuildIndex builds search indexes for efficient querying.
	BuildIndex(ctx context.Context, rebuild bool) error

	// OptimizeIndex optimizes search indexes for performance.
	OptimizeIndex(ctx context.Context) error

	// GetQueryStats gets query performance statistics.
	GetQueryStats(ctx context.Context) (*QueryStats, error)
}
@@ -497,83 +500,81 @@ type HealthChecker interface {
// Additional types needed by interfaces
import "time"
type StorageStats struct {
	TotalKeys         int64     `json:"total_keys"`
	TotalSize         int64     `json:"total_size"`
	IndexSize         int64     `json:"index_size"`
	CacheSize         int64     `json:"cache_size"`
	ReplicationStatus string    `json:"replication_status"`
	LastSync          time.Time `json:"last_sync"`
	SyncErrors        int64     `json:"sync_errors"`
	AvailableSpace    int64     `json:"available_space"`
}
type GenerationStatus struct {
	ActiveTasks         int             `json:"active_tasks"`
	QueuedTasks         int             `json:"queued_tasks"`
	CompletedTasks      int             `json:"completed_tasks"`
	FailedTasks         int             `json:"failed_tasks"`
	EstimatedCompletion time.Time       `json:"estimated_completion"`
	CurrentTask         *GenerationTask `json:"current_task,omitempty"`
}
type GenerationTask struct {
	ID                  string    `json:"id"`
	Path                string    `json:"path"`
	Status              string    `json:"status"`
	Progress            float64   `json:"progress"`
	StartedAt           time.Time `json:"started_at"`
	EstimatedCompletion time.Time `json:"estimated_completion"`
	Error               string    `json:"error,omitempty"`
}
type TrendAnalysis struct {
	TimeRange        time.Duration  `json:"time_range"`
	TotalChanges     int            `json:"total_changes"`
	ChangeVelocity   float64        `json:"change_velocity"`
	DominantReasons  []ChangeReason `json:"dominant_reasons"`
	QualityTrend     string         `json:"quality_trend"`
	ConfidenceTrend  string         `json:"confidence_trend"`
	MostActiveAreas  []string       `json:"most_active_areas"`
	EmergingPatterns []*Pattern     `json:"emerging_patterns"`
	AnalyzedAt       time.Time      `json:"analyzed_at"`
}
type ComparisonResult struct {
	SimilarityScore float64       `json:"similarity_score"`
	Differences     []*Difference `json:"differences"`
	CommonElements  []string      `json:"common_elements"`
	Recommendations []*Suggestion `json:"recommendations"`
	ComparedAt      time.Time     `json:"compared_at"`
}
type Difference struct {
	Field          string      `json:"field"`
	Value1         interface{} `json:"value1"`
	Value2         interface{} `json:"value2"`
	DifferenceType string      `json:"difference_type"`
	Significance   float64     `json:"significance"`
}
type ConsistencyIssue struct {
	Type          string    `json:"type"`
	Description   string    `json:"description"`
	AffectedNodes []string  `json:"affected_nodes"`
	Severity      string    `json:"severity"`
	Suggestion    string    `json:"suggestion"`
	DetectedAt    time.Time `json:"detected_at"`
}
type QueryStats struct {
	TotalQueries     int64            `json:"total_queries"`
	AverageQueryTime time.Duration    `json:"average_query_time"`
	CacheHitRate     float64          `json:"cache_hit_rate"`
	IndexUsage       map[string]int64 `json:"index_usage"`
	PopularQueries   []string         `json:"popular_queries"`
	SlowQueries      []string         `json:"slow_queries"`
	ErrorRate        float64          `json:"error_rate"`
}
type CacheStats struct {
@@ -588,17 +589,17 @@ type CacheStats struct {
}
type HealthStatus struct {
	Overall    string                      `json:"overall"`
	Components map[string]*ComponentHealth `json:"components"`
	CheckedAt  time.Time                   `json:"checked_at"`
	Version    string                      `json:"version"`
	Uptime     time.Duration               `json:"uptime"`
}
type ComponentHealth struct {
	Status       string                 `json:"status"`
	Message      string                 `json:"message,omitempty"`
	LastCheck    time.Time              `json:"last_check"`
	ResponseTime time.Duration          `json:"response_time"`
	Metadata     map[string]interface{} `json:"metadata,omitempty"`
}

View File

@@ -631,7 +631,7 @@ func (s *SLURP) GetTemporalEvolution(ctx context.Context, ucxlAddress string) ([
		return nil, fmt.Errorf("invalid UCXL address: %w", err)
	}
-	return s.temporalGraph.GetEvolutionHistory(ctx, *parsed)
+	return s.temporalGraph.GetEvolutionHistory(ctx, parsed.String())
}
// NavigateDecisionHops navigates through the decision graph by hop distance.
@@ -654,7 +654,7 @@ func (s *SLURP) NavigateDecisionHops(ctx context.Context, ucxlAddress string, ho
	}
	if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
-		return navigator.NavigateDecisionHops(ctx, *parsed, hops, direction)
+		return navigator.NavigateDecisionHops(ctx, parsed.String(), hops, direction)
	}
	return nil, fmt.Errorf("decision navigation not supported by temporal graph")
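Reviewer note: both call sites above switch the temporal-graph key from the dereferenced `ucxl.Address` value to its canonical string form. A self-contained sketch of the round-trip assumption behind that change (the `parse`/`String` helpers here are stand-ins, not the real `chorus/pkg/ucxl` API):

```go
package main

import (
	"fmt"
	"strings"
)

// addr is a stand-in for ucxl.Address; the real type lives in chorus/pkg/ucxl.
type addr struct{ scheme, path string }

// parse is a hypothetical parser: it only checks the scheme prefix.
func parse(raw string) (*addr, error) {
	const prefix = "ucxl://"
	if !strings.HasPrefix(raw, prefix) {
		return nil, fmt.Errorf("invalid UCXL address: %s", raw)
	}
	return &addr{scheme: "ucxl", path: strings.TrimPrefix(raw, prefix)}, nil
}

// String renders the canonical form used as the graph key.
func (a *addr) String() string { return a.scheme + "://" + a.path }

func main() {
	parsed, err := parse("ucxl://project/task")
	if err != nil {
		panic(err)
	}
	// Keying by parsed.String() keeps lookups stable across copies of the struct.
	fmt.Println(parsed.String())
}
```

String keys avoid depending on struct equality semantics when the address value is copied between goroutines.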
@@ -1348,26 +1348,42 @@ func (s *SLURP) handleEvent(event *SLURPEvent) {
	}
}
-// validateSLURPConfig validates SLURP configuration for consistency and correctness
+// validateSLURPConfig normalises runtime tunables sourced from configuration.
-func validateSLURPConfig(config *SLURPConfig) error {
+func validateSLURPConfig(cfg *config.SlurpConfig) error {
-	if config.ContextResolution.MaxHierarchyDepth < 1 {
+	if cfg == nil {
-		return fmt.Errorf("max_hierarchy_depth must be at least 1")
+		return fmt.Errorf("slurp config is nil")
	}
-	if config.ContextResolution.MinConfidenceThreshold < 0 || config.ContextResolution.MinConfidenceThreshold > 1 {
+	if cfg.Timeout <= 0 {
-		return fmt.Errorf("min_confidence_threshold must be between 0 and 1")
+		cfg.Timeout = 15 * time.Second
	}
-	if config.TemporalAnalysis.MaxDecisionHops < 1 {
+	if cfg.RetryCount < 0 {
-		return fmt.Errorf("max_decision_hops must be at least 1")
+		cfg.RetryCount = 0
	}
-	if config.TemporalAnalysis.StalenessThreshold < 0 || config.TemporalAnalysis.StalenessThreshold > 1 {
+	if cfg.RetryDelay <= 0 && cfg.RetryCount > 0 {
-		return fmt.Errorf("staleness_threshold must be between 0 and 1")
+		cfg.RetryDelay = 2 * time.Second
	}
-	if config.Performance.MaxConcurrentResolutions < 1 {
+	if cfg.Performance.MaxConcurrentResolutions <= 0 {
-		return fmt.Errorf("max_concurrent_resolutions must be at least 1")
+		cfg.Performance.MaxConcurrentResolutions = 1
+	}
+	if cfg.Performance.MetricsCollectionInterval <= 0 {
+		cfg.Performance.MetricsCollectionInterval = time.Minute
+	}
+	if cfg.TemporalAnalysis.MaxDecisionHops <= 0 {
+		cfg.TemporalAnalysis.MaxDecisionHops = 1
+	}
+	if cfg.TemporalAnalysis.StalenessCheckInterval <= 0 {
+		cfg.TemporalAnalysis.StalenessCheckInterval = 5 * time.Minute
+	}
+	if cfg.TemporalAnalysis.StalenessThreshold < 0 || cfg.TemporalAnalysis.StalenessThreshold > 1 {
+		cfg.TemporalAnalysis.StalenessThreshold = 0.2
	}
	return nil

View File

@@ -164,6 +164,8 @@ func (bm *BackupManagerImpl) CreateBackup(
		Incremental:    config.Incremental,
		ParentBackupID: config.ParentBackupID,
		Status:         BackupStatusInProgress,
+		Progress:       0,
+		ErrorMessage:   "",
		CreatedAt:      time.Now(),
		RetentionUntil: time.Now().Add(config.Retention),
	}
@@ -707,6 +709,7 @@ func (bm *BackupManagerImpl) validateFile(filePath string) error {
func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
	bm.mu.Lock()
	backupInfo.Status = BackupStatusFailed
+	backupInfo.Progress = 0
	backupInfo.ErrorMessage = err.Error()
	job.Error = err
	bm.mu.Unlock()

View File

@@ -3,18 +3,19 @@ package storage
import (
	"context"
	"fmt"
+	"strings"
	"sync"
	"time"
-	"chorus/pkg/ucxl"
	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
)
// BatchOperationsImpl provides efficient batch operations for context storage
type BatchOperationsImpl struct {
	contextStore     *ContextStoreImpl
	batchSize        int
	maxConcurrency   int
	operationTimeout time.Duration
}
@@ -22,8 +23,8 @@ type BatchOperationsImpl struct {
func NewBatchOperations(contextStore *ContextStoreImpl, batchSize, maxConcurrency int, timeout time.Duration) *BatchOperationsImpl {
	return &BatchOperationsImpl{
		contextStore:     contextStore,
		batchSize:        batchSize,
		maxConcurrency:   maxConcurrency,
		operationTimeout: timeout,
	}
}
@@ -89,7 +90,7 @@ func (cs *ContextStoreImpl) BatchStore(
			result.ErrorCount++
			key := workResult.Item.Context.UCXLAddress.String()
			result.Errors[key] = workResult.Error
			if batch.FailOnError {
				// Cancel remaining operations
				result.ProcessingTime = time.Since(start)
@@ -164,11 +165,11 @@ func (cs *ContextStoreImpl) BatchRetrieve(
	// Process results
	for workResult := range resultsCh {
		addressStr := workResult.Address.String()
		if workResult.Error != nil {
			result.ErrorCount++
			result.Errors[addressStr] = workResult.Error
			if batch.FailOnError {
				// Cancel remaining operations
				result.ProcessingTime = time.Since(start)

View File

@@ -4,7 +4,6 @@ import (
	"context"
	"encoding/json"
	"fmt"
-	"regexp"
	"sync"
	"time"
@@ -13,13 +12,13 @@ import (
// CacheManagerImpl implements the CacheManager interface using Redis
type CacheManagerImpl struct {
	mu         sync.RWMutex
	client     *redis.Client
	stats      *CacheStatistics
	policy     *CachePolicy
	prefix     string
	nodeID     string
	warmupKeys map[string]bool
}
// NewCacheManager creates a new cache manager with Redis backend
@@ -43,7 +42,7 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
	// Test connection
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := client.Ping(ctx).Err(); err != nil {
		return nil, fmt.Errorf("failed to connect to Redis: %w", err)
	}
@@ -68,13 +67,13 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
// DefaultCachePolicy returns default caching policy
func DefaultCachePolicy() *CachePolicy {
	return &CachePolicy{
		TTL:              24 * time.Hour,
		MaxSize:          1024 * 1024 * 1024, // 1GB
		EvictionPolicy:   "LRU",
		RefreshThreshold: 0.8, // Refresh when 80% of TTL elapsed
		WarmupEnabled:    true,
		CompressEntries:  true,
		MaxEntrySize:     10 * 1024 * 1024, // 10MB
	}
}
@@ -203,7 +202,7 @@ func (cm *CacheManagerImpl) Set(
// Delete removes data from cache
func (cm *CacheManagerImpl) Delete(ctx context.Context, key string) error {
	cacheKey := cm.buildCacheKey(key)
	if err := cm.client.Del(ctx, cacheKey).Err(); err != nil {
		return fmt.Errorf("cache delete error: %w", err)
	}
@@ -215,37 +214,37 @@ func (cm *CacheManagerImpl) Delete(ctx context.Context, key string) error {
func (cm *CacheManagerImpl) DeletePattern(ctx context.Context, pattern string) error {
	// Build full pattern with prefix
	fullPattern := cm.buildCacheKey(pattern)
	// Use Redis SCAN to find matching keys
	var cursor uint64
	var keys []string
	for {
		result, nextCursor, err := cm.client.Scan(ctx, cursor, fullPattern, 100).Result()
		if err != nil {
			return fmt.Errorf("cache scan error: %w", err)
		}
		keys = append(keys, result...)
		cursor = nextCursor
		if cursor == 0 {
			break
		}
	}
	// Delete found keys in batches
	if len(keys) > 0 {
		pipeline := cm.client.Pipeline()
		for _, key := range keys {
			pipeline.Del(ctx, key)
		}
		if _, err := pipeline.Exec(ctx); err != nil {
			return fmt.Errorf("cache batch delete error: %w", err)
		}
	}
	return nil
}
@@ -282,7 +281,7 @@ func (cm *CacheManagerImpl) GetCacheStats() (*CacheStatistics, error) {
	// Update Redis memory usage
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	info, err := cm.client.Info(ctx, "memory").Result()
	if err == nil {
		// Parse memory info to get actual usage
@@ -314,17 +313,17 @@ func (cm *CacheManagerImpl) SetCachePolicy(policy *CachePolicy) error {
// CacheEntry represents a cached data entry with metadata
type CacheEntry struct {
	Key            string        `json:"key"`
	Data           []byte        `json:"data"`
	CreatedAt      time.Time     `json:"created_at"`
	ExpiresAt      time.Time     `json:"expires_at"`
	TTL            time.Duration `json:"ttl"`
	AccessCount    int64         `json:"access_count"`
	LastAccessedAt time.Time     `json:"last_accessed_at"`
	Compressed     bool          `json:"compressed"`
	OriginalSize   int64         `json:"original_size"`
	CompressedSize int64         `json:"compressed_size"`
	NodeID         string        `json:"node_id"`
}
// Helper methods
@@ -361,7 +360,7 @@ func (cm *CacheManagerImpl) recordMiss() {
func (cm *CacheManagerImpl) updateAccessStats(duration time.Duration) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	if cm.stats.AverageLoadTime == 0 {
		cm.stats.AverageLoadTime = duration
	} else {

View File

@@ -3,20 +3,18 @@ package storage
import (
	"bytes"
	"context"
-	"os"
	"strings"
	"testing"
-	"time"
)
func TestLocalStorageCompression(t *testing.T) {
	// Create temporary directory for test
	tempDir := t.TempDir()
	// Create storage with compression enabled
	options := DefaultLocalStorageOptions()
	options.Compression = true
	storage, err := NewLocalStorage(tempDir, options)
	if err != nil {
		t.Fatalf("Failed to create storage: %v", err)
@@ -25,24 +23,24 @@ func TestLocalStorageCompression(t *testing.T) {
	// Test data that should compress well
	largeData := strings.Repeat("This is a test string that should compress well! ", 100)
	// Store with compression enabled
	storeOptions := &StoreOptions{
		Compress: true,
	}
	ctx := context.Background()
	err = storage.Store(ctx, "test-compress", largeData, storeOptions)
	if err != nil {
		t.Fatalf("Failed to store compressed data: %v", err)
	}
	// Retrieve and verify
	retrieved, err := storage.Retrieve(ctx, "test-compress")
	if err != nil {
		t.Fatalf("Failed to retrieve compressed data: %v", err)
	}
	// Verify data integrity
	if retrievedStr, ok := retrieved.(string); ok {
		if retrievedStr != largeData {
@@ -51,21 +49,21 @@ func TestLocalStorageCompression(t *testing.T) {
	} else {
		t.Error("Retrieved data is not a string")
	}
	// Check compression stats
	stats, err := storage.GetCompressionStats()
	if err != nil {
		t.Fatalf("Failed to get compression stats: %v", err)
	}
	if stats.CompressedEntries == 0 {
		t.Error("Expected at least one compressed entry")
	}
	if stats.CompressionRatio == 0 {
		t.Error("Expected non-zero compression ratio")
	}
	t.Logf("Compression stats: %d/%d entries compressed, ratio: %.2f",
		stats.CompressedEntries, stats.TotalEntries, stats.CompressionRatio)
}
@@ -81,27 +79,27 @@ func TestCompressionMethods(t *testing.T) {
	// Test data
	originalData := []byte(strings.Repeat("Hello, World! ", 1000))
	// Test compression
	compressed, err := storage.compress(originalData)
	if err != nil {
		t.Fatalf("Compression failed: %v", err)
	}
	t.Logf("Original size: %d bytes", len(originalData))
	t.Logf("Compressed size: %d bytes", len(compressed))
	// Compressed data should be smaller for repetitive data
	if len(compressed) >= len(originalData) {
		t.Log("Compression didn't reduce size (may be expected for small or non-repetitive data)")
	}
	// Test decompression
	decompressed, err := storage.decompress(compressed)
	if err != nil {
		t.Fatalf("Decompression failed: %v", err)
	}
	// Verify data integrity
	if !bytes.Equal(originalData, decompressed) {
		t.Error("Decompressed data doesn't match original")
@@ -111,7 +109,7 @@ func TestCompressionMethods(t *testing.T) {
func TestStorageOptimization(t *testing.T) {
	// Create temporary directory for test
	tempDir := t.TempDir()
	storage, err := NewLocalStorage(tempDir, nil)
	if err != nil {
		t.Fatalf("Failed to create storage: %v", err)
@@ -119,7 +117,7 @@ func TestStorageOptimization(t *testing.T) {
	defer storage.Close()
	ctx := context.Background()
	// Store multiple entries without compression
	testData := []struct {
		key  string
@@ -130,50 +128,50 @@ func TestStorageOptimization(t *testing.T) {
		{"large2", strings.Repeat("Another large repetitive dataset ", 100)},
		{"medium", strings.Repeat("Medium data ", 50)},
	}
	for _, item := range testData {
		err = storage.Store(ctx, item.key, item.data, &StoreOptions{Compress: false})
		if err != nil {
			t.Fatalf("Failed to store %s: %v", item.key, err)
		}
	}
	// Check initial stats
	initialStats, err := storage.GetCompressionStats()
	if err != nil {
		t.Fatalf("Failed to get initial stats: %v", err)
	}
	t.Logf("Initial: %d entries, %d compressed",
		initialStats.TotalEntries, initialStats.CompressedEntries)
	// Optimize storage with threshold (only compress entries larger than 100 bytes)
	err = storage.OptimizeStorage(ctx, 100)
	if err != nil {
		t.Fatalf("Storage optimization failed: %v", err)
	}
	// Check final stats
	finalStats, err := storage.GetCompressionStats()
	if err != nil {
		t.Fatalf("Failed to get final stats: %v", err)
	}
	t.Logf("Final: %d entries, %d compressed",
		finalStats.TotalEntries, finalStats.CompressedEntries)
	// Should have more compressed entries after optimization
	if finalStats.CompressedEntries <= initialStats.CompressedEntries {
		t.Log("Note: Optimization didn't increase compressed entries (may be expected)")
	}
	// Verify all data is still retrievable
	for _, item := range testData {
		retrieved, err := storage.Retrieve(ctx, item.key)
		if err != nil {
			t.Fatalf("Failed to retrieve %s after optimization: %v", item.key, err)
		}
		if retrievedStr, ok := retrieved.(string); ok {
			if retrievedStr != item.data {
				t.Errorf("Data mismatch for %s after optimization", item.key)
@@ -193,26 +191,26 @@ func TestCompressionFallback(t *testing.T) {
	// Random-like data that won't compress well
	randomData := []byte("a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z6")
	// Test compression
	compressed, err := storage.compress(randomData)
	if err != nil {
		t.Fatalf("Compression failed: %v", err)
	}
	// Should return original data if compression doesn't help
	if len(compressed) >= len(randomData) {
		t.Log("Compression correctly returned original data for incompressible input")
	}
	// Test decompression of uncompressed data
	decompressed, err := storage.decompress(randomData)
	if err != nil {
		t.Fatalf("Decompression fallback failed: %v", err)
	}
	// Should return original data unchanged
	if !bytes.Equal(randomData, decompressed) {
		t.Error("Decompression fallback changed data")
	}
}

View File

@@ -2,71 +2,68 @@ package storage
import (
	"context"
-	"encoding/json"
	"fmt"
	"sync"
	"time"
-	"chorus/pkg/crypto"
-	"chorus/pkg/dht"
-	"chorus/pkg/ucxl"
	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
)

// ContextStoreImpl is the main implementation of the ContextStore interface
// It coordinates between local storage, distributed storage, encryption, caching, and indexing
type ContextStoreImpl struct {
	mu                 sync.RWMutex
	localStorage       LocalStorage
	distributedStorage DistributedStorage
	encryptedStorage   EncryptedStorage
	cacheManager       CacheManager
	indexManager       IndexManager
	backupManager      BackupManager
	eventNotifier      EventNotifier

	// Configuration
	nodeID  string
	options *ContextStoreOptions

	// Statistics and monitoring
	statistics       *StorageStatistics
	metricsCollector *MetricsCollector

	// Background processes
	stopCh           chan struct{}
	syncTicker       *time.Ticker
	compactionTicker *time.Ticker
	cleanupTicker    *time.Ticker
}

// ContextStoreOptions configures the context store behavior
type ContextStoreOptions struct {
	// Storage configuration
	PreferLocal        bool `json:"prefer_local"`
	AutoReplicate      bool `json:"auto_replicate"`
	DefaultReplicas    int  `json:"default_replicas"`
	EncryptionEnabled  bool `json:"encryption_enabled"`
	CompressionEnabled bool `json:"compression_enabled"`

	// Caching configuration
	CachingEnabled bool          `json:"caching_enabled"`
	CacheTTL       time.Duration `json:"cache_ttl"`
	CacheSize      int64         `json:"cache_size"`

	// Indexing configuration
	IndexingEnabled      bool          `json:"indexing_enabled"`
	IndexRefreshInterval time.Duration `json:"index_refresh_interval"`

	// Background processes
	SyncInterval       time.Duration `json:"sync_interval"`
	CompactionInterval time.Duration `json:"compaction_interval"`
	CleanupInterval    time.Duration `json:"cleanup_interval"`

	// Performance tuning
	BatchSize        int           `json:"batch_size"`
	MaxConcurrentOps int           `json:"max_concurrent_ops"`
	OperationTimeout time.Duration `json:"operation_timeout"`
}
// MetricsCollector collects and aggregates storage metrics
@@ -87,16 +84,16 @@ func DefaultContextStoreOptions() *ContextStoreOptions {
		EncryptionEnabled:    true,
		CompressionEnabled:   true,
		CachingEnabled:       true,
		CacheTTL:             24 * time.Hour,
		CacheSize:            1024 * 1024 * 1024, // 1GB
		IndexingEnabled:      true,
		IndexRefreshInterval: 5 * time.Minute,
		SyncInterval:         10 * time.Minute,
		CompactionInterval:   24 * time.Hour,
		CleanupInterval:      1 * time.Hour,
		BatchSize:            100,
		MaxConcurrentOps:     10,
		OperationTimeout:     30 * time.Second,
	}
}
@@ -124,8 +121,8 @@ func NewContextStore(
		indexManager:  indexManager,
		backupManager: backupManager,
		eventNotifier: eventNotifier,
		nodeID:        nodeID,
		options:       options,
		statistics: &StorageStatistics{
			LastSyncTime: time.Now(),
		},
@@ -174,11 +171,11 @@ func (cs *ContextStoreImpl) StoreContext(
	} else {
		// Store unencrypted
		storeOptions := &StoreOptions{
			Encrypt:   false,
			Replicate: cs.options.AutoReplicate,
			Index:     cs.options.IndexingEnabled,
			Cache:     cs.options.CachingEnabled,
			Compress:  cs.options.CompressionEnabled,
		}
		storeErr = cs.localStorage.Store(ctx, storageKey, node, storeOptions)
	}
@@ -212,14 +209,14 @@ func (cs *ContextStoreImpl) StoreContext(
	go func() {
		replicateCtx, cancel := context.WithTimeout(context.Background(), cs.options.OperationTimeout)
		defer cancel()

		distOptions := &DistributedStoreOptions{
			ReplicationFactor: cs.options.DefaultReplicas,
			ConsistencyLevel:  ConsistencyQuorum,
			Timeout:           cs.options.OperationTimeout,
			SyncMode:          SyncAsync,
		}

		if err := cs.distributedStorage.Store(replicateCtx, storageKey, node, distOptions); err != nil {
			cs.recordError("replicate", err)
		}
@@ -523,11 +520,11 @@ func (cs *ContextStoreImpl) recordOperation(operation string) {
func (cs *ContextStoreImpl) recordLatency(operation string, latency time.Duration) {
	cs.metricsCollector.mu.Lock()
	defer cs.metricsCollector.mu.Unlock()

	if cs.metricsCollector.latencyHistogram[operation] == nil {
		cs.metricsCollector.latencyHistogram[operation] = make([]time.Duration, 0, 100)
	}

	// Keep only last 100 samples
	histogram := cs.metricsCollector.latencyHistogram[operation]
	if len(histogram) >= 100 {
@@ -541,7 +538,7 @@ func (cs *ContextStoreImpl) recordError(operation string, err error) {
	cs.metricsCollector.mu.Lock()
	defer cs.metricsCollector.mu.Unlock()

	cs.metricsCollector.errorCount[operation]++

	// Log the error (in production, use proper logging)
	fmt.Printf("Storage error in %s: %v\n", operation, err)
}
@@ -614,7 +611,7 @@ func (cs *ContextStoreImpl) performCleanup(ctx context.Context) {
	if err := cs.cacheManager.Clear(ctx); err != nil {
		cs.recordError("cache_cleanup", err)
	}

	// Clean old metrics
	cs.cleanupMetrics()
}
@@ -622,7 +619,7 @@ func (cs *ContextStoreImpl) cleanupMetrics() {
	cs.metricsCollector.mu.Lock()
	defer cs.metricsCollector.mu.Unlock()

	// Reset histograms that are too large
	for operation, histogram := range cs.metricsCollector.latencyHistogram {
		if len(histogram) > 1000 {
@@ -729,7 +726,7 @@ func (cs *ContextStoreImpl) Sync(ctx context.Context) error {
		Type:      EventSynced,
		Timestamp: time.Now(),
		Metadata: map[string]interface{}{
			"node_id":   cs.nodeID,
			"sync_time": time.Since(start),
		},
	}

View File

@@ -8,69 +8,68 @@ import (
	"time"

	"chorus/pkg/dht"
-	"chorus/pkg/types"
)

// DistributedStorageImpl implements the DistributedStorage interface
type DistributedStorageImpl struct {
	mu        sync.RWMutex
	dht       dht.DHT
	nodeID    string
	metrics   *DistributedStorageStats
	replicas  map[string][]string // key -> replica node IDs
	heartbeat *HeartbeatManager
	consensus *ConsensusManager
	options   *DistributedStorageOptions
}

// HeartbeatManager manages node heartbeats and health
type HeartbeatManager struct {
	mu                sync.RWMutex
	nodes             map[string]*NodeHealth
	heartbeatInterval time.Duration
	timeoutThreshold  time.Duration
	stopCh            chan struct{}
}

// NodeHealth tracks the health of a distributed storage node
type NodeHealth struct {
	NodeID       string        `json:"node_id"`
	LastSeen     time.Time     `json:"last_seen"`
	Latency      time.Duration `json:"latency"`
	IsActive     bool          `json:"is_active"`
	FailureCount int           `json:"failure_count"`
	Load         float64       `json:"load"`
}

// ConsensusManager handles consensus operations for distributed storage
type ConsensusManager struct {
	mu            sync.RWMutex
	pendingOps    map[string]*ConsensusOperation
	votingTimeout time.Duration
	quorumSize    int
}

// ConsensusOperation represents a distributed operation requiring consensus
type ConsensusOperation struct {
	ID        string            `json:"id"`
	Type      string            `json:"type"`
	Key       string            `json:"key"`
	Data      interface{}       `json:"data"`
	Initiator string            `json:"initiator"`
	Votes     map[string]bool   `json:"votes"`
	CreatedAt time.Time         `json:"created_at"`
	Status    ConsensusStatus   `json:"status"`
	Callback  func(bool, error) `json:"-"`
}

// ConsensusStatus represents the status of a consensus operation
type ConsensusStatus string

const (
	ConsensusPending  ConsensusStatus = "pending"
	ConsensusApproved ConsensusStatus = "approved"
	ConsensusRejected ConsensusStatus = "rejected"
	ConsensusTimeout  ConsensusStatus = "timeout"
)

// NewDistributedStorage creates a new distributed storage implementation
@@ -83,9 +82,9 @@ func NewDistributedStorage(
		options = &DistributedStoreOptions{
			ReplicationFactor: 3,
			ConsistencyLevel:  ConsistencyQuorum,
			Timeout:           30 * time.Second,
			PreferLocal:       true,
			SyncMode:          SyncAsync,
		}
	}
@@ -98,10 +97,10 @@ func NewDistributedStorage(
			LastRebalance: time.Now(),
		},
		heartbeat: &HeartbeatManager{
			nodes:             make(map[string]*NodeHealth),
			heartbeatInterval: 30 * time.Second,
			timeoutThreshold:  90 * time.Second,
			stopCh:            make(chan struct{}),
		},
		consensus: &ConsensusManager{
			pendingOps: make(map[string]*ConsensusOperation),
@@ -125,8 +124,6 @@ func (ds *DistributedStorageImpl) Store(
	data interface{},
	options *DistributedStoreOptions,
) error {
-	start := time.Now()
	if options == nil {
		options = ds.options
	}
@@ -179,7 +176,7 @@ func (ds *DistributedStorageImpl) Retrieve(
	// Try local first if prefer local is enabled
	if ds.options.PreferLocal {
-		if localData, err := ds.dht.Get(key); err == nil {
+		if localData, err := ds.dht.GetValue(ctx, key); err == nil {
			return ds.deserializeEntry(localData)
		}
	}
@@ -226,25 +223,9 @@ func (ds *DistributedStorageImpl) Exists(
	ctx context.Context,
	key string,
) (bool, error) {
-	// Try local first
-	if ds.options.PreferLocal {
-		if exists, err := ds.dht.Exists(key); err == nil {
-			return exists, nil
-		}
-	}
-	// Check replicas
-	replicas, err := ds.getReplicationNodes(key)
-	if err != nil {
-		return false, fmt.Errorf("failed to get replication nodes: %w", err)
-	}
-	for _, nodeID := range replicas {
-		if exists, err := ds.checkExistsOnNode(ctx, nodeID, key); err == nil && exists {
-			return true, nil
-		}
-	}
+	if _, err := ds.dht.GetValue(ctx, key); err == nil {
+		return true, nil
+	}
	return false, nil
}
@@ -306,10 +287,7 @@ func (ds *DistributedStorageImpl) FindReplicas(
// Sync synchronizes with other DHT nodes
func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
-	start := time.Now()
-	defer func() {
-		ds.metrics.LastRebalance = time.Now()
-	}()
+	ds.metrics.LastRebalance = time.Now()

	// Get list of active nodes
	activeNodes := ds.heartbeat.getActiveNodes()
@@ -346,7 +324,7 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
	healthyReplicas := int64(0)
	underReplicated := int64(0)

-	for key, replicas := range ds.replicas {
+	for _, replicas := range ds.replicas {
		totalReplicas += int64(len(replicas))
		healthy := 0
		for _, nodeID := range replicas {
@@ -371,14 +349,14 @@
// DistributedEntry represents a distributed storage entry
type DistributedEntry struct {
	Key               string           `json:"key"`
	Data              []byte           `json:"data"`
	ReplicationFactor int              `json:"replication_factor"`
	ConsistencyLevel  ConsistencyLevel `json:"consistency_level"`
	CreatedAt         time.Time        `json:"created_at"`
	UpdatedAt         time.Time        `json:"updated_at"`
	Version           int64            `json:"version"`
	Checksum          string           `json:"checksum"`
}

// Helper methods implementation
@@ -394,7 +372,7 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
	// This is a simplified version - production would use proper consistent hashing
	nodes := make([]string, 0, replicationFactor)
	hash := ds.calculateKeyHash(key)

	// Select nodes in a deterministic way based on key hash
	for i := 0; i < replicationFactor && i < len(activeNodes); i++ {
		nodeIndex := (int(hash) + i) % len(activeNodes)
@@ -405,13 +383,13 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
}

func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store asynchronously on all nodes
+	// Store asynchronously on all nodes for SEC-SLURP-1.1a replication policy
	errCh := make(chan error, len(nodes))
	for _, nodeID := range nodes {
		go func(node string) {
			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
		}(nodeID)
	}
@@ -429,7 +407,7 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
	// If first failed, try to get at least one success
	timer := time.NewTimer(10 * time.Second)
	defer timer.Stop()

	for i := 1; i < len(nodes); i++ {
		select {
		case err := <-errCh:
@@ -445,13 +423,13 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
}

func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store synchronously on all nodes
+	// Store synchronously on all nodes per SEC-SLURP-1.1a durability target
	errCh := make(chan error, len(nodes))
	for _, nodeID := range nodes {
		go func(node string) {
			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
		}(nodeID)
	}
@@ -476,21 +454,21 @@ func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *Distri
}

func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store on quorum of nodes
+	// Store on quorum of nodes per SEC-SLURP-1.1a availability guardrail
	quorumSize := (len(nodes) / 2) + 1
	errCh := make(chan error, len(nodes))

	for _, nodeID := range nodes {
		go func(node string) {
			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
		}(nodeID)
	}

	// Wait for quorum
	successCount := 0
	errorCount := 0

	for i := 0; i < len(nodes); i++ {
		select {
		case err := <-errCh:
@@ -537,7 +515,7 @@ func (ds *DistributedStorageImpl) generateOperationID() string {
func (ds *DistributedStorageImpl) updateLatencyMetrics(latency time.Duration) {
	ds.mu.Lock()
	defer ds.mu.Unlock()

	if ds.metrics.NetworkLatency == 0 {
		ds.metrics.NetworkLatency = latency
	} else {
@@ -553,11 +531,11 @@ func (ds *DistributedStorageImpl) updateLatencyMetrics(latency time.Duration) {
func (ds *DistributedStorageImpl) getReplicationNodes(key string) ([]string, error) {
	ds.mu.RLock()
	defer ds.mu.RUnlock()

	if replicas, exists := ds.replicas[key]; exists {
		return replicas, nil
	}

	// Fall back to consistent hashing
	return ds.selectReplicationNodes(key, ds.options.ReplicationFactor)
}

View File

@@ -9,7 +9,6 @@ import (
	"time"

	"chorus/pkg/crypto"
-	"chorus/pkg/ucxl"
	slurpContext "chorus/pkg/slurp/context"
)
@@ -19,25 +18,25 @@ type EncryptedStorageImpl struct {
	crypto        crypto.RoleCrypto
	localStorage  LocalStorage
	keyManager    crypto.KeyManager
-	accessControl crypto.AccessController
-	auditLogger   crypto.AuditLogger
+	accessControl crypto.StorageAccessController
+	auditLogger   crypto.StorageAuditLogger
	metrics       *EncryptionMetrics
}

// EncryptionMetrics tracks encryption-related metrics
type EncryptionMetrics struct {
	mu                   sync.RWMutex
	EncryptOperations    int64
	DecryptOperations    int64
	KeyRotations         int64
	AccessDenials        int64
	EncryptionErrors     int64
	DecryptionErrors     int64
	LastKeyRotation      time.Time
	AverageEncryptTime   time.Duration
	AverageDecryptTime   time.Duration
	ActiveEncryptionKeys int
	ExpiredKeys          int
}

// NewEncryptedStorage creates a new encrypted storage implementation
@@ -45,8 +44,8 @@ func NewEncryptedStorage(
	crypto crypto.RoleCrypto,
	localStorage LocalStorage,
	keyManager crypto.KeyManager,
-	accessControl crypto.AccessController,
-	auditLogger crypto.AuditLogger,
+	accessControl crypto.StorageAccessController,
+	auditLogger crypto.StorageAuditLogger,
) *EncryptedStorageImpl {
	return &EncryptedStorageImpl{
		crypto: crypto,
@@ -286,12 +285,11 @@ func (es *EncryptedStorageImpl) GetAccessRoles(
	return roles, nil
}

-// RotateKeys rotates encryption keys
+// RotateKeys rotates encryption keys in line with SEC-SLURP-1.1 retention constraints
func (es *EncryptedStorageImpl) RotateKeys(
	ctx context.Context,
	maxAge time.Duration,
) error {
-	start := time.Now()
	defer func() {
		es.metrics.mu.Lock()
		es.metrics.KeyRotations++
@@ -334,7 +332,7 @@ func (es *EncryptedStorageImpl) ValidateEncryption(
	// Validate each encrypted version
	for _, role := range roles {
		roleKey := es.generateRoleKey(key, role)

		// Retrieve encrypted context
		encryptedData, err := es.localStorage.Retrieve(ctx, roleKey)
		if err != nil {
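`ValidateEncryption` walks the roles and derives a per-role storage key via `generateRoleKey`, so each role's ciphertext lives under its own namespace. The diff does not show that helper's body; the sketch below illustrates one deterministic way such a derivation could work — `roleKey` and its scheme are assumptions for illustration, not the actual implementation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// roleKey derives a deterministic, per-role storage key from a base
// context key, so ciphertexts for different roles never collide.
// Hypothetical scheme: base key, role, and a short hash suffix.
func roleKey(baseKey, role string) string {
	sum := sha256.Sum256([]byte(baseKey + "::" + role))
	return fmt.Sprintf("%s::%s::%x", baseKey, role, sum[:8])
}

func main() {
	a := roleKey("ucxl://project/task", "auditor")
	d := roleKey("ucxl://project/task", "developer")
	fmt.Println(a != d) // true: roles get distinct namespaces
}
```

Determinism is the important property: validation and retrieval must rederive the same key that was used at store time.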

View File

@@ -9,22 +9,23 @@ import (
	"sync"
	"time"

+	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
	"github.com/blevesearch/bleve/v2"
	"github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
	"github.com/blevesearch/bleve/v2/analysis/lang/en"
	"github.com/blevesearch/bleve/v2/mapping"
-	"chorus/pkg/ucxl"
-	slurpContext "chorus/pkg/slurp/context"
+	"github.com/blevesearch/bleve/v2/search/query"
)

// IndexManagerImpl implements the IndexManager interface using Bleve
type IndexManagerImpl struct {
	mu       sync.RWMutex
	indexes  map[string]bleve.Index
	stats    map[string]*IndexStatistics
	basePath string
	nodeID   string
	options  *IndexManagerOptions
}

// IndexManagerOptions configures index manager behavior
@@ -60,11 +61,11 @@ func NewIndexManager(basePath, nodeID string, options *IndexManagerOptions) (*In
	}

	im := &IndexManagerImpl{
		indexes:  make(map[string]bleve.Index),
		stats:    make(map[string]*IndexStatistics),
		basePath: basePath,
		nodeID:   nodeID,
		options:  options,
	}

	// Start background optimization if enabled
@@ -356,11 +357,11 @@ func (im *IndexManagerImpl) createIndexMapping(config *IndexConfig) (mapping.Ind
		fieldMapping.Analyzer = analyzer
		fieldMapping.Store = true
		fieldMapping.Index = true

		if im.options.EnableHighlighting {
			fieldMapping.IncludeTermVectors = true
		}

		docMapping.AddFieldMappingsAt(field, fieldMapping)
	}
@@ -432,31 +433,31 @@ func (im *IndexManagerImpl) createIndexDocument(data interface{}) (map[string]in
	return doc, nil
}

-func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.SearchRequest, error) {
-	// Build Bleve search request from our search query
-	var bleveQuery bleve.Query
+func (im *IndexManagerImpl) buildSearchRequest(searchQuery *SearchQuery) (*bleve.SearchRequest, error) {
+	// Build Bleve search request from our search query (SEC-SLURP-1.1 search path)
+	var bleveQuery query.Query

-	if query.Query == "" {
+	if searchQuery.Query == "" {
		// Match all query
		bleveQuery = bleve.NewMatchAllQuery()
	} else {
		// Text search query
-		if query.FuzzyMatch {
+		if searchQuery.FuzzyMatch {
			// Use fuzzy query
-			bleveQuery = bleve.NewFuzzyQuery(query.Query)
+			bleveQuery = bleve.NewFuzzyQuery(searchQuery.Query)
		} else {
			// Use match query for better scoring
-			bleveQuery = bleve.NewMatchQuery(query.Query)
+			bleveQuery = bleve.NewMatchQuery(searchQuery.Query)
		}
	}

	// Add filters
-	var conjuncts []bleve.Query
+	var conjuncts []query.Query
	conjuncts = append(conjuncts, bleveQuery)

	// Technology filters
-	if len(query.Technologies) > 0 {
-		for _, tech := range query.Technologies {
+	if len(searchQuery.Technologies) > 0 {
+		for _, tech := range searchQuery.Technologies {
			techQuery := bleve.NewTermQuery(tech)
			techQuery.SetField("technologies_facet")
			conjuncts = append(conjuncts, techQuery)
@@ -464,8 +465,8 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
	}

	// Tag filters
-	if len(query.Tags) > 0 {
-		for _, tag := range query.Tags {
+	if len(searchQuery.Tags) > 0 {
+		for _, tag := range searchQuery.Tags {
			tagQuery := bleve.NewTermQuery(tag)
			tagQuery.SetField("tags_facet")
			conjuncts = append(conjuncts, tagQuery)
@@ -479,20 +480,20 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
	// Create search request
	searchRequest := bleve.NewSearchRequest(bleveQuery)

	// Set result options
-	if query.Limit > 0 && query.Limit <= im.options.MaxResults {
-		searchRequest.Size = query.Limit
+	if searchQuery.Limit > 0 && searchQuery.Limit <= im.options.MaxResults {
+		searchRequest.Size = searchQuery.Limit
	} else {
		searchRequest.Size = im.options.MaxResults
	}

-	if query.Offset > 0 {
-		searchRequest.From = query.Offset
+	if searchQuery.Offset > 0 {
+		searchRequest.From = searchQuery.Offset
	}

	// Enable highlighting if requested
-	if query.HighlightTerms && im.options.EnableHighlighting {
+	if searchQuery.HighlightTerms && im.options.EnableHighlighting {
		searchRequest.Highlight = bleve.NewHighlight()
		searchRequest.Highlight.AddField("content")
		searchRequest.Highlight.AddField("summary")
@@ -500,9 +501,9 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
	}

	// Add facets if requested
-	if len(query.Facets) > 0 && im.options.EnableFaceting {
+	if len(searchQuery.Facets) > 0 && im.options.EnableFaceting {
		searchRequest.Facets = make(bleve.FacetsRequest)
-		for _, facet := range query.Facets {
+		for _, facet := range searchQuery.Facets {
			switch facet {
			case "technologies":
				searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
@@ -535,7 +536,7 @@ func (im *IndexManagerImpl) convertSearchResults(
		searchHit := &SearchResult{
			MatchScore:    hit.Score,
			MatchedFields: make([]string, 0),
			Highlights:    make(map[string][]string),
			Rank:          i + 1,
		}
@@ -558,8 +559,8 @@ func (im *IndexManagerImpl) convertSearchResults(
		// Parse UCXL address
		if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
-			if addr, err := ucxl.ParseAddress(ucxlStr); err == nil {
-				contextNode.UCXLAddress = addr
+			if addr, err := ucxl.Parse(ucxlStr); err == nil {
+				contextNode.UCXLAddress = *addr
			}
		}
@@ -572,8 +573,10 @@ func (im *IndexManagerImpl) convertSearchResults(
		results.Facets = make(map[string]map[string]int)
		for facetName, facetResult := range searchResult.Facets {
			facetCounts := make(map[string]int)
-			for _, term := range facetResult.Terms {
-				facetCounts[term.Term] = term.Count
+			if facetResult.Terms != nil {
+				for _, term := range facetResult.Terms.Terms() {
+					facetCounts[term.Term] = term.Count
+				}
			}
			results.Facets[facetName] = facetCounts
		}


@@ -4,9 +4,8 @@ import (
 	"context"
 	"time"

-	"chorus/pkg/ucxl"
-	"chorus/pkg/crypto"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )

 // ContextStore provides the main interface for context storage and retrieval
@@ -17,40 +16,40 @@ import (
 type ContextStore interface {
 	// StoreContext stores a context node with role-based encryption
 	StoreContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error

 	// RetrieveContext retrieves context for a UCXL address and role
 	RetrieveContext(ctx context.Context, address ucxl.Address, role string) (*slurpContext.ContextNode, error)

 	// UpdateContext updates an existing context node
 	UpdateContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error

 	// DeleteContext removes a context node from storage
 	DeleteContext(ctx context.Context, address ucxl.Address) error

 	// ExistsContext checks if context exists for an address
 	ExistsContext(ctx context.Context, address ucxl.Address) (bool, error)

 	// ListContexts lists contexts matching criteria
 	ListContexts(ctx context.Context, criteria *ListCriteria) ([]*slurpContext.ContextNode, error)

 	// SearchContexts searches contexts using query criteria
 	SearchContexts(ctx context.Context, query *SearchQuery) (*SearchResults, error)

 	// BatchStore stores multiple contexts efficiently
 	BatchStore(ctx context.Context, batch *BatchStoreRequest) (*BatchStoreResult, error)

 	// BatchRetrieve retrieves multiple contexts efficiently
 	BatchRetrieve(ctx context.Context, batch *BatchRetrieveRequest) (*BatchRetrieveResult, error)

 	// GetStorageStats returns storage statistics and health information
 	GetStorageStats(ctx context.Context) (*StorageStatistics, error)

 	// Sync synchronizes with distributed storage
 	Sync(ctx context.Context) error

 	// Backup creates a backup of stored contexts
 	Backup(ctx context.Context, destination string) error

 	// Restore restores contexts from backup
 	Restore(ctx context.Context, source string) error
 }
@@ -59,25 +58,25 @@ type ContextStore interface {
 type LocalStorage interface {
 	// Store stores context data locally with optional encryption
 	Store(ctx context.Context, key string, data interface{}, options *StoreOptions) error

 	// Retrieve retrieves context data from local storage
 	Retrieve(ctx context.Context, key string) (interface{}, error)

 	// Delete removes data from local storage
 	Delete(ctx context.Context, key string) error

 	// Exists checks if data exists locally
 	Exists(ctx context.Context, key string) (bool, error)

 	// List lists all keys matching a pattern
 	List(ctx context.Context, pattern string) ([]string, error)

 	// Size returns the size of stored data
 	Size(ctx context.Context, key string) (int64, error)

 	// Compact compacts local storage to reclaim space
 	Compact(ctx context.Context) error

 	// GetLocalStats returns local storage statistics
 	GetLocalStats() (*LocalStorageStats, error)
 }
@@ -86,25 +85,25 @@ type LocalStorage interface {
 type DistributedStorage interface {
 	// Store stores data in the distributed DHT with replication
 	Store(ctx context.Context, key string, data interface{}, options *DistributedStoreOptions) error

 	// Retrieve retrieves data from the distributed DHT
 	Retrieve(ctx context.Context, key string) (interface{}, error)

 	// Delete removes data from the distributed DHT
 	Delete(ctx context.Context, key string) error

 	// Exists checks if data exists in the DHT
 	Exists(ctx context.Context, key string) (bool, error)

 	// Replicate ensures data is replicated across nodes
 	Replicate(ctx context.Context, key string, replicationFactor int) error

 	// FindReplicas finds all replicas of data
 	FindReplicas(ctx context.Context, key string) ([]string, error)

 	// Sync synchronizes with other DHT nodes
 	Sync(ctx context.Context) error

 	// GetDistributedStats returns distributed storage statistics
 	GetDistributedStats() (*DistributedStorageStats, error)
 }
@@ -113,25 +112,25 @@ type DistributedStorage interface {
 type EncryptedStorage interface {
 	// StoreEncrypted stores data encrypted for specific roles
 	StoreEncrypted(ctx context.Context, key string, data interface{}, roles []string) error

 	// RetrieveDecrypted retrieves and decrypts data for current role
 	RetrieveDecrypted(ctx context.Context, key string, role string) (interface{}, error)

 	// CanAccess checks if a role can access specific data
 	CanAccess(ctx context.Context, key string, role string) (bool, error)

 	// ListAccessibleKeys lists keys accessible to a role
 	ListAccessibleKeys(ctx context.Context, role string) ([]string, error)

 	// ReEncryptForRoles re-encrypts data for different roles
 	ReEncryptForRoles(ctx context.Context, key string, newRoles []string) error

 	// GetAccessRoles gets roles that can access specific data
 	GetAccessRoles(ctx context.Context, key string) ([]string, error)

 	// RotateKeys rotates encryption keys
 	RotateKeys(ctx context.Context, maxAge time.Duration) error

 	// ValidateEncryption validates encryption integrity
 	ValidateEncryption(ctx context.Context, key string) error
 }
@@ -140,25 +139,25 @@ type EncryptedStorage interface {
 type CacheManager interface {
 	// Get retrieves data from cache
 	Get(ctx context.Context, key string) (interface{}, bool, error)

 	// Set stores data in cache with TTL
 	Set(ctx context.Context, key string, data interface{}, ttl time.Duration) error

 	// Delete removes data from cache
 	Delete(ctx context.Context, key string) error

 	// DeletePattern removes cache entries matching pattern
 	DeletePattern(ctx context.Context, pattern string) error

 	// Clear clears all cache entries
 	Clear(ctx context.Context) error

 	// Warm pre-loads cache with frequently accessed data
 	Warm(ctx context.Context, keys []string) error

 	// GetCacheStats returns cache performance statistics
 	GetCacheStats() (*CacheStatistics, error)

 	// SetCachePolicy sets caching policy
 	SetCachePolicy(policy *CachePolicy) error
 }
@@ -167,25 +166,25 @@ type CacheManager interface {
 type IndexManager interface {
 	// CreateIndex creates a search index for contexts
 	CreateIndex(ctx context.Context, indexName string, config *IndexConfig) error

 	// UpdateIndex updates search index with new data
 	UpdateIndex(ctx context.Context, indexName string, key string, data interface{}) error

 	// DeleteFromIndex removes data from search index
 	DeleteFromIndex(ctx context.Context, indexName string, key string) error

 	// Search searches indexed data using query
 	Search(ctx context.Context, indexName string, query *SearchQuery) (*SearchResults, error)

 	// RebuildIndex rebuilds search index from stored data
 	RebuildIndex(ctx context.Context, indexName string) error

 	// OptimizeIndex optimizes search index for performance
 	OptimizeIndex(ctx context.Context, indexName string) error

 	// GetIndexStats returns index statistics
 	GetIndexStats(ctx context.Context, indexName string) (*IndexStatistics, error)

 	// ListIndexes lists all available indexes
 	ListIndexes(ctx context.Context) ([]string, error)
 }
@@ -194,22 +193,22 @@ type IndexManager interface {
 type BackupManager interface {
 	// CreateBackup creates a backup of stored data
 	CreateBackup(ctx context.Context, config *BackupConfig) (*BackupInfo, error)

 	// RestoreBackup restores data from backup
 	RestoreBackup(ctx context.Context, backupID string, config *RestoreConfig) error

 	// ListBackups lists available backups
 	ListBackups(ctx context.Context) ([]*BackupInfo, error)

 	// DeleteBackup removes a backup
 	DeleteBackup(ctx context.Context, backupID string) error

 	// ValidateBackup validates backup integrity
 	ValidateBackup(ctx context.Context, backupID string) (*BackupValidation, error)

 	// ScheduleBackup schedules automatic backups
 	ScheduleBackup(ctx context.Context, schedule *BackupSchedule) error

 	// GetBackupStats returns backup statistics
 	GetBackupStats(ctx context.Context) (*BackupStatistics, error)
 }
@@ -218,13 +217,13 @@ type BackupManager interface {
 type TransactionManager interface {
 	// BeginTransaction starts a new transaction
 	BeginTransaction(ctx context.Context) (*Transaction, error)

 	// CommitTransaction commits a transaction
 	CommitTransaction(ctx context.Context, tx *Transaction) error

 	// RollbackTransaction rolls back a transaction
 	RollbackTransaction(ctx context.Context, tx *Transaction) error

 	// GetActiveTransactions returns list of active transactions
 	GetActiveTransactions(ctx context.Context) ([]*Transaction, error)
 }
@@ -233,19 +232,19 @@ type TransactionManager interface {
 type EventNotifier interface {
 	// NotifyStored notifies when data is stored
 	NotifyStored(ctx context.Context, event *StorageEvent) error

 	// NotifyRetrieved notifies when data is retrieved
 	NotifyRetrieved(ctx context.Context, event *StorageEvent) error

 	// NotifyUpdated notifies when data is updated
 	NotifyUpdated(ctx context.Context, event *StorageEvent) error

 	// NotifyDeleted notifies when data is deleted
 	NotifyDeleted(ctx context.Context, event *StorageEvent) error

 	// Subscribe subscribes to storage events
 	Subscribe(ctx context.Context, eventType EventType, handler EventHandler) error

 	// Unsubscribe unsubscribes from storage events
 	Unsubscribe(ctx context.Context, eventType EventType, handler EventHandler) error
 }
@@ -270,35 +269,35 @@ type EventHandler func(event *StorageEvent) error
 // StorageEvent represents a storage operation event
 type StorageEvent struct {
 	Type      EventType              `json:"type"`      // Event type
 	Key       string                 `json:"key"`       // Storage key
 	Data      interface{}            `json:"data"`      // Event data
 	Timestamp time.Time              `json:"timestamp"` // When event occurred
 	Metadata  map[string]interface{} `json:"metadata"`  // Additional metadata
 }

 // Transaction represents a storage transaction
 type Transaction struct {
 	ID         string                  `json:"id"`         // Transaction ID
 	StartTime  time.Time               `json:"start_time"` // When transaction started
 	Operations []*TransactionOperation `json:"operations"` // Transaction operations
 	Status     TransactionStatus       `json:"status"`     // Transaction status
 }

 // TransactionOperation represents a single operation in a transaction
 type TransactionOperation struct {
 	Type     string                 `json:"type"`     // Operation type
 	Key      string                 `json:"key"`      // Storage key
 	Data     interface{}            `json:"data"`     // Operation data
 	Metadata map[string]interface{} `json:"metadata"` // Operation metadata
 }

 // TransactionStatus represents transaction status
 type TransactionStatus string

 const (
 	TransactionActive     TransactionStatus = "active"
 	TransactionCommitted  TransactionStatus = "committed"
 	TransactionRolledBack TransactionStatus = "rolled_back"
 	TransactionFailed     TransactionStatus = "failed"
 )


@@ -33,12 +33,12 @@ type LocalStorageImpl struct {
 // LocalStorageOptions configures local storage behavior
 type LocalStorageOptions struct {
 	Compression        bool          `json:"compression"`         // Enable compression
 	CacheSize          int           `json:"cache_size"`          // Cache size in MB
 	WriteBuffer        int           `json:"write_buffer"`        // Write buffer size in MB
 	MaxOpenFiles       int           `json:"max_open_files"`      // Maximum open files
 	BlockSize          int           `json:"block_size"`          // Block size in KB
 	SyncWrites         bool          `json:"sync_writes"`         // Synchronous writes
 	CompactionInterval time.Duration `json:"compaction_interval"` // Auto-compaction interval
 }
@@ -46,11 +46,11 @@ type LocalStorageOptions struct {
 func DefaultLocalStorageOptions() *LocalStorageOptions {
 	return &LocalStorageOptions{
 		Compression:        true,
 		CacheSize:          64, // 64MB cache
 		WriteBuffer:        16, // 16MB write buffer
 		MaxOpenFiles:       1000,
 		BlockSize:          4, // 4KB blocks
 		SyncWrites:         false,
 		CompactionInterval: 24 * time.Hour,
 	}
 }
@@ -135,13 +135,14 @@ func (ls *LocalStorageImpl) Store(
 		UpdatedAt: time.Now(),
 		Metadata:  make(map[string]interface{}),
 	}
+	entry.Checksum = ls.computeChecksum(dataBytes)

 	// Apply options
 	if options != nil {
 		entry.TTL = options.TTL
 		entry.Compressed = options.Compress
 		entry.AccessLevel = string(options.AccessLevel)

 		// Copy metadata
 		for k, v := range options.Metadata {
 			entry.Metadata[k] = v
@@ -179,6 +180,7 @@ func (ls *LocalStorageImpl) Store(
 	if entry.Compressed {
 		ls.metrics.CompressedSize += entry.CompressedSize
 	}
+	ls.updateFileMetricsLocked()

 	return nil
 }
@@ -231,6 +233,14 @@ func (ls *LocalStorageImpl) Retrieve(ctx context.Context, key string) (interface
 		dataBytes = decompressedData
 	}

+	// Verify integrity against stored checksum (SEC-SLURP-1.1a requirement)
+	if entry.Checksum != "" {
+		computed := ls.computeChecksum(dataBytes)
+		if computed != entry.Checksum {
+			return nil, fmt.Errorf("data integrity check failed for key %s", key)
+		}
+	}
+
 	// Deserialize data
 	var result interface{}
 	if err := json.Unmarshal(dataBytes, &result); err != nil {
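The checksum scheme in this hunk is simple to verify in isolation: hash the bytes at write time, re-hash after decompression at read time, and reject on mismatch. Below is a minimal standalone sketch of that round trip; the `computeChecksum`/`verify` names mirror the diff, but the free functions here are illustrative rather than the repo's actual methods.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeChecksum mirrors the SHA-256 fingerprint the Store path records.
func computeChecksum(data []byte) string {
	digest := sha256.Sum256(data)
	return fmt.Sprintf("%x", digest)
}

// verify re-hashes the bytes read back and compares against the stored
// checksum, as the Retrieve path now does after decompression.
func verify(data []byte, stored string) error {
	if stored == "" {
		return nil // legacy entries written before checksums are accepted as-is
	}
	if computed := computeChecksum(data); computed != stored {
		return fmt.Errorf("data integrity check failed: %s != %s", computed, stored)
	}
	return nil
}

func main() {
	payload := []byte(`{"ucxl_address":"ucxl://demo"}`)
	sum := computeChecksum(payload)
	fmt.Println(verify(payload, sum) == nil)              // unchanged bytes pass
	fmt.Println(verify(append(payload, 'x'), sum) != nil) // tampered bytes fail
}
```

Note the empty-checksum escape hatch: it keeps entries persisted before this change readable, at the cost of not detecting corruption in them.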
@@ -260,6 +270,7 @@ func (ls *LocalStorageImpl) Delete(ctx context.Context, key string) error {
 	if entryBytes != nil {
 		ls.metrics.TotalSize -= int64(len(entryBytes))
 	}
+	ls.updateFileMetricsLocked()

 	return nil
 }
@@ -350,7 +361,7 @@ func (ls *LocalStorageImpl) Compact(ctx context.Context) error {
 	// Update metrics
 	ls.metrics.LastCompaction = time.Now()
 	compactionTime := time.Since(start)

 	// Calculate new fragmentation ratio
 	ls.updateFragmentationRatio()
@@ -397,6 +408,7 @@ type StorageEntry struct {
 	Compressed     bool                   `json:"compressed"`
 	OriginalSize   int64                  `json:"original_size"`
 	CompressedSize int64                  `json:"compressed_size"`
+	Checksum       string                 `json:"checksum"`
 	AccessLevel    string                 `json:"access_level"`
 	Metadata       map[string]interface{} `json:"metadata"`
 }
@@ -406,34 +418,70 @@ type StorageEntry struct {
 func (ls *LocalStorageImpl) compress(data []byte) ([]byte, error) {
 	// Use gzip compression for efficient data storage
 	var buf bytes.Buffer

 	// Create gzip writer with best compression
 	writer := gzip.NewWriter(&buf)
 	writer.Header.Name = "storage_data"
 	writer.Header.Comment = "CHORUS SLURP local storage compressed data"

 	// Write data to gzip writer
 	if _, err := writer.Write(data); err != nil {
 		writer.Close()
 		return nil, fmt.Errorf("failed to write compressed data: %w", err)
 	}

 	// Close writer to flush data
 	if err := writer.Close(); err != nil {
 		return nil, fmt.Errorf("failed to close gzip writer: %w", err)
 	}

 	compressed := buf.Bytes()

 	// Only return compressed data if it's actually smaller
 	if len(compressed) >= len(data) {
 		// Compression didn't help, return original data
 		return data, nil
 	}

 	return compressed, nil
 }

+func (ls *LocalStorageImpl) computeChecksum(data []byte) string {
+	// Compute SHA-256 checksum to satisfy SEC-SLURP-1.1a integrity tracking
+	digest := sha256.Sum256(data)
+	return fmt.Sprintf("%x", digest)
+}
+
+func (ls *LocalStorageImpl) updateFileMetricsLocked() {
+	// Refresh filesystem metrics using io/fs traversal (SEC-SLURP-1.1a durability telemetry)
+	var fileCount int64
+	var aggregateSize int64
+	walkErr := fs.WalkDir(os.DirFS(ls.basePath), ".", func(path string, d fs.DirEntry, err error) error {
+		if err != nil {
+			return err
+		}
+		if d.IsDir() {
+			return nil
+		}
+		fileCount++
+		if info, infoErr := d.Info(); infoErr == nil {
+			aggregateSize += info.Size()
+		}
+		return nil
+	})
+	if walkErr != nil {
+		fmt.Printf("filesystem metrics refresh failed: %v\n", walkErr)
+		return
+	}
+	ls.metrics.TotalFiles = fileCount
+	if aggregateSize > 0 {
+		ls.metrics.TotalSize = aggregateSize
+	}
+}
+
 func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
 	// Create gzip reader
 	reader, err := gzip.NewReader(bytes.NewReader(data))
@@ -442,13 +490,13 @@ func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
 		return data, nil
 	}
 	defer reader.Close()

 	// Read decompressed data
 	var buf bytes.Buffer
 	if _, err := io.Copy(&buf, reader); err != nil {
 		return nil, fmt.Errorf("failed to decompress data: %w", err)
 	}

 	return buf.Bytes(), nil
 }
@@ -462,7 +510,7 @@ func (ls *LocalStorageImpl) getAvailableSpace() (int64, error) {
 	// Calculate available space in bytes
 	// Available blocks * block size
 	availableBytes := int64(stat.Bavail) * int64(stat.Bsize)

 	return availableBytes, nil
 }
@@ -498,11 +546,11 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
 	defer ls.mu.RUnlock()

 	stats := &CompressionStats{
 		TotalEntries:      0,
 		CompressedEntries: 0,
 		TotalSize:         ls.metrics.TotalSize,
 		CompressedSize:    ls.metrics.CompressedSize,
 		CompressionRatio:  0.0,
 	}

 	// Iterate through all entries to get accurate stats
@@ -511,7 +559,7 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
 	for iter.Next() {
 		stats.TotalEntries++

 		// Try to parse entry to check if compressed
 		var entry StorageEntry
 		if err := json.Unmarshal(iter.Value(), &entry); err == nil {
@@ -549,7 +597,7 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho
 		}

 		key := string(iter.Key())

 		// Parse existing entry
 		var entry StorageEntry
 		if err := json.Unmarshal(iter.Value(), &entry); err != nil {
@@ -599,11 +647,11 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho
 // CompressionStats holds compression statistics
 type CompressionStats struct {
 	TotalEntries      int64   `json:"total_entries"`
 	CompressedEntries int64   `json:"compressed_entries"`
 	TotalSize         int64   `json:"total_size"`
 	CompressedSize    int64   `json:"compressed_size"`
 	CompressionRatio  float64 `json:"compression_ratio"`
 }

 // Close closes the local storage


@@ -14,77 +14,77 @@ import (
// MonitoringSystem provides comprehensive monitoring for the storage system
type MonitoringSystem struct {
	mu                  sync.RWMutex
	nodeID              string
	metrics             *StorageMetrics
	alerts              *AlertManager
	healthChecker       *HealthChecker
	performanceProfiler *PerformanceProfiler
	logger              *StructuredLogger
	notifications       chan *MonitoringEvent
	stopCh              chan struct{}
}

// StorageMetrics contains all Prometheus metrics for storage operations
type StorageMetrics struct {
	// Operation counters
	StoreOperations    prometheus.Counter
	RetrieveOperations prometheus.Counter
	DeleteOperations   prometheus.Counter
	UpdateOperations   prometheus.Counter
	SearchOperations   prometheus.Counter
	BatchOperations    prometheus.Counter

	// Error counters
	StoreErrors       prometheus.Counter
	RetrieveErrors    prometheus.Counter
	EncryptionErrors  prometheus.Counter
	DecryptionErrors  prometheus.Counter
	ReplicationErrors prometheus.Counter
	CacheErrors       prometheus.Counter
	IndexErrors       prometheus.Counter

	// Latency histograms
	StoreLatency       prometheus.Histogram
	RetrieveLatency    prometheus.Histogram
	EncryptionLatency  prometheus.Histogram
	DecryptionLatency  prometheus.Histogram
	ReplicationLatency prometheus.Histogram
	SearchLatency      prometheus.Histogram

	// Cache metrics
	CacheHits      prometheus.Counter
	CacheMisses    prometheus.Counter
	CacheEvictions prometheus.Counter
	CacheSize      prometheus.Gauge

	// Storage size metrics
	LocalStorageSize       prometheus.Gauge
	DistributedStorageSize prometheus.Gauge
	CompressedStorageSize  prometheus.Gauge
	IndexStorageSize       prometheus.Gauge

	// Replication metrics
	ReplicationFactor prometheus.Gauge
	HealthyReplicas   prometheus.Gauge
	UnderReplicated   prometheus.Gauge
	ReplicationLag    prometheus.Histogram

	// Encryption metrics
	EncryptedContexts prometheus.Gauge
	KeyRotations      prometheus.Counter
	AccessDenials     prometheus.Counter
	ActiveKeys        prometheus.Gauge

	// Performance metrics
	Throughput           prometheus.Gauge
	ConcurrentOperations prometheus.Gauge
	QueueDepth           prometheus.Gauge

	// Health metrics
	StorageHealth    prometheus.Gauge
	NodeConnectivity prometheus.Gauge
	SyncLatency      prometheus.Histogram
}

// AlertManager handles storage-related alerts and notifications
@@ -97,18 +97,96 @@ type AlertManager struct {
	maxHistory int
}
func (am *AlertManager) severityRank(severity AlertSeverity) int {
	switch severity {
	case SeverityCritical:
		return 4
	case SeverityError:
		return 3
	case SeverityWarning:
		return 2
	case SeverityInfo:
		return 1
	default:
		return 0
	}
}

// GetActiveAlerts returns sorted active alerts (SEC-SLURP-1.1 monitoring path)
func (am *AlertManager) GetActiveAlerts() []*Alert {
	am.mu.RLock()
	defer am.mu.RUnlock()

	if len(am.activealerts) == 0 {
		return nil
	}

	alerts := make([]*Alert, 0, len(am.activealerts))
	for _, alert := range am.activealerts {
		alerts = append(alerts, alert)
	}

	sort.Slice(alerts, func(i, j int) bool {
		iRank := am.severityRank(alerts[i].Severity)
		jRank := am.severityRank(alerts[j].Severity)
		if iRank == jRank {
			return alerts[i].StartTime.After(alerts[j].StartTime)
		}
		return iRank > jRank
	})

	return alerts
}

// Snapshot marshals monitoring state for UCXL persistence (SEC-SLURP-1.1a telemetry)
func (ms *MonitoringSystem) Snapshot(ctx context.Context) (string, error) {
	ms.mu.RLock()
	defer ms.mu.RUnlock()

	if ms.alerts == nil {
		return "", fmt.Errorf("alert manager not initialised")
	}

	active := ms.alerts.GetActiveAlerts()
	alertPayload := make([]map[string]interface{}, 0, len(active))
	for _, alert := range active {
		alertPayload = append(alertPayload, map[string]interface{}{
			"id":         alert.ID,
			"name":       alert.Name,
			"severity":   alert.Severity,
			"message":    fmt.Sprintf("%s (threshold %.2f)", alert.Description, alert.Threshold),
			"labels":     alert.Labels,
			"started_at": alert.StartTime,
		})
	}

	snapshot := map[string]interface{}{
		"node_id":      ms.nodeID,
		"generated_at": time.Now().UTC(),
		"alert_count":  len(active),
		"alerts":       alertPayload,
	}

	encoded, err := json.MarshalIndent(snapshot, "", " ")
	if err != nil {
		return "", fmt.Errorf("failed to marshal monitoring snapshot: %w", err)
	}

	return string(encoded), nil
}
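As a usage sketch of the snapshot shape above, the payload round-trips through `encoding/json` as shown below. The helper name `snapshotJSON` and the sample values are illustrative only, not part of the package; the map keys mirror the fields emitted by `Snapshot`.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// snapshotJSON builds the same map shape that MonitoringSystem.Snapshot emits
// (assumed field names taken from the listing above) and marshals it.
func snapshotJSON(nodeID string, alertCount int) (string, error) {
	snapshot := map[string]interface{}{
		"node_id":      nodeID,
		"generated_at": time.Now().UTC(),
		"alert_count":  alertCount,
		"alerts":       []map[string]interface{}{},
	}
	encoded, err := json.MarshalIndent(snapshot, "", "  ")
	if err != nil {
		return "", err
	}
	return string(encoded), nil
}

func main() {
	out, err := snapshotJSON("node-1", 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The indented JSON string is what the Pin Steward would persist under the beacon key prefix.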
// AlertRule defines conditions for triggering alerts
type AlertRule struct {
	ID          string            `json:"id"`
	Name        string            `json:"name"`
	Description string            `json:"description"`
	Metric      string            `json:"metric"`
	Condition   string            `json:"condition"` // >, <, ==, !=, etc.
	Threshold   float64           `json:"threshold"`
	Duration    time.Duration     `json:"duration"`
	Severity    AlertSeverity     `json:"severity"`
	Labels      map[string]string `json:"labels"`
	Enabled     bool              `json:"enabled"`
}
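The `Condition`/`Threshold` pair implies a simple comparison at evaluation time. A minimal self-contained sketch follows; the local `AlertRule` mirror and the `evaluate` helper are assumptions for illustration, not part of the package API.

```go
package main

import "fmt"

// AlertRule mirrors only the fields relevant to evaluation (assumed shape).
type AlertRule struct {
	Metric    string
	Condition string // >, <, ==, !=
	Threshold float64
}

// evaluate reports whether a metric value trips the rule.
func evaluate(r AlertRule, value float64) bool {
	switch r.Condition {
	case ">":
		return value > r.Threshold
	case "<":
		return value < r.Threshold
	case "==":
		return value == r.Threshold
	case "!=":
		return value != r.Threshold
	default:
		return false // unknown conditions never fire
	}
}

func main() {
	rule := AlertRule{Metric: "replication_lag_seconds", Condition: ">", Threshold: 5}
	fmt.Println(evaluate(rule, 7.2)) // prints true
}
```

In the real AlertManager the rule would also have to hold for `Duration` before an `Alert` is opened.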
// Alert represents an active or resolved alert
@@ -163,30 +241,30 @@ type HealthChecker struct {
// HealthCheck defines a single health check
type HealthCheck struct {
	Name        string                                 `json:"name"`
	Description string                                 `json:"description"`
	Checker     func(ctx context.Context) HealthResult `json:"-"`
	Interval    time.Duration                          `json:"interval"`
	Timeout     time.Duration                          `json:"timeout"`
	Enabled     bool                                   `json:"enabled"`
}

// HealthResult represents the result of a health check
type HealthResult struct {
	Healthy   bool                   `json:"healthy"`
	Message   string                 `json:"message"`
	Latency   time.Duration          `json:"latency"`
	Metadata  map[string]interface{} `json:"metadata"`
	Timestamp time.Time              `json:"timestamp"`
}

// SystemHealth represents the overall health of the storage system
type SystemHealth struct {
	OverallStatus HealthStatus            `json:"overall_status"`
	Components    map[string]HealthResult `json:"components"`
	LastUpdate    time.Time               `json:"last_update"`
	Uptime        time.Duration           `json:"uptime"`
	StartTime     time.Time               `json:"start_time"`
}

// HealthStatus represents system health status
@@ -200,82 +278,82 @@ const (
// PerformanceProfiler analyzes storage performance patterns
type PerformanceProfiler struct {
	mu                sync.RWMutex
	operationProfiles map[string]*OperationProfile
	resourceUsage     *ResourceUsage
	bottlenecks       []*Bottleneck
	recommendations   []*PerformanceRecommendation
}

// OperationProfile contains performance analysis for a specific operation type
type OperationProfile struct {
	Operation       string          `json:"operation"`
	TotalOperations int64           `json:"total_operations"`
	AverageLatency  time.Duration   `json:"average_latency"`
	P50Latency      time.Duration   `json:"p50_latency"`
	P95Latency      time.Duration   `json:"p95_latency"`
	P99Latency      time.Duration   `json:"p99_latency"`
	Throughput      float64         `json:"throughput"`
	ErrorRate       float64         `json:"error_rate"`
	LatencyHistory  []time.Duration `json:"-"`
	LastUpdated     time.Time       `json:"last_updated"`
}

// ResourceUsage tracks resource consumption
type ResourceUsage struct {
	CPUUsage    float64   `json:"cpu_usage"`
	MemoryUsage int64     `json:"memory_usage"`
	DiskUsage   int64     `json:"disk_usage"`
	NetworkIn   int64     `json:"network_in"`
	NetworkOut  int64     `json:"network_out"`
	OpenFiles   int       `json:"open_files"`
	Goroutines  int       `json:"goroutines"`
	LastUpdated time.Time `json:"last_updated"`
}

// Bottleneck represents a performance bottleneck
type Bottleneck struct {
	ID          string                 `json:"id"`
	Type        string                 `json:"type"` // cpu, memory, disk, network, etc.
	Component   string                 `json:"component"`
	Description string                 `json:"description"`
	Severity    AlertSeverity          `json:"severity"`
	Impact      float64                `json:"impact"`
	DetectedAt  time.Time              `json:"detected_at"`
	Metadata    map[string]interface{} `json:"metadata"`
}

// PerformanceRecommendation suggests optimizations
type PerformanceRecommendation struct {
	ID          string                 `json:"id"`
	Type        string                 `json:"type"`
	Title       string                 `json:"title"`
	Description string                 `json:"description"`
	Priority    int                    `json:"priority"`
	Impact      string                 `json:"impact"`
	Effort      string                 `json:"effort"`
	GeneratedAt time.Time              `json:"generated_at"`
	Metadata    map[string]interface{} `json:"metadata"`
}

// MonitoringEvent represents a monitoring system event
type MonitoringEvent struct {
	Type      string                 `json:"type"`
	Level     string                 `json:"level"`
	Message   string                 `json:"message"`
	Component string                 `json:"component"`
	NodeID    string                 `json:"node_id"`
	Timestamp time.Time              `json:"timestamp"`
	Metadata  map[string]interface{} `json:"metadata"`
}

// StructuredLogger provides structured logging for storage operations
type StructuredLogger struct {
	mu        sync.RWMutex
	level     LogLevel
	output    LogOutput
	formatter LogFormatter
	buffer    []*LogEntry
	maxBuffer int
}
@@ -303,27 +381,27 @@ type LogFormatter interface {
// LogEntry represents a single log entry
type LogEntry struct {
	Level     LogLevel               `json:"level"`
	Message   string                 `json:"message"`
	Component string                 `json:"component"`
	Operation string                 `json:"operation"`
	NodeID    string                 `json:"node_id"`
	Timestamp time.Time              `json:"timestamp"`
	Fields    map[string]interface{} `json:"fields"`
	Error     error                  `json:"error,omitempty"`
}

// NewMonitoringSystem creates a new monitoring system
func NewMonitoringSystem(nodeID string) *MonitoringSystem {
	ms := &MonitoringSystem{
		nodeID:              nodeID,
		metrics:             initializeMetrics(nodeID),
		alerts:              newAlertManager(),
		healthChecker:       newHealthChecker(),
		performanceProfiler: newPerformanceProfiler(),
		logger:              newStructuredLogger(),
		notifications:       make(chan *MonitoringEvent, 1000),
		stopCh:              make(chan struct{}),
	}

	// Start monitoring goroutines
@@ -571,7 +649,7 @@ func (ms *MonitoringSystem) executeHealthCheck(check HealthCheck) {
	defer cancel()

	result := check.Checker(ctx)

	ms.healthChecker.mu.Lock()
	ms.healthChecker.status.Components[check.Name] = result
	ms.healthChecker.mu.Unlock()
@@ -592,21 +670,21 @@ func (ms *MonitoringSystem) analyzePerformance() {
func newAlertManager() *AlertManager {
	return &AlertManager{
		rules:        make([]*AlertRule, 0),
		activealerts: make(map[string]*Alert),
		notifiers:    make([]AlertNotifier, 0),
		history:      make([]*Alert, 0),
		maxHistory:   1000,
	}
}

func newHealthChecker() *HealthChecker {
	return &HealthChecker{
		checks: make(map[string]HealthCheck),
		status: &SystemHealth{
			OverallStatus: HealthHealthy,
			Components:    make(map[string]HealthResult),
			StartTime:     time.Now(),
		},
		checkInterval: 1 * time.Minute,
		timeout:       30 * time.Second,
@@ -664,8 +742,8 @@ func (ms *MonitoringSystem) GetMonitoringStats() (*MonitoringStats, error) {
	defer ms.mu.RUnlock()

	stats := &MonitoringStats{
		NodeID:       ms.nodeID,
		Timestamp:    time.Now(),
		HealthStatus: ms.healthChecker.status.OverallStatus,
		ActiveAlerts: len(ms.alerts.activealerts),
		Bottlenecks:  len(ms.performanceProfiler.bottlenecks),


@@ -3,9 +3,8 @@ package storage
import (
	"time"

	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/ucxl"
)
// DatabaseSchema defines the complete schema for encrypted context storage
@@ -14,325 +13,325 @@ import (
// ContextRecord represents the main context storage record
type ContextRecord struct {
	// Primary identification
	ID          string       `json:"id" db:"id"`                     // Unique record ID
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
	Path        string       `json:"path" db:"path"`                 // File system path
	PathHash    string       `json:"path_hash" db:"path_hash"`       // Hash of path for indexing

	// Core context data
	Summary      string `json:"summary" db:"summary"`
	Purpose      string `json:"purpose" db:"purpose"`
	Technologies []byte `json:"technologies" db:"technologies"` // JSON array
	Tags         []byte `json:"tags" db:"tags"`                 // JSON array
	Insights     []byte `json:"insights" db:"insights"`         // JSON array

	// Hierarchy control
	OverridesParent    bool `json:"overrides_parent" db:"overrides_parent"`
	ContextSpecificity int  `json:"context_specificity" db:"context_specificity"`
	AppliesToChildren  bool `json:"applies_to_children" db:"applies_to_children"`

	// Quality metrics
	RAGConfidence   float64 `json:"rag_confidence" db:"rag_confidence"`
	StalenessScore  float64 `json:"staleness_score" db:"staleness_score"`
	ValidationScore float64 `json:"validation_score" db:"validation_score"`

	// Versioning
	Version       int64  `json:"version" db:"version"`
	ParentVersion *int64 `json:"parent_version" db:"parent_version"`
	ContextHash   string `json:"context_hash" db:"context_hash"`

	// Temporal metadata
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	UpdatedAt      time.Time  `json:"updated_at" db:"updated_at"`
	GeneratedAt    time.Time  `json:"generated_at" db:"generated_at"`
	LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
	ExpiresAt      *time.Time `json:"expires_at" db:"expires_at"`

	// Storage metadata
	StorageType       string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
	CompressionType   string `json:"compression_type" db:"compression_type"`
	EncryptionLevel   int    `json:"encryption_level" db:"encryption_level"`
	ReplicationFactor int    `json:"replication_factor" db:"replication_factor"`
	Checksum          string `json:"checksum" db:"checksum"`
	DataSize          int64  `json:"data_size" db:"data_size"`
	CompressedSize    int64  `json:"compressed_size" db:"compressed_size"`
}
// EncryptedContextRecord represents role-based encrypted context storage
type EncryptedContextRecord struct {
	// Primary keys
	ID          string       `json:"id" db:"id"`
	ContextID   string       `json:"context_id" db:"context_id"` // FK to ContextRecord
	Role        string       `json:"role" db:"role"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`

	// Encryption details
	AccessLevel    slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
	EncryptedData  []byte                       `json:"encrypted_data" db:"encrypted_data"`
	KeyFingerprint string                       `json:"key_fingerprint" db:"key_fingerprint"`
	EncryptionAlgo string                       `json:"encryption_algo" db:"encryption_algo"`
	KeyVersion     int                          `json:"key_version" db:"key_version"`

	// Data integrity
	DataChecksum   string `json:"data_checksum" db:"data_checksum"`
	EncryptionHash string `json:"encryption_hash" db:"encryption_hash"`

	// Temporal data
	CreatedAt       time.Time  `json:"created_at" db:"created_at"`
	UpdatedAt       time.Time  `json:"updated_at" db:"updated_at"`
	LastDecryptedAt *time.Time `json:"last_decrypted_at" db:"last_decrypted_at"`
	ExpiresAt       *time.Time `json:"expires_at" db:"expires_at"`

	// Access tracking
	AccessCount    int64  `json:"access_count" db:"access_count"`
	LastAccessedBy string `json:"last_accessed_by" db:"last_accessed_by"`
	AccessHistory  []byte `json:"access_history" db:"access_history"` // JSON access log
}
// ContextHierarchyRecord represents hierarchical relationships between contexts
type ContextHierarchyRecord struct {
	ID            string       `json:"id" db:"id"`
	ParentAddress ucxl.Address `json:"parent_address" db:"parent_address"`
	ChildAddress  ucxl.Address `json:"child_address" db:"child_address"`
	ParentPath    string       `json:"parent_path" db:"parent_path"`
	ChildPath     string       `json:"child_path" db:"child_path"`

	// Relationship metadata
	RelationshipType  string  `json:"relationship_type" db:"relationship_type"` // parent, sibling, dependency
	InheritanceWeight float64 `json:"inheritance_weight" db:"inheritance_weight"`
	OverrideStrength  int     `json:"override_strength" db:"override_strength"`
	Distance          int     `json:"distance" db:"distance"` // Hierarchy depth distance

	// Temporal tracking
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	ValidatedAt    time.Time  `json:"validated_at" db:"validated_at"`
	LastResolvedAt *time.Time `json:"last_resolved_at" db:"last_resolved_at"`

	// Resolution statistics
	ResolutionCount int64   `json:"resolution_count" db:"resolution_count"`
	ResolutionTime  float64 `json:"resolution_time" db:"resolution_time"` // Average ms
}
// DecisionHopRecord represents temporal decision analysis storage
type DecisionHopRecord struct {
	// Primary identification
	ID             string       `json:"id" db:"id"`
	DecisionID     string       `json:"decision_id" db:"decision_id"`
	UCXLAddress    ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	ContextVersion int64        `json:"context_version" db:"context_version"`

	// Decision metadata
	ChangeReason      string  `json:"change_reason" db:"change_reason"`
	DecisionMaker     string  `json:"decision_maker" db:"decision_maker"`
	DecisionRationale string  `json:"decision_rationale" db:"decision_rationale"`
	ImpactScope       string  `json:"impact_scope" db:"impact_scope"`
	ConfidenceLevel   float64 `json:"confidence_level" db:"confidence_level"`

	// Context evolution
	PreviousHash   string  `json:"previous_hash" db:"previous_hash"`
	CurrentHash    string  `json:"current_hash" db:"current_hash"`
	ContextDelta   []byte  `json:"context_delta" db:"context_delta"` // JSON diff
	StalenessScore float64 `json:"staleness_score" db:"staleness_score"`

	// Temporal data
	Timestamp            time.Time  `json:"timestamp" db:"timestamp"`
	PreviousDecisionTime *time.Time `json:"previous_decision_time" db:"previous_decision_time"`
	ProcessingTime       float64    `json:"processing_time" db:"processing_time"` // ms

	// External references
	ExternalRefs []byte `json:"external_refs" db:"external_refs"` // JSON array
	CommitHash   string `json:"commit_hash" db:"commit_hash"`
	TicketID     string `json:"ticket_id" db:"ticket_id"`
}
// DecisionInfluenceRecord represents decision influence relationships
type DecisionInfluenceRecord struct {
	ID               string       `json:"id" db:"id"`
	SourceDecisionID string       `json:"source_decision_id" db:"source_decision_id"`
	TargetDecisionID string       `json:"target_decision_id" db:"target_decision_id"`
	SourceAddress    ucxl.Address `json:"source_address" db:"source_address"`
	TargetAddress    ucxl.Address `json:"target_address" db:"target_address"`

	// Influence metrics
	InfluenceStrength float64 `json:"influence_strength" db:"influence_strength"`
	InfluenceType     string  `json:"influence_type" db:"influence_type"`     // direct, indirect, cascading
	PropagationDelay  float64 `json:"propagation_delay" db:"propagation_delay"` // hours
	HopDistance       int     `json:"hop_distance" db:"hop_distance"`

	// Path analysis
	ShortestPath   []byte  `json:"shortest_path" db:"shortest_path"`     // JSON path array
	AlternatePaths []byte  `json:"alternate_paths" db:"alternate_paths"` // JSON paths
	PathConfidence float64 `json:"path_confidence" db:"path_confidence"`

	// Temporal tracking
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	LastAnalyzedAt time.Time  `json:"last_analyzed_at" db:"last_analyzed_at"`
	ValidatedAt    *time.Time `json:"validated_at" db:"validated_at"`
}
// AccessControlRecord represents role-based access control metadata
type AccessControlRecord struct {
	ID          string       `json:"id" db:"id"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	Role        string       `json:"role" db:"role"`
	Permissions []byte       `json:"permissions" db:"permissions"` // JSON permissions array

	// Access levels
	ReadAccess   bool                         `json:"read_access" db:"read_access"`
	WriteAccess  bool                         `json:"write_access" db:"write_access"`
	DeleteAccess bool                         `json:"delete_access" db:"delete_access"`
	AdminAccess  bool                         `json:"admin_access" db:"admin_access"`
	AccessLevel  slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`

	// Constraints
	TimeConstraints []byte `json:"time_constraints" db:"time_constraints"` // JSON time rules
	IPConstraints   []byte `json:"ip_constraints" db:"ip_constraints"`     // JSON IP rules
	ContextFilters  []byte `json:"context_filters" db:"context_filters"`   // JSON filter rules

	// Audit trail
	CreatedAt time.Time  `json:"created_at" db:"created_at"`
	CreatedBy string     `json:"created_by" db:"created_by"`
	UpdatedAt time.Time  `json:"updated_at" db:"updated_at"`
	UpdatedBy string     `json:"updated_by" db:"updated_by"`
	ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
}
// ContextIndexRecord represents search index entries for contexts // ContextIndexRecord represents search index entries for contexts
type ContextIndexRecord struct { type ContextIndexRecord struct {
ID string `json:"id" db:"id"` ID string `json:"id" db:"id"`
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
IndexName string `json:"index_name" db:"index_name"` IndexName string `json:"index_name" db:"index_name"`
// Indexed content // Indexed content
Tokens []byte `json:"tokens" db:"tokens"` // JSON token array Tokens []byte `json:"tokens" db:"tokens"` // JSON token array
NGrams []byte `json:"ngrams" db:"ngrams"` // JSON n-gram array NGrams []byte `json:"ngrams" db:"ngrams"` // JSON n-gram array
SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector
// Search metadata // Search metadata
IndexWeight float64 `json:"index_weight" db:"index_weight"` IndexWeight float64 `json:"index_weight" db:"index_weight"`
BoostFactor float64 `json:"boost_factor" db:"boost_factor"` BoostFactor float64 `json:"boost_factor" db:"boost_factor"`
Language string `json:"language" db:"language"` Language string `json:"language" db:"language"`
ContentType string `json:"content_type" db:"content_type"` ContentType string `json:"content_type" db:"content_type"`
// Quality metrics // Quality metrics
RelevanceScore float64 `json:"relevance_score" db:"relevance_score"` RelevanceScore float64 `json:"relevance_score" db:"relevance_score"`
FreshnessScore float64 `json:"freshness_score" db:"freshness_score"` FreshnessScore float64 `json:"freshness_score" db:"freshness_score"`
PopularityScore float64 `json:"popularity_score" db:"popularity_score"` PopularityScore float64 `json:"popularity_score" db:"popularity_score"`
// Temporal tracking // Temporal tracking
CreatedAt time.Time `json:"created_at" db:"created_at"` CreatedAt time.Time `json:"created_at" db:"created_at"`
UpdatedAt time.Time `json:"updated_at" db:"updated_at"` UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"` LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"`
} }
// CacheEntryRecord represents cached context data // CacheEntryRecord represents cached context data
type CacheEntryRecord struct { type CacheEntryRecord struct {
ID string `json:"id" db:"id"` ID string `json:"id" db:"id"`
CacheKey string `json:"cache_key" db:"cache_key"` CacheKey string `json:"cache_key" db:"cache_key"`
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
Role string `json:"role" db:"role"` Role string `json:"role" db:"role"`
// Cached data // Cached data
CachedData []byte `json:"cached_data" db:"cached_data"` CachedData []byte `json:"cached_data" db:"cached_data"`
DataHash string `json:"data_hash" db:"data_hash"` DataHash string `json:"data_hash" db:"data_hash"`
Compressed bool `json:"compressed" db:"compressed"` Compressed bool `json:"compressed" db:"compressed"`
OriginalSize int64 `json:"original_size" db:"original_size"` OriginalSize int64 `json:"original_size" db:"original_size"`
CompressedSize int64 `json:"compressed_size" db:"compressed_size"` CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
// Cache metadata // Cache metadata
TTL int64 `json:"ttl" db:"ttl"` // seconds TTL int64 `json:"ttl" db:"ttl"` // seconds
Priority int `json:"priority" db:"priority"` Priority int `json:"priority" db:"priority"`
AccessCount int64 `json:"access_count" db:"access_count"` AccessCount int64 `json:"access_count" db:"access_count"`
HitCount int64 `json:"hit_count" db:"hit_count"` HitCount int64 `json:"hit_count" db:"hit_count"`
// Temporal data // Temporal data
CreatedAt time.Time `json:"created_at" db:"created_at"` CreatedAt time.Time `json:"created_at" db:"created_at"`
LastAccessedAt time.Time `json:"last_accessed_at" db:"last_accessed_at"` LastAccessedAt time.Time `json:"last_accessed_at" db:"last_accessed_at"`
LastHitAt *time.Time `json:"last_hit_at" db:"last_hit_at"` LastHitAt *time.Time `json:"last_hit_at" db:"last_hit_at"`
ExpiresAt time.Time `json:"expires_at" db:"expires_at"` ExpiresAt time.Time `json:"expires_at" db:"expires_at"`
} }
// BackupRecord represents backup metadata
type BackupRecord struct {
	ID          string `json:"id" db:"id"`
	BackupID    string `json:"backup_id" db:"backup_id"`
	Name        string `json:"name" db:"name"`
	Destination string `json:"destination" db:"destination"`

	// Backup content
	ContextCount   int64  `json:"context_count" db:"context_count"`
	DataSize       int64  `json:"data_size" db:"data_size"`
	CompressedSize int64  `json:"compressed_size" db:"compressed_size"`
	Checksum       string `json:"checksum" db:"checksum"`

	// Backup metadata
	IncludesIndexes bool   `json:"includes_indexes" db:"includes_indexes"`
	IncludesCache   bool   `json:"includes_cache" db:"includes_cache"`
	Encrypted       bool   `json:"encrypted" db:"encrypted"`
	Incremental     bool   `json:"incremental" db:"incremental"`
	ParentBackupID  string `json:"parent_backup_id" db:"parent_backup_id"`

	// Status tracking
	Status       BackupStatus `json:"status" db:"status"`
	Progress     float64      `json:"progress" db:"progress"`
	ErrorMessage string       `json:"error_message" db:"error_message"`

	// Temporal data
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	StartedAt      *time.Time `json:"started_at" db:"started_at"`
	CompletedAt    *time.Time `json:"completed_at" db:"completed_at"`
	RetentionUntil time.Time  `json:"retention_until" db:"retention_until"`
}

// MetricsRecord represents storage performance metrics
type MetricsRecord struct {
	ID         string `json:"id" db:"id"`
	MetricType string `json:"metric_type" db:"metric_type"` // storage, encryption, cache, etc.
	NodeID     string `json:"node_id" db:"node_id"`

	// Metric data
	MetricName  string  `json:"metric_name" db:"metric_name"`
	MetricValue float64 `json:"metric_value" db:"metric_value"`
	MetricUnit  string  `json:"metric_unit" db:"metric_unit"`
	Tags        []byte  `json:"tags" db:"tags"` // JSON tag object

	// Aggregation data
	AggregationType string `json:"aggregation_type" db:"aggregation_type"` // avg, sum, count, etc.
	TimeWindow      int64  `json:"time_window" db:"time_window"`           // seconds
	SampleCount     int64  `json:"sample_count" db:"sample_count"`

	// Temporal tracking
	Timestamp time.Time `json:"timestamp" db:"timestamp"`
	CreatedAt time.Time `json:"created_at" db:"created_at"`
}

// ContextEvolutionRecord tracks how contexts evolve over time
type ContextEvolutionRecord struct {
	ID          string       `json:"id" db:"id"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	FromVersion int64        `json:"from_version" db:"from_version"`
	ToVersion   int64        `json:"to_version" db:"to_version"`

	// Evolution analysis
	EvolutionType    string  `json:"evolution_type" db:"evolution_type"` // enhancement, refactor, fix, etc.
	SimilarityScore  float64 `json:"similarity_score" db:"similarity_score"`
	ChangesMagnitude float64 `json:"changes_magnitude" db:"changes_magnitude"`
	SemanticDrift    float64 `json:"semantic_drift" db:"semantic_drift"`

	// Change details
	ChangedFields  []byte `json:"changed_fields" db:"changed_fields"`   // JSON array
	FieldDeltas    []byte `json:"field_deltas" db:"field_deltas"`       // JSON delta object
	ImpactAnalysis []byte `json:"impact_analysis" db:"impact_analysis"` // JSON analysis

	// Quality assessment
	QualityImprovement float64 `json:"quality_improvement" db:"quality_improvement"`
	ConfidenceChange   float64 `json:"confidence_change" db:"confidence_change"`
	ValidationPassed   bool    `json:"validation_passed" db:"validation_passed"`

	// Temporal tracking
	EvolutionTime  time.Time `json:"evolution_time" db:"evolution_time"`
	AnalyzedAt     time.Time `json:"analyzed_at" db:"analyzed_at"`
	ProcessingTime float64   `json:"processing_time" db:"processing_time"` // ms
}
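The evolution scores pair naturally with the `EvolutionType` labels noted in the field comment (enhancement, refactor, fix). A sketch of how a classifier over those scores might look — the struct is a stand-in for the scoring fields of `ContextEvolutionRecord`, and the threshold values are illustrative assumptions, not taken from the real system:

```go
package main

import "fmt"

// evolutionScores mirrors the analysis fields of ContextEvolutionRecord.
type evolutionScores struct {
	SimilarityScore  float64 // 1.0 = content identical across versions
	ChangesMagnitude float64 // 0..1 fraction of context that changed
	SemanticDrift    float64 // 0..1 drift from the original meaning
}

// classify buckets an evolution into the EvolutionType vocabulary:
// low similarity plus high drift reads as a refactor, tiny deltas as a fix,
// everything else as an enhancement. Thresholds here are illustrative.
func classify(s evolutionScores) string {
	switch {
	case s.SimilarityScore < 0.5 && s.SemanticDrift > 0.5:
		return "refactor"
	case s.ChangesMagnitude < 0.1:
		return "fix"
	default:
		return "enhancement"
	}
}

func main() {
	fmt.Println(classify(evolutionScores{SimilarityScore: 0.95, ChangesMagnitude: 0.05})) // fix
	fmt.Println(classify(evolutionScores{SimilarityScore: 0.3, SemanticDrift: 0.8}))      // refactor
	fmt.Println(classify(evolutionScores{SimilarityScore: 0.7, ChangesMagnitude: 0.4}))   // enhancement
}
```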
// Schema validation and creation functions
@@ -365,44 +364,44 @@ func CreateIndexStatements() []string {
"CREATE INDEX IF NOT EXISTS idx_context_version ON contexts(version)", "CREATE INDEX IF NOT EXISTS idx_context_version ON contexts(version)",
"CREATE INDEX IF NOT EXISTS idx_context_staleness ON contexts(staleness_score)", "CREATE INDEX IF NOT EXISTS idx_context_staleness ON contexts(staleness_score)",
"CREATE INDEX IF NOT EXISTS idx_context_confidence ON contexts(rag_confidence)", "CREATE INDEX IF NOT EXISTS idx_context_confidence ON contexts(rag_confidence)",
// Encrypted context indexes // Encrypted context indexes
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_role ON encrypted_contexts(role)", "CREATE INDEX IF NOT EXISTS idx_encrypted_context_role ON encrypted_contexts(role)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_ucxl ON encrypted_contexts(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_encrypted_context_ucxl ON encrypted_contexts(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_access_level ON encrypted_contexts(access_level)", "CREATE INDEX IF NOT EXISTS idx_encrypted_context_access_level ON encrypted_contexts(access_level)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_key_fp ON encrypted_contexts(key_fingerprint)", "CREATE INDEX IF NOT EXISTS idx_encrypted_context_key_fp ON encrypted_contexts(key_fingerprint)",
// Hierarchy indexes // Hierarchy indexes
"CREATE INDEX IF NOT EXISTS idx_hierarchy_parent ON context_hierarchy(parent_address)", "CREATE INDEX IF NOT EXISTS idx_hierarchy_parent ON context_hierarchy(parent_address)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_child ON context_hierarchy(child_address)", "CREATE INDEX IF NOT EXISTS idx_hierarchy_child ON context_hierarchy(child_address)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_distance ON context_hierarchy(distance)", "CREATE INDEX IF NOT EXISTS idx_hierarchy_distance ON context_hierarchy(distance)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_weight ON context_hierarchy(inheritance_weight)", "CREATE INDEX IF NOT EXISTS idx_hierarchy_weight ON context_hierarchy(inheritance_weight)",
// Decision hop indexes // Decision hop indexes
"CREATE INDEX IF NOT EXISTS idx_decision_ucxl ON decision_hops(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_decision_ucxl ON decision_hops(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_decision_timestamp ON decision_hops(timestamp)", "CREATE INDEX IF NOT EXISTS idx_decision_timestamp ON decision_hops(timestamp)",
"CREATE INDEX IF NOT EXISTS idx_decision_reason ON decision_hops(change_reason)", "CREATE INDEX IF NOT EXISTS idx_decision_reason ON decision_hops(change_reason)",
"CREATE INDEX IF NOT EXISTS idx_decision_maker ON decision_hops(decision_maker)", "CREATE INDEX IF NOT EXISTS idx_decision_maker ON decision_hops(decision_maker)",
"CREATE INDEX IF NOT EXISTS idx_decision_version ON decision_hops(context_version)", "CREATE INDEX IF NOT EXISTS idx_decision_version ON decision_hops(context_version)",
// Decision influence indexes // Decision influence indexes
"CREATE INDEX IF NOT EXISTS idx_influence_source ON decision_influence(source_decision_id)", "CREATE INDEX IF NOT EXISTS idx_influence_source ON decision_influence(source_decision_id)",
"CREATE INDEX IF NOT EXISTS idx_influence_target ON decision_influence(target_decision_id)", "CREATE INDEX IF NOT EXISTS idx_influence_target ON decision_influence(target_decision_id)",
"CREATE INDEX IF NOT EXISTS idx_influence_strength ON decision_influence(influence_strength)", "CREATE INDEX IF NOT EXISTS idx_influence_strength ON decision_influence(influence_strength)",
"CREATE INDEX IF NOT EXISTS idx_influence_hop_distance ON decision_influence(hop_distance)", "CREATE INDEX IF NOT EXISTS idx_influence_hop_distance ON decision_influence(hop_distance)",
// Access control indexes // Access control indexes
"CREATE INDEX IF NOT EXISTS idx_access_role ON access_control(role)", "CREATE INDEX IF NOT EXISTS idx_access_role ON access_control(role)",
"CREATE INDEX IF NOT EXISTS idx_access_ucxl ON access_control(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_access_ucxl ON access_control(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_access_level ON access_control(access_level)", "CREATE INDEX IF NOT EXISTS idx_access_level ON access_control(access_level)",
"CREATE INDEX IF NOT EXISTS idx_access_expires ON access_control(expires_at)", "CREATE INDEX IF NOT EXISTS idx_access_expires ON access_control(expires_at)",
// Search index indexes // Search index indexes
"CREATE INDEX IF NOT EXISTS idx_context_index_name ON context_indexes(index_name)", "CREATE INDEX IF NOT EXISTS idx_context_index_name ON context_indexes(index_name)",
"CREATE INDEX IF NOT EXISTS idx_context_index_ucxl ON context_indexes(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_context_index_ucxl ON context_indexes(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_context_index_relevance ON context_indexes(relevance_score)", "CREATE INDEX IF NOT EXISTS idx_context_index_relevance ON context_indexes(relevance_score)",
"CREATE INDEX IF NOT EXISTS idx_context_index_freshness ON context_indexes(freshness_score)", "CREATE INDEX IF NOT EXISTS idx_context_index_freshness ON context_indexes(freshness_score)",
// Cache indexes // Cache indexes
"CREATE INDEX IF NOT EXISTS idx_cache_key ON cache_entries(cache_key)", "CREATE INDEX IF NOT EXISTS idx_cache_key ON cache_entries(cache_key)",
"CREATE INDEX IF NOT EXISTS idx_cache_ucxl ON cache_entries(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_cache_ucxl ON cache_entries(ucxl_address)",
@@ -410,13 +409,13 @@ func CreateIndexStatements() []string {
"CREATE INDEX IF NOT EXISTS idx_cache_expires ON cache_entries(expires_at)", "CREATE INDEX IF NOT EXISTS idx_cache_expires ON cache_entries(expires_at)",
"CREATE INDEX IF NOT EXISTS idx_cache_priority ON cache_entries(priority)", "CREATE INDEX IF NOT EXISTS idx_cache_priority ON cache_entries(priority)",
"CREATE INDEX IF NOT EXISTS idx_cache_access_count ON cache_entries(access_count)", "CREATE INDEX IF NOT EXISTS idx_cache_access_count ON cache_entries(access_count)",
// Metrics indexes // Metrics indexes
"CREATE INDEX IF NOT EXISTS idx_metrics_type ON metrics(metric_type)", "CREATE INDEX IF NOT EXISTS idx_metrics_type ON metrics(metric_type)",
"CREATE INDEX IF NOT EXISTS idx_metrics_name ON metrics(metric_name)", "CREATE INDEX IF NOT EXISTS idx_metrics_name ON metrics(metric_name)",
"CREATE INDEX IF NOT EXISTS idx_metrics_node ON metrics(node_id)", "CREATE INDEX IF NOT EXISTS idx_metrics_node ON metrics(node_id)",
"CREATE INDEX IF NOT EXISTS idx_metrics_timestamp ON metrics(timestamp)", "CREATE INDEX IF NOT EXISTS idx_metrics_timestamp ON metrics(timestamp)",
// Evolution indexes // Evolution indexes
"CREATE INDEX IF NOT EXISTS idx_evolution_ucxl ON context_evolution(ucxl_address)", "CREATE INDEX IF NOT EXISTS idx_evolution_ucxl ON context_evolution(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_evolution_from_version ON context_evolution(from_version)", "CREATE INDEX IF NOT EXISTS idx_evolution_from_version ON context_evolution(from_version)",


@@ -283,32 +283,42 @@ type IndexStatistics struct {
// BackupConfig represents backup configuration
type BackupConfig struct {
	Name           string                 `json:"name"`             // Backup name
	Destination    string                 `json:"destination"`      // Backup destination
	IncludeIndexes bool                   `json:"include_indexes"`  // Include search indexes
	IncludeCache   bool                   `json:"include_cache"`    // Include cache data
	Compression    bool                   `json:"compression"`      // Enable compression
	Encryption     bool                   `json:"encryption"`       // Enable encryption
	EncryptionKey  string                 `json:"encryption_key"`   // Encryption key
	Incremental    bool                   `json:"incremental"`      // Incremental backup
	ParentBackupID string                 `json:"parent_backup_id"` // Parent backup reference
	Retention      time.Duration          `json:"retention"`        // Backup retention period
	Metadata       map[string]interface{} `json:"metadata"`         // Additional metadata
}
// BackupInfo represents information about a backup
type BackupInfo struct {
	ID              string                 `json:"id"`               // Backup ID
	BackupID        string                 `json:"backup_id"`        // Legacy identifier
	Name            string                 `json:"name"`             // Backup name
	Destination     string                 `json:"destination"`      // Destination path
	CreatedAt       time.Time              `json:"created_at"`       // Creation time
	Size            int64                  `json:"size"`             // Backup size
	CompressedSize  int64                  `json:"compressed_size"`  // Compressed size
	DataSize        int64                  `json:"data_size"`        // Total data size
	ContextCount    int64                  `json:"context_count"`    // Number of contexts
	Encrypted       bool                   `json:"encrypted"`        // Whether encrypted
	Incremental     bool                   `json:"incremental"`      // Whether incremental
	ParentBackupID  string                 `json:"parent_backup_id"` // Parent backup for incremental
	IncludesIndexes bool                   `json:"includes_indexes"` // Include indexes
	IncludesCache   bool                   `json:"includes_cache"`   // Include cache data
	Checksum        string                 `json:"checksum"`         // Backup checksum
	Status          BackupStatus           `json:"status"`           // Backup status
	Progress        float64                `json:"progress"`         // Completion progress 0-1
	ErrorMessage    string                 `json:"error_message"`    // Last error message
	RetentionUntil  time.Time              `json:"retention_until"`  // Retention deadline
	CompletedAt     *time.Time             `json:"completed_at"`     // Completion time
	Metadata        map[string]interface{} `json:"metadata"`         // Additional metadata
}
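The `Incremental`/`ParentBackupID` pair implies a restore-time walk back to the last full backup. A minimal sketch of that chain resolution, using a stand-in struct with just the linkage fields (the helper name `restoreChain` is illustrative, not part of the real API):

```go
package main

import "fmt"

// backupInfo is a stand-in carrying only the incremental-linkage
// fields of BackupInfo.
type backupInfo struct {
	ID             string
	Incremental    bool
	ParentBackupID string
}

// restoreChain walks parent links from `id` back to the nearest full
// backup, returning IDs in apply order (full backup first). It assumes
// the chain is acyclic and each parent is present in the map.
func restoreChain(byID map[string]backupInfo, id string) []string {
	var chain []string
	for cur, ok := byID[id]; ok; cur, ok = byID[cur.ParentBackupID] {
		chain = append([]string{cur.ID}, chain...) // prepend: parents go first
		if !cur.Incremental {
			break // reached the full backup that anchors the chain
		}
	}
	return chain
}

func main() {
	backups := map[string]backupInfo{
		"full-1": {ID: "full-1"},
		"inc-1":  {ID: "inc-1", Incremental: true, ParentBackupID: "full-1"},
		"inc-2":  {ID: "inc-2", Incremental: true, ParentBackupID: "inc-1"},
	}
	fmt.Println(restoreChain(backups, "inc-2")) // [full-1 inc-1 inc-2]
}
```

A real implementation would also verify each link's `Checksum` and `Status` before applying it.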
// BackupStatus represents backup status


@@ -5,7 +5,9 @@ import (
"fmt" "fmt"
"time" "time"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/slurp/storage" "chorus/pkg/slurp/storage"
"chorus/pkg/ucxl"
) )
// TemporalGraphFactory creates and configures temporal graph components
@@ -17,44 +19,44 @@ type TemporalGraphFactory struct {
// TemporalConfig represents configuration for the temporal graph system
type TemporalConfig struct {
	// Core graph settings
	MaxDepth         int               `json:"max_depth"`
	StalenessWeights *StalenessWeights `json:"staleness_weights"`
	CacheTimeout     time.Duration     `json:"cache_timeout"`

	// Analysis settings
	InfluenceAnalysisConfig *InfluenceAnalysisConfig `json:"influence_analysis_config"`
	NavigationConfig        *NavigationConfig        `json:"navigation_config"`
	QueryConfig             *QueryConfig             `json:"query_config"`

	// Persistence settings
	PersistenceConfig *PersistenceConfig `json:"persistence_config"`

	// Performance settings
	EnableCaching     bool `json:"enable_caching"`
	EnableCompression bool `json:"enable_compression"`
	EnableMetrics     bool `json:"enable_metrics"`

	// Debug settings
	EnableDebugLogging bool `json:"enable_debug_logging"`
	EnableValidation   bool `json:"enable_validation"`
}

// InfluenceAnalysisConfig represents configuration for influence analysis
type InfluenceAnalysisConfig struct {
	DampingFactor            float64       `json:"damping_factor"`
	MaxIterations            int           `json:"max_iterations"`
	ConvergenceThreshold     float64       `json:"convergence_threshold"`
	CacheValidDuration       time.Duration `json:"cache_valid_duration"`
	EnableCentralityMetrics  bool          `json:"enable_centrality_metrics"`
	EnableCommunityDetection bool          `json:"enable_community_detection"`
}

// NavigationConfig represents configuration for decision navigation
type NavigationConfig struct {
	MaxNavigationHistory int           `json:"max_navigation_history"`
	BookmarkRetention    time.Duration `json:"bookmark_retention"`
	SessionTimeout       time.Duration `json:"session_timeout"`
	EnablePathCaching    bool          `json:"enable_path_caching"`
}
// QueryConfig represents configuration for decision-hop queries
@@ -68,17 +70,17 @@ type QueryConfig struct {
// TemporalGraphSystem represents the complete temporal graph system
type TemporalGraphSystem struct {
	Graph              TemporalGraph
	Navigator          DecisionNavigator
	InfluenceAnalyzer  InfluenceAnalyzer
	StalenessDetector  StalenessDetector
	ConflictDetector   ConflictDetector
	PatternAnalyzer    PatternAnalyzer
	VersionManager     VersionManager
	HistoryManager     HistoryManager
	MetricsCollector   MetricsCollector
	QuerySystem        *querySystemImpl
	PersistenceManager *persistenceManagerImpl
}
// NewTemporalGraphFactory creates a new temporal graph factory
@@ -86,7 +88,7 @@ func NewTemporalGraphFactory(storage storage.ContextStore, config *TemporalConfi
	if config == nil {
		config = DefaultTemporalConfig()
	}

	return &TemporalGraphFactory{
		storage: storage,
		config:  config,
@@ -100,22 +102,22 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
	encryptedStorage storage.EncryptedStorage,
	backupManager storage.BackupManager,
) (*TemporalGraphSystem, error) {
	// Create core temporal graph
	graph := NewTemporalGraph(tgf.storage).(*temporalGraphImpl)

	// Create navigator
	navigator := NewDecisionNavigator(graph)

	// Create influence analyzer
	analyzer := NewInfluenceAnalyzer(graph)

	// Create staleness detector
	detector := NewStalenessDetector(graph)

	// Create query system
	querySystem := NewQuerySystem(graph, navigator, analyzer, detector)

	// Create persistence manager
	persistenceManager := NewPersistenceManager(
		tgf.storage,
@@ -126,28 +128,28 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
		graph,
		tgf.config.PersistenceConfig,
	)

	// Create additional components
	conflictDetector := NewConflictDetector(graph)
	patternAnalyzer := NewPatternAnalyzer(graph)
	versionManager := NewVersionManager(graph, persistenceManager)
	historyManager := NewHistoryManager(graph, persistenceManager)
	metricsCollector := NewMetricsCollector(graph)

	system := &TemporalGraphSystem{
		Graph:              graph,
		Navigator:          navigator,
		InfluenceAnalyzer:  analyzer,
		StalenessDetector:  detector,
		ConflictDetector:   conflictDetector,
		PatternAnalyzer:    patternAnalyzer,
		VersionManager:     versionManager,
		HistoryManager:     historyManager,
		MetricsCollector:   metricsCollector,
		QuerySystem:        querySystem,
		PersistenceManager: persistenceManager,
	}

	return system, nil
}
@@ -159,19 +161,19 @@ func (tgf *TemporalGraphFactory) LoadExistingSystem(
	encryptedStorage storage.EncryptedStorage,
	backupManager storage.BackupManager,
) (*TemporalGraphSystem, error) {
	// Create system
	system, err := tgf.CreateTemporalGraphSystem(localStorage, distributedStorage, encryptedStorage, backupManager)
	if err != nil {
		return nil, fmt.Errorf("failed to create system: %w", err)
	}

	// Load graph data
	err = system.PersistenceManager.LoadTemporalGraph(ctx)
	if err != nil {
		return nil, fmt.Errorf("failed to load temporal graph: %w", err)
	}

	return system, nil
}
@@ -188,23 +190,23 @@ func DefaultTemporalConfig() *TemporalConfig {
			DependencyWeight: 0.3,
		},
		CacheTimeout: time.Minute * 15,
		InfluenceAnalysisConfig: &InfluenceAnalysisConfig{
			DampingFactor:            0.85,
			MaxIterations:            100,
			ConvergenceThreshold:     1e-6,
			CacheValidDuration:       time.Minute * 30,
			EnableCentralityMetrics:  true,
			EnableCommunityDetection: true,
		},
		NavigationConfig: &NavigationConfig{
			MaxNavigationHistory: 100,
			BookmarkRetention:    time.Hour * 24 * 30, // 30 days
			SessionTimeout:       time.Hour * 2,
			EnablePathCaching:    true,
		},
		QueryConfig: &QueryConfig{
			DefaultMaxHops:  10,
			MaxQueryResults: 1000,
@@ -212,28 +214,28 @@ func DefaultTemporalConfig() *TemporalConfig {
			CacheQueryResults:       true,
			EnableQueryOptimization: true,
		},
		PersistenceConfig: &PersistenceConfig{
			EnableLocalStorage:         true,
			EnableDistributedStorage:   true,
			EnableEncryption:           true,
			EncryptionRoles:            []string{"analyst", "architect", "developer"},
			SyncInterval:               time.Minute * 15,
			ConflictResolutionStrategy: "latest_wins",
			EnableAutoSync:             true,
			MaxSyncRetries:             3,
			BatchSize:                  50,
			FlushInterval:              time.Second * 30,
			EnableWriteBuffer:          true,
			EnableAutoBackup:           true,
			BackupInterval:             time.Hour * 6,
			RetainBackupCount:          10,
			KeyPrefix:                  "temporal_graph",
			NodeKeyPattern:             "temporal_graph/nodes/%s",
			GraphKeyPattern:            "temporal_graph/graph/%s",
			MetadataKeyPattern:         "temporal_graph/metadata/%s",
		},
		EnableCaching:     true,
		EnableCompression: false,
		EnableMetrics:     true,
@@ -308,11 +310,11 @@ func (cd *conflictDetectorImpl) ValidateDecisionSequence(ctx context.Context, ad
func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, conflict *TemporalConflict) (*ConflictResolution, error) {
	// Implementation would resolve specific temporal conflicts
	return &ConflictResolution{
		ConflictID:       conflict.ID,
		ResolutionMethod: "auto_resolved",
		ResolvedAt:       time.Now(),
		ResolvedBy:       "system",
		Confidence:       0.8,
	}, nil
}
@@ -373,7 +375,7 @@ type versionManagerImpl struct {
	persistence *persistenceManagerImpl
}

func (vm *versionManagerImpl) CreateVersion(ctx context.Context, address ucxl.Address,
	contextNode *slurpContext.ContextNode, metadata *VersionMetadata) (*TemporalNode, error) {
	// Implementation would create a new temporal version
	return vm.graph.EvolveContext(ctx, address, contextNode, metadata.Reason, metadata.Decision)
@@ -390,7 +392,7 @@ func (vm *versionManagerImpl) ListVersions(ctx context.Context, address ucxl.Add
	if err != nil {
		return nil, err
	}

	versions := make([]*VersionInfo, len(history))
	for i, node := range history {
		versions[i] = &VersionInfo{
@@ -402,11 +404,11 @@ func (vm *versionManagerImpl) ListVersions(ctx context.Context, address ucxl.Add
			DecisionID: node.DecisionID,
		}
	}

	return versions, nil
}

func (vm *versionManagerImpl) CompareVersions(ctx context.Context, address ucxl.Address,
	version1, version2 int) (*VersionComparison, error) {
	// Implementation would compare two temporal versions
	return &VersionComparison{
@@ -420,7 +422,7 @@ func (vm *versionManagerImpl) CompareVersions(ctx context.Context, address ucxl.
	}, nil
}

func (vm *versionManagerImpl) MergeVersions(ctx context.Context, address ucxl.Address,
	versions []int, strategy MergeStrategy) (*TemporalNode, error) {
	// Implementation would merge multiple versions
	return vm.graph.GetLatestVersion(ctx, address)
@@ -447,7 +449,7 @@ func (hm *historyManagerImpl) GetFullHistory(ctx context.Context, address ucxl.A
	if err != nil {
		return nil, err
	}

	return &ContextHistory{
		Address:  address,
		Versions: history,
@@ -455,7 +457,7 @@ func (hm *historyManagerImpl) GetFullHistory(ctx context.Context, address ucxl.A
	}, nil
}

func (hm *historyManagerImpl) GetHistoryRange(ctx context.Context, address ucxl.Address,
	startHop, endHop int) (*ContextHistory, error) {
	// Implementation would get history within a specific range
	return hm.GetFullHistory(ctx, address)
@@ -539,13 +541,13 @@ func (mc *metricsCollectorImpl) GetInfluenceMetrics(ctx context.Context) (*Influ
func (mc *metricsCollectorImpl) GetQualityMetrics(ctx context.Context) (*QualityMetrics, error) {
	// Implementation would get temporal data quality metrics
	return &QualityMetrics{
		DataCompleteness:  1.0,
		DataConsistency:   1.0,
		DataAccuracy:      1.0,
		AverageConfidence: 0.8,
		ConflictsDetected: 0,
		ConflictsResolved: 0,
		LastQualityCheck:  time.Now(),
	}, nil
}
@@ -560,4 +562,4 @@ func (mc *metricsCollectorImpl) calculateInfluenceConnections() int {
		total += len(influences)
	}
	return total
}
@@ -9,36 +9,36 @@ import (
	"sync"
	"time"

	"chorus/pkg/ucxl"

	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/slurp/storage"
)
// temporalGraphImpl implements the TemporalGraph interface
type temporalGraphImpl struct {
	mu sync.RWMutex

	// Core storage
	storage storage.ContextStore

	// In-memory graph structures for fast access
	nodes          map[string]*TemporalNode   // nodeID -> TemporalNode
	addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
	influences     map[string][]string        // nodeID -> list of influenced nodeIDs
	influencedBy   map[string][]string        // nodeID -> list of influencer nodeIDs

	// Decision tracking
	decisions       map[string]*DecisionMetadata // decisionID -> DecisionMetadata
	decisionToNodes map[string][]*TemporalNode   // decisionID -> list of affected nodes

	// Performance optimization
	pathCache      map[string][]*DecisionStep // cache for decision paths
	metricsCache   map[string]interface{}     // cache for expensive metrics
	cacheTimeout   time.Duration
	lastCacheClean time.Time

	// Configuration
	maxDepth        int // Maximum depth for path finding
	stalenessWeight *StalenessWeights
}
@@ -69,113 +69,113 @@ func NewTemporalGraph(storage storage.ContextStore) TemporalGraph {
}

// CreateInitialContext creates the first temporal version of context
func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address ucxl.Address,
	contextData *slurpContext.ContextNode, creator string) (*TemporalNode, error) {

	tg.mu.Lock()
	defer tg.mu.Unlock()

	// Generate node ID
	nodeID := tg.generateNodeID(address, 1)

	// Create temporal node
	temporalNode := &TemporalNode{
		ID:            nodeID,
		UCXLAddress:   address,
		Version:       1,
		Context:       contextData,
		Timestamp:     time.Now(),
		DecisionID:    fmt.Sprintf("initial-%s", creator),
		ChangeReason:  ReasonInitialCreation,
		ParentNode:    nil,
		ContextHash:   tg.calculateContextHash(contextData),
		Confidence:    contextData.RAGConfidence,
		Staleness:     0.0,
		Influences:    make([]ucxl.Address, 0),
		InfluencedBy:  make([]ucxl.Address, 0),
		ValidatedBy:   []string{creator},
		LastValidated: time.Now(),
		ImpactScope:   ImpactLocal,
		PropagatedTo:  make([]ucxl.Address, 0),
		Metadata:      make(map[string]interface{}),
	}

	// Store in memory structures
	tg.nodes[nodeID] = temporalNode
	addressKey := address.String()
	tg.addressToNodes[addressKey] = []*TemporalNode{temporalNode}

	// Initialize influence maps
	tg.influences[nodeID] = make([]string, 0)
	tg.influencedBy[nodeID] = make([]string, 0)

	// Store decision metadata
	decisionMeta := &DecisionMetadata{
		ID:                   temporalNode.DecisionID,
		Maker:                creator,
		Rationale:            "Initial context creation",
		Scope:                ImpactLocal,
		ConfidenceLevel:      contextData.RAGConfidence,
		ExternalRefs:         make([]string, 0),
		CreatedAt:            time.Now(),
		ImplementationStatus: "complete",
		Metadata:             make(map[string]interface{}),
	}
	tg.decisions[temporalNode.DecisionID] = decisionMeta
	tg.decisionToNodes[temporalNode.DecisionID] = []*TemporalNode{temporalNode}

	// Persist to storage
	if err := tg.persistTemporalNode(ctx, temporalNode); err != nil {
		return nil, fmt.Errorf("failed to persist initial temporal node: %w", err)
	}

	return temporalNode, nil
}
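The helpers `generateNodeID` and `calculateContextHash` are not shown in this hunk; a minimal self-contained sketch of how such deterministic identifiers could be derived (assuming a SHA-256 scheme, which is an illustration rather than the repo's actual implementation) is:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// generateNodeID derives a stable node ID from an address string and version.
// Hypothetical scheme: sha256("<address>@v<version>"), truncated for readability.
func generateNodeID(address string, version int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s@v%d", address, version)))
	return hex.EncodeToString(sum[:])[:16]
}

// calculateContextHash hashes serialized context content for change detection.
func calculateContextHash(content []byte) string {
	sum := sha256.Sum256(content)
	return hex.EncodeToString(sum[:])
}

func main() {
	id1 := generateNodeID("ucxl://project/task", 1)
	id2 := generateNodeID("ucxl://project/task", 2)
	fmt.Println(id1 != id2)                                      // versions get distinct IDs
	fmt.Println(id1 == generateNodeID("ucxl://project/task", 1)) // same inputs, same ID
}
```

The key property is determinism: the same address and version always map to the same node ID, so persistence keys stay stable across restarts.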
// EvolveContext creates a new temporal version due to a decision
func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Address,
	newContext *slurpContext.ContextNode, reason ChangeReason,
	decision *DecisionMetadata) (*TemporalNode, error) {

	tg.mu.Lock()
	defer tg.mu.Unlock()

	// Get latest version
	addressKey := address.String()
	nodes, exists := tg.addressToNodes[addressKey]
	if !exists || len(nodes) == 0 {
		return nil, fmt.Errorf("no existing context found for address %s", address.String())
	}

	// Find latest version
	latestNode := nodes[len(nodes)-1]
	newVersion := latestNode.Version + 1

	// Generate new node ID
	nodeID := tg.generateNodeID(address, newVersion)

	// Create new temporal node
	temporalNode := &TemporalNode{
		ID:            nodeID,
		UCXLAddress:   address,
		Version:       newVersion,
		Context:       newContext,
		Timestamp:     time.Now(),
		DecisionID:    decision.ID,
		ChangeReason:  reason,
		ParentNode:    &latestNode.ID,
		ContextHash:   tg.calculateContextHash(newContext),
		Confidence:    newContext.RAGConfidence,
		Staleness:     0.0, // New version, not stale
		Influences:    make([]ucxl.Address, 0),
		InfluencedBy:  make([]ucxl.Address, 0),
		ValidatedBy:   []string{decision.Maker},
		LastValidated: time.Now(),
		ImpactScope:   decision.Scope,
		PropagatedTo:  make([]ucxl.Address, 0),
		Metadata:      make(map[string]interface{}),
	}

	// Copy influence relationships from parent
	if latestNodeInfluences, exists := tg.influences[latestNode.ID]; exists {
		tg.influences[nodeID] = make([]string, len(latestNodeInfluences))
@@ -183,18 +183,18 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
	} else {
		tg.influences[nodeID] = make([]string, 0)
	}

	if latestNodeInfluencedBy, exists := tg.influencedBy[latestNode.ID]; exists {
		tg.influencedBy[nodeID] = make([]string, len(latestNodeInfluencedBy))
		copy(tg.influencedBy[nodeID], latestNodeInfluencedBy)
	} else {
		tg.influencedBy[nodeID] = make([]string, 0)
	}

	// Store in memory structures
	tg.nodes[nodeID] = temporalNode
	tg.addressToNodes[addressKey] = append(tg.addressToNodes[addressKey], temporalNode)

	// Store decision metadata
	tg.decisions[decision.ID] = decision
	if existing, exists := tg.decisionToNodes[decision.ID]; exists {
@@ -202,18 +202,18 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
	} else {
		tg.decisionToNodes[decision.ID] = []*TemporalNode{temporalNode}
	}

	// Update staleness for related contexts
	tg.updateStalenessAfterChange(temporalNode)

	// Clear relevant caches
	tg.clearCacheForAddress(address)

	// Persist to storage
	if err := tg.persistTemporalNode(ctx, temporalNode); err != nil {
		return nil, fmt.Errorf("failed to persist evolved temporal node: %w", err)
	}

	return temporalNode, nil
}
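In outline, evolution is an append-only chain: each new version copies the parent's influence edges, bumps the version number, and records a pointer to its parent. A stripped-down sketch of that chain (standalone stand-in types, not the real `TemporalNode`):

```go
package main

import "fmt"

// node is a minimal stand-in for a temporal version.
type node struct {
	Version int
	Parent  *node // previous version, nil for the initial one
}

// evolve appends a new version whose parent is the current latest.
func evolve(chain []*node) []*node {
	latest := chain[len(chain)-1]
	return append(chain, &node{Version: latest.Version + 1, Parent: latest})
}

func main() {
	chain := []*node{{Version: 1}} // initial context
	chain = evolve(chain)
	chain = evolve(chain)
	fmt.Println(chain[len(chain)-1].Version) // 3
	fmt.Println(chain[2].Parent.Version)     // 2
}
```

Because versions are never mutated in place, `GetVersionAtDecision` can replay any point in the chain, and the latest version is always the last element.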
@@ -221,38 +221,38 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
func (tg *temporalGraphImpl) GetLatestVersion(ctx context.Context, address ucxl.Address) (*TemporalNode, error) {
	tg.mu.RLock()
	defer tg.mu.RUnlock()

	addressKey := address.String()
	nodes, exists := tg.addressToNodes[addressKey]
	if !exists || len(nodes) == 0 {
		return nil, fmt.Errorf("no temporal nodes found for address %s", address.String())
	}

	// Return the latest version (last in slice)
	return nodes[len(nodes)-1], nil
}

// GetVersionAtDecision gets context as it was at a specific decision hop
func (tg *temporalGraphImpl) GetVersionAtDecision(ctx context.Context, address ucxl.Address,
	decisionHop int) (*TemporalNode, error) {

	tg.mu.RLock()
	defer tg.mu.RUnlock()

	addressKey := address.String()
	nodes, exists := tg.addressToNodes[addressKey]
	if !exists || len(nodes) == 0 {
		return nil, fmt.Errorf("no temporal nodes found for address %s", address.String())
	}

	// Find node at specific decision hop (version)
	for _, node := range nodes {
		if node.Version == decisionHop {
			return node, nil
		}
	}

	return nil, fmt.Errorf("no temporal node found at decision hop %d for address %s",
		decisionHop, address.String())
}
@@ -260,20 +260,20 @@ func (tg *temporalGraphImpl) GetVersionAtDecision(ctx context.Context, address u
func (tg *temporalGraphImpl) GetEvolutionHistory(ctx context.Context, address ucxl.Address) ([]*TemporalNode, error) {
	tg.mu.RLock()
	defer tg.mu.RUnlock()

	addressKey := address.String()
	nodes, exists := tg.addressToNodes[addressKey]
	if !exists || len(nodes) == 0 {
		return []*TemporalNode{}, nil
	}

	// Sort by version to ensure proper order
	sortedNodes := make([]*TemporalNode, len(nodes))
	copy(sortedNodes, nodes)
	sort.Slice(sortedNodes, func(i, j int) bool {
		return sortedNodes[i].Version < sortedNodes[j].Version
	})

	return sortedNodes, nil
}
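Note that the method sorts a copy rather than the shared slice, so callers never observe a reorder of `addressToNodes` under the read lock. The same copy-then-sort pattern in isolation:

```go
package main

import (
	"fmt"
	"sort"
)

type versioned struct{ Version int }

// sortedByVersion returns a version-ordered copy, leaving the input untouched.
func sortedByVersion(nodes []*versioned) []*versioned {
	out := make([]*versioned, len(nodes))
	copy(out, nodes) // copy the slice header contents, not the elements
	sort.Slice(out, func(i, j int) bool { return out[i].Version < out[j].Version })
	return out
}

func main() {
	in := []*versioned{{3}, {1}, {2}}
	out := sortedByVersion(in)
	fmt.Println(out[0].Version, out[1].Version, out[2].Version) // 1 2 3
	fmt.Println(in[0].Version)                                  // 3 (input order preserved)
}
```

Only the slice of pointers is copied; the `versioned` values themselves are shared, which is cheap and safe here because sorting never mutates them.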
@@ -281,22 +281,22 @@ func (tg *temporalGraphImpl) GetEvolutionHistory(ctx context.Context, address uc
func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influencer, influenced ucxl.Address) error {
	tg.mu.Lock()
	defer tg.mu.Unlock()

	// Get latest nodes for both addresses
	influencerNode, err := tg.getLatestNodeUnsafe(influencer)
	if err != nil {
		return fmt.Errorf("influencer node not found: %w", err)
	}

	influencedNode, err := tg.getLatestNodeUnsafe(influenced)
	if err != nil {
		return fmt.Errorf("influenced node not found: %w", err)
	}

	// Add to influence mappings
	influencerNodeID := influencerNode.ID
	influencedNodeID := influencedNode.ID

	// Add to influences map (influencer -> influenced)
	if influences, exists := tg.influences[influencerNodeID]; exists {
		// Check if relationship already exists
@@ -309,7 +309,7 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
	} else {
		tg.influences[influencerNodeID] = []string{influencedNodeID}
	}

	// Add to influencedBy map (influenced <- influencer)
	if influencedBy, exists := tg.influencedBy[influencedNodeID]; exists {
		// Check if relationship already exists
@@ -322,14 +322,14 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
	} else {
		tg.influencedBy[influencedNodeID] = []string{influencerNodeID}
	}

	// Update temporal nodes with the influence relationship
	influencerNode.Influences = append(influencerNode.Influences, influenced)
	influencedNode.InfluencedBy = append(influencedNode.InfluencedBy, influencer)

	// Clear path cache as influence graph has changed
	tg.pathCache = make(map[string][]*DecisionStep)

	// Persist changes
	if err := tg.persistTemporalNode(ctx, influencerNode); err != nil {
		return fmt.Errorf("failed to persist influencer node: %w", err)
@@ -337,7 +337,7 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
	if err := tg.persistTemporalNode(ctx, influencedNode); err != nil {
		return fmt.Errorf("failed to persist influenced node: %w", err)
	}

	return nil
}
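The forward (`influences`) and reverse (`influencedBy`) maps must stay mirror images of each other, with duplicate edges rejected before either side is touched. A self-contained sketch of that invariant, using plain string IDs in place of temporal node IDs:

```go
package main

import "fmt"

// influenceGraph keeps a forward and a reverse adjacency map in sync.
type influenceGraph struct {
	influences   map[string][]string // node -> nodes it influences
	influencedBy map[string][]string // node -> nodes that influence it
}

func newInfluenceGraph() *influenceGraph {
	return &influenceGraph{
		influences:   make(map[string][]string),
		influencedBy: make(map[string][]string),
	}
}

// addEdge records influencer -> influenced in both maps, skipping duplicates.
func (g *influenceGraph) addEdge(influencer, influenced string) {
	for _, id := range g.influences[influencer] {
		if id == influenced {
			return // relationship already exists; leave both maps untouched
		}
	}
	g.influences[influencer] = append(g.influences[influencer], influenced)
	g.influencedBy[influenced] = append(g.influencedBy[influenced], influencer)
}

func main() {
	g := newInfluenceGraph()
	g.addEdge("a", "b")
	g.addEdge("a", "b") // duplicate, ignored
	fmt.Println(len(g.influences["a"])) // 1
	fmt.Println(g.influencedBy["b"][0]) // a
}
```

Checking for the duplicate before mutating either map is what keeps the two sides consistent: an early return can never leave one map updated and the other not.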
@@ -345,39 +345,39 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, influencer, influenced ucxl.Address) error {
	tg.mu.Lock()
	defer tg.mu.Unlock()

	// Get latest nodes for both addresses
	influencerNode, err := tg.getLatestNodeUnsafe(influencer)
	if err != nil {
		return fmt.Errorf("influencer node not found: %w", err)
	}

	influencedNode, err := tg.getLatestNodeUnsafe(influenced)
	if err != nil {
		return fmt.Errorf("influenced node not found: %w", err)
	}

	// Remove from influence mappings
	influencerNodeID := influencerNode.ID
	influencedNodeID := influencedNode.ID

	// Remove from influences map
	if influences, exists := tg.influences[influencerNodeID]; exists {
		tg.influences[influencerNodeID] = tg.removeFromSlice(influences, influencedNodeID)
	}

	// Remove from influencedBy map
	if influencedBy, exists := tg.influencedBy[influencedNodeID]; exists {
		tg.influencedBy[influencedNodeID] = tg.removeFromSlice(influencedBy, influencerNodeID)
	}

	// Update temporal nodes
	influencerNode.Influences = tg.removeAddressFromSlice(influencerNode.Influences, influenced)
	influencedNode.InfluencedBy = tg.removeAddressFromSlice(influencedNode.InfluencedBy, influencer)

	// Clear path cache
	tg.pathCache = make(map[string][]*DecisionStep)

	// Persist changes
	if err := tg.persistTemporalNode(ctx, influencerNode); err != nil {
		return fmt.Errorf("failed to persist influencer node: %w", err)
@@ -385,7 +385,7 @@ func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, in
	if err := tg.persistTemporalNode(ctx, influencedNode); err != nil {
		return fmt.Errorf("failed to persist influenced node: %w", err)
	}

	return nil
}
@@ -393,28 +393,28 @@ func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, in
func (tg *temporalGraphImpl) GetInfluenceRelationships(ctx context.Context, address ucxl.Address) ([]ucxl.Address, []ucxl.Address, error) {
	tg.mu.RLock()
	defer tg.mu.RUnlock()

	node, err := tg.getLatestNodeUnsafe(address)
	if err != nil {
		return nil, nil, fmt.Errorf("node not found: %w", err)
	}

	influences := make([]ucxl.Address, len(node.Influences))
	copy(influences, node.Influences)

	influencedBy := make([]ucxl.Address, len(node.InfluencedBy))
	copy(influencedBy, node.InfluencedBy)

	return influences, influencedBy, nil
}

// FindRelatedDecisions finds decisions within N decision hops
func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address ucxl.Address,
	maxHops int) ([]*DecisionPath, error) {

	tg.mu.RLock()
	defer tg.mu.RUnlock()

	// Check cache first
	cacheKey := fmt.Sprintf("related-%s-%d", address.String(), maxHops)
	if cached, exists := tg.pathCache[cacheKey]; exists {
@@ -430,27 +430,27 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
        }
        return paths, nil
    }
    startNode, err := tg.getLatestNodeUnsafe(address)
    if err != nil {
        return nil, fmt.Errorf("start node not found: %w", err)
    }
    // Use BFS to find all nodes within maxHops
    visited := make(map[string]bool)
    queue := []*bfsItem{{node: startNode, distance: 0, path: []*DecisionStep{}}}
    relatedPaths := make([]*DecisionPath, 0)
    for len(queue) > 0 {
        current := queue[0]
        queue = queue[1:]
        nodeID := current.node.ID
        if visited[nodeID] || current.distance > maxHops {
            continue
        }
        visited[nodeID] = true
        // If this is not the starting node, add it to results
        if current.distance > 0 {
            step := &DecisionStep{
@@ -459,7 +459,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
                HopDistance:  current.distance,
                Relationship: "influence",
            }
            path := &DecisionPath{
                From: address,
                To:   current.node.UCXLAddress,
@@ -469,7 +469,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
            }
            relatedPaths = append(relatedPaths, path)
        }
        // Add influenced nodes to queue
        if influences, exists := tg.influences[nodeID]; exists {
            for _, influencedID := range influences {
@@ -491,7 +491,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
                }
            }
        }
        // Add influencer nodes to queue
        if influencedBy, exists := tg.influencedBy[nodeID]; exists {
            for _, influencerID := range influencedBy {
@@ -514,7 +514,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
            }
        }
    }
    return relatedPaths, nil
}
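For reference, the hop-bounded traversal above can be exercised in isolation. A minimal sketch with plain string IDs standing in for temporal nodes (no path reconstruction; names such as `bfsWithinHops` are illustrative, not part of the package):

```go
package main

import "fmt"

// bfsWithinHops returns every node reachable from start within maxHops edges,
// mirroring the hop-bounded BFS in FindRelatedDecisions. The adjacency map
// stands in for the influences/influencedBy indexes.
func bfsWithinHops(adj map[string][]string, start string, maxHops int) []string {
	type item struct {
		node string
		dist int
	}
	visited := map[string]bool{}
	queue := []item{{start, 0}}
	found := []string{}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if visited[cur.node] || cur.dist > maxHops {
			continue
		}
		visited[cur.node] = true
		if cur.dist > 0 { // skip the starting node, as above
			found = append(found, cur.node)
		}
		for _, next := range adj[cur.node] {
			queue = append(queue, item{next, cur.dist + 1})
		}
	}
	return found
}

func main() {
	adj := map[string][]string{"a": {"b"}, "b": {"c"}, "c": {"d"}}
	fmt.Println(bfsWithinHops(adj, "a", 2)) // nodes within 2 hops of "a"
}
```

As in the real implementation, marking nodes visited on dequeue means the first time a node is reached is at its minimum hop distance.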
@@ -522,44 +522,44 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl.Address) ([]*DecisionStep, error) {
    tg.mu.RLock()
    defer tg.mu.RUnlock()
    // Check cache first
    cacheKey := fmt.Sprintf("path-%s-%s", from.String(), to.String())
    if cached, exists := tg.pathCache[cacheKey]; exists {
        return cached, nil
    }
    fromNode, err := tg.getLatestNodeUnsafe(from)
    if err != nil {
        return nil, fmt.Errorf("from node not found: %w", err)
    }
    _, err = tg.getLatestNodeUnsafe(to)
    if err != nil {
        return nil, fmt.Errorf("to node not found: %w", err)
    }
    // Use BFS to find shortest path
    visited := make(map[string]bool)
    queue := []*pathItem{{node: fromNode, path: []*DecisionStep{}}}
    for len(queue) > 0 {
        current := queue[0]
        queue = queue[1:]
        nodeID := current.node.ID
        if visited[nodeID] {
            continue
        }
        visited[nodeID] = true
        // Check if we reached the target
        if current.node.UCXLAddress.String() == to.String() {
            // Cache the result
            tg.pathCache[cacheKey] = current.path
            return current.path, nil
        }
        // Explore influenced nodes
        if influences, exists := tg.influences[nodeID]; exists {
            for _, influencedID := range influences {
@@ -580,7 +580,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
                }
            }
        }
        // Explore influencer nodes
        if influencedBy, exists := tg.influencedBy[nodeID]; exists {
            for _, influencerID := range influencedBy {
@@ -602,7 +602,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
            }
        }
    }
    return nil, fmt.Errorf("no path found from %s to %s", from.String(), to.String())
}
@@ -610,7 +610,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*DecisionAnalysis, error) {
    tg.mu.RLock()
    defer tg.mu.RUnlock()
    analysis := &DecisionAnalysis{
        TimeRange:      24 * time.Hour, // Analyze last 24 hours by default
        TotalDecisions: len(tg.decisions),
@@ -620,10 +620,10 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
        MostInfluentialDecisions: make([]*InfluentialDecision, 0),
        DecisionClusters:         make([]*DecisionCluster, 0),
        Patterns:                 make([]*DecisionPattern, 0),
        Anomalies:                make([]*AnomalousDecision, 0),
        AnalyzedAt:               time.Now(),
    }
    // Calculate decision velocity
    cutoff := time.Now().Add(-analysis.TimeRange)
    recentDecisions := 0
@@ -633,7 +633,7 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
        }
    }
    analysis.DecisionVelocity = float64(recentDecisions) / analysis.TimeRange.Hours()
    // Calculate average influence distance
    totalDistance := 0.0
    connections := 0
@@ -648,37 +648,37 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
    if connections > 0 {
        analysis.AverageInfluenceDistance = totalDistance / float64(connections)
    }
    // Find most influential decisions (simplified)
    influenceScores := make(map[string]float64)
    for nodeID, node := range tg.nodes {
        score := float64(len(tg.influences[nodeID])) * 1.0   // Direct influences
        score += float64(len(tg.influencedBy[nodeID])) * 0.5 // Being influenced
        influenceScores[nodeID] = score
        if score > 3.0 { // Threshold for "influential"
            influential := &InfluentialDecision{
                Address:          node.UCXLAddress,
                DecisionHop:      node.Version,
                InfluenceScore:   score,
                AffectedContexts: node.Influences,
                DecisionMetadata: tg.decisions[node.DecisionID],
                InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
            }
            analysis.MostInfluentialDecisions = append(analysis.MostInfluentialDecisions, influential)
        }
    }
    // Sort influential decisions by score
    sort.Slice(analysis.MostInfluentialDecisions, func(i, j int) bool {
        return analysis.MostInfluentialDecisions[i].InfluenceScore > analysis.MostInfluentialDecisions[j].InfluenceScore
    })
    // Limit to top 10
    if len(analysis.MostInfluentialDecisions) > 10 {
        analysis.MostInfluentialDecisions = analysis.MostInfluentialDecisions[:10]
    }
    return analysis, nil
}
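The scoring rule above is a simple weighted degree count. Distilled into a standalone sketch (the 1.0/0.5 weights and the 3.0 threshold are taken from the code above; `influenceScore` is an illustrative helper name, not the package API):

```go
package main

import "fmt"

// influenceScore mirrors the simplified scoring in AnalyzeDecisionPatterns:
// each outgoing influence weighs 1.0, each incoming influence 0.5.
// A decision counts as "influential" only when the score exceeds 3.0.
func influenceScore(outgoing, incoming int) float64 {
	return float64(outgoing)*1.0 + float64(incoming)*0.5
}

func main() {
	fmt.Println(influenceScore(3, 1) > 3.0) // 3.5 crosses the threshold
	fmt.Println(influenceScore(2, 2) > 3.0) // 3.0 does not (strict comparison)
}
```

Note the threshold is strict (`score > 3.0`), so a node with exactly three direct influences and no incoming edges is not flagged.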
@@ -686,19 +686,19 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) error {
    tg.mu.RLock()
    defer tg.mu.RUnlock()
    errors := make([]string, 0)
    // Check for orphaned nodes
    for nodeID, node := range tg.nodes {
        if node.ParentNode != nil {
            if _, exists := tg.nodes[*node.ParentNode]; !exists {
                errors = append(errors, fmt.Sprintf("orphaned node %s has non-existent parent %s",
                    nodeID, *node.ParentNode))
            }
        }
    }
    // Check influence consistency
    for nodeID := range tg.influences {
        if influences, exists := tg.influences[nodeID]; exists {
@@ -713,33 +713,33 @@ func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) erro
                        }
                    }
                    if !found {
                        errors = append(errors, fmt.Sprintf("influence inconsistency: %s -> %s not reflected in influencedBy",
                            nodeID, influencedID))
                    }
                }
            }
        }
    }
    // Check version sequence integrity
    for address, nodes := range tg.addressToNodes {
        sort.Slice(nodes, func(i, j int) bool {
            return nodes[i].Version < nodes[j].Version
        })
        for i, node := range nodes {
            expectedVersion := i + 1
            if node.Version != expectedVersion {
                errors = append(errors, fmt.Sprintf("version sequence error for address %s: expected %d, got %d",
                    address, expectedVersion, node.Version))
            }
        }
    }
    if len(errors) > 0 {
        return fmt.Errorf("temporal integrity violations: %v", errors)
    }
    return nil
}
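The version-sequence rule above requires that, after sorting, the versions for each address run 1..n with no gaps or duplicates. That invariant can be checked in isolation; a small sketch, with `checkVersionSequence` as an illustrative helper name:

```go
package main

import (
	"fmt"
	"sort"
)

// checkVersionSequence mirrors the integrity rule in ValidateTemporalIntegrity:
// after sorting, versions must be exactly 1, 2, ..., n.
func checkVersionSequence(versions []int) error {
	sort.Ints(versions)
	for i, v := range versions {
		if v != i+1 {
			return fmt.Errorf("expected version %d, got %d", i+1, v)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkVersionSequence([]int{3, 1, 2})) // contiguous 1..3: passes
	fmt.Println(checkVersionSequence([]int{1, 2, 4})) // gap at version 3: fails
}
```

Because versions are compared against their sorted index, this single loop catches both gaps and duplicates.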
@@ -747,21 +747,21 @@ func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) erro
func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time.Time) error {
    tg.mu.Lock()
    defer tg.mu.Unlock()
    compacted := 0
    // For each address, keep only the latest version and major milestones before the cutoff
    for address, nodes := range tg.addressToNodes {
        toKeep := make([]*TemporalNode, 0)
        toRemove := make([]*TemporalNode, 0)
        for _, node := range nodes {
            // Always keep nodes after the cutoff time
            if node.Timestamp.After(beforeTime) {
                toKeep = append(toKeep, node)
                continue
            }
            // Keep major changes and influential decisions
            if tg.isMajorChange(node) || tg.isInfluentialDecision(node) {
                toKeep = append(toKeep, node)
@@ -769,10 +769,10 @@ func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time
                toRemove = append(toRemove, node)
            }
        }
        // Update the address mapping
        tg.addressToNodes[address] = toKeep
        // Remove old nodes from main maps
        for _, node := range toRemove {
            delete(tg.nodes, node.ID)
@@ -781,11 +781,11 @@ func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time
            compacted++
        }
    }
    // Clear caches after compaction
    tg.pathCache = make(map[string][]*DecisionStep)
    tg.metricsCache = make(map[string]interface{})
    return nil
}
@@ -847,13 +847,13 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode
    // Simple staleness calculation based on time since last update and influence strength
    timeSinceUpdate := time.Since(node.Timestamp)
    timeWeight := math.Min(timeSinceUpdate.Hours()/168.0, 1.0) // Max staleness from time: 1 week
    // Influence weight based on connection strength
    influenceWeight := 0.0
    if len(node.InfluencedBy) > 0 {
        influenceWeight = 1.0 / float64(len(node.InfluencedBy)) // Stronger if fewer influencers
    }
    // Impact scope weight
    impactWeight := 0.0
    switch changedNode.ImpactScope {
@@ -866,23 +866,23 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode
    case ImpactLocal:
        impactWeight = 0.4
    }
    return math.Min(
        tg.stalenessWeight.TimeWeight*timeWeight+
            tg.stalenessWeight.InfluenceWeight*influenceWeight+
            tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
}
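The staleness formula above is a weighted sum capped at 1.0. A self-contained sketch, with illustrative constant weights standing in for `tg.stalenessWeight` (the real values come from configuration):

```go
package main

import (
	"fmt"
	"math"
)

// Illustrative weights; in the package these come from tg.stalenessWeight.
const (
	timeW       = 0.5
	influenceW  = 0.3
	importanceW = 0.2
)

// staleness mirrors calculateStaleness: time saturates after one week (168h),
// fewer influencers means a stronger influence signal, and the combined
// score is capped at 1.0.
func staleness(hoursSinceUpdate float64, influencerCount int, impactWeight float64) float64 {
	timeWeight := math.Min(hoursSinceUpdate/168.0, 1.0)
	influenceWeight := 0.0
	if influencerCount > 0 {
		influenceWeight = 1.0 / float64(influencerCount)
	}
	return math.Min(timeW*timeWeight+influenceW*influenceWeight+importanceW*impactWeight, 1.0)
}

func main() {
	// One week old, a single influencer, system-wide impact weight 1.0:
	fmt.Println(staleness(168, 1, 1.0))
	// Freshly updated, no influencers, no impact:
	fmt.Println(staleness(0, 0, 0))
}
```

With these weights a week-old, singly-influenced, system-impacted context saturates at 1.0, while a fresh context with no connections scores 0.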
func (tg *temporalGraphImpl) clearCacheForAddress(address ucxl.Address) {
    addressStr := address.String()
    keysToDelete := make([]string, 0)
    for key := range tg.pathCache {
        if contains(key, addressStr) {
            keysToDelete = append(keysToDelete, key)
        }
    }
    for _, key := range keysToDelete {
        delete(tg.pathCache, key)
    }
@@ -908,7 +908,7 @@ func (tg *temporalGraphImpl) persistTemporalNode(ctx context.Context, node *Temp
}

func contains(s, substr string) bool {
    return len(s) >= len(substr) && (s == substr ||
        (len(s) > len(substr) && (s[:len(substr)] == substr || s[len(s)-len(substr):] == substr)))
}
@@ -923,4 +923,4 @@ type bfsItem struct {
type pathItem struct {
    node *TemporalNode
    path []*DecisionStep
}

File diff suppressed because it is too large


@@ -13,36 +13,36 @@ import (
// decisionNavigatorImpl implements the DecisionNavigator interface
type decisionNavigatorImpl struct {
    mu sync.RWMutex

    // Reference to the temporal graph
    graph *temporalGraphImpl

    // Navigation state
    navigationSessions map[string]*NavigationSession
    bookmarks          map[string]*DecisionBookmark

    // Configuration
    maxNavigationHistory int
}

// NavigationSession represents a navigation session
type NavigationSession struct {
    ID              string          `json:"id"`
    UserID          string          `json:"user_id"`
    StartedAt       time.Time       `json:"started_at"`
    LastActivity    time.Time       `json:"last_activity"`
    CurrentPosition ucxl.Address    `json:"current_position"`
    History         []*DecisionStep `json:"history"`
    Bookmarks       []string        `json:"bookmarks"`
    Preferences     *NavPreferences `json:"preferences"`
}

// NavPreferences represents navigation preferences
type NavPreferences struct {
    MaxHops               int     `json:"max_hops"`
    PreferRecentDecisions bool    `json:"prefer_recent_decisions"`
    FilterByConfidence    float64 `json:"filter_by_confidence"`
    IncludeStaleContexts  bool    `json:"include_stale_contexts"`
}

// NewDecisionNavigator creates a new decision navigator
@@ -50,24 +50,24 @@ func NewDecisionNavigator(graph *temporalGraphImpl) DecisionNavigator {
    return &decisionNavigatorImpl{
        graph:                graph,
        navigationSessions:   make(map[string]*NavigationSession),
        bookmarks:            make(map[string]*DecisionBookmark),
        maxNavigationHistory: 100,
    }
}

// NavigateDecisionHops navigates by decision distance, not time
func (dn *decisionNavigatorImpl) NavigateDecisionHops(ctx context.Context, address ucxl.Address,
    hops int, direction NavigationDirection) (*TemporalNode, error) {
    dn.mu.RLock()
    defer dn.mu.RUnlock()
    // Get starting node
    startNode, err := dn.graph.getLatestNodeUnsafe(address)
    if err != nil {
        return nil, fmt.Errorf("failed to get starting node: %w", err)
    }
    // Navigate by hops
    currentNode := startNode
    for i := 0; i < hops; i++ {
@@ -77,23 +77,23 @@ func (dn *decisionNavigatorImpl) NavigateDecisionHops(ctx context.Context, addre
        }
        currentNode = nextNode
    }
    return currentNode, nil
}

// GetDecisionTimeline gets timeline ordered by decision sequence
func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, address ucxl.Address,
    includeRelated bool, maxHops int) (*DecisionTimeline, error) {
    dn.mu.RLock()
    defer dn.mu.RUnlock()
    // Get evolution history for the primary address
    history, err := dn.graph.GetEvolutionHistory(ctx, address)
    if err != nil {
        return nil, fmt.Errorf("failed to get evolution history: %w", err)
    }
    // Build decision timeline entries
    decisionSequence := make([]*DecisionTimelineEntry, len(history))
    for i, node := range history {
@@ -112,7 +112,7 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
        }
        decisionSequence[i] = entry
    }
    // Get related decisions if requested
    relatedDecisions := make([]*RelatedDecision, 0)
    if includeRelated && maxHops > 0 {
@@ -136,16 +136,16 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
            }
        }
    }
    // Calculate timeline analysis
    analysis := dn.analyzeTimeline(decisionSequence, relatedDecisions)
    // Calculate time span
    var timeSpan time.Duration
    if len(history) > 1 {
        timeSpan = history[len(history)-1].Timestamp.Sub(history[0].Timestamp)
    }
    timeline := &DecisionTimeline{
        PrimaryAddress:   address,
        DecisionSequence: decisionSequence,
@@ -154,7 +154,7 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
        TimeSpan:         timeSpan,
        AnalysisMetadata: analysis,
    }
    return timeline, nil
}
@@ -162,31 +162,31 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
func (dn *decisionNavigatorImpl) FindStaleContexts(ctx context.Context, stalenessThreshold float64) ([]*StaleContext, error) {
    dn.mu.RLock()
    defer dn.mu.RUnlock()
    staleContexts := make([]*StaleContext, 0)
    // Check all nodes for staleness
    for _, node := range dn.graph.nodes {
        if node.Staleness >= stalenessThreshold {
            staleness := &StaleContext{
                UCXLAddress:      node.UCXLAddress,
                TemporalNode:     node,
                StalenessScore:   node.Staleness,
                LastUpdated:      node.Timestamp,
                Reasons:          dn.getStalenessReasons(node),
                SuggestedActions: dn.getSuggestedActions(node),
                RelatedChanges:   dn.getRelatedChanges(node),
                Priority:         dn.calculateStalePriority(node),
            }
            staleContexts = append(staleContexts, staleness)
        }
    }
    // Sort by staleness score (highest first)
    sort.Slice(staleContexts, func(i, j int) bool {
        return staleContexts[i].StalenessScore > staleContexts[j].StalenessScore
    })
    return staleContexts, nil
}
@@ -195,28 +195,28 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
    if len(path) == 0 {
        return fmt.Errorf("empty decision path")
    }
    dn.mu.RLock()
    defer dn.mu.RUnlock()
    // Validate each step in the path
    for i, step := range path {
        // Check if the temporal node exists
        if step.TemporalNode == nil {
            return fmt.Errorf("step %d has nil temporal node", i)
        }
        nodeID := step.TemporalNode.ID
        if _, exists := dn.graph.nodes[nodeID]; !exists {
            return fmt.Errorf("step %d references non-existent node %s", i, nodeID)
        }
        // Validate hop distance
        if step.HopDistance != i {
            return fmt.Errorf("step %d has incorrect hop distance: expected %d, got %d",
                i, i, step.HopDistance)
        }
        // Validate relationship to next step
        if i < len(path)-1 {
            nextStep := path[i+1]
@@ -225,7 +225,7 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
            }
        }
    }
    return nil
}
@@ -233,16 +233,16 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
func (dn *decisionNavigatorImpl) GetNavigationHistory(ctx context.Context, sessionID string) ([]*DecisionStep, error) {
    dn.mu.RLock()
    defer dn.mu.RUnlock()
    session, exists := dn.navigationSessions[sessionID]
    if !exists {
        return nil, fmt.Errorf("navigation session %s not found", sessionID)
    }
    // Return a copy of the history
    history := make([]*DecisionStep, len(session.History))
    copy(history, session.History)
    return history, nil
}
@@ -250,22 +250,22 @@ func (dn *decisionNavigatorImpl) GetNavigationHistory(ctx context.Context, sessi
func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address ucxl.Address) error {
    dn.mu.Lock()
    defer dn.mu.Unlock()
    // Clear any navigation sessions for this address
    for _, session := range dn.navigationSessions {
        if session.CurrentPosition.String() == address.String() {
            // Reset to latest version
            _, err := dn.graph.getLatestNodeUnsafe(address)
            if err != nil {
                return fmt.Errorf("failed to get latest node: %w", err)
            }
            session.CurrentPosition = address
            session.History = []*DecisionStep{}
            session.LastActivity = time.Now()
        }
    }
    return nil
}
@@ -273,13 +273,13 @@ func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address uc
func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address ucxl.Address, hop int, name string) error {
    dn.mu.Lock()
    defer dn.mu.Unlock()
    // Validate the decision point exists
    node, err := dn.graph.GetVersionAtDecision(ctx, address, hop)
    if err != nil {
        return fmt.Errorf("decision point not found: %w", err)
    }
    // Create bookmark
    bookmarkID := fmt.Sprintf("%s-%d-%d", address.String(), hop, time.Now().Unix())
    bookmark := &DecisionBookmark{
@@ -293,14 +293,14 @@ func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address u
        Tags:     []string{},
        Metadata: make(map[string]interface{}),
    }
    // Add context information to metadata
    bookmark.Metadata["change_reason"] = node.ChangeReason
    bookmark.Metadata["decision_id"] = node.DecisionID
    bookmark.Metadata["confidence"] = node.Confidence
    dn.bookmarks[bookmarkID] = bookmark
    return nil
}
@@ -308,17 +308,17 @@ func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address u
func (dn *decisionNavigatorImpl) ListBookmarks(ctx context.Context) ([]*DecisionBookmark, error) {
	dn.mu.RLock()
	defer dn.mu.RUnlock()

	bookmarks := make([]*DecisionBookmark, 0, len(dn.bookmarks))
	for _, bookmark := range dn.bookmarks {
		bookmarks = append(bookmarks, bookmark)
	}

	// Sort by creation time (newest first)
	sort.Slice(bookmarks, func(i, j int) bool {
		return bookmarks[i].CreatedAt.After(bookmarks[j].CreatedAt)
	})

	return bookmarks, nil
}
@@ -342,14 +342,14 @@ func (dn *decisionNavigatorImpl) navigateForward(currentNode *TemporalNode) (*Te
	if !exists {
		return nil, fmt.Errorf("no nodes found for address")
	}

	// Find current node in the list and get the next one
	for i, node := range nodes {
		if node.ID == currentNode.ID && i < len(nodes)-1 {
			return nodes[i+1], nil
		}
	}

	return nil, fmt.Errorf("no forward navigation possible")
}
@@ -358,12 +358,12 @@ func (dn *decisionNavigatorImpl) navigateBackward(currentNode *TemporalNode) (*T
	if currentNode.ParentNode == nil {
		return nil, fmt.Errorf("no backward navigation possible: no parent node")
	}

	parentNode, exists := dn.graph.nodes[*currentNode.ParentNode]
	if !exists {
		return nil, fmt.Errorf("parent node not found: %s", *currentNode.ParentNode)
	}

	return parentNode, nil
}
@@ -387,7 +387,7 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
			AnalyzedAt: time.Now(),
		}
	}

	// Calculate change velocity
	var changeVelocity float64
	if len(sequence) > 1 {
@@ -398,27 +398,27 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
			changeVelocity = float64(len(sequence)-1) / duration.Hours()
		}
	}

	// Analyze confidence trend
	confidenceTrend := "stable"
	if len(sequence) > 1 {
		firstConfidence := sequence[0].ConfidenceEvolution
		lastConfidence := sequence[len(sequence)-1].ConfidenceEvolution
		diff := lastConfidence - firstConfidence
		if diff > 0.1 {
			confidenceTrend = "increasing"
		} else if diff < -0.1 {
			confidenceTrend = "decreasing"
		}
	}

	// Count change reasons
	reasonCounts := make(map[ChangeReason]int)
	for _, entry := range sequence {
		reasonCounts[entry.ChangeReason]++
	}

	// Find dominant reasons
	dominantReasons := make([]ChangeReason, 0)
	maxCount := 0
@@ -430,19 +430,19 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
			dominantReasons = append(dominantReasons, reason)
		}
	}

	// Count decision makers
	makerCounts := make(map[string]int)
	for _, entry := range sequence {
		makerCounts[entry.DecisionMaker]++
	}

	// Count impact scope distribution
	scopeCounts := make(map[ImpactScope]int)
	for _, entry := range sequence {
		scopeCounts[entry.ImpactScope]++
	}

	return &TimelineAnalysis{
		ChangeVelocity:  changeVelocity,
		ConfidenceTrend: confidenceTrend,
@@ -456,47 +456,47 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
func (dn *decisionNavigatorImpl) getStalenessReasons(node *TemporalNode) []string {
	reasons := make([]string, 0)

	// Time-based staleness
	timeSinceUpdate := time.Since(node.Timestamp)
	if timeSinceUpdate > 7*24*time.Hour {
		reasons = append(reasons, "not updated in over a week")
	}

	// Influence-based staleness
	if len(node.InfluencedBy) > 0 {
		reasons = append(reasons, "influenced by other contexts that may have changed")
	}

	// Confidence-based staleness
	if node.Confidence < 0.7 {
		reasons = append(reasons, "low confidence score")
	}

	return reasons
}

func (dn *decisionNavigatorImpl) getSuggestedActions(node *TemporalNode) []string {
	actions := make([]string, 0)

	actions = append(actions, "review context for accuracy")
	actions = append(actions, "check related decisions for impact")

	if node.Confidence < 0.7 {
		actions = append(actions, "improve context confidence through additional analysis")
	}

	if len(node.InfluencedBy) > 3 {
		actions = append(actions, "validate dependencies are still accurate")
	}

	return actions
}

func (dn *decisionNavigatorImpl) getRelatedChanges(node *TemporalNode) []ucxl.Address {
	// Find contexts that have changed recently and might affect this one
	relatedChanges := make([]ucxl.Address, 0)

	cutoff := time.Now().Add(-24 * time.Hour)

	for _, otherNode := range dn.graph.nodes {
		if otherNode.Timestamp.After(cutoff) && otherNode.ID != node.ID {
@@ -509,18 +509,18 @@ func (dn *decisionNavigatorImpl) getRelatedChanges(node *TemporalNode) []ucxl.Ad
			}
		}
	}

	return relatedChanges
}

func (dn *decisionNavigatorImpl) calculateStalePriority(node *TemporalNode) StalePriority {
	score := node.Staleness

	// Adjust based on influence
	if len(node.Influences) > 5 {
		score += 0.2 // Higher priority if it influences many others
	}

	// Adjust based on impact scope
	switch node.ImpactScope {
	case ImpactSystem:
@@ -530,7 +530,7 @@ func (dn *decisionNavigatorImpl) calculateStalePriority(node *TemporalNode) Stal
	case ImpactModule:
		score += 0.1
	}

	if score >= 0.9 {
		return PriorityCritical
	} else if score >= 0.7 {
@@ -545,7 +545,7 @@ func (dn *decisionNavigatorImpl) validateStepRelationship(step, nextStep *Decisi
	// Check if there's a valid relationship between the steps
	currentNodeID := step.TemporalNode.ID
	nextNodeID := nextStep.TemporalNode.ID

	switch step.Relationship {
	case "influences":
		if influences, exists := dn.graph.influences[currentNodeID]; exists {
@@ -564,6 +564,6 @@ func (dn *decisionNavigatorImpl) validateStepRelationship(step, nextStep *Decisi
			}
		}
	}

	return false
}
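The confidence-trend classification inside `analyzeTimeline` is easy to miss among the counters, so here is a minimal self-contained sketch of just that rule. The `classifyTrend` helper name and the plain `[]float64` input are illustrative stand-ins; the real code reads `ConfidenceEvolution` from `DecisionTimelineEntry` values, but the thresholded comparison is the same.

```go
package main

import "fmt"

// classifyTrend mirrors the rule in analyzeTimeline: compare the last
// confidence value to the first, and only call the trend "increasing" or
// "decreasing" when the delta clears a ±0.1 dead band.
func classifyTrend(confidences []float64) string {
	if len(confidences) < 2 {
		return "stable"
	}
	diff := confidences[len(confidences)-1] - confidences[0]
	switch {
	case diff > 0.1:
		return "increasing"
	case diff < -0.1:
		return "decreasing"
	default:
		return "stable"
	}
}

func main() {
	fmt.Println(classifyTrend([]float64{0.5, 0.6, 0.75})) // increasing
	fmt.Println(classifyTrend([]float64{0.8, 0.78}))      // stable
}
```

The dead band keeps small oscillations from flapping the reported trend between refreshes.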

View File

@@ -7,93 +7,93 @@ import (
"sync" "sync"
"time" "time"
"chorus/pkg/ucxl"
"chorus/pkg/slurp/storage" "chorus/pkg/slurp/storage"
"chorus/pkg/ucxl"
) )
// persistenceManagerImpl handles persistence and synchronization of temporal graph data
type persistenceManagerImpl struct {
	mu sync.RWMutex

	// Storage interfaces
	contextStore     storage.ContextStore
	localStorage     storage.LocalStorage
	distributedStore storage.DistributedStorage
	encryptedStore   storage.EncryptedStorage
	backupManager    storage.BackupManager

	// Reference to temporal graph
	graph *temporalGraphImpl

	// Persistence configuration
	config *PersistenceConfig

	// Synchronization state
	lastSyncTime     time.Time
	syncInProgress   bool
	pendingChanges   map[string]*PendingChange
	conflictResolver ConflictResolver

	// Performance optimization
	batchSize     int
	writeBuffer   []*TemporalNode
	bufferMutex   sync.Mutex
	flushInterval time.Duration
	lastFlush     time.Time
}

// PersistenceConfig represents configuration for temporal graph persistence
type PersistenceConfig struct {
	// Storage settings
	EnableLocalStorage       bool     `json:"enable_local_storage"`
	EnableDistributedStorage bool     `json:"enable_distributed_storage"`
	EnableEncryption         bool     `json:"enable_encryption"`
	EncryptionRoles          []string `json:"encryption_roles"`

	// Synchronization settings
	SyncInterval               time.Duration `json:"sync_interval"`
	ConflictResolutionStrategy string        `json:"conflict_resolution_strategy"`
	EnableAutoSync             bool          `json:"enable_auto_sync"`
	MaxSyncRetries             int           `json:"max_sync_retries"`

	// Performance settings
	BatchSize         int           `json:"batch_size"`
	FlushInterval     time.Duration `json:"flush_interval"`
	EnableWriteBuffer bool          `json:"enable_write_buffer"`

	// Backup settings
	EnableAutoBackup  bool          `json:"enable_auto_backup"`
	BackupInterval    time.Duration `json:"backup_interval"`
	RetainBackupCount int           `json:"retain_backup_count"`

	// Storage keys and patterns
	KeyPrefix          string `json:"key_prefix"`
	NodeKeyPattern     string `json:"node_key_pattern"`
	GraphKeyPattern    string `json:"graph_key_pattern"`
	MetadataKeyPattern string `json:"metadata_key_pattern"`
}

// PendingChange represents a change waiting to be synchronized
type PendingChange struct {
	ID        string                 `json:"id"`
	Type      ChangeType             `json:"type"`
	NodeID    string                 `json:"node_id"`
	Data      interface{}            `json:"data"`
	Timestamp time.Time              `json:"timestamp"`
	Retries   int                    `json:"retries"`
	LastError string                 `json:"last_error"`
	Metadata  map[string]interface{} `json:"metadata"`
}

// ChangeType represents the type of change to be synchronized
type ChangeType string

const (
	ChangeTypeNodeCreated      ChangeType = "node_created"
	ChangeTypeNodeUpdated      ChangeType = "node_updated"
	ChangeTypeNodeDeleted      ChangeType = "node_deleted"
	ChangeTypeGraphUpdated     ChangeType = "graph_updated"
	ChangeTypeInfluenceAdded   ChangeType = "influence_added"
	ChangeTypeInfluenceRemoved ChangeType = "influence_removed"
)
@@ -105,39 +105,39 @@ type ConflictResolver interface {
// GraphSnapshot represents a snapshot of the temporal graph for synchronization
type GraphSnapshot struct {
	Timestamp    time.Time                    `json:"timestamp"`
	Nodes        map[string]*TemporalNode     `json:"nodes"`
	Influences   map[string][]string          `json:"influences"`
	InfluencedBy map[string][]string          `json:"influenced_by"`
	Decisions    map[string]*DecisionMetadata `json:"decisions"`
	Metadata     *GraphMetadata               `json:"metadata"`
	Checksum     string                       `json:"checksum"`
}

// GraphMetadata represents metadata about the temporal graph
type GraphMetadata struct {
	Version       int       `json:"version"`
	LastModified  time.Time `json:"last_modified"`
	NodeCount     int       `json:"node_count"`
	EdgeCount     int       `json:"edge_count"`
	DecisionCount int       `json:"decision_count"`
	CreatedBy     string    `json:"created_by"`
	CreatedAt     time.Time `json:"created_at"`
}

// SyncResult represents the result of a synchronization operation
type SyncResult struct {
	StartTime         time.Time     `json:"start_time"`
	EndTime           time.Time     `json:"end_time"`
	Duration          time.Duration `json:"duration"`
	NodesProcessed    int           `json:"nodes_processed"`
	NodesCreated      int           `json:"nodes_created"`
	NodesUpdated      int           `json:"nodes_updated"`
	NodesDeleted      int           `json:"nodes_deleted"`
	ConflictsFound    int           `json:"conflicts_found"`
	ConflictsResolved int           `json:"conflicts_resolved"`
	Errors            []string      `json:"errors"`
	Success           bool          `json:"success"`
}

// NewPersistenceManager creates a new persistence manager
@@ -150,7 +150,7 @@ func NewPersistenceManager(
	graph *temporalGraphImpl,
	config *PersistenceConfig,
) *persistenceManagerImpl {
	pm := &persistenceManagerImpl{
		contextStore: contextStore,
		localStorage: localStorage,
@@ -165,20 +165,20 @@ func NewPersistenceManager(
		writeBuffer:   make([]*TemporalNode, 0, config.BatchSize),
		flushInterval: config.FlushInterval,
	}

	// Start background processes
	if config.EnableAutoSync {
		go pm.syncWorker()
	}

	if config.EnableWriteBuffer {
		go pm.flushWorker()
	}

	if config.EnableAutoBackup {
		go pm.backupWorker()
	}

	return pm
}
@@ -186,12 +186,12 @@ func NewPersistenceManager(
func (pm *persistenceManagerImpl) PersistTemporalNode(ctx context.Context, node *TemporalNode) error {
	pm.mu.Lock()
	defer pm.mu.Unlock()

	// Add to write buffer if enabled
	if pm.config.EnableWriteBuffer {
		return pm.addToWriteBuffer(node)
	}

	// Direct persistence
	return pm.persistNodeDirect(ctx, node)
}
@@ -200,20 +200,20 @@ func (pm *persistenceManagerImpl) PersistTemporalNode(ctx context.Context, node
func (pm *persistenceManagerImpl) LoadTemporalGraph(ctx context.Context) error {
	pm.mu.Lock()
	defer pm.mu.Unlock()

	// Load from different storage layers
	if pm.config.EnableLocalStorage {
		if err := pm.loadFromLocalStorage(ctx); err != nil {
			return fmt.Errorf("failed to load from local storage: %w", err)
		}
	}

	if pm.config.EnableDistributedStorage {
		if err := pm.loadFromDistributedStorage(ctx); err != nil {
			return fmt.Errorf("failed to load from distributed storage: %w", err)
		}
	}

	return nil
}
@@ -226,19 +226,19 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
	}
	pm.syncInProgress = true
	pm.mu.Unlock()

	defer func() {
		pm.mu.Lock()
		pm.syncInProgress = false
		pm.lastSyncTime = time.Now()
		pm.mu.Unlock()
	}()

	result := &SyncResult{
		StartTime: time.Now(),
		Errors:    make([]string, 0),
	}

	// Create local snapshot
	localSnapshot, err := pm.createGraphSnapshot()
	if err != nil {
@@ -246,31 +246,31 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
		result.Success = false
		return result, err
	}

	// Get remote snapshot
	remoteSnapshot, err := pm.getRemoteSnapshot(ctx)
	if err != nil {
		// Remote might not exist yet, continue with local
		remoteSnapshot = nil
	}

	// Perform synchronization
	if remoteSnapshot != nil {
		err = pm.performBidirectionalSync(ctx, localSnapshot, remoteSnapshot, result)
	} else {
		err = pm.performInitialSync(ctx, localSnapshot, result)
	}

	if err != nil {
		result.Errors = append(result.Errors, fmt.Sprintf("sync failed: %v", err))
		result.Success = false
	} else {
		result.Success = true
	}

	result.EndTime = time.Now()
	result.Duration = result.EndTime.Sub(result.StartTime)

	return result, err
}
@@ -278,35 +278,27 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
	pm.mu.RLock()
	defer pm.mu.RUnlock()

	if !pm.config.EnableAutoBackup {
		return fmt.Errorf("backup not enabled")
	}

	// Create graph snapshot
	snapshot, err := pm.createGraphSnapshot()
	if err != nil {
		return fmt.Errorf("failed to create snapshot: %w", err)
	}

-	// Serialize snapshot
-	data, err := json.Marshal(snapshot)
-	if err != nil {
-		return fmt.Errorf("failed to serialize snapshot: %w", err)
-	}
-
	// Create backup configuration
	backupConfig := &storage.BackupConfig{
-		Type: "temporal_graph",
+		Name:        "temporal_graph",
+		Description: "Temporal graph backup",
+		Tags:        []string{"temporal", "graph", "decision"},
		Metadata: map[string]interface{}{
			"node_count":     snapshot.Metadata.NodeCount,
			"edge_count":     snapshot.Metadata.EdgeCount,
			"decision_count": snapshot.Metadata.DecisionCount,
		},
	}

	// Create backup
	_, err = pm.backupManager.CreateBackup(ctx, backupConfig)
	return err
@@ -316,19 +308,19 @@ func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
func (pm *persistenceManagerImpl) RestoreGraph(ctx context.Context, backupID string) error {
	pm.mu.Lock()
	defer pm.mu.Unlock()

	// Create restore configuration
	restoreConfig := &storage.RestoreConfig{
		OverwriteExisting: true,
		ValidateIntegrity: true,
	}

	// Restore from backup
	err := pm.backupManager.RestoreBackup(ctx, backupID, restoreConfig)
	if err != nil {
		return fmt.Errorf("failed to restore backup: %w", err)
	}

	// Reload graph from storage
	return pm.LoadTemporalGraph(ctx)
}
@@ -338,14 +330,14 @@ func (pm *persistenceManagerImpl) RestoreGraph(ctx context.Context, backupID str
func (pm *persistenceManagerImpl) addToWriteBuffer(node *TemporalNode) error {
	pm.bufferMutex.Lock()
	defer pm.bufferMutex.Unlock()

	pm.writeBuffer = append(pm.writeBuffer, node)

	// Check if buffer is full
	if len(pm.writeBuffer) >= pm.batchSize {
		return pm.flushWriteBuffer()
	}

	return nil
}
@@ -353,59 +345,57 @@ func (pm *persistenceManagerImpl) flushWriteBuffer() error {
	if len(pm.writeBuffer) == 0 {
		return nil
	}

	// Create batch store request
	batch := &storage.BatchStoreRequest{
-		Operations: make([]*storage.BatchStoreOperation, len(pm.writeBuffer)),
+		Contexts:    make([]*storage.ContextStoreItem, len(pm.writeBuffer)),
+		Roles:       pm.config.EncryptionRoles,
+		FailOnError: true,
	}

	for i, node := range pm.writeBuffer {
-		key := pm.generateNodeKey(node)
-
-		batch.Operations[i] = &storage.BatchStoreOperation{
-			Type:  "store",
-			Key:   key,
-			Data:  node,
-			Roles: pm.config.EncryptionRoles,
-		}
+		batch.Contexts[i] = &storage.ContextStoreItem{
+			Context: node,
+			Roles:   pm.config.EncryptionRoles,
+		}
	}

	// Execute batch store
	ctx := context.Background()
	_, err := pm.contextStore.BatchStore(ctx, batch)
	if err != nil {
		return fmt.Errorf("failed to flush write buffer: %w", err)
	}

	// Clear buffer
	pm.writeBuffer = pm.writeBuffer[:0]
	pm.lastFlush = time.Now()

	return nil
}
func (pm *persistenceManagerImpl) persistNodeDirect(ctx context.Context, node *TemporalNode) error {
	key := pm.generateNodeKey(node)

	// Store in different layers
	if pm.config.EnableLocalStorage {
		if err := pm.localStorage.Store(ctx, key, node, nil); err != nil {
			return fmt.Errorf("failed to store in local storage: %w", err)
		}
	}

	if pm.config.EnableDistributedStorage {
		if err := pm.distributedStore.Store(ctx, key, node, nil); err != nil {
			return fmt.Errorf("failed to store in distributed storage: %w", err)
		}
	}

	if pm.config.EnableEncryption {
		if err := pm.encryptedStore.StoreEncrypted(ctx, key, node, pm.config.EncryptionRoles); err != nil {
			return fmt.Errorf("failed to store encrypted: %w", err)
		}
	}

	// Add to pending changes for synchronization
	change := &PendingChange{
		ID: fmt.Sprintf("%s-%d", node.ID, time.Now().UnixNano()),
@@ -415,9 +405,9 @@ func (pm *persistenceManagerImpl) persistNodeDirect(ctx context.Context, node *T
		Timestamp: time.Now(),
		Metadata:  make(map[string]interface{}),
	}

	pm.pendingChanges[change.ID] = change

	return nil
}
@@ -428,51 +418,51 @@ func (pm *persistenceManagerImpl) loadFromLocalStorage(ctx context.Context) erro
	if err != nil {
		return fmt.Errorf("failed to load metadata: %w", err)
	}

	var metadata *GraphMetadata
	if err := json.Unmarshal(metadataData.([]byte), &metadata); err != nil {
		return fmt.Errorf("failed to unmarshal metadata: %w", err)
	}

	// Load all nodes
	pattern := pm.generateNodeKeyPattern()
	nodeKeys, err := pm.localStorage.List(ctx, pattern)
	if err != nil {
		return fmt.Errorf("failed to list nodes: %w", err)
	}

	// Load nodes in batches
	batchReq := &storage.BatchRetrieveRequest{
		Keys: nodeKeys,
	}

	batchResult, err := pm.contextStore.BatchRetrieve(ctx, batchReq)
	if err != nil {
		return fmt.Errorf("failed to batch retrieve nodes: %w", err)
	}

	// Reconstruct graph
	pm.graph.mu.Lock()
	defer pm.graph.mu.Unlock()

	pm.graph.nodes = make(map[string]*TemporalNode)
	pm.graph.addressToNodes = make(map[string][]*TemporalNode)
	pm.graph.influences = make(map[string][]string)
	pm.graph.influencedBy = make(map[string][]string)

	for key, result := range batchResult.Results {
		if result.Error != nil {
			continue // Skip failed retrievals
		}

		var node *TemporalNode
		if err := json.Unmarshal(result.Data.([]byte), &node); err != nil {
			continue // Skip invalid nodes
		}

		pm.reconstructGraphNode(node)
	}

	return nil
}
@@ -485,7 +475,7 @@ func (pm *persistenceManagerImpl) loadFromDistributedStorage(ctx context.Context
func (pm *persistenceManagerImpl) createGraphSnapshot() (*GraphSnapshot, error) {
	pm.graph.mu.RLock()
	defer pm.graph.mu.RUnlock()

	snapshot := &GraphSnapshot{
		Timestamp: time.Now(),
		Nodes:     make(map[string]*TemporalNode),
@@ -502,48 +492,48 @@ func (pm *persistenceManagerImpl) createGraphSnapshot() (*GraphSnapshot, error)
			CreatedAt: time.Now(),
		},
	}

	// Copy nodes
	for id, node := range pm.graph.nodes {
		snapshot.Nodes[id] = node
	}

	// Copy influences
	for id, influences := range pm.graph.influences {
		snapshot.Influences[id] = make([]string, len(influences))
		copy(snapshot.Influences[id], influences)
	}

	// Copy influenced by
	for id, influencedBy := range pm.graph.influencedBy {
		snapshot.InfluencedBy[id] = make([]string, len(influencedBy))
		copy(snapshot.InfluencedBy[id], influencedBy)
	}

	// Copy decisions
	for id, decision := range pm.graph.decisions {
		snapshot.Decisions[id] = decision
	}

	// Calculate checksum
	snapshot.Checksum = pm.calculateSnapshotChecksum(snapshot)

	return snapshot, nil
}
func (pm *persistenceManagerImpl) getRemoteSnapshot(ctx context.Context) (*GraphSnapshot, error) {
	key := pm.generateGraphKey()
	data, err := pm.distributedStore.Retrieve(ctx, key)
	if err != nil {
		return nil, err
	}

	var snapshot *GraphSnapshot
	if err := json.Unmarshal(data.([]byte), &snapshot); err != nil {
		return nil, fmt.Errorf("failed to unmarshal remote snapshot: %w", err)
	}

	return snapshot, nil
}
@@ -551,7 +541,7 @@ func (pm *persistenceManagerImpl) performBidirectionalSync(ctx context.Context,
	// Compare snapshots and identify differences
	conflicts := pm.identifyConflicts(local, remote)
	result.ConflictsFound = len(conflicts)

	// Resolve conflicts
	for _, conflict := range conflicts {
		resolved, err := pm.resolveConflict(ctx, conflict)
@@ -559,48 +549,48 @@ func (pm *persistenceManagerImpl) performBidirectionalSync(ctx context.Context,
			result.Errors = append(result.Errors, fmt.Sprintf("failed to resolve conflict %s: %v", conflict.NodeID, err))
			continue
		}

		// Apply resolution
		if err := pm.applyConflictResolution(ctx, resolved); err != nil {
			result.Errors = append(result.Errors, fmt.Sprintf("failed to apply resolution for %s: %v", conflict.NodeID, err))
			continue
		}

		result.ConflictsResolved++
	}

	// Sync local changes to remote
	err := pm.syncLocalToRemote(ctx, local, remote, result)
	if err != nil {
		return fmt.Errorf("failed to sync local to remote: %w", err)
	}

	// Sync remote changes to local
	err = pm.syncRemoteToLocal(ctx, remote, local, result)
	if err != nil {
		return fmt.Errorf("failed to sync remote to local: %w", err)
	}

	return nil
}

func (pm *persistenceManagerImpl) performInitialSync(ctx context.Context, local *GraphSnapshot, result *SyncResult) error {
	// Store entire local snapshot to remote
	key := pm.generateGraphKey()
	data, err := json.Marshal(local)
	if err != nil {
		return fmt.Errorf("failed to marshal snapshot: %w", err)
	}

	err = pm.distributedStore.Store(ctx, key, data, nil)
	if err != nil {
		return fmt.Errorf("failed to store snapshot: %w", err)
	}

	result.NodesProcessed = len(local.Nodes)
	result.NodesCreated = len(local.Nodes)

	return nil
}
@@ -609,7 +599,7 @@ func (pm *persistenceManagerImpl) performInitialSync(ctx context.Context, local
func (pm *persistenceManagerImpl) syncWorker() {
	ticker := time.NewTicker(pm.config.SyncInterval)
	defer ticker.Stop()

	for range ticker.C {
		ctx := context.Background()
		if _, err := pm.SynchronizeGraph(ctx); err != nil {
@@ -622,7 +612,7 @@ func (pm *persistenceManagerImpl) syncWorker() {
func (pm *persistenceManagerImpl) flushWorker() {
	ticker := time.NewTicker(pm.flushInterval)
	defer ticker.Stop()

	for range ticker.C {
		pm.bufferMutex.Lock()
		if time.Since(pm.lastFlush) >= pm.flushInterval && len(pm.writeBuffer) > 0 {
@@ -637,7 +627,7 @@ func (pm *persistenceManagerImpl) flushWorker() {
func (pm *persistenceManagerImpl) backupWorker() {
	ticker := time.NewTicker(pm.config.BackupInterval)
	defer ticker.Stop()

	for range ticker.C {
		ctx := context.Background()
		if err := pm.BackupGraph(ctx); err != nil {
@@ -681,7 +671,7 @@ func (pm *persistenceManagerImpl) calculateSnapshotChecksum(snapshot *GraphSnaps
func (pm *persistenceManagerImpl) reconstructGraphNode(node *TemporalNode) {
	// Add node to graph
	pm.graph.nodes[node.ID] = node

	// Update address mapping
	addressKey := node.UCXLAddress.String()
	if existing, exists := pm.graph.addressToNodes[addressKey]; exists {
@@ -689,17 +679,17 @@ func (pm *persistenceManagerImpl) reconstructGraphNode(node *TemporalNode) {
	} else {
		pm.graph.addressToNodes[addressKey] = []*TemporalNode{node}
	}

	// Reconstruct influence relationships
	pm.graph.influences[node.ID] = make([]string, 0)
	pm.graph.influencedBy[node.ID] = make([]string, 0)
	// These would be rebuilt from the influence data in the snapshot
}

func (pm *persistenceManagerImpl) identifyConflicts(local, remote *GraphSnapshot) []*SyncConflict {
	conflicts := make([]*SyncConflict, 0)

	// Compare nodes
	for nodeID, localNode := range local.Nodes {
		if remoteNode, exists := remote.Nodes[nodeID]; exists {
@@ -714,7 +704,7 @@ func (pm *persistenceManagerImpl) identifyConflicts(local, remote *GraphSnapshot
			}
		}
	}

	return conflicts
}
@@ -727,28 +717,28 @@ func (pm *persistenceManagerImpl) resolveConflict(ctx context.Context, conflict
	// Use conflict resolver to resolve the conflict
	localNode := conflict.LocalData.(*TemporalNode)
	remoteNode := conflict.RemoteData.(*TemporalNode)

	resolvedNode, err := pm.conflictResolver.ResolveConflict(ctx, localNode, remoteNode)
	if err != nil {
		return nil, err
	}

	return &ConflictResolution{
		ConflictID:   conflict.NodeID,
		Resolution:   "merged",
		ResolvedData: resolvedNode,
		ResolvedAt:   time.Now(),
	}, nil
}

func (pm *persistenceManagerImpl) applyConflictResolution(ctx context.Context, resolution *ConflictResolution) error {
	// Apply the resolved node back to the graph
	resolvedNode := resolution.ResolvedData.(*TemporalNode)

	pm.graph.mu.Lock()
	pm.graph.nodes[resolvedNode.ID] = resolvedNode
	pm.graph.mu.Unlock()

	// Persist the resolved node
	return pm.persistNodeDirect(ctx, resolvedNode)
}
@@ -757,7 +747,7 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
	// Sync nodes that exist locally but not remotely, or are newer locally
	for nodeID, localNode := range local.Nodes {
		shouldSync := false

		if remoteNode, exists := remote.Nodes[nodeID]; exists {
			// Check if local is newer
			if localNode.Timestamp.After(remoteNode.Timestamp) {
@@ -768,7 +758,7 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
			shouldSync = true
			result.NodesCreated++
		}

		if shouldSync {
			key := pm.generateNodeKey(localNode)
			data, err := json.Marshal(localNode)
@@ -776,19 +766,19 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
				result.Errors = append(result.Errors, fmt.Sprintf("failed to marshal node %s: %v", nodeID, err))
				continue
			}

			err = pm.distributedStore.Store(ctx, key, data, nil)
			if err != nil {
				result.Errors = append(result.Errors, fmt.Sprintf("failed to sync node %s to remote: %v", nodeID, err))
				continue
			}

			if remoteNode, exists := remote.Nodes[nodeID]; exists && localNode.Timestamp.After(remoteNode.Timestamp) {
				result.NodesUpdated++
			}
		}
	}

	return nil
}
@@ -796,7 +786,7 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
	// Sync nodes that exist remotely but not locally, or are newer remotely
	for nodeID, remoteNode := range remote.Nodes {
		shouldSync := false

		if localNode, exists := local.Nodes[nodeID]; exists {
			// Check if remote is newer
			if remoteNode.Timestamp.After(localNode.Timestamp) {
@@ -807,55 +797,41 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
			shouldSync = true
			result.NodesCreated++
		}

		if shouldSync {
			// Add to local graph
			pm.graph.mu.Lock()
			pm.graph.nodes[remoteNode.ID] = remoteNode
			pm.reconstructGraphNode(remoteNode)
			pm.graph.mu.Unlock()

			// Persist locally
			err := pm.persistNodeDirect(ctx, remoteNode)
			if err != nil {
				result.Errors = append(result.Errors, fmt.Sprintf("failed to sync node %s to local: %v", nodeID, err))
				continue
			}

			if localNode, exists := local.Nodes[nodeID]; exists && remoteNode.Timestamp.After(localNode.Timestamp) {
				result.NodesUpdated++
			}
		}
	}

	return nil
}

// Supporting types for conflict resolution

type SyncConflict struct {
	Type       ConflictType `json:"type"`
	NodeID     string       `json:"node_id"`
	LocalData  interface{}  `json:"local_data"`
	RemoteData interface{}  `json:"remote_data"`
	Severity   string       `json:"severity"`
}

type ConflictType string

const (
	ConflictTypeNodeMismatch      ConflictType = "node_mismatch"
	ConflictTypeInfluenceMismatch ConflictType = "influence_mismatch"
	ConflictTypeMetadataMismatch  ConflictType = "metadata_mismatch"
)

type ConflictResolution struct {
	ConflictID   string      `json:"conflict_id"`
	Resolution   string      `json:"resolution"`
	ResolvedData interface{} `json:"resolved_data"`
	ResolvedAt   time.Time   `json:"resolved_at"`
	ResolvedBy   string      `json:"resolved_by"`
}

// Default conflict resolver implementation
@@ -886,4 +862,4 @@ func (dcr *defaultConflictResolver) ResolveGraphConflict(ctx context.Context, lo
		return localGraph, nil
	}
	return remoteGraph, nil
}

View File

@@ -17,45 +17,46 @@ import (
// cascading context resolution with bounded depth traversal.
type ContextNode struct {
	// Identity and addressing
	ID          string `json:"id"`           // Unique identifier
	UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
	Path        string `json:"path"`         // Filesystem path

	// Core context information
	Summary      string   `json:"summary"`      // Brief description
	Purpose      string   `json:"purpose"`      // What this component does
	Technologies []string `json:"technologies"` // Technologies used
	Tags         []string `json:"tags"`         // Categorization tags
	Insights     []string `json:"insights"`     // Analytical insights

	// Hierarchy relationships
	Parent      *string  `json:"parent,omitempty"` // Parent context ID
	Children    []string `json:"children"`         // Child context IDs
	Specificity int      `json:"specificity"`      // Specificity level (higher = more specific)

	// File metadata
	FileType     string     `json:"file_type"`               // File extension or type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size in bytes
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification time
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash for change detection

	// Resolution metadata
	CreatedBy  string    `json:"created_by"` // Who/what created this context
	CreatedAt  time.Time `json:"created_at"` // When created
	UpdatedAt  time.Time `json:"updated_at"` // When last updated
	UpdatedBy  string    `json:"updated_by"` // Who performed the last update
	Confidence float64   `json:"confidence"` // Confidence in accuracy (0-1)

	// Cascading behavior rules
	AppliesTo ContextScope `json:"applies_to"` // Scope of application
	Overrides bool         `json:"overrides"`  // Whether this overrides parent context

	// Security and access control
	EncryptedFor []string           `json:"encrypted_for"` // Roles that can access
	AccessLevel  crypto.AccessLevel `json:"access_level"`  // Access level required

	// Custom metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}

// ResolvedContext represents the final resolved context for a UCXL address.
@@ -64,41 +65,41 @@ type ContextNode struct {
// information from multiple hierarchy levels and applying global contexts.
type ResolvedContext struct {
	// Resolved context data
	UCXLAddress  string   `json:"ucxl_address"` // Original UCXL address
	Summary      string   `json:"summary"`      // Resolved summary
	Purpose      string   `json:"purpose"`      // Resolved purpose
	Technologies []string `json:"technologies"` // Merged technologies
	Tags         []string `json:"tags"`         // Merged tags
	Insights     []string `json:"insights"`     // Merged insights

	// File information
	FileType     string     `json:"file_type"`               // File type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash

	// Resolution metadata
	SourcePath       string    `json:"source_path"`       // Primary source context path
	InheritanceChain []string  `json:"inheritance_chain"` // Context inheritance chain
	Confidence       float64   `json:"confidence"`        // Overall confidence (0-1)
	BoundedDepth     int       `json:"bounded_depth"`     // Actual traversal depth used
	GlobalApplied    bool      `json:"global_applied"`    // Whether global contexts were applied
	ResolvedAt       time.Time `json:"resolved_at"`       // When resolution occurred

	// Temporal information
	Version          int       `json:"version"`           // Current version number
	LastUpdated      time.Time `json:"last_updated"`      // When context was last updated
	EvolutionHistory []string  `json:"evolution_history"` // Brief evolution history

	// Access control
	AccessibleBy   []string `json:"accessible_by"`   // Roles that can access this
	EncryptionKeys []string `json:"encryption_keys"` // Keys used for encryption

	// Performance metadata
	ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
	CacheHit       bool          `json:"cache_hit"`       // Whether result was cached
	NodesTraversed int           `json:"nodes_traversed"` // Number of hierarchy nodes traversed
}

// ContextScope defines the scope of a context node's application
@@ -117,38 +118,38 @@ const (
// simple chronological progression.
type TemporalNode struct {
	// Node identity
	ID          string `json:"id"`           // Unique temporal node ID
	UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
	Version     int    `json:"version"`      // Version number (monotonic)

	// Context snapshot
	Context ContextNode `json:"context"` // Context data at this point

	// Temporal metadata
	Timestamp    time.Time    `json:"timestamp"`             // When this version was created
	DecisionID   string       `json:"decision_id"`           // Associated decision identifier
	ChangeReason ChangeReason `json:"change_reason"`         // Why context changed
	ParentNode   *string      `json:"parent_node,omitempty"` // Previous version ID

	// Evolution tracking
	ContextHash string  `json:"context_hash"` // Hash of context content
	Confidence  float64 `json:"confidence"`   // Confidence in this version (0-1)
	Staleness   float64 `json:"staleness"`    // Staleness indicator (0-1)

	// Decision graph relationships
	Influences   []string `json:"influences"`    // UCXL addresses this influences
	InfluencedBy []string `json:"influenced_by"` // UCXL addresses that influence this

	// Validation metadata
	ValidatedBy   []string  `json:"validated_by"`   // Who/what validated this
	LastValidated time.Time `json:"last_validated"` // When last validated

	// Change impact analysis
	ImpactScope  ImpactScope `json:"impact_scope"`  // Scope of change impact
	PropagatedTo []string    `json:"propagated_to"` // Addresses that received impact

	// Custom temporal metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}

// DecisionMetadata represents metadata about a decision that changed context.
@@ -157,56 +158,56 @@ type TemporalNode struct {
// representing why and how context evolved rather than just when.
type DecisionMetadata struct {
	// Decision identity
	ID        string `json:"id"`        // Unique decision identifier
	Maker     string `json:"maker"`     // Who/what made the decision
	Rationale string `json:"rationale"` // Why the decision was made

	// Impact and scope
	Scope           ImpactScope `json:"scope"`            // Scope of impact
	ConfidenceLevel float64     `json:"confidence_level"` // Confidence in decision (0-1)

	// External references
	ExternalRefs      []string `json:"external_refs"`          // External references (URLs, docs)
	GitCommit         *string  `json:"git_commit,omitempty"`   // Associated git commit
	IssueNumber       *int     `json:"issue_number,omitempty"` // Associated issue number
	PullRequestNumber *int     `json:"pull_request,omitempty"` // Associated PR number

	// Timing information
	CreatedAt   time.Time  `json:"created_at"`             // When decision was made
	EffectiveAt *time.Time `json:"effective_at,omitempty"` // When decision takes effect
	ExpiresAt   *time.Time `json:"expires_at,omitempty"`   // When decision expires

	// Decision quality
	ReviewedBy []string `json:"reviewed_by,omitempty"` // Who reviewed this decision
	ApprovedBy []string `json:"approved_by,omitempty"` // Who approved this decision

	// Implementation tracking
	ImplementationStatus string `json:"implementation_status"` // Status: planned, active, complete, cancelled
	ImplementationNotes  string `json:"implementation_notes"`  // Implementation details

	// Custom metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}

// ChangeReason represents why context changed
type ChangeReason string

const (
	ReasonInitialCreation     ChangeReason = "initial_creation"     // First time context creation
	ReasonCodeChange          ChangeReason = "code_change"          // Code modification
	ReasonDesignDecision      ChangeReason = "design_decision"      // Design/architecture decision
	ReasonRefactoring         ChangeReason = "refactoring"          // Code refactoring
	ReasonArchitectureChange  ChangeReason = "architecture_change"  // Major architecture change
	ReasonRequirementsChange  ChangeReason = "requirements_change"  // Requirements modification
	ReasonLearningEvolution   ChangeReason = "learning_evolution"   // Improved understanding
	ReasonRAGEnhancement      ChangeReason = "rag_enhancement"      // RAG system enhancement
	ReasonTeamInput           ChangeReason = "team_input"           // Team member input
	ReasonBugDiscovery        ChangeReason = "bug_discovery"        // Bug found that changes understanding
	ReasonPerformanceInsight  ChangeReason = "performance_insight"  // Performance analysis insight
	ReasonSecurityReview      ChangeReason = "security_review"      // Security analysis
	ReasonDependencyChange    ChangeReason = "dependency_change"    // Dependency update
	ReasonEnvironmentChange   ChangeReason = "environment_change"   // Environment configuration change
	ReasonToolingUpdate       ChangeReason = "tooling_update"       // Development tooling update
	ReasonDocumentationUpdate ChangeReason = "documentation_update" // Documentation improvement
)
@@ -222,11 +223,11 @@ const (
// DecisionPath represents a path between two decision points in the temporal graph
type DecisionPath struct {
	From      string          `json:"from"`       // Starting UCXL address
	To        string          `json:"to"`         // Ending UCXL address
	Steps     []*DecisionStep `json:"steps"`      // Path steps
	TotalHops int             `json:"total_hops"` // Total decision hops
	PathType  string          `json:"path_type"`  // Type of path (direct, influence, etc.)
}

// DecisionStep represents a single step in a decision path
@@ -239,7 +240,7 @@ type DecisionStep struct {
// DecisionTimeline represents the decision evolution timeline for a context
type DecisionTimeline struct {
	PrimaryAddress   string                   `json:"primary_address"`   // Main UCXL address
	DecisionSequence []*DecisionTimelineEntry `json:"decision_sequence"` // Ordered by decision hops
	RelatedDecisions []*RelatedDecision       `json:"related_decisions"` // Related decisions within hop limit
	TotalDecisions   int                      `json:"total_decisions"`   // Total decisions in timeline
@@ -249,40 +250,40 @@ type DecisionTimeline struct {
// DecisionTimelineEntry represents an entry in the decision timeline
type DecisionTimelineEntry struct {
	Version             int                    `json:"version"`              // Version number
	DecisionHop         int                    `json:"decision_hop"`         // Decision distance from initial
	ChangeReason        ChangeReason           `json:"change_reason"`        // Why it changed
	DecisionMaker       string                 `json:"decision_maker"`       // Who made the decision
	DecisionRationale   string                 `json:"decision_rationale"`   // Rationale for decision
	ConfidenceEvolution float64                `json:"confidence_evolution"` // Confidence at this point
	Timestamp           time.Time              `json:"timestamp"`            // When decision occurred
	InfluencesCount     int                    `json:"influences_count"`     // Number of influenced addresses
	InfluencedByCount   int                    `json:"influenced_by_count"`  // Number of influencing addresses
	ImpactScope         ImpactScope            `json:"impact_scope"`         // Scope of this decision
	Metadata            map[string]interface{} `json:"metadata,omitempty"`   // Additional metadata
}

// RelatedDecision represents a decision related through the influence graph
type RelatedDecision struct {
	Address               string       `json:"address"`                 // UCXL address
	DecisionHops          int          `json:"decision_hops"`           // Hops from primary address
	LatestVersion         int          `json:"latest_version"`          // Latest version number
	ChangeReason          ChangeReason `json:"change_reason"`           // Latest change reason
	DecisionMaker         string       `json:"decision_maker"`          // Latest decision maker
	Confidence            float64      `json:"confidence"`              // Current confidence
	LastDecisionTimestamp time.Time    `json:"last_decision_timestamp"` // When last decision occurred
	RelationshipType      string       `json:"relationship_type"`       // Type of relationship (influences, influenced_by)
}

// TimelineAnalysis contains analysis metadata for decision timelines
type TimelineAnalysis struct {
	ChangeVelocity          float64             `json:"change_velocity"`           // Changes per unit time
	ConfidenceTrend         string              `json:"confidence_trend"`          // increasing, decreasing, stable
	DominantChangeReasons   []ChangeReason      `json:"dominant_change_reasons"`   // Most common reasons
	DecisionMakers          map[string]int      `json:"decision_makers"`           // Decision maker frequency
	ImpactScopeDistribution map[ImpactScope]int `json:"impact_scope_distribution"` // Distribution of impact scopes
	InfluenceNetworkSize    int                 `json:"influence_network_size"`    // Size of influence network
	AnalyzedAt              time.Time           `json:"analyzed_at"`               // When analysis was performed
}

// NavigationDirection represents direction for temporal navigation
@@ -295,77 +296,77 @@ const (
// StaleContext represents a potentially outdated context
type StaleContext struct {
	UCXLAddress      string        `json:"ucxl_address"`      // Address of stale context
	TemporalNode     *TemporalNode `json:"temporal_node"`     // Latest temporal node
	StalenessScore   float64       `json:"staleness_score"`   // Staleness score (0-1)
	LastUpdated      time.Time     `json:"last_updated"`      // When last updated
	Reasons          []string      `json:"reasons"`           // Reasons why considered stale
	SuggestedActions []string      `json:"suggested_actions"` // Suggested remediation actions
}

// GenerationOptions configures context generation behavior
type GenerationOptions struct {
	// Analysis options
	AnalyzeContent      bool `json:"analyze_content"`      // Analyze file content
	AnalyzeStructure    bool `json:"analyze_structure"`    // Analyze directory structure
	AnalyzeHistory      bool `json:"analyze_history"`      // Analyze git history
	AnalyzeDependencies bool `json:"analyze_dependencies"` // Analyze dependencies

	// Generation scope
	MaxDepth        int      `json:"max_depth"`        // Maximum directory depth
	IncludePatterns []string `json:"include_patterns"` // File patterns to include
	ExcludePatterns []string `json:"exclude_patterns"` // File patterns to exclude

	// Quality settings
	MinConfidence     float64 `json:"min_confidence"`     // Minimum confidence threshold
	RequireValidation bool    `json:"require_validation"` // Require human validation

	// External integration
	UseRAG      bool   `json:"use_rag"`      // Use RAG for enhancement
	RAGEndpoint string `json:"rag_endpoint"` // RAG service endpoint

	// Output options
	EncryptForRoles []string `json:"encrypt_for_roles"` // Roles to encrypt for

	// Performance limits
	Timeout     time.Duration `json:"timeout"`       // Generation timeout
	MaxFileSize int64         `json:"max_file_size"` // Maximum file size to analyze

	// Custom options
	CustomOptions map[string]interface{} `json:"custom_options,omitempty"` // Additional options
}

// HierarchyStats represents statistics about hierarchy generation
type HierarchyStats struct {
	NodesCreated       int           `json:"nodes_created"`       // Number of nodes created
	NodesUpdated       int           `json:"nodes_updated"`       // Number of nodes updated
	FilesAnalyzed      int           `json:"files_analyzed"`      // Number of files analyzed
	DirectoriesScanned int           `json:"directories_scanned"` // Number of directories scanned
	GenerationTime     time.Duration `json:"generation_time"`     // Time taken for generation
	AverageConfidence  float64       `json:"average_confidence"`  // Average confidence score
	TotalSize          int64         `json:"total_size"`          // Total size of analyzed content
	SkippedFiles       int           `json:"skipped_files"`       // Number of files skipped
	Errors             []string      `json:"errors"`              // Generation errors
}

// ValidationResult represents the result of context validation
type ValidationResult struct {
	Valid           bool                    `json:"valid"`            // Whether context is valid
	ConfidenceScore float64                 `json:"confidence_score"` // Overall confidence (0-1)
	QualityScore    float64                 `json:"quality_score"`    // Quality assessment (0-1)
	Issues          []*ValidationIssue      `json:"issues"`           // Validation issues found
	Suggestions     []*ValidationSuggestion `json:"suggestions"`      // Improvement suggestions
	ValidatedAt     time.Time               `json:"validated_at"`     // When validation occurred
	ValidatedBy     string                  `json:"validated_by"`     // Who/what performed validation
}

// ValidationIssue represents an issue found during validation
type ValidationIssue struct {
	Severity   string `json:"severity"`   // error, warning, info
	Message    string `json:"message"`    // Issue description
	Field      string `json:"field"`      // Affected field
	Suggestion string `json:"suggestion"` // How to fix
}

// ValidationSuggestion represents a suggestion for context improvement
@@ -378,24 +379,24 @@ type ValidationSuggestion struct {
// CostEstimate represents estimated resource cost for operations
type CostEstimate struct {
	CPUCost       float64            `json:"cpu_cost"`       // Estimated CPU cost
	MemoryCost    float64            `json:"memory_cost"`    // Estimated memory cost
	StorageCost   float64            `json:"storage_cost"`   // Estimated storage cost
	TimeCost      time.Duration      `json:"time_cost"`      // Estimated time cost
	TotalCost     float64            `json:"total_cost"`     // Total normalized cost
	CostBreakdown map[string]float64 `json:"cost_breakdown"` // Detailed cost breakdown
}

// AnalysisResult represents the result of context analysis
type AnalysisResult struct {
	QualityScore      float64          `json:"quality_score"`      // Overall quality (0-1)
	ConsistencyScore  float64          `json:"consistency_score"`  // Consistency with hierarchy
	CompletenessScore float64          `json:"completeness_score"` // Completeness assessment
	AccuracyScore     float64          `json:"accuracy_score"`     // Accuracy assessment
	Issues            []*AnalysisIssue `json:"issues"`             // Issues found
	Strengths         []string         `json:"strengths"`          // Context strengths
	Improvements      []*Suggestion    `json:"improvements"`       // Improvement suggestions
	AnalyzedAt        time.Time        `json:"analyzed_at"`        // When analysis occurred
}

// AnalysisIssue represents an issue found during analysis
@@ -418,86 +419,86 @@ type Suggestion struct {
// Pattern represents a detected context pattern
type Pattern struct {
	ID            string                 `json:"id"`             // Pattern identifier
	Name          string                 `json:"name"`           // Pattern name
	Description   string                 `json:"description"`    // Pattern description
	MatchCriteria map[string]interface{} `json:"match_criteria"` // Criteria for matching
	Confidence    float64                `json:"confidence"`     // Pattern confidence (0-1)
	Frequency     int                    `json:"frequency"`      // How often pattern appears
	Examples      []string               `json:"examples"`       // Example contexts that match
	CreatedAt     time.Time              `json:"created_at"`     // When pattern was detected
}

// PatternMatch represents a match between context and pattern
type PatternMatch struct {
	PatternID     string   `json:"pattern_id"`     // ID of matched pattern
	MatchScore    float64  `json:"match_score"`    // How well it matches (0-1)
	MatchedFields []string `json:"matched_fields"` // Which fields matched
	Confidence    float64  `json:"confidence"`     // Confidence in match
}

// ContextPattern represents a registered context pattern template
type ContextPattern struct {
	ID          string                 `json:"id"`          // Pattern identifier
	Name        string                 `json:"name"`        // Human-readable name
	Description string                 `json:"description"` // Pattern description
	Template    *ContextNode           `json:"template"`    // Template for matching
	Criteria    map[string]interface{} `json:"criteria"`    // Matching criteria
	Priority    int                    `json:"priority"`    // Pattern priority
	CreatedBy   string                 `json:"created_by"`  // Who created pattern
	CreatedAt   time.Time              `json:"created_at"`  // When created
	UpdatedAt   time.Time              `json:"updated_at"`  // When last updated
	UsageCount  int                    `json:"usage_count"` // How often used
}

// Inconsistency represents a detected inconsistency in the context hierarchy
type Inconsistency struct {
	Type          string    `json:"type"`           // Type of inconsistency
	Description   string    `json:"description"`    // Description of the issue
	AffectedNodes []string  `json:"affected_nodes"` // Nodes involved
	Severity      string    `json:"severity"`       // Severity level
	Suggestion    string    `json:"suggestion"`     // How to resolve
	DetectedAt    time.Time `json:"detected_at"`    // When detected
}

// SearchQuery represents a context search query
type SearchQuery struct {
	// Query terms
	Query        string   `json:"query"`        // Main search query
	Tags         []string `json:"tags"`         // Required tags
	Technologies []string `json:"technologies"` // Required technologies
	FileTypes    []string `json:"file_types"`   // File types to include

	// Filters
	MinConfidence float64        `json:"min_confidence"` // Minimum confidence
	MaxAge        *time.Duration `json:"max_age"`        // Maximum age
	Roles         []string       `json:"roles"`          // Required access roles

	// Scope
	Scope        []string `json:"scope"`         // Paths to search within
	ExcludeScope []string `json:"exclude_scope"` // Paths to exclude

	// Result options
	Limit     int    `json:"limit"`      // Maximum results
	Offset    int    `json:"offset"`     // Result offset
	SortBy    string `json:"sort_by"`    // Sort field
	SortOrder string `json:"sort_order"` // asc, desc

	// Advanced options
	FuzzyMatch     bool            `json:"fuzzy_match"`     // Enable fuzzy matching
	IncludeStale   bool            `json:"include_stale"`   // Include stale contexts
	TemporalFilter *TemporalFilter `json:"temporal_filter"` // Temporal filtering
}

// TemporalFilter represents temporal filtering options
type TemporalFilter struct {
	FromTime        *time.Time     `json:"from_time"`         // Start time
	ToTime          *time.Time     `json:"to_time"`           // End time
	VersionRange    *VersionRange  `json:"version_range"`     // Version range
	ChangeReasons   []ChangeReason `json:"change_reasons"`    // Specific change reasons
	DecisionMakers  []string       `json:"decision_makers"`   // Specific decision makers
	MinDecisionHops int            `json:"min_decision_hops"` // Minimum decision hops
	MaxDecisionHops int            `json:"max_decision_hops"` // Maximum decision hops
}

// VersionRange represents a range of versions
@@ -509,58 +510,58 @@ type VersionRange struct {
// SearchResult represents a single search result
type SearchResult struct {
	Context       *ResolvedContext `json:"context"`        // Resolved context
	TemporalNode  *TemporalNode    `json:"temporal_node"`  // Associated temporal node
	MatchScore    float64          `json:"match_score"`    // How well it matches query (0-1)
	MatchedFields []string         `json:"matched_fields"` // Which fields matched
	Snippet       string           `json:"snippet"`        // Text snippet showing match
	Rank          int              `json:"rank"`           // Result rank
}

// IndexMetadata represents metadata for context indexing
type IndexMetadata struct {
	IndexType     string                 `json:"index_type"`     // Type of index
	IndexedFields []string               `json:"indexed_fields"` // Fields that are indexed
	IndexedAt     time.Time              `json:"indexed_at"`     // When indexed
	IndexVersion  string                 `json:"index_version"`  // Index version
	Metadata      map[string]interface{} `json:"metadata"`       // Additional metadata
}

// DecisionAnalysis represents analysis of decision patterns
type DecisionAnalysis struct {
	TotalDecisions        int                    `json:"total_decisions"`         // Total decisions analyzed
	DecisionMakers        map[string]int         `json:"decision_makers"`         // Decision maker frequency
	ChangeReasons         map[ChangeReason]int   `json:"change_reasons"`          // Change reason frequency
	ImpactScopes          map[ImpactScope]int    `json:"impact_scopes"`           // Impact scope distribution
	ConfidenceTrends      map[string]float64     `json:"confidence_trends"`       // Confidence trends over time
	DecisionFrequency     map[string]int         `json:"decision_frequency"`      // Decisions per time period
	InfluenceNetworkStats *InfluenceNetworkStats `json:"influence_network_stats"` // Network statistics
	Patterns              []*DecisionPattern     `json:"patterns"`                // Detected decision patterns
	AnalyzedAt            time.Time              `json:"analyzed_at"`             // When analysis was performed
	AnalysisTimeSpan      time.Duration          `json:"analysis_time_span"`      // Time span analyzed
}

// InfluenceNetworkStats represents statistics about the influence network
type InfluenceNetworkStats struct {
	TotalNodes         int      `json:"total_nodes"`         // Total nodes in network
	TotalEdges         int      `json:"total_edges"`         // Total influence relationships
	AverageConnections float64  `json:"average_connections"` // Average connections per node
	MaxConnections     int      `json:"max_connections"`     // Maximum connections for any node
	NetworkDensity     float64  `json:"network_density"`     // Network density (0-1)
	ClusteringCoeff    float64  `json:"clustering_coeff"`    // Clustering coefficient
	MaxPathLength      int      `json:"max_path_length"`     // Maximum path length in network
	CentralNodes       []string `json:"central_nodes"`       // Most central nodes
}

// DecisionPattern represents a detected pattern in decision-making
type DecisionPattern struct {
	ID               string                 `json:"id"`                // Pattern identifier
	Name             string                 `json:"name"`              // Pattern name
	Description      string                 `json:"description"`       // Pattern description
	Frequency        int                    `json:"frequency"`         // How often this pattern occurs
	Confidence       float64                `json:"confidence"`        // Confidence in pattern (0-1)
	ExampleDecisions []string               `json:"example_decisions"` // Example decisions that match
	Characteristics  map[string]interface{} `json:"characteristics"`   // Pattern characteristics
	DetectedAt       time.Time              `json:"detected_at"`       // When pattern was detected
}

// ResolverStatistics represents statistics about context resolution operations
@@ -577,4 +578,4 @@ type ResolverStatistics struct {
	MaxCacheSize   int64     `json:"max_cache_size"`  // Maximum cache size
	CacheEvictions int64     `json:"cache_evictions"` // Number of cache evictions
	LastResetAt    time.Time `json:"last_reset_at"`   // When statistics were last reset
}