chore: align slurp config and scaffolding

anthonyrawlins
2025-09-27 21:03:12 +10:00
parent acc4361463
commit 4a77862289
47 changed files with 5133 additions and 4274 deletions

View File

@@ -0,0 +1,94 @@
# SEC-SLURP UCXL Beacon & Pin Steward Design Notes
## Purpose
- Establish the authoritative UCXL context beacon that bridges SLURP persistence with WHOOSH/role-aware agents.
- Define the Pin Steward responsibilities so DHT replication, healing, and telemetry satisfy SEC-SLURP 1.1a acceptance criteria.
- Provide an incremental execution plan aligned with the Persistence Wiring Report and DHT Resilience Supplement.
## UCXL Beacon Data Model
- **manifest_id** (`string`): deterministic hash of `project:task:address:version`.
- **ucxl_address** (`ucxl.Address`): canonical address that produced the manifest.
- **context_version** (`int`): monotonic version from SLURP temporal graph.
- **source_hash** (`string`): content hash emitted by `persistContext` (LevelDB) for change detection.
- **generated_by** (`string`): CHORUS agent id / role bundle that wrote the context.
- **generated_at** (`time.Time`): timestamp from SLURP persistence event.
- **replica_targets** (`[]string`): desired replica node ids (Pin Steward enforces `replication_factor`).
- **replica_state** (`[]ReplicaInfo`): health snapshot (`node_id`, `provider_id`, `status`, `last_checked`, `latency_ms`).
- **encryption** (`EncryptionMetadata`):
- `dek_fingerprint` (`string`)
- `kek_policy` (`string`): BACKBEAT rotation policy identifier.
- `rotation_due` (`time.Time`)
- **compliance_tags** (`[]string`): SHHH/WHOOSH governance hooks (e.g. `sec-high`, `audit-required`).
- **beacon_metrics** (`BeaconMetrics`): summarized counters for cache hits, DHT retrieves, validation errors.
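As a working reference, the manifest could be modelled roughly as the Go struct below; the type names and JSON tags are assumptions until the SLURP package defines the real schema.

```go
package beacon

import "time"

// ReplicaInfo mirrors the replica_state health snapshot described above.
type ReplicaInfo struct {
	NodeID      string    `json:"node_id"`
	ProviderID  string    `json:"provider_id"`
	Status      string    `json:"status"`
	LastChecked time.Time `json:"last_checked"`
	LatencyMS   int64     `json:"latency_ms"`
}

// EncryptionMetadata carries the DEK/KEK posture tracked per manifest.
type EncryptionMetadata struct {
	DEKFingerprint string    `json:"dek_fingerprint"`
	KEKPolicy      string    `json:"kek_policy"` // BACKBEAT rotation policy id
	RotationDue    time.Time `json:"rotation_due"`
}

// BeaconMetrics summarises cache hits, DHT retrieves, and validation errors.
type BeaconMetrics struct {
	CacheHits        int64 `json:"cache_hits"`
	DHTRetrieves     int64 `json:"dht_retrieves"`
	ValidationErrors int64 `json:"validation_errors"`
}

// BeaconManifest is a hypothetical rendering of the data model above.
type BeaconManifest struct {
	ManifestID     string             `json:"manifest_id"`
	UCXLAddress    string             `json:"ucxl_address"` // ucxl.Address in the real package
	ContextVersion int                `json:"context_version"`
	SourceHash     string             `json:"source_hash"`
	GeneratedBy    string             `json:"generated_by"`
	GeneratedAt    time.Time          `json:"generated_at"`
	ReplicaTargets []string           `json:"replica_targets"`
	ReplicaState   []ReplicaInfo      `json:"replica_state"`
	Encryption     EncryptionMetadata `json:"encryption"`
	ComplianceTags []string           `json:"compliance_tags"`
	Metrics        BeaconMetrics      `json:"beacon_metrics"`
}
```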
### Storage Strategy
- Primary persistence in LevelDB (`pkg/slurp/slurp.go`) using key prefix `beacon::<manifest_id>`.
- Secondary replication to DHT under `dht://beacon/<manifest_id>` enabling WHOOSH agents to read via Pin Steward API.
- Optional export to UCXL Decision Record envelope for historical traceability.
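A small sketch of the key layout, assuming the deterministic manifest hash is SHA-256 over the colon-joined `project:task:address:version` tuple:

```go
package beacon

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ManifestID derives the deterministic manifest hash; the exact recipe here
// (SHA-256 over the colon-joined tuple) is an assumption.
func ManifestID(project, task, address string, version int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%s:%d", project, task, address, version)))
	return hex.EncodeToString(sum[:])
}

// LevelDBKey and DHTKey mirror the beacon::<manifest_id> and
// dht://beacon/<manifest_id> layouts above.
func LevelDBKey(manifestID string) []byte { return []byte("beacon::" + manifestID) }

func DHTKey(manifestID string) string { return "dht://beacon/" + manifestID }
```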
## Beacon APIs
| Endpoint | Purpose | Notes |
|----------|---------|-------|
| `Beacon.Upsert(manifest)` | Persist/update manifest | Called by SLURP after `persistContext` success. |
| `Beacon.Get(ucxlAddress)` | Resolve latest manifest | Used by WHOOSH/agents to locate canonical context. |
| `Beacon.List(filter)` | Query manifests by tags/roles/time | Backs dashboards and Pin Steward audits. |
| `Beacon.StreamChanges(since)` | Provide change feed for Pin Steward anti-entropy jobs | Implements backpressure and bookmark tokens. |
All APIs return an envelope with a UCXL citation and checksum so the SLURP⇄WHOOSH handoff stays auditable.
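Continuing the hypothetical `beacon` package from the sketch above, the API table could translate into an interface like the following; the envelope and filter shapes are assumptions pending the real contract:

```go
package beacon

import (
	"context"
	"time"
)

// Envelope wraps each response with the UCXL citation and checksum that keep
// the SLURP⇄WHOOSH handoff auditable.
type Envelope struct {
	Citation string // UCXL address of the source context/decision
	Checksum string // content hash of the manifest payload
	Manifest *BeaconManifest
	Bookmark string // resume token for StreamChanges batches
}

// Filter is a hypothetical query shape for List.
type Filter struct {
	Tags  []string
	Roles []string
	Since time.Time
	Limit int
}

type Beacon interface {
	Upsert(ctx context.Context, m *BeaconManifest) (*Envelope, error)
	Get(ctx context.Context, ucxlAddress string) (*Envelope, error)
	List(ctx context.Context, f Filter) ([]*Envelope, error)
	// StreamChanges supports backpressure by letting callers resume from the
	// last bookmark token they observed.
	StreamChanges(ctx context.Context, sinceBookmark string) (<-chan *Envelope, error)
}
```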
## Pin Steward Responsibilities
1. **Replication Planning**
- Read manifests via `Beacon.StreamChanges`.
- Evaluate current `replica_state` vs. `replication_factor` from configuration.
- Produce queue of DHT store/refresh tasks (`storeAsync`, `storeSync`, `storeQuorum`).
2. **Healing & Anti-Entropy**
- Schedule `heal_under_replicated` jobs every `anti_entropy_interval`.
- Re-announce providers on Pulse/Reverb when TTL < threshold.
- Record outcomes back into manifest (`replica_state`).
3. **Envelope Encryption Enforcement**
- Request KEK material from KACHING/SHHH as described in SEC-SLURP 1.1a.
- Ensure DEK fingerprints match `encryption` metadata; trigger rotation if stale.
4. **Telemetry Export**
- Emit Prometheus counters: `pin_steward_replica_heal_total`, `pin_steward_replica_unhealthy`, `pin_steward_encryption_rotations_total`.
- Surface aggregated health to WHOOSH dashboards for council visibility.
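Responsibilities 1 and 2 could share one reconciliation loop. The sketch below reuses the hypothetical `Beacon` interface from earlier and assumes a `ReplicationManager` with an `EnsureReplication` method; none of these signatures are final.

```go
package beacon

import (
	"context"
	"log"
	"time"
)

// ReplicationManager is assumed from the interaction flow; the signature is a placeholder.
type ReplicationManager interface {
	EnsureReplication(ctx context.Context, manifestID string, factor int) error
}

// PinSteward drains the beacon change feed and heals under-replicated manifests.
type PinSteward struct {
	Beacon            Beacon
	Replicator        ReplicationManager
	ReplicationFactor int
	AntiEntropyEvery  time.Duration
}

func (s *PinSteward) Run(ctx context.Context) error {
	changes, err := s.Beacon.StreamChanges(ctx, "")
	if err != nil {
		return err
	}
	ticker := time.NewTicker(s.AntiEntropyEvery)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case env, ok := <-changes:
			if !ok {
				return nil
			}
			// Heal whenever the observed replica set falls short of the target factor.
			if len(env.Manifest.ReplicaState) < s.ReplicationFactor {
				if err := s.Replicator.EnsureReplication(ctx, env.Manifest.ManifestID, s.ReplicationFactor); err != nil {
					log.Printf("pin steward: heal failed for %s: %v", env.Manifest.ManifestID, err)
				}
			}
		case <-ticker.C:
			// Anti-entropy pass: re-list manifests, verify provider TTLs, and
			// write outcomes back into replica_state (not shown).
		}
	}
}
```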
## Interaction Flow
1. **SLURP Persistence**
- `UpsertContext` writes to LevelDB and the manifest is assembled (`persistContext`).
- Beacon `Upsert` called with manifest + context hash.
2. **Pin Steward Intake**
- `StreamChanges` yields the manifest; the steward verifies encryption metadata and schedules replication tasks.
3. **DHT Coordination**
- `ReplicationManager.EnsureReplication` invoked with target factor.
- `defaultVectorClockManager` (temporary) will be replaced with a libp2p-aware implementation for provider TTL tracking.
4. **WHOOSH Consumption**
- WHOOSH SLURP proxy fetches manifest via `Beacon.Get`, caches in WHOOSH DB, attaches to deliverable artifacts.
- Council UI surfaces replication state + encryption posture for operator decisions.
## Incremental Delivery Plan
1. **Sprint A (Persistence parity)**
- Finalize LevelDB manifest schema + tests (extend `slurp_persistence_test.go`).
- Implement Beacon interfaces within SLURP service (in-memory + LevelDB).
- Add Prometheus metrics for persistence reads/misses.
2. **Sprint B (Pin Steward MVP)**
- Build steward worker with configurable reconciliation loop.
- Wire to existing `DistributedStorage` stubs (`StoreAsync/Sync/Quorum`).
- Emit health logs; integrate with CLI diagnostics.
3. **Sprint C (DHT Resilience)**
- Swap `defaultVectorClockManager` with libp2p implementation; add provider TTL probes.
- Implement envelope encryption path leveraging KACHING/SHHH interfaces (replace stubs in `pkg/crypto`).
- Add CI checks: replica factor assertions, provider refresh tests, beacon schema validation.
4. **Sprint D (WHOOSH Integration)**
- Expose REST/gRPC endpoint for WHOOSH to query manifests.
- Update WHOOSH SLURPArtifactManager to require beacon confirmation before submission.
- Surface Pin Steward alerts in WHOOSH admin UI.
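For the telemetry items in Sprints A and B, the counters named above could be registered with the standard Prometheus Go client; the Pin Steward metric names follow this document, while the persistence read counter is an assumption.

```go
package beacon

import "github.com/prometheus/client_golang/prometheus"

var (
	replicaHeals = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "pin_steward_replica_heal_total",
		Help: "Replica heal operations performed by the Pin Steward.",
	})
	encryptionRotations = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "pin_steward_encryption_rotations_total",
		Help: "DEK rotations triggered by stale fingerprints.",
	})
	persistenceReads = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "slurp_persistence_reads_total",
		Help: "Context reads served by LevelDB, labelled by outcome.",
	}, []string{"outcome"}) // "hit" or "miss"
)

func init() {
	prometheus.MustRegister(replicaHeals, encryptionRotations, persistenceReads)
}
```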
## Open Questions
- Confirm whether Beacon manifests should include DER signatures or rely on UCXL envelope hash.
- Determine storage for historical manifests (append-only log vs. latest-only) to support temporal rewind.
- Align Pin Steward job scheduling with existing BACKBEAT cadence to avoid conflicting rotations.
## Next Actions
- Prototype `BeaconStore` interface + LevelDB implementation in SLURP package.
- Document Pin Steward anti-entropy algorithm with pseudocode and integrate into SEC-SLURP test plan.
- Sync with WHOOSH team on manifest query contract (REST vs. gRPC; pagination semantics).
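A first cut of the `BeaconStore` prototype could wrap LevelDB directly, reusing the manifest type and key helper sketched earlier; the goleveldb import path is an assumption about the dependency already used for SLURP persistence.

```go
package beacon

import (
	"encoding/json"

	"github.com/syndtr/goleveldb/leveldb"
)

// LevelDBBeaconStore persists manifests under the beacon::<manifest_id> prefix.
type LevelDBBeaconStore struct {
	db *leveldb.DB
}

func OpenBeaconStore(path string) (*LevelDBBeaconStore, error) {
	db, err := leveldb.OpenFile(path, nil)
	if err != nil {
		return nil, err
	}
	return &LevelDBBeaconStore{db: db}, nil
}

func (s *LevelDBBeaconStore) Put(m *BeaconManifest) error {
	raw, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return s.db.Put(LevelDBKey(m.ManifestID), raw, nil)
}

func (s *LevelDBBeaconStore) Get(manifestID string) (*BeaconManifest, error) {
	raw, err := s.db.Get(LevelDBKey(manifestID), nil)
	if err != nil {
		return nil, err
	}
	var m BeaconManifest
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	return &m, nil
}

func (s *LevelDBBeaconStore) Close() error { return s.db.Close() }
```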

View File

@@ -0,0 +1,52 @@
# WHOOSH ↔ CHORUS Integration Demo Plan (SEC-SLURP Track)
## Demo Objectives
- Showcase end-to-end persistence → UCXL beacon → Pin Steward → WHOOSH artifact submission flow.
- Validate role-based agent interactions with SLURP contexts (resolver + temporal graph) prior to DHT hardening.
- Capture metrics/telemetry needed for SEC-SLURP exit criteria and WHOOSH Phase 1 sign-off.
## Sequenced Milestones
1. **Persistence Validation Session**
- Run `GOWORK=off go test ./pkg/slurp/...` with stubs patched; demo LevelDB warm/load using `slurp_persistence_test.go`.
- Inspect beacon manifests via CLI (`slurpctl beacon list`).
- Deliverable: test log + manifest sample archived in UCXL.
2. **Beacon → Pin Steward Dry Run**
- Replay stored manifests through Pin Steward worker with mock DHT backend.
- Show replication planner queue + telemetry counters (`pin_steward_replica_heal_total`).
- Deliverable: decision record linking manifest to replication outcome.
3. **WHOOSH SLURP Proxy Alignment**
- Point WHOOSH dev stack (`npm run dev`) at local SLURP with beacon API enabled.
- Walk through council formation, capture SLURP artifact submission with beacon confirmation modal.
- Deliverable: screen recording + WHOOSH DB entry referencing beacon manifest id.
4. **DHT Resilience Checkpoint**
- Switch Pin Steward to libp2p DHT (once wired) and run replication + provider TTL check.
- Fail one node intentionally, demonstrate heal path + alert surfaced in WHOOSH UI.
- Deliverable: telemetry dump + alert screenshot.
5. **Governance & Telemetry Wrap-Up**
- Export Prometheus metrics (cache hit/miss, beacon writes, replication heals) into KACHING dashboard.
- Publish Decision Record documenting UCXL address flow, referencing SEC-SLURP docs.
## Roles & Responsibilities
- **SLURP Team:** finalize persistence build, implement beacon APIs, own Pin Steward worker.
- **WHOOSH Team:** wire beacon client, expose replication/encryption status in UI, capture council telemetry.
- **KACHING/SHHH Stakeholders:** validate telemetry ingestion and encryption custody notes.
- **Program Management:** schedule demo rehearsal, ensure Decision Records and UCXL addresses recorded.
## Tooling & Environments
- Local cluster via `docker compose up slurp whoosh pin-steward` (to be scripted in `commands/`).
- Use `make demo-sec-slurp` target to run integration harness (to be added).
- Prometheus/Grafana docker compose for metrics validation.
## Success Criteria
- Beacon manifest accessible from the WHOOSH UI with average latency under 2s.
- Pin Steward resolves under-replicated manifest within demo timeline (<30s) and records healing event.
- All demo steps logged with UCXL references and SHHH redaction checks passing.
## Open Items
- Need sample repo/issues to feed WHOOSH analyzer (consider `project-queues/active/WHOOSH/demo-data`).
- Determine minimal DHT cluster footprint for the demo (3 vs 5 nodes).
- Align on telemetry retention window for demo (24h?).

View File

@@ -0,0 +1,32 @@
# SEC-SLURP 1.1a DHT Resilience Supplement
## Requirements (derived from `docs/Modules/DHT.md`)
1. **Real DHT state & persistence**
- Replace mock DHT usage with libp2p-based storage or equivalent real implementation.
- Store DHT/blockstore data on persistent volumes (named volumes/ZFS/NFS) with node placement constraints.
- Ensure bootstrap nodes are stateful and survive container churn.
2. **Pin Steward + replication policy**
- Introduce a Pin Steward service that tracks UCXL CID manifests and enforces the replication factor (e.g. 3–5 replicas).
- Re-announce providers on Pulse/Reverb and heal under-replicated content.
- Schedule anti-entropy jobs to verify and repair replicas.
3. **Envelope encryption & shared key custody**
- Implement envelope encryption (DEK+KEK) with threshold/organizational custody rather than per-role ownership.
- Store KEK metadata with UCXL manifests; rotate via BACKBEAT.
- Update crypto/key-manager stubs to real implementations once available.
4. **Shared UCXL Beacon index**
- Maintain an authoritative CID registry (DR/UCXL) replicated outside individual agents.
- Ensure metadata updates are durable and role-agnostic to prevent stranded CIDs.
5. **CI/SLO validation**
- Add automated tests/health checks covering provider refresh, replication factor, and persistent-storage guarantees.
- Gate releases on DHT resilience checks (provider TTLs, replica counts).
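Requirement 3 describes the DEK+KEK envelope pattern. A minimal Go sketch using AES-GCM for both the payload and the key wrap is shown below; it assumes a 32-byte KEK handed over by KACHING/SHHH and is not the production custody design.

```go
package crypto

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// EnvelopeCiphertext keeps the wrapped DEK alongside the payload so KEK
// rotation (via BACKBEAT) only requires re-wrapping the DEK.
type EnvelopeCiphertext struct {
	WrappedDEK []byte
	WrapNonce  []byte
	Nonce      []byte
	Ciphertext []byte
}

func seal(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, gcm.Seal(nil, nonce, plaintext, nil), nil
}

// EncryptEnvelope generates a fresh 256-bit DEK, encrypts the payload with
// it, then wraps the DEK under the organisation-held KEK.
func EncryptEnvelope(kek, plaintext []byte) (*EnvelopeCiphertext, error) {
	dek := make([]byte, 32)
	if _, err := rand.Read(dek); err != nil {
		return nil, err
	}
	nonce, ct, err := seal(dek, plaintext)
	if err != nil {
		return nil, err
	}
	wrapNonce, wrapped, err := seal(kek, dek)
	if err != nil {
		return nil, err
	}
	return &EnvelopeCiphertext{WrappedDEK: wrapped, WrapNonce: wrapNonce, Nonce: nonce, Ciphertext: ct}, nil
}
```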
## Integration Path for SEC-SLURP 1.1
- Incorporate the above requirements as acceptance criteria alongside LevelDB persistence.
- Sequence work to: migrate DHT interactions, introduce Pin Steward, implement envelope crypto, and wire CI validation.
- Attach artifacts (Pin Steward design, envelope crypto spec, CI scripts) to the Phase 1 deliverable checklist.

View File

@@ -5,10 +5,14 @@
- Upgraded SLURP's lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
- Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
- Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
- Attempted `GOWORK=off go test ./pkg/slurp`; execution was blocked by legacy references to `config.Authority*` symbols in `pkg/slurp/context`, so the new test did not run.
- Implemented lightweight crypto/key-management stubs (`pkg/crypto/role_crypto_stub.go`, `pkg/crypto/key_manager_stub.go`) so SLURP modules compile while the production stack is ported.
- Updated DHT distribution and encrypted storage layers (`pkg/slurp/distribution/dht_impl.go`, `pkg/slurp/storage/encrypted_storage.go`) to use the crypto stubs, adding per-role fingerprints and durable decoding logic.
- Expanded storage metadata models (`pkg/slurp/storage/types.go`, `pkg/slurp/storage/backup_manager.go`) with fields referenced by backup/replication flows (progress, error messages, retention, data size).
- Incrementally stubbed/simplified distributed storage helpers to inch toward a compilable SLURP package.
- Attempted `GOWORK=off go test ./pkg/slurp`; the original authority-level blocker is resolved, but builds still fail in storage/index code due to remaining stub work (e.g., Bleve queries, DHT helpers).
## Recommended Next Steps
- Address the `config.Authority*` symbol drift (or scope down the impacted packages) so the SLURP test suite can compile cleanly, then rerun `GOWORK=off go test ./pkg/slurp` to validate persistence changes.
- Feed the durable store into the resolver and temporal graph implementations to finish the remaining Phase 1 SLURP roadmap items.
- Expand Prometheus metrics and logging to track cache hit/miss ratios plus persistence errors for SEC-SLURP observability goals.
- Review unrelated changes on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert to keep the branch focused.
- Stub the remaining storage/index dependencies (Bleve query scaffolding, UCXL helpers, `errorCh` queues, cache regex usage) or neutralize the heavy modules so that `GOWORK=off go test ./pkg/slurp` compiles and runs.
- Feed the durable store into the resolver and temporal graph implementations to finish the SEC-SLURP 1.1 milestone once the package builds cleanly.
- Extend Prometheus metrics/logging to track cache hit/miss ratios plus persistence errors for observability alignment.
- Review unrelated changes still tracked on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert for focus.
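One way to extend the restart coverage once the package compiles is sketched below; the constructor and method names (`NewSLURPWithStorage`, `UpsertContext`, `Resolve`) are placeholders for whatever the SLURP package actually exposes.

```go
package slurp

import "testing"

// TestContextSurvivesRestart is a hedged sketch: write through UpsertContext,
// close the store, reopen from the same directory, and resolve from disk.
// All identifiers below are hypothetical until the real API is confirmed.
func TestContextSurvivesRestart(t *testing.T) {
	dir := t.TempDir()

	s, err := NewSLURPWithStorage(dir) // assumed constructor
	if err != nil {
		t.Fatalf("open slurp: %v", err)
	}
	if err := s.UpsertContext("ucxl://demo/task#v1", []byte(`{"purpose":"demo"}`)); err != nil {
		t.Fatalf("upsert: %v", err)
	}
	if err := s.Close(); err != nil {
		t.Fatalf("close: %v", err)
	}

	reopened, err := NewSLURPWithStorage(dir)
	if err != nil {
		t.Fatalf("reopen slurp: %v", err)
	}
	defer reopened.Close()

	if _, err := reopened.Resolve("ucxl://demo/task#v1"); err != nil {
		t.Fatalf("resolve after restart: %v", err)
	}
}
```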

View File

@@ -130,7 +130,27 @@ type ResolutionConfig struct {
// SlurpConfig defines SLURP settings
type SlurpConfig struct {
Enabled bool `yaml:"enabled"`
Enabled bool `yaml:"enabled"`
BaseURL string `yaml:"base_url"`
APIKey string `yaml:"api_key"`
Timeout time.Duration `yaml:"timeout"`
RetryCount int `yaml:"retry_count"`
RetryDelay time.Duration `yaml:"retry_delay"`
TemporalAnalysis SlurpTemporalAnalysisConfig `yaml:"temporal_analysis"`
Performance SlurpPerformanceConfig `yaml:"performance"`
}
// SlurpTemporalAnalysisConfig captures temporal behaviour tuning for SLURP.
type SlurpTemporalAnalysisConfig struct {
MaxDecisionHops int `yaml:"max_decision_hops"`
StalenessCheckInterval time.Duration `yaml:"staleness_check_interval"`
StalenessThreshold float64 `yaml:"staleness_threshold"`
}
// SlurpPerformanceConfig exposes performance related tunables for SLURP.
type SlurpPerformanceConfig struct {
MaxConcurrentResolutions int `yaml:"max_concurrent_resolutions"`
MetricsCollectionInterval time.Duration `yaml:"metrics_collection_interval"`
}
// WHOOSHAPIConfig defines WHOOSH API integration settings
@@ -211,7 +231,21 @@ func LoadFromEnvironment() (*Config, error) {
},
},
Slurp: SlurpConfig{
Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
BaseURL: getEnvOrDefault("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090"),
APIKey: getEnvOrFileContent("CHORUS_SLURP_API_KEY", "CHORUS_SLURP_API_KEY_FILE"),
Timeout: getEnvDurationOrDefault("CHORUS_SLURP_API_TIMEOUT", 15*time.Second),
RetryCount: getEnvIntOrDefault("CHORUS_SLURP_API_RETRY_COUNT", 3),
RetryDelay: getEnvDurationOrDefault("CHORUS_SLURP_API_RETRY_DELAY", 2*time.Second),
TemporalAnalysis: SlurpTemporalAnalysisConfig{
MaxDecisionHops: getEnvIntOrDefault("CHORUS_SLURP_MAX_DECISION_HOPS", 5),
StalenessCheckInterval: getEnvDurationOrDefault("CHORUS_SLURP_STALENESS_CHECK_INTERVAL", 5*time.Minute),
StalenessThreshold: 0.2,
},
Performance: SlurpPerformanceConfig{
MaxConcurrentResolutions: getEnvIntOrDefault("CHORUS_SLURP_MAX_CONCURRENT_RESOLUTIONS", 4),
MetricsCollectionInterval: getEnvDurationOrDefault("CHORUS_SLURP_METRICS_COLLECTION_INTERVAL", time.Minute),
},
},
Security: SecurityConfig{
KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),

View File

@@ -0,0 +1,23 @@
package crypto
import "time"
// GenerateKey returns a deterministic placeholder key identifier for the given role.
func (km *KeyManager) GenerateKey(role string) (string, error) {
return "stub-key-" + role, nil
}
// DeprecateKey is a no-op in the stub implementation.
func (km *KeyManager) DeprecateKey(keyID string) error {
return nil
}
// GetKeysForRotation mirrors SEC-SLURP-1.1 key rotation discovery while remaining inert.
func (km *KeyManager) GetKeysForRotation(maxAge time.Duration) ([]*KeyInfo, error) {
return nil, nil
}
// ValidateKeyFingerprint accepts all fingerprints in the stubbed environment.
func (km *KeyManager) ValidateKeyFingerprint(role, fingerprint string) bool {
return true
}

View File

@@ -0,0 +1,75 @@
package crypto
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"chorus/pkg/config"
)
type RoleCrypto struct {
config *config.Config
}
func NewRoleCrypto(cfg *config.Config, _ interface{}, _ interface{}, _ interface{}) (*RoleCrypto, error) {
if cfg == nil {
return nil, fmt.Errorf("config cannot be nil")
}
return &RoleCrypto{config: cfg}, nil
}
func (rc *RoleCrypto) EncryptForRole(data []byte, role string) ([]byte, string, error) {
if len(data) == 0 {
return []byte{}, rc.fingerprint(data), nil
}
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
base64.StdEncoding.Encode(encoded, data)
return encoded, rc.fingerprint(data), nil
}
func (rc *RoleCrypto) DecryptForRole(data []byte, role string, _ string) ([]byte, error) {
if len(data) == 0 {
return []byte{}, nil
}
decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(decoded, data)
if err != nil {
return nil, err
}
return decoded[:n], nil
}
func (rc *RoleCrypto) EncryptContextForRoles(payload interface{}, roles []string, _ []string) ([]byte, error) {
raw, err := json.Marshal(payload)
if err != nil {
return nil, err
}
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(raw)))
base64.StdEncoding.Encode(encoded, raw)
return encoded, nil
}
func (rc *RoleCrypto) fingerprint(data []byte) string {
sum := sha256.Sum256(data)
return base64.StdEncoding.EncodeToString(sum[:])
}
type StorageAccessController interface {
CanStore(role, key string) bool
CanRetrieve(role, key string) bool
}
type StorageAuditLogger interface {
LogEncryptionOperation(role, key, operation string, success bool)
LogDecryptionOperation(role, key, operation string, success bool)
LogKeyRotation(role, keyID string, success bool, message string)
LogError(message string)
LogAccessDenial(role, key, operation string)
}
type KeyInfo struct {
Role string
KeyID string
}

View File

@@ -0,0 +1,284 @@
package alignment
import "time"
// GoalStatistics summarizes goal management metrics.
type GoalStatistics struct {
TotalGoals int
ActiveGoals int
Completed int
Archived int
LastUpdated time.Time
}
// AlignmentGapAnalysis captures detected misalignments that require follow-up.
type AlignmentGapAnalysis struct {
Address string
Severity string
Findings []string
DetectedAt time.Time
}
// AlignmentComparison provides a simple comparison view between two contexts.
type AlignmentComparison struct {
PrimaryScore float64
SecondaryScore float64
Differences []string
}
// AlignmentStatistics aggregates assessment metrics across contexts.
type AlignmentStatistics struct {
TotalAssessments int
AverageScore float64
SuccessRate float64
FailureRate float64
LastUpdated time.Time
}
// ProgressHistory captures historical progress samples for a goal.
type ProgressHistory struct {
GoalID string
Samples []ProgressSample
}
// ProgressSample represents a single progress measurement.
type ProgressSample struct {
Timestamp time.Time
Percentage float64
}
// CompletionPrediction represents a simple completion forecast for a goal.
type CompletionPrediction struct {
GoalID string
EstimatedFinish time.Time
Confidence float64
}
// ProgressStatistics aggregates goal progress metrics.
type ProgressStatistics struct {
AverageCompletion float64
OpenGoals int
OnTrackGoals int
AtRiskGoals int
}
// DriftHistory tracks historical drift events.
type DriftHistory struct {
Address string
Events []DriftEvent
}
// DriftEvent captures a single drift occurrence.
type DriftEvent struct {
Timestamp time.Time
Severity DriftSeverity
Details string
}
// DriftThresholds defines sensitivity thresholds for drift detection.
type DriftThresholds struct {
SeverityThreshold DriftSeverity
ScoreDelta float64
ObservationWindow time.Duration
}
// DriftPatternAnalysis summarizes detected drift patterns.
type DriftPatternAnalysis struct {
Patterns []string
Summary string
}
// DriftPrediction provides a lightweight stub for future drift forecasting.
type DriftPrediction struct {
Address string
Horizon time.Duration
Severity DriftSeverity
Confidence float64
}
// DriftAlert represents an alert emitted when drift exceeds thresholds.
type DriftAlert struct {
ID string
Address string
Severity DriftSeverity
CreatedAt time.Time
Message string
}
// GoalRecommendation summarises next actions for a specific goal.
type GoalRecommendation struct {
GoalID string
Title string
Description string
Priority int
}
// StrategicRecommendation captures higher-level alignment guidance.
type StrategicRecommendation struct {
Theme string
Summary string
Impact string
RecommendedBy string
}
// PrioritizedRecommendation wraps a recommendation with ranking metadata.
type PrioritizedRecommendation struct {
Recommendation *AlignmentRecommendation
Score float64
Rank int
}
// RecommendationHistory tracks lifecycle updates for a recommendation.
type RecommendationHistory struct {
RecommendationID string
Entries []RecommendationHistoryEntry
}
// RecommendationHistoryEntry represents a single change entry.
type RecommendationHistoryEntry struct {
Timestamp time.Time
Status ImplementationStatus
Notes string
}
// ImplementationStatus reflects execution state for recommendations.
type ImplementationStatus string
const (
ImplementationPending ImplementationStatus = "pending"
ImplementationActive ImplementationStatus = "active"
ImplementationBlocked ImplementationStatus = "blocked"
ImplementationDone ImplementationStatus = "completed"
)
// RecommendationEffectiveness offers coarse metrics on outcome quality.
type RecommendationEffectiveness struct {
SuccessRate float64
AverageTime time.Duration
Feedback []string
}
// RecommendationStatistics aggregates recommendation issuance metrics.
type RecommendationStatistics struct {
TotalCreated int
TotalCompleted int
AveragePriority float64
LastUpdated time.Time
}
// AlignmentMetrics is a lightweight placeholder exported for engine integration.
type AlignmentMetrics struct {
Assessments int
SuccessRate float64
FailureRate float64
AverageScore float64
}
// GoalMetrics is a stub summarising per-goal metrics.
type GoalMetrics struct {
GoalID string
AverageScore float64
SuccessRate float64
LastUpdated time.Time
}
// ProgressMetrics is a stub capturing aggregate progress data.
type ProgressMetrics struct {
OverallCompletion float64
ActiveGoals int
CompletedGoals int
UpdatedAt time.Time
}
// MetricsTrends wraps high-level trend information.
type MetricsTrends struct {
Metric string
TrendLine []float64
Timestamp time.Time
}
// MetricsReport represents a generated metrics report placeholder.
type MetricsReport struct {
ID string
Generated time.Time
Summary string
}
// MetricsConfiguration reflects configuration for metrics collection.
type MetricsConfiguration struct {
Enabled bool
Interval time.Duration
}
// SyncResult summarises a synchronisation run.
type SyncResult struct {
SyncedItems int
Errors []string
}
// ImportResult summarises the outcome of an import operation.
type ImportResult struct {
Imported int
Skipped int
Errors []string
}
// SyncSettings captures synchronisation preferences.
type SyncSettings struct {
Enabled bool
Interval time.Duration
}
// SyncStatus provides health information about sync processes.
type SyncStatus struct {
LastSync time.Time
Healthy bool
Message string
}
// AssessmentValidation provides validation results for assessments.
type AssessmentValidation struct {
Valid bool
Issues []string
CheckedAt time.Time
}
// ConfigurationValidation summarises configuration validation status.
type ConfigurationValidation struct {
Valid bool
Messages []string
}
// WeightsValidation describes validation for weighting schemes.
type WeightsValidation struct {
Normalized bool
Adjustments map[string]float64
}
// ConsistencyIssue represents a detected consistency issue.
type ConsistencyIssue struct {
Description string
Severity DriftSeverity
DetectedAt time.Time
}
// AlignmentHealthCheck is a stub for health check outputs.
type AlignmentHealthCheck struct {
Status string
Details string
CheckedAt time.Time
}
// NotificationRules captures notification configuration stubs.
type NotificationRules struct {
Enabled bool
Channels []string
}
// NotificationRecord represents a delivered notification.
type NotificationRecord struct {
ID string
Timestamp time.Time
Recipient string
Status string
}

View File

@@ -4,176 +4,175 @@ import (
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
)
// ProjectGoal represents a high-level project objective
type ProjectGoal struct {
ID string `json:"id"` // Unique identifier
Name string `json:"name"` // Goal name
Description string `json:"description"` // Detailed description
Keywords []string `json:"keywords"` // Associated keywords
Priority int `json:"priority"` // Priority level (1=highest)
Phase string `json:"phase"` // Project phase
Category string `json:"category"` // Goal category
Owner string `json:"owner"` // Goal owner
Status GoalStatus `json:"status"` // Current status
ID string `json:"id"` // Unique identifier
Name string `json:"name"` // Goal name
Description string `json:"description"` // Detailed description
Keywords []string `json:"keywords"` // Associated keywords
Priority int `json:"priority"` // Priority level (1=highest)
Phase string `json:"phase"` // Project phase
Category string `json:"category"` // Goal category
Owner string `json:"owner"` // Goal owner
Status GoalStatus `json:"status"` // Current status
// Success criteria
Metrics []string `json:"metrics"` // Success metrics
SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria
AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria
Metrics []string `json:"metrics"` // Success metrics
SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria
AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria
// Timeline
StartDate *time.Time `json:"start_date,omitempty"` // Goal start date
TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
StartDate *time.Time `json:"start_date,omitempty"` // Goal start date
TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
// Relationships
ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal
ChildGoalIDs []string `json:"child_goal_ids"` // Child goals
Dependencies []string `json:"dependencies"` // Goal dependencies
ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal
ChildGoalIDs []string `json:"child_goal_ids"` // Child goals
Dependencies []string `json:"dependencies"` // Goal dependencies
// Configuration
Weights *GoalWeights `json:"weights"` // Assessment weights
ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score
Weights *GoalWeights `json:"weights"` // Assessment weights
ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score
// Metadata
CreatedAt time.Time `json:"created_at"` // When created
UpdatedAt time.Time `json:"updated_at"` // When last updated
CreatedBy string `json:"created_by"` // Who created it
Tags []string `json:"tags"` // Goal tags
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
CreatedAt time.Time `json:"created_at"` // When created
UpdatedAt time.Time `json:"updated_at"` // When last updated
CreatedBy string `json:"created_by"` // Who created it
Tags []string `json:"tags"` // Goal tags
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// GoalStatus represents the current status of a goal
type GoalStatus string
const (
GoalStatusDraft GoalStatus = "draft" // Goal is in draft state
GoalStatusActive GoalStatus = "active" // Goal is active
GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold
GoalStatusCompleted GoalStatus = "completed" // Goal is completed
GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
GoalStatusArchived GoalStatus = "archived" // Goal is archived
GoalStatusDraft GoalStatus = "draft" // Goal is in draft state
GoalStatusActive GoalStatus = "active" // Goal is active
GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold
GoalStatusCompleted GoalStatus = "completed" // Goal is completed
GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
GoalStatusArchived GoalStatus = "archived" // Goal is archived
)
// SuccessCriterion represents a specific success criterion for a goal
type SuccessCriterion struct {
ID string `json:"id"` // Criterion ID
Description string `json:"description"` // Criterion description
MetricName string `json:"metric_name"` // Associated metric
TargetValue interface{} `json:"target_value"` // Target value
CurrentValue interface{} `json:"current_value"` // Current value
Unit string `json:"unit"` // Value unit
ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.)
Weight float64 `json:"weight"` // Criterion weight
Achieved bool `json:"achieved"` // Whether achieved
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
ID string `json:"id"` // Criterion ID
Description string `json:"description"` // Criterion description
MetricName string `json:"metric_name"` // Associated metric
TargetValue interface{} `json:"target_value"` // Target value
CurrentValue interface{} `json:"current_value"` // Current value
Unit string `json:"unit"` // Value unit
ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.)
Weight float64 `json:"weight"` // Criterion weight
Achieved bool `json:"achieved"` // Whether achieved
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
}
// GoalWeights represents weights for different aspects of goal alignment assessment
type GoalWeights struct {
KeywordMatch float64 `json:"keyword_match"` // Weight for keyword matching
SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
PurposeAlignment float64 `json:"purpose_alignment"` // Weight for purpose alignment
TechnologyMatch float64 `json:"technology_match"` // Weight for technology matching
QualityScore float64 `json:"quality_score"` // Weight for context quality
RecentActivity float64 `json:"recent_activity"` // Weight for recent activity
ImportanceScore float64 `json:"importance_score"` // Weight for component importance
KeywordMatch float64 `json:"keyword_match"` // Weight for keyword matching
SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
PurposeAlignment float64 `json:"purpose_alignment"` // Weight for purpose alignment
TechnologyMatch float64 `json:"technology_match"` // Weight for technology matching
QualityScore float64 `json:"quality_score"` // Weight for context quality
RecentActivity float64 `json:"recent_activity"` // Weight for recent activity
ImportanceScore float64 `json:"importance_score"` // Weight for component importance
}
// AlignmentAssessment represents overall alignment assessment for a context
type AlignmentAssessment struct {
Address ucxl.Address `json:"address"` // Context address
OverallScore float64 `json:"overall_score"` // Overall alignment score (0-1)
GoalAlignments []*GoalAlignment `json:"goal_alignments"` // Individual goal alignments
StrengthAreas []string `json:"strength_areas"` // Areas of strong alignment
WeaknessAreas []string `json:"weakness_areas"` // Areas of weak alignment
Recommendations []*AlignmentRecommendation `json:"recommendations"` // Improvement recommendations
AssessedAt time.Time `json:"assessed_at"` // When assessment was performed
AssessmentVersion string `json:"assessment_version"` // Assessment algorithm version
Confidence float64 `json:"confidence"` // Assessment confidence (0-1)
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
Address ucxl.Address `json:"address"` // Context address
OverallScore float64 `json:"overall_score"` // Overall alignment score (0-1)
GoalAlignments []*GoalAlignment `json:"goal_alignments"` // Individual goal alignments
StrengthAreas []string `json:"strength_areas"` // Areas of strong alignment
WeaknessAreas []string `json:"weakness_areas"` // Areas of weak alignment
Recommendations []*AlignmentRecommendation `json:"recommendations"` // Improvement recommendations
AssessedAt time.Time `json:"assessed_at"` // When assessment was performed
AssessmentVersion string `json:"assessment_version"` // Assessment algorithm version
Confidence float64 `json:"confidence"` // Assessment confidence (0-1)
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// GoalAlignment represents alignment assessment for a specific goal
type GoalAlignment struct {
GoalID string `json:"goal_id"` // Goal identifier
GoalName string `json:"goal_name"` // Goal name
AlignmentScore float64 `json:"alignment_score"` // Alignment score (0-1)
ComponentScores *AlignmentScores `json:"component_scores"` // Component-wise scores
MatchedKeywords []string `json:"matched_keywords"` // Keywords that matched
MatchedCriteria []string `json:"matched_criteria"` // Criteria that matched
Explanation string `json:"explanation"` // Alignment explanation
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in assessment
ImprovementAreas []string `json:"improvement_areas"` // Areas for improvement
Strengths []string `json:"strengths"` // Alignment strengths
GoalID string `json:"goal_id"` // Goal identifier
GoalName string `json:"goal_name"` // Goal name
AlignmentScore float64 `json:"alignment_score"` // Alignment score (0-1)
ComponentScores *AlignmentScores `json:"component_scores"` // Component-wise scores
MatchedKeywords []string `json:"matched_keywords"` // Keywords that matched
MatchedCriteria []string `json:"matched_criteria"` // Criteria that matched
Explanation string `json:"explanation"` // Alignment explanation
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in assessment
ImprovementAreas []string `json:"improvement_areas"` // Areas for improvement
Strengths []string `json:"strengths"` // Alignment strengths
}
// AlignmentScores represents component scores for alignment assessment
type AlignmentScores struct {
KeywordScore float64 `json:"keyword_score"` // Keyword matching score
SemanticScore float64 `json:"semantic_score"` // Semantic alignment score
PurposeScore float64 `json:"purpose_score"` // Purpose alignment score
TechnologyScore float64 `json:"technology_score"` // Technology alignment score
QualityScore float64 `json:"quality_score"` // Context quality score
ActivityScore float64 `json:"activity_score"` // Recent activity score
ImportanceScore float64 `json:"importance_score"` // Component importance score
KeywordScore float64 `json:"keyword_score"` // Keyword matching score
SemanticScore float64 `json:"semantic_score"` // Semantic alignment score
PurposeScore float64 `json:"purpose_score"` // Purpose alignment score
TechnologyScore float64 `json:"technology_score"` // Technology alignment score
QualityScore float64 `json:"quality_score"` // Context quality score
ActivityScore float64 `json:"activity_score"` // Recent activity score
ImportanceScore float64 `json:"importance_score"` // Component importance score
}
// AlignmentRecommendation represents a recommendation for improving alignment
type AlignmentRecommendation struct {
ID string `json:"id"` // Recommendation ID
Type RecommendationType `json:"type"` // Recommendation type
Priority int `json:"priority"` // Priority (1=highest)
Title string `json:"title"` // Recommendation title
Description string `json:"description"` // Detailed description
GoalID *string `json:"goal_id,omitempty"` // Related goal
Address ucxl.Address `json:"address"` // Context address
ID string `json:"id"` // Recommendation ID
Type RecommendationType `json:"type"` // Recommendation type
Priority int `json:"priority"` // Priority (1=highest)
Title string `json:"title"` // Recommendation title
Description string `json:"description"` // Detailed description
GoalID *string `json:"goal_id,omitempty"` // Related goal
Address ucxl.Address `json:"address"` // Context address
// Implementation details
ActionItems []string `json:"action_items"` // Specific actions
EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
ExpectedImpact ImpactLevel `json:"expected_impact"` // Expected impact
RequiredRoles []string `json:"required_roles"` // Required roles
Prerequisites []string `json:"prerequisites"` // Prerequisites
ActionItems []string `json:"action_items"` // Specific actions
EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
ExpectedImpact ImpactLevel `json:"expected_impact"` // Expected impact
RequiredRoles []string `json:"required_roles"` // Required roles
Prerequisites []string `json:"prerequisites"` // Prerequisites
// Status tracking
Status RecommendationStatus `json:"status"` // Implementation status
AssignedTo []string `json:"assigned_to"` // Assigned team members
CreatedAt time.Time `json:"created_at"` // When created
DueDate *time.Time `json:"due_date,omitempty"` // Implementation due date
CompletedAt *time.Time `json:"completed_at,omitempty"` // When completed
Status RecommendationStatus `json:"status"` // Implementation status
AssignedTo []string `json:"assigned_to"` // Assigned team members
CreatedAt time.Time `json:"created_at"` // When created
DueDate *time.Time `json:"due_date,omitempty"` // Implementation due date
CompletedAt *time.Time `json:"completed_at,omitempty"` // When completed
// Metadata
Tags []string `json:"tags"` // Recommendation tags
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
Tags []string `json:"tags"` // Recommendation tags
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// RecommendationType represents types of alignment recommendations
type RecommendationType string
const (
RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
RecommendationPurposeAlignment RecommendationType = "purpose_alignment" // Align purpose better
RecommendationTechnologyUpdate RecommendationType = "technology_update" // Update technology usage
RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
RecommendationDocumentation RecommendationType = "documentation" // Add/improve documentation
RecommendationRefactoring RecommendationType = "refactoring" // Code refactoring
RecommendationArchitectural RecommendationType = "architectural" // Architectural changes
RecommendationTesting RecommendationType = "testing" // Testing improvements
RecommendationPerformance RecommendationType = "performance" // Performance optimization
RecommendationSecurity RecommendationType = "security" // Security enhancements
RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
RecommendationPurposeAlignment RecommendationType = "purpose_alignment" // Align purpose better
RecommendationTechnologyUpdate RecommendationType = "technology_update" // Update technology usage
RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
RecommendationDocumentation RecommendationType = "documentation" // Add/improve documentation
RecommendationRefactoring RecommendationType = "refactoring" // Code refactoring
RecommendationArchitectural RecommendationType = "architectural" // Architectural changes
RecommendationTesting RecommendationType = "testing" // Testing improvements
RecommendationPerformance RecommendationType = "performance" // Performance optimization
RecommendationSecurity RecommendationType = "security" // Security enhancements
)
// EffortLevel represents estimated effort levels
type EffortLevel string
const (
EffortLow EffortLevel = "low" // Low effort (1-2 hours)
EffortMedium EffortLevel = "medium" // Medium effort (1-2 days)
EffortHigh EffortLevel = "high" // High effort (1-2 weeks)
EffortLow EffortLevel = "low" // Low effort (1-2 hours)
EffortMedium EffortLevel = "medium" // Medium effort (1-2 days)
EffortHigh EffortLevel = "high" // High effort (1-2 weeks)
EffortVeryHigh EffortLevel = "very_high" // Very high effort (>2 weeks)
)
@@ -181,9 +180,9 @@ const (
type ImpactLevel string
const (
ImpactLow ImpactLevel = "low" // Low impact
ImpactMedium ImpactLevel = "medium" // Medium impact
ImpactHigh ImpactLevel = "high" // High impact
ImpactLow ImpactLevel = "low" // Low impact
ImpactMedium ImpactLevel = "medium" // Medium impact
ImpactHigh ImpactLevel = "high" // High impact
ImpactCritical ImpactLevel = "critical" // Critical impact
)
@@ -201,38 +200,38 @@ const (
// GoalProgress represents progress toward goal achievement
type GoalProgress struct {
GoalID string `json:"goal_id"` // Goal identifier
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage (0-100)
CriteriaProgress []*CriterionProgress `json:"criteria_progress"` // Progress for each criterion
Milestones []*MilestoneProgress `json:"milestones"` // Milestone progress
Velocity float64 `json:"velocity"` // Progress velocity (% per day)
EstimatedCompletion *time.Time `json:"estimated_completion,omitempty"` // Estimated completion date
RiskFactors []string `json:"risk_factors"` // Identified risk factors
Blockers []string `json:"blockers"` // Current blockers
LastUpdated time.Time `json:"last_updated"` // When last updated
UpdatedBy string `json:"updated_by"` // Who last updated
GoalID string `json:"goal_id"` // Goal identifier
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage (0-100)
CriteriaProgress []*CriterionProgress `json:"criteria_progress"` // Progress for each criterion
Milestones []*MilestoneProgress `json:"milestones"` // Milestone progress
Velocity float64 `json:"velocity"` // Progress velocity (% per day)
EstimatedCompletion *time.Time `json:"estimated_completion,omitempty"` // Estimated completion date
RiskFactors []string `json:"risk_factors"` // Identified risk factors
Blockers []string `json:"blockers"` // Current blockers
LastUpdated time.Time `json:"last_updated"` // When last updated
UpdatedBy string `json:"updated_by"` // Who last updated
}
// CriterionProgress represents progress for a specific success criterion
type CriterionProgress struct {
CriterionID string `json:"criterion_id"` // Criterion ID
CurrentValue interface{} `json:"current_value"` // Current value
TargetValue interface{} `json:"target_value"` // Target value
ProgressPercentage float64 `json:"progress_percentage"` // Progress percentage
Achieved bool `json:"achieved"` // Whether achieved
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
Notes string `json:"notes"` // Progress notes
CriterionID string `json:"criterion_id"` // Criterion ID
CurrentValue interface{} `json:"current_value"` // Current value
TargetValue interface{} `json:"target_value"` // Target value
ProgressPercentage float64 `json:"progress_percentage"` // Progress percentage
Achieved bool `json:"achieved"` // Whether achieved
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
Notes string `json:"notes"` // Progress notes
}
// MilestoneProgress represents progress for a goal milestone
type MilestoneProgress struct {
MilestoneID string `json:"milestone_id"` // Milestone ID
Name string `json:"name"` // Milestone name
Status MilestoneStatus `json:"status"` // Current status
MilestoneID string `json:"milestone_id"` // Milestone ID
Name string `json:"name"` // Milestone name
Status MilestoneStatus `json:"status"` // Current status
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage
PlannedDate time.Time `json:"planned_date"` // Planned completion date
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
DelayReason string `json:"delay_reason"` // Reason for delay if applicable
PlannedDate time.Time `json:"planned_date"` // Planned completion date
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
DelayReason string `json:"delay_reason"` // Reason for delay if applicable
}
// MilestoneStatus represents status of a milestone
@@ -248,27 +247,27 @@ const (
// AlignmentDrift represents detected alignment drift
type AlignmentDrift struct {
Address ucxl.Address `json:"address"` // Context address
DriftType DriftType `json:"drift_type"` // Type of drift
Severity DriftSeverity `json:"severity"` // Drift severity
CurrentScore float64 `json:"current_score"` // Current alignment score
PreviousScore float64 `json:"previous_score"` // Previous alignment score
ScoreDelta float64 `json:"score_delta"` // Change in score
AffectedGoals []string `json:"affected_goals"` // Goals affected by drift
DetectedAt time.Time `json:"detected_at"` // When drift was detected
DriftReason []string `json:"drift_reason"` // Reasons for drift
RecommendedActions []string `json:"recommended_actions"` // Recommended actions
Priority DriftPriority `json:"priority"` // Priority for addressing
Address ucxl.Address `json:"address"` // Context address
DriftType DriftType `json:"drift_type"` // Type of drift
Severity DriftSeverity `json:"severity"` // Drift severity
CurrentScore float64 `json:"current_score"` // Current alignment score
PreviousScore float64 `json:"previous_score"` // Previous alignment score
ScoreDelta float64 `json:"score_delta"` // Change in score
AffectedGoals []string `json:"affected_goals"` // Goals affected by drift
DetectedAt time.Time `json:"detected_at"` // When drift was detected
DriftReason []string `json:"drift_reason"` // Reasons for drift
RecommendedActions []string `json:"recommended_actions"` // Recommended actions
Priority DriftPriority `json:"priority"` // Priority for addressing
}
// DriftType represents types of alignment drift
type DriftType string
const (
DriftTypeGradual DriftType = "gradual" // Gradual drift over time
DriftTypeSudden DriftType = "sudden" // Sudden drift
DriftTypeOscillating DriftType = "oscillating" // Oscillating drift pattern
DriftTypeGoalChange DriftType = "goal_change" // Due to goal changes
DriftTypeGradual DriftType = "gradual" // Gradual drift over time
DriftTypeSudden DriftType = "sudden" // Sudden drift
DriftTypeOscillating DriftType = "oscillating" // Oscillating drift pattern
DriftTypeGoalChange DriftType = "goal_change" // Due to goal changes
DriftTypeContextChange DriftType = "context_change" // Due to context changes
)
@@ -286,68 +285,68 @@ const (
type DriftPriority string
const (
DriftPriorityLow DriftPriority = "low" // Low priority
DriftPriorityMedium DriftPriority = "medium" // Medium priority
DriftPriorityHigh DriftPriority = "high" // High priority
DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
DriftPriorityLow DriftPriority = "low" // Low priority
DriftPriorityMedium DriftPriority = "medium" // Medium priority
DriftPriorityHigh DriftPriority = "high" // High priority
DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
)
// AlignmentTrends represents alignment trends over time
type AlignmentTrends struct {
Address ucxl.Address `json:"address"` // Context address
TimeRange time.Duration `json:"time_range"` // Analyzed time range
DataPoints []*TrendDataPoint `json:"data_points"` // Trend data points
OverallTrend TrendDirection `json:"overall_trend"` // Overall trend direction
TrendStrength float64 `json:"trend_strength"` // Trend strength (0-1)
Volatility float64 `json:"volatility"` // Score volatility
SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
AnomalousPoints []*AnomalousPoint `json:"anomalous_points"` // Anomalous data points
Predictions []*TrendPrediction `json:"predictions"` // Future trend predictions
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
Address ucxl.Address `json:"address"` // Context address
TimeRange time.Duration `json:"time_range"` // Analyzed time range
DataPoints []*TrendDataPoint `json:"data_points"` // Trend data points
OverallTrend TrendDirection `json:"overall_trend"` // Overall trend direction
TrendStrength float64 `json:"trend_strength"` // Trend strength (0-1)
Volatility float64 `json:"volatility"` // Score volatility
SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
AnomalousPoints []*AnomalousPoint `json:"anomalous_points"` // Anomalous data points
Predictions []*TrendPrediction `json:"predictions"` // Future trend predictions
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// TrendDataPoint represents a single data point in alignment trends
type TrendDataPoint struct {
Timestamp time.Time `json:"timestamp"` // Data point timestamp
AlignmentScore float64 `json:"alignment_score"` // Alignment score at this time
GoalScores map[string]float64 `json:"goal_scores"` // Individual goal scores
Events []string `json:"events"` // Events that occurred around this time
Timestamp time.Time `json:"timestamp"` // Data point timestamp
AlignmentScore float64 `json:"alignment_score"` // Alignment score at this time
GoalScores map[string]float64 `json:"goal_scores"` // Individual goal scores
Events []string `json:"events"` // Events that occurred around this time
}
// TrendDirection represents direction of alignment trends
type TrendDirection string
const (
TrendDirectionImproving TrendDirection = "improving" // Improving trend
TrendDirectionDeclining TrendDirection = "declining" // Declining trend
TrendDirectionStable TrendDirection = "stable" // Stable trend
TrendDirectionVolatile TrendDirection = "volatile" // Volatile trend
TrendDirectionImproving TrendDirection = "improving" // Improving trend
TrendDirectionDeclining TrendDirection = "declining" // Declining trend
TrendDirectionStable TrendDirection = "stable" // Stable trend
TrendDirectionVolatile TrendDirection = "volatile" // Volatile trend
)
// SeasonalPattern represents a detected seasonal pattern in alignment
type SeasonalPattern struct {
PatternType string `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
Period time.Duration `json:"period"` // Pattern period
Amplitude float64 `json:"amplitude"` // Pattern amplitude
Confidence float64 `json:"confidence"` // Pattern confidence
Description string `json:"description"` // Pattern description
PatternType string `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
Period time.Duration `json:"period"` // Pattern period
Amplitude float64 `json:"amplitude"` // Pattern amplitude
Confidence float64 `json:"confidence"` // Pattern confidence
Description string `json:"description"` // Pattern description
}
// AnomalousPoint represents an anomalous data point
type AnomalousPoint struct {
Timestamp time.Time `json:"timestamp"` // When anomaly occurred
ExpectedScore float64 `json:"expected_score"` // Expected alignment score
ActualScore float64 `json:"actual_score"` // Actual alignment score
AnomalyScore float64 `json:"anomaly_score"` // Anomaly score
PossibleCauses []string `json:"possible_causes"` // Possible causes
Timestamp time.Time `json:"timestamp"` // When anomaly occurred
ExpectedScore float64 `json:"expected_score"` // Expected alignment score
ActualScore float64 `json:"actual_score"` // Actual alignment score
AnomalyScore float64 `json:"anomaly_score"` // Anomaly score
PossibleCauses []string `json:"possible_causes"` // Possible causes
}
// TrendPrediction represents a prediction of future alignment trends
type TrendPrediction struct {
Timestamp time.Time `json:"timestamp"` // Predicted timestamp
PredictedScore float64 `json:"predicted_score"` // Predicted alignment score
Timestamp time.Time `json:"timestamp"` // Predicted timestamp
PredictedScore float64 `json:"predicted_score"` // Predicted alignment score
ConfidenceInterval *ConfidenceInterval `json:"confidence_interval"` // Confidence interval
Probability float64 `json:"probability"` // Prediction probability
Probability float64 `json:"probability"` // Prediction probability
}
// ConfidenceInterval represents a confidence interval for predictions
@@ -359,21 +358,21 @@ type ConfidenceInterval struct {
// AlignmentWeights represents weights for alignment calculation
type AlignmentWeights struct {
GoalWeights map[string]float64 `json:"goal_weights"` // Weights by goal ID
CategoryWeights map[string]float64 `json:"category_weights"` // Weights by goal category
PriorityWeights map[int]float64 `json:"priority_weights"` // Weights by priority level
PhaseWeights map[string]float64 `json:"phase_weights"` // Weights by project phase
RoleWeights map[string]float64 `json:"role_weights"` // Weights by role
ComponentWeights *AlignmentScores `json:"component_weights"` // Weights for score components
TemporalWeights *TemporalWeights `json:"temporal_weights"` // Temporal weighting factors
GoalWeights map[string]float64 `json:"goal_weights"` // Weights by goal ID
CategoryWeights map[string]float64 `json:"category_weights"` // Weights by goal category
PriorityWeights map[int]float64 `json:"priority_weights"` // Weights by priority level
PhaseWeights map[string]float64 `json:"phase_weights"` // Weights by project phase
RoleWeights map[string]float64 `json:"role_weights"` // Weights by role
ComponentWeights *AlignmentScores `json:"component_weights"` // Weights for score components
TemporalWeights *TemporalWeights `json:"temporal_weights"` // Temporal weighting factors
}
// TemporalWeights represents temporal weighting factors
type TemporalWeights struct {
RecentWeight float64 `json:"recent_weight"` // Weight for recent changes
DecayFactor float64 `json:"decay_factor"` // Score decay factor over time
RecencyWindow time.Duration `json:"recency_window"` // Window for considering recent activity
HistoricalWeight float64 `json:"historical_weight"` // Weight for historical alignment
RecentWeight float64 `json:"recent_weight"` // Weight for recent changes
DecayFactor float64 `json:"decay_factor"` // Score decay factor over time
RecencyWindow time.Duration `json:"recency_window"` // Window for considering recent activity
HistoricalWeight float64 `json:"historical_weight"` // Weight for historical alignment
}
// GoalFilter represents filtering criteria for goal listing
@@ -393,55 +392,55 @@ type GoalFilter struct {
// GoalHierarchy represents the hierarchical structure of goals
type GoalHierarchy struct {
RootGoals []*GoalNode `json:"root_goals"` // Root level goals
MaxDepth int `json:"max_depth"` // Maximum hierarchy depth
TotalGoals int `json:"total_goals"` // Total number of goals
GeneratedAt time.Time `json:"generated_at"` // When hierarchy was generated
RootGoals []*GoalNode `json:"root_goals"` // Root level goals
MaxDepth int `json:"max_depth"` // Maximum hierarchy depth
TotalGoals int `json:"total_goals"` // Total number of goals
GeneratedAt time.Time `json:"generated_at"` // When hierarchy was generated
}
// GoalNode represents a node in the goal hierarchy
type GoalNode struct {
Goal *ProjectGoal `json:"goal"` // Goal information
Children []*GoalNode `json:"children"` // Child goals
Depth int `json:"depth"` // Depth in hierarchy
Path []string `json:"path"` // Path from root
Goal *ProjectGoal `json:"goal"` // Goal information
Children []*GoalNode `json:"children"` // Child goals
Depth int `json:"depth"` // Depth in hierarchy
Path []string `json:"path"` // Path from root
}
// GoalValidation represents validation results for a goal
type GoalValidation struct {
Valid bool `json:"valid"` // Whether goal is valid
Issues []*ValidationIssue `json:"issues"` // Validation issues
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
ValidatedAt time.Time `json:"validated_at"` // When validated
Valid bool `json:"valid"` // Whether goal is valid
Issues []*ValidationIssue `json:"issues"` // Validation issues
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
ValidatedAt time.Time `json:"validated_at"` // When validated
}
// ValidationIssue represents a validation issue
type ValidationIssue struct {
Field string `json:"field"` // Affected field
Code string `json:"code"` // Issue code
Message string `json:"message"` // Issue message
Severity string `json:"severity"` // Issue severity
Suggestion string `json:"suggestion"` // Suggested fix
}
// ValidationWarning represents a validation warning
type ValidationWarning struct {
Field string `json:"field"` // Affected field
Code string `json:"code"` // Warning code
Message string `json:"message"` // Warning message
Suggestion string `json:"suggestion"` // Suggested improvement
}
// GoalMilestone represents a milestone for goal tracking
type GoalMilestone struct {
ID string `json:"id"` // Milestone ID
Name string `json:"name"` // Milestone name
Description string `json:"description"` // Milestone description
PlannedDate time.Time `json:"planned_date"` // Planned completion date
Weight float64 `json:"weight"` // Milestone weight
Criteria []string `json:"criteria"` // Completion criteria
Dependencies []string `json:"dependencies"` // Milestone dependencies
CreatedAt time.Time `json:"created_at"` // When created
}
// MilestoneStatus represents status of a milestone (duplicate removed)
@@ -449,39 +448,39 @@ type GoalMilestone struct {
// ProgressUpdate represents an update to goal progress
type ProgressUpdate struct {
UpdateType ProgressUpdateType `json:"update_type"` // Type of update
CompletionDelta float64 `json:"completion_delta"` // Change in completion percentage
CriteriaUpdates []*CriterionUpdate `json:"criteria_updates"` // Updates to criteria
MilestoneUpdates []*MilestoneUpdate `json:"milestone_updates"` // Updates to milestones
Notes string `json:"notes"` // Update notes
UpdatedBy string `json:"updated_by"` // Who made the update
Evidence []string `json:"evidence"` // Evidence for progress
RiskFactors []string `json:"risk_factors"` // New risk factors
Blockers []string `json:"blockers"` // New blockers
}
// ProgressUpdateType represents types of progress updates
type ProgressUpdateType string
const (
ProgressUpdateTypeIncrement ProgressUpdateType = "increment" // Incremental progress
ProgressUpdateTypeAbsolute ProgressUpdateType = "absolute" // Absolute progress value
ProgressUpdateTypeMilestone ProgressUpdateType = "milestone" // Milestone completion
ProgressUpdateTypeCriterion ProgressUpdateType = "criterion" // Criterion achievement
)
// CriterionUpdate represents an update to a success criterion
type CriterionUpdate struct {
CriterionID string `json:"criterion_id"` // Criterion ID
NewValue interface{} `json:"new_value"` // New current value
Achieved bool `json:"achieved"` // Whether now achieved
Notes string `json:"notes"` // Update notes
}
// MilestoneUpdate represents an update to a milestone
type MilestoneUpdate struct {
MilestoneID string `json:"milestone_id"` // Milestone ID
NewStatus MilestoneStatus `json:"new_status"` // New status
CompletedDate *time.Time `json:"completed_date,omitempty"` // Completion date if completed
Notes string `json:"notes"` // Update notes
}

View File

@@ -26,12 +26,25 @@ type ContextNode struct {
Insights []string `json:"insights"` // Analytical insights
// Hierarchy control
OverridesParent bool `json:"overrides_parent"` // Whether this overrides parent context
ContextSpecificity int `json:"context_specificity"` // Specificity level (higher = more specific)
AppliesToChildren bool `json:"applies_to_children"` // Whether this applies to child directories
AppliesTo ContextScope `json:"applies_to"` // Scope of application within hierarchy
Parent *string `json:"parent,omitempty"` // Parent context path
Children []string `json:"children,omitempty"` // Child context paths
// File metadata
FileType string `json:"file_type"` // File extension or type
Language *string `json:"language,omitempty"` // Programming language
Size *int64 `json:"size,omitempty"` // File size in bytes
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification timestamp
ContentHash *string `json:"content_hash,omitempty"` // Content hash for change detection
// Temporal metadata
GeneratedAt time.Time `json:"generated_at"` // When context was generated
UpdatedAt time.Time `json:"updated_at"` // Last update timestamp
CreatedBy string `json:"created_by"` // Who created the context
WhoUpdated string `json:"who_updated"` // Who performed the last update
RAGConfidence float64 `json:"rag_confidence"` // RAG system confidence (0-1)
// Access control

View File

@@ -40,7 +40,7 @@ func (ch *ConsistentHashingImpl) AddNode(nodeID string) error {
for i := 0; i < ch.virtualNodes; i++ {
virtualNodeKey := fmt.Sprintf("%s:%d", nodeID, i)
hash := ch.hashKey(virtualNodeKey)
ch.ring[hash] = nodeID
ch.sortedHashes = append(ch.sortedHashes, hash)
}
@@ -88,7 +88,7 @@ func (ch *ConsistentHashingImpl) GetNode(key string) (string, error) {
}
hash := ch.hashKey(key)
// Find the first node with hash >= key hash
idx := sort.Search(len(ch.sortedHashes), func(i int) bool {
return ch.sortedHashes[i] >= hash
@@ -175,7 +175,7 @@ func (ch *ConsistentHashingImpl) GetNodeDistribution() map[string]float64 {
// Calculate the range each node is responsible for
for i, hash := range ch.sortedHashes {
nodeID := ch.ring[hash]
var rangeSize uint64
if i == len(ch.sortedHashes)-1 {
// Last hash wraps around to first
@@ -230,7 +230,7 @@ func (ch *ConsistentHashingImpl) calculateLoadBalance() float64 {
}
avgVariance := totalVariance / float64(len(distribution))
// Convert to a balance score (higher is better, 1.0 is perfect)
// Use 1/(1+variance) to map variance to [0,1] range
return 1.0 / (1.0 + avgVariance/100.0)
@@ -261,11 +261,11 @@ func (ch *ConsistentHashingImpl) GetMetrics() *ConsistentHashMetrics {
defer ch.mu.RUnlock()
return &ConsistentHashMetrics{
TotalKeys: 0, // Would be maintained by usage tracking
NodeUtilization: ch.GetNodeDistribution(),
RebalanceEvents: 0, // Would be maintained by event tracking
AverageSeekTime: 0.1, // Placeholder - would be measured
LoadBalanceScore: ch.calculateLoadBalance(),
LastRebalanceTime: 0, // Would be maintained by event tracking
}
}
@@ -306,7 +306,7 @@ func (ch *ConsistentHashingImpl) addNodeUnsafe(nodeID string) error {
for i := 0; i < ch.virtualNodes; i++ {
virtualNodeKey := fmt.Sprintf("%s:%d", nodeID, i)
hash := ch.hashKey(virtualNodeKey)
ch.ring[hash] = nodeID
ch.sortedHashes = append(ch.sortedHashes, hash)
}
@@ -333,7 +333,7 @@ func (ch *ConsistentHashingImpl) SetVirtualNodeCount(count int) error {
defer ch.mu.Unlock()
ch.virtualNodes = count
// Rehash with new virtual node count
return ch.Rehash()
}
@@ -364,8 +364,8 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
if hash >= keyHash {
distance = hash - keyHash
} else {
// Wrap around distance without overflowing 32-bit space
distance = uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
}
distances = append(distances, struct {
@@ -397,4 +397,4 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
}
return nodes, hashes, nil
}
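The wrap-around branch above widens to `uint64` before subtracting so that `1<<32` cannot overflow the 32-bit hash space. A standalone sketch of the same distance rule (the `ringDistance` helper is illustrative, not part of the package):

```go
package main

import "fmt"

// ringDistance returns the clockwise distance from keyHash to nodeHash on a
// 32-bit hash ring. The wrap-around branch widens to uint64 so that 1<<32
// does not overflow before the subtraction.
func ringDistance(keyHash, nodeHash uint32) uint32 {
	if nodeHash >= keyHash {
		return nodeHash - keyHash
	}
	return uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(nodeHash))
}

func main() {
	fmt.Println(ringDistance(10, 42))        // 32
	fmt.Println(ringDistance(4294967290, 5)) // 11 (wraps past the top of the ring)
}
```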

View File

@@ -7,39 +7,39 @@ import (
"sync"
"time"
"chorus/pkg/dht"
"chorus/pkg/crypto"
"chorus/pkg/election"
"chorus/pkg/config"
"chorus/pkg/ucxl"
"chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/election"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// DistributionCoordinator orchestrates distributed context operations across the cluster
type DistributionCoordinator struct {
mu sync.RWMutex
config *config.Config
dht dht.DHT
roleCrypto *crypto.RoleCrypto
election election.Election
distributor ContextDistributor
replicationMgr ReplicationManager
conflictResolver ConflictResolver
gossipProtocol GossipProtocol
networkMgr NetworkManager
// Coordination state
isLeader bool
leaderID string
coordinationTasks chan *CoordinationTask
distributionQueue chan *DistributionRequest
roleFilters map[string]*RoleFilter
healthMonitors map[string]*HealthMonitor
// Statistics and metrics
stats *CoordinationStatistics
performanceMetrics *PerformanceMetrics
// Configuration
maxConcurrentTasks int
healthCheckInterval time.Duration
@@ -49,14 +49,14 @@ type DistributionCoordinator struct {
// CoordinationTask represents a task for the coordinator
type CoordinationTask struct {
TaskID string `json:"task_id"`
TaskType CoordinationTaskType `json:"task_type"`
Priority Priority `json:"priority"`
CreatedAt time.Time `json:"created_at"`
RequestedBy string `json:"requested_by"`
Payload interface{} `json:"payload"`
Context context.Context `json:"-"`
Callback func(error) `json:"-"`
}
// CoordinationTaskType represents different types of coordination tasks
@@ -74,55 +74,55 @@ const (
// DistributionRequest represents a request for context distribution
type DistributionRequest struct {
RequestID string `json:"request_id"`
ContextNode *slurpContext.ContextNode `json:"context_node"`
TargetRoles []string `json:"target_roles"`
Priority Priority `json:"priority"`
RequesterID string `json:"requester_id"`
CreatedAt time.Time `json:"created_at"`
Options *DistributionOptions `json:"options"`
Callback func(*DistributionResult, error) `json:"-"`
}
// DistributionOptions contains options for context distribution
type DistributionOptions struct {
ReplicationFactor int `json:"replication_factor"`
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
EncryptionLevel crypto.AccessLevel `json:"encryption_level"`
TTL *time.Duration `json:"ttl,omitempty"`
PreferredZones []string `json:"preferred_zones"`
ExcludedNodes []string `json:"excluded_nodes"`
ConflictResolution ResolutionType `json:"conflict_resolution"`
}
// DistributionResult represents the result of a distribution operation
type DistributionResult struct {
RequestID string `json:"request_id"`
Success bool `json:"success"`
DistributedNodes []string `json:"distributed_nodes"`
ReplicationFactor int `json:"replication_factor"`
ProcessingTime time.Duration `json:"processing_time"`
Errors []string `json:"errors"`
ConflictResolved *ConflictResolution `json:"conflict_resolved,omitempty"`
CompletedAt time.Time `json:"completed_at"`
}
// RoleFilter manages role-based filtering for context access
type RoleFilter struct {
RoleID string `json:"role_id"`
AccessLevel crypto.AccessLevel `json:"access_level"`
AllowedCompartments []string `json:"allowed_compartments"`
FilterRules []*FilterRule `json:"filter_rules"`
LastUpdated time.Time `json:"last_updated"`
}
// FilterRule represents a single filtering rule
type FilterRule struct {
RuleID string `json:"rule_id"`
RuleType FilterRuleType `json:"rule_type"`
Pattern string `json:"pattern"`
Action FilterAction `json:"action"`
Metadata map[string]interface{} `json:"metadata"`
}
// FilterRuleType represents different types of filter rules
@@ -139,10 +139,10 @@ const (
type FilterAction string
const (
FilterActionAllow FilterAction = "allow"
FilterActionDeny FilterAction = "deny"
FilterActionModify FilterAction = "modify"
FilterActionAudit FilterAction = "audit"
)
// HealthMonitor monitors the health of a specific component
@@ -160,10 +160,10 @@ type HealthMonitor struct {
type ComponentType string
const (
ComponentTypeDHT ComponentType = "dht"
ComponentTypeReplication ComponentType = "replication"
ComponentTypeGossip ComponentType = "gossip"
ComponentTypeNetwork ComponentType = "network"
ComponentTypeConflictResolver ComponentType = "conflict_resolver"
)
@@ -190,13 +190,13 @@ type CoordinationStatistics struct {
// PerformanceMetrics tracks detailed performance metrics
type PerformanceMetrics struct {
ThroughputPerSecond float64 `json:"throughput_per_second"`
LatencyPercentiles map[string]float64 `json:"latency_percentiles"`
ErrorRateByType map[string]float64 `json:"error_rate_by_type"`
ResourceUtilization map[string]float64 `json:"resource_utilization"`
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
StorageMetrics *StorageMetrics `json:"storage_metrics"`
LastCalculated time.Time `json:"last_calculated"`
}
// NetworkMetrics tracks network-related performance
@@ -210,24 +210,24 @@ type NetworkMetrics struct {
// StorageMetrics tracks storage-related performance
type StorageMetrics struct {
TotalContexts int64 `json:"total_contexts"`
StorageUtilization float64 `json:"storage_utilization"`
CompressionRatio float64 `json:"compression_ratio"`
ReplicationEfficiency float64 `json:"replication_efficiency"`
CacheHitRate float64 `json:"cache_hit_rate"`
}
// NewDistributionCoordinator creates a new distribution coordinator
func NewDistributionCoordinator(
config *config.Config,
dhtInstance dht.DHT,
roleCrypto *crypto.RoleCrypto,
election election.Election,
) (*DistributionCoordinator, error) {
if config == nil {
return nil, fmt.Errorf("config is required")
}
if dhtInstance == nil {
return nil, fmt.Errorf("DHT instance is required")
}
if roleCrypto == nil {
@@ -238,14 +238,14 @@ func NewDistributionCoordinator(
}
// Create distributor
distributor, err := NewDHTContextDistributor(dhtInstance, roleCrypto, election, config)
if err != nil {
return nil, fmt.Errorf("failed to create context distributor: %w", err)
}
coord := &DistributionCoordinator{
config: config,
dht: dhtInstance,
roleCrypto: roleCrypto,
election: election,
distributor: distributor,
@@ -264,9 +264,9 @@ func NewDistributionCoordinator(
LatencyPercentiles: make(map[string]float64),
ErrorRateByType: make(map[string]float64),
ResourceUtilization: make(map[string]float64),
NetworkMetrics: &NetworkMetrics{},
StorageMetrics: &StorageMetrics{},
LastCalculated: time.Now(),
},
}
@@ -356,7 +356,7 @@ func (dc *DistributionCoordinator) CoordinateReplication(
CreatedAt: time.Now(),
RequestedBy: dc.config.Agent.ID,
Payload: map[string]interface{}{
"address": address,
"address": address,
"target_factor": targetFactor,
},
Context: ctx,
@@ -398,14 +398,14 @@ func (dc *DistributionCoordinator) GetClusterHealth() (*ClusterHealth, error) {
defer dc.mu.RUnlock()
health := &ClusterHealth{
OverallStatus: dc.calculateOverallHealth(),
NodeCount: len(dc.healthMonitors) + 1, // Placeholder count including current node
HealthyNodes: 0,
UnhealthyNodes: 0,
ComponentHealth: make(map[string]*ComponentHealth),
LastUpdated: time.Now(),
Alerts: []string{},
Recommendations: []string{},
}
// Calculate component health
@@ -582,7 +582,7 @@ func (dc *DistributionCoordinator) initializeComponents() error {
func (dc *DistributionCoordinator) initializeRoleFilters() {
// Initialize role filters based on configuration
roles := []string{"senior_architect", "project_manager", "devops_engineer", "backend_developer", "frontend_developer"}
for _, role := range roles {
dc.roleFilters[role] = &RoleFilter{
RoleID: role,
@@ -598,8 +598,8 @@ func (dc *DistributionCoordinator) initializeHealthMonitors() {
components := map[string]ComponentType{
"dht": ComponentTypeDHT,
"replication": ComponentTypeReplication,
"gossip": ComponentTypeGossip,
"network": ComponentTypeNetwork,
"gossip": ComponentTypeGossip,
"network": ComponentTypeNetwork,
"conflict_resolver": ComponentTypeConflictResolver,
}
@@ -682,8 +682,8 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
Success: false,
DistributedNodes: []string{},
ProcessingTime: 0,
Errors: []string{},
CompletedAt: time.Now(),
}
// Execute distribution via distributor
@@ -703,14 +703,14 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
// ClusterHealth represents overall cluster health
type ClusterHealth struct {
OverallStatus HealthStatus `json:"overall_status"`
NodeCount int `json:"node_count"`
HealthyNodes int `json:"healthy_nodes"`
UnhealthyNodes int `json:"unhealthy_nodes"`
ComponentHealth map[string]*ComponentHealth `json:"component_health"`
LastUpdated time.Time `json:"last_updated"`
Alerts []string `json:"alerts"`
Recommendations []string `json:"recommendations"`
}
// ComponentHealth represents individual component health
@@ -736,14 +736,14 @@ func (dc *DistributionCoordinator) getDefaultDistributionOptions() *Distribution
return &DistributionOptions{
ReplicationFactor: 3,
ConsistencyLevel: ConsistencyEventual,
EncryptionLevel: crypto.AccessLevel(slurpContext.AccessMedium),
ConflictResolution: ResolutionMerged,
}
}
func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
// Placeholder implementation
return crypto.AccessLevel(slurpContext.AccessMedium)
}
func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {
@@ -796,13 +796,13 @@ func (dc *DistributionCoordinator) updatePerformanceMetrics() {
func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
switch severity {
case ConflictSeverityCritical:
return PriorityCritical
case ConflictSeverityHigh:
return PriorityHigh
case ConflictSeverityMedium:
return PriorityNormal
default:
return PriorityLow
}
}

View File

@@ -9,12 +9,12 @@ import (
"sync"
"time"
"chorus/pkg/dht"
"chorus/pkg/crypto"
"chorus/pkg/election"
"chorus/pkg/ucxl"
"chorus/pkg/config"
"chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/election"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// ContextDistributor handles distributed context operations via DHT
@@ -27,62 +27,68 @@ type ContextDistributor interface {
// The context is encrypted for each specified role and distributed across
// the cluster with the configured replication factor
DistributeContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error
// RetrieveContext gets context from DHT and decrypts for the requesting role
// Automatically handles role-based decryption and returns the resolved context
RetrieveContext(ctx context.Context, address ucxl.Address, role string) (*slurpContext.ResolvedContext, error)
// UpdateContext updates existing distributed context with conflict resolution
// Uses vector clocks and leader coordination for consistent updates
UpdateContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) (*ConflictResolution, error)
// DeleteContext removes context from distributed storage
// Handles distributed deletion across all replicas
DeleteContext(ctx context.Context, address ucxl.Address) error
// ListDistributedContexts lists contexts available in the DHT for a role
// Provides efficient enumeration with role-based filtering
ListDistributedContexts(ctx context.Context, role string, criteria *DistributionCriteria) ([]*DistributedContextInfo, error)
// Sync synchronizes local state with distributed DHT
// Ensures eventual consistency by exchanging metadata with peers
Sync(ctx context.Context) (*SyncResult, error)
// Replicate ensures context has the desired replication factor
// Manages replica placement and health across cluster nodes
Replicate(ctx context.Context, address ucxl.Address, replicationFactor int) error
// GetReplicaHealth returns health status of context replicas
// Provides visibility into replication status and node health
GetReplicaHealth(ctx context.Context, address ucxl.Address) (*ReplicaHealth, error)
// GetDistributionStats returns distribution performance statistics
GetDistributionStats() (*DistributionStatistics, error)
// SetReplicationPolicy configures replication behavior
SetReplicationPolicy(policy *ReplicationPolicy) error
// Start initializes background distribution routines
Start(ctx context.Context) error
// Stop releases distribution resources
Stop(ctx context.Context) error
}
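A caller-side sketch of the intended distribute/retrieve round trip against this interface (illustrative only; it assumes an already-constructed distributor and context node, and that the file's package is named `distribution`):

```go
package distribution // assumed package name for this sketch

import (
	"context"
	"fmt"

	slurpContext "chorus/pkg/slurp/context"
)

// distributeAndRead is an illustrative helper showing the intended call order:
// encrypt-and-store the context for a set of roles, then resolve it back as one role.
func distributeAndRead(ctx context.Context, dist ContextDistributor, node *slurpContext.ContextNode, role string) (*slurpContext.ResolvedContext, error) {
	if err := dist.DistributeContext(ctx, node, []string{role}); err != nil {
		return nil, fmt.Errorf("distribute %s: %w", node.UCXLAddress.String(), err)
	}
	resolved, err := dist.RetrieveContext(ctx, node.UCXLAddress, role)
	if err != nil {
		return nil, fmt.Errorf("retrieve %s: %w", node.UCXLAddress.String(), err)
	}
	return resolved, nil
}
```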
// DHTStorage provides direct DHT storage operations for context data
type DHTStorage interface {
// Put stores encrypted context data in the DHT
Put(ctx context.Context, key string, data []byte, options *DHTStoreOptions) error
// Get retrieves encrypted context data from the DHT
Get(ctx context.Context, key string) ([]byte, *DHTMetadata, error)
// Delete removes data from the DHT
Delete(ctx context.Context, key string) error
// Exists checks if data exists in the DHT
Exists(ctx context.Context, key string) (bool, error)
// FindProviders finds nodes that have the specified data
FindProviders(ctx context.Context, key string) ([]string, error)
// ListKeys lists all keys matching a pattern
ListKeys(ctx context.Context, pattern string) ([]string, error)
// GetStats returns DHT operation statistics
GetStats() (*DHTStatistics, error)
}
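Similarly, a minimal sketch of the raw storage path beneath the distributor; the three-replica default and `PriorityNormal` mirror values used elsewhere in this change, while the helper itself is hypothetical:

```go
package distribution // assumed package name for this sketch

import (
	"context"
	"fmt"
)

// putThenGet writes an encrypted blob with explicit storage options and reads it
// back, returning the stored metadata. Purely illustrative; keys would normally
// come from the package's key generator.
func putThenGet(ctx context.Context, store DHTStorage, key string, blob []byte) (*DHTMetadata, error) {
	opts := &DHTStoreOptions{
		ReplicationFactor: 3,
		Priority:          PriorityNormal,
		Compress:          true,
		Checksum:          true,
		Metadata:          map[string]interface{}{"source": "sketch"},
	}
	if err := store.Put(ctx, key, blob, opts); err != nil {
		return nil, fmt.Errorf("put %s: %w", key, err)
	}
	_, meta, err := store.Get(ctx, key)
	if err != nil {
		return nil, fmt.Errorf("get %s: %w", key, err)
	}
	return meta, nil
}
```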
@@ -92,18 +98,18 @@ type ConflictResolver interface {
// ResolveConflict resolves conflicts between concurrent context updates
// Uses vector clocks and semantic merging rules for resolution
ResolveConflict(ctx context.Context, local *slurpContext.ContextNode, remote *slurpContext.ContextNode) (*ConflictResolution, error)
// DetectConflicts detects potential conflicts before they occur
// Provides early warning for conflicting operations
DetectConflicts(ctx context.Context, update *slurpContext.ContextNode) ([]*PotentialConflict, error)
// MergeContexts merges multiple context versions semantically
// Combines changes from different sources intelligently
MergeContexts(ctx context.Context, contexts []*slurpContext.ContextNode) (*slurpContext.ContextNode, error)
// GetConflictHistory returns history of resolved conflicts
GetConflictHistory(ctx context.Context, address ucxl.Address) ([]*ConflictResolution, error)
// SetResolutionStrategy configures conflict resolution strategy
SetResolutionStrategy(strategy *ResolutionStrategy) error
}
@@ -112,19 +118,19 @@ type ConflictResolver interface {
type ReplicationManager interface {
// EnsureReplication ensures context meets replication requirements
EnsureReplication(ctx context.Context, address ucxl.Address, factor int) error
// RepairReplicas repairs missing or corrupted replicas
RepairReplicas(ctx context.Context, address ucxl.Address) (*RepairResult, error)
// BalanceReplicas rebalances replicas across cluster nodes
BalanceReplicas(ctx context.Context) (*RebalanceResult, error)
// GetReplicationStatus returns current replication status
GetReplicationStatus(ctx context.Context, address ucxl.Address) (*ReplicationStatus, error)
// SetReplicationFactor sets the desired replication factor
SetReplicationFactor(factor int) error
// GetReplicationStats returns replication statistics
GetReplicationStats() (*ReplicationStatistics, error)
}
@@ -133,19 +139,19 @@ type ReplicationManager interface {
type GossipProtocol interface {
// StartGossip begins gossip protocol for metadata synchronization
StartGossip(ctx context.Context) error
// StopGossip stops gossip protocol
StopGossip(ctx context.Context) error
// GossipMetadata exchanges metadata with peer nodes
GossipMetadata(ctx context.Context, peer string) error
// GetGossipState returns current gossip protocol state
GetGossipState() (*GossipState, error)
// SetGossipInterval configures gossip frequency
SetGossipInterval(interval time.Duration) error
// GetGossipStats returns gossip protocol statistics
GetGossipStats() (*GossipStatistics, error)
}
@@ -154,19 +160,19 @@ type GossipProtocol interface {
type NetworkManager interface {
// DetectPartition detects network partitions in the cluster
DetectPartition(ctx context.Context) (*PartitionInfo, error)
// GetTopology returns current network topology
GetTopology(ctx context.Context) (*NetworkTopology, error)
// GetPeers returns list of available peer nodes
GetPeers(ctx context.Context) ([]*PeerInfo, error)
// CheckConnectivity checks connectivity to peer nodes
CheckConnectivity(ctx context.Context, peers []string) (*ConnectivityReport, error)
// RecoverFromPartition attempts to recover from network partition
RecoverFromPartition(ctx context.Context) (*RecoveryResult, error)
// GetNetworkStats returns network performance statistics
GetNetworkStats() (*NetworkStatistics, error)
}
@@ -175,59 +181,59 @@ type NetworkManager interface {
// DistributionCriteria represents criteria for listing distributed contexts
type DistributionCriteria struct {
Tags []string `json:"tags"` // Required tags
Technologies []string `json:"technologies"` // Required technologies
MinReplicas int `json:"min_replicas"` // Minimum replica count
MaxAge *time.Duration `json:"max_age"` // Maximum age
HealthyOnly bool `json:"healthy_only"` // Only healthy replicas
Limit int `json:"limit"` // Maximum results
Offset int `json:"offset"` // Result offset
}
// DistributedContextInfo represents information about distributed context
type DistributedContextInfo struct {
Address ucxl.Address `json:"address"` // Context address
Roles []string `json:"roles"` // Accessible roles
ReplicaCount int `json:"replica_count"` // Number of replicas
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
LastUpdated time.Time `json:"last_updated"` // Last update time
Version int64 `json:"version"` // Version number
Size int64 `json:"size"` // Data size
Checksum string `json:"checksum"` // Data checksum
}
// ConflictResolution represents the result of conflict resolution
type ConflictResolution struct {
Address ucxl.Address `json:"address"` // Context address
ResolutionType ResolutionType `json:"resolution_type"` // How conflict was resolved
MergedContext *slurpContext.ContextNode `json:"merged_context"` // Resulting merged context
ConflictingSources []string `json:"conflicting_sources"` // Sources of conflict
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
ResolvedAt time.Time `json:"resolved_at"` // When resolved
Confidence float64 `json:"confidence"` // Confidence in resolution
ManualReview bool `json:"manual_review"` // Whether manual review needed
}
// ResolutionType represents different types of conflict resolution
type ResolutionType string
const (
ResolutionMerged ResolutionType = "merged" // Contexts were merged
ResolutionLastWriter ResolutionType = "last_writer" // Last writer wins
ResolutionLeaderDecision ResolutionType = "leader_decision" // Leader made decision
ResolutionManual ResolutionType = "manual" // Manual resolution required
ResolutionFailed ResolutionType = "failed" // Resolution failed
)
// PotentialConflict represents a detected potential conflict
type PotentialConflict struct {
Address ucxl.Address `json:"address"` // Context address
ConflictType ConflictType `json:"conflict_type"` // Type of conflict
Description string `json:"description"` // Conflict description
Severity ConflictSeverity `json:"severity"` // Conflict severity
AffectedFields []string `json:"affected_fields"` // Fields in conflict
Suggestions []string `json:"suggestions"` // Resolution suggestions
DetectedAt time.Time `json:"detected_at"` // When detected
}
// ConflictType represents different types of conflicts
@@ -245,88 +251,88 @@ const (
type ConflictSeverity string
const (
ConflictSeverityLow ConflictSeverity = "low" // Low severity - auto-resolvable
ConflictSeverityMedium ConflictSeverity = "medium" // Medium severity - may need review
ConflictSeverityHigh ConflictSeverity = "high" // High severity - needs attention
ConflictSeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
)
// ResolutionStrategy represents conflict resolution strategy configuration
type ResolutionStrategy struct {
DefaultResolution ResolutionType `json:"default_resolution"` // Default resolution method
FieldPriorities map[string]int `json:"field_priorities"` // Field priority mapping
AutoMergeEnabled bool `json:"auto_merge_enabled"` // Enable automatic merging
RequireConsensus bool `json:"require_consensus"` // Require node consensus
LeaderBreaksTies bool `json:"leader_breaks_ties"` // Leader resolves ties
MaxConflictAge time.Duration `json:"max_conflict_age"` // Max age before escalation
EscalationRoles []string `json:"escalation_roles"` // Roles for manual escalation
}
// SyncResult represents the result of synchronization operation
type SyncResult struct {
SyncedContexts int `json:"synced_contexts"` // Contexts synchronized
ConflictsResolved int `json:"conflicts_resolved"` // Conflicts resolved
Errors []string `json:"errors"` // Synchronization errors
SyncTime time.Duration `json:"sync_time"` // Total sync time
PeersContacted int `json:"peers_contacted"` // Number of peers contacted
DataTransferred int64 `json:"data_transferred"` // Bytes transferred
SyncedAt time.Time `json:"synced_at"` // When sync completed
}
// ReplicaHealth represents health status of context replicas
type ReplicaHealth struct {
Address ucxl.Address `json:"address"` // Context address
TotalReplicas int `json:"total_replicas"` // Total replica count
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
FailedReplicas int `json:"failed_replicas"` // Failed replica count
ReplicaNodes []*ReplicaNode `json:"replica_nodes"` // Individual replica status
OverallHealth HealthStatus `json:"overall_health"` // Overall health status
LastChecked time.Time `json:"last_checked"` // When last checked
RepairNeeded bool `json:"repair_needed"` // Whether repair is needed
}
// ReplicaNode represents status of individual replica node
type ReplicaNode struct {
NodeID string `json:"node_id"` // Node identifier
Status ReplicaStatus `json:"status"` // Replica status
LastSeen time.Time `json:"last_seen"` // When last seen
Version int64 `json:"version"` // Context version
Checksum string `json:"checksum"` // Data checksum
Latency time.Duration `json:"latency"` // Network latency
NetworkAddress string `json:"network_address"` // Network address
}
// ReplicaStatus represents status of individual replica
type ReplicaStatus string
const (
ReplicaHealthy ReplicaStatus = "healthy" // Replica is healthy
ReplicaStale ReplicaStatus = "stale" // Replica is stale
ReplicaCorrupted ReplicaStatus = "corrupted" // Replica is corrupted
ReplicaUnreachable ReplicaStatus = "unreachable" // Replica is unreachable
ReplicaSyncing ReplicaStatus = "syncing" // Replica is syncing
)
// HealthStatus represents overall health status
type HealthStatus string
const (
HealthHealthy HealthStatus = "healthy" // All replicas healthy
HealthDegraded HealthStatus = "degraded" // Some replicas unhealthy
HealthCritical HealthStatus = "critical" // Most replicas unhealthy
HealthFailed HealthStatus = "failed" // All replicas failed
)
// ReplicationPolicy represents replication behavior configuration
type ReplicationPolicy struct {
DefaultFactor int `json:"default_factor"` // Default replication factor
MinFactor int `json:"min_factor"` // Minimum replication factor
MaxFactor int `json:"max_factor"` // Maximum replication factor
PreferredZones []string `json:"preferred_zones"` // Preferred availability zones
AvoidSameNode bool `json:"avoid_same_node"` // Avoid same physical node
ConsistencyLevel ConsistencyLevel `json:"consistency_level"` // Consistency requirements
RepairThreshold float64 `json:"repair_threshold"` // Health threshold for repair
RebalanceInterval time.Duration `json:"rebalance_interval"` // Rebalancing frequency
}
// ConsistencyLevel represents consistency requirements
@@ -340,12 +346,12 @@ const (
// DHTStoreOptions represents options for DHT storage operations
type DHTStoreOptions struct {
ReplicationFactor int `json:"replication_factor"` // Number of replicas
TTL *time.Duration `json:"ttl,omitempty"` // Time to live
Priority Priority `json:"priority"` // Storage priority
Compress bool `json:"compress"` // Whether to compress
Checksum bool `json:"checksum"` // Whether to checksum
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// Priority represents storage operation priority
@@ -360,12 +366,12 @@ const (
// DHTMetadata represents metadata for DHT stored data
type DHTMetadata struct {
StoredAt time.Time `json:"stored_at"` // When stored
UpdatedAt time.Time `json:"updated_at"` // When last updated
Version int64 `json:"version"` // Version number
Size int64 `json:"size"` // Data size
Checksum string `json:"checksum"` // Data checksum
ReplicationFactor int `json:"replication_factor"` // Number of replicas
TTL *time.Time `json:"ttl,omitempty"` // Time to live
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}

View File

@@ -10,18 +10,18 @@ import (
"sync"
"time"
"chorus/pkg/dht"
"chorus/pkg/crypto"
"chorus/pkg/election"
"chorus/pkg/ucxl"
"chorus/pkg/config"
"chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/election"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
type DHTContextDistributor struct {
mu sync.RWMutex
dht dht.DHT
roleCrypto *crypto.RoleCrypto
election election.Election
config *config.Config
@@ -37,7 +37,7 @@ type DHTContextDistributor struct {
// NewDHTContextDistributor creates a new DHT-based context distributor
func NewDHTContextDistributor(
dht dht.DHT,
roleCrypto *crypto.RoleCrypto,
election election.Election,
config *config.Config,
@@ -147,36 +147,43 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
}
// Prepare context payload for role encryption
rawContext, err := json.Marshal(node)
if err != nil {
return d.recordError(fmt.Sprintf("failed to encrypt context: %v", err))
return d.recordError(fmt.Sprintf("failed to marshal context: %v", err))
}
// Create distribution metadata (checksum calculated per-role below)
metadata := &DistributionMetadata{
Address: node.UCXLAddress,
Roles: roles,
Version: 1,
VectorClock: clock,
DistributedBy: d.config.Agent.ID,
DistributedAt: time.Now(),
ReplicationFactor: d.getReplicationFactor(),
}
// Store encrypted data in DHT for each role
for _, role := range roles {
key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)
cipher, fingerprint, err := d.roleCrypto.EncryptForRole(rawContext, role)
if err != nil {
return d.recordError(fmt.Sprintf("failed to encrypt context for role %s: %v", role, err))
}
// Create role-specific storage package
storagePackage := &ContextStoragePackage{
EncryptedData: cipher,
KeyFingerprint: fingerprint,
Metadata: metadata,
Role: role,
StoredAt: time.Now(),
}
metadata.Checksum = d.calculateChecksum(cipher)
// Serialize for storage
storageBytes, err := json.Marshal(storagePackage)
if err != nil {
@@ -252,25 +259,30 @@ func (d *DHTContextDistributor) RetrieveContext(ctx context.Context, address ucx
}
// Decrypt context for role
plain, err := d.roleCrypto.DecryptForRole(storagePackage.EncryptedData, role, storagePackage.KeyFingerprint)
if err != nil {
return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
}
var contextNode slurpContext.ContextNode
if err := json.Unmarshal(plain, &contextNode); err != nil {
return nil, d.recordRetrievalError(fmt.Sprintf("failed to decode context: %v", err))
}
// Convert to resolved context
resolvedContext := &slurpContext.ResolvedContext{
UCXLAddress: contextNode.UCXLAddress,
Summary: contextNode.Summary,
Purpose: contextNode.Purpose,
Technologies: contextNode.Technologies,
Tags: contextNode.Tags,
Insights: contextNode.Insights,
ContextSourcePath: contextNode.Path,
InheritanceChain: []string{contextNode.Path},
ResolutionConfidence: contextNode.RAGConfidence,
BoundedDepth: 1,
GlobalContextsApplied: false,
ResolvedAt: time.Now(),
}
// Update statistics
@@ -304,15 +316,15 @@ func (d *DHTContextDistributor) UpdateContext(ctx context.Context, node *slurpCo
// Convert existing resolved context back to context node for comparison
existingNode := &slurpContext.ContextNode{
Path: existingContext.ContextSourcePath,
UCXLAddress: existingContext.UCXLAddress,
Summary: existingContext.Summary,
Purpose: existingContext.Purpose,
Technologies: existingContext.Technologies,
Tags: existingContext.Tags,
Insights: existingContext.Insights,
RAGConfidence: existingContext.ResolutionConfidence,
GeneratedAt: existingContext.ResolvedAt,
}
// Use conflict resolver to handle the update
@@ -357,7 +369,7 @@ func (d *DHTContextDistributor) DeleteContext(ctx context.Context, address ucxl.
func (d *DHTContextDistributor) ListDistributedContexts(ctx context.Context, role string, criteria *DistributionCriteria) ([]*DistributedContextInfo, error) {
// This is a simplified implementation
// In production, we'd maintain proper indexes and filtering
results := []*DistributedContextInfo{}
limit := 100
if criteria != nil && criteria.Limit > 0 {
@@ -380,13 +392,13 @@ func (d *DHTContextDistributor) Sync(ctx context.Context) (*SyncResult, error) {
}
result := &SyncResult{
SyncedContexts: 0, // Would be populated in real implementation
ConflictsResolved: 0,
Errors: []string{},
SyncTime: time.Since(start),
PeersContacted: len(d.dht.GetConnectedPeers()),
DataTransferred: 0,
SyncedAt: time.Now(),
}
return result, nil
@@ -453,28 +465,13 @@ func (d *DHTContextDistributor) calculateChecksum(data interface{}) string {
return hex.EncodeToString(hash[:])
}
// Start starts the distribution service
func (d *DHTContextDistributor) Start(ctx context.Context) error {
if d.gossipProtocol != nil {
if err := d.gossipProtocol.StartGossip(ctx); err != nil {
return fmt.Errorf("failed to start gossip protocol: %w", err)
}
}
return nil
}
@@ -488,22 +485,23 @@ func (d *DHTContextDistributor) Stop(ctx context.Context) error {
// ContextStoragePackage represents a complete package for DHT storage
type ContextStoragePackage struct {
EncryptedData []byte `json:"encrypted_data"`
KeyFingerprint string `json:"key_fingerprint,omitempty"`
Metadata *DistributionMetadata `json:"metadata"`
Role string `json:"role"`
StoredAt time.Time `json:"stored_at"`
}
// DistributionMetadata contains metadata for distributed context
type DistributionMetadata struct {
Address ucxl.Address `json:"address"`
Roles []string `json:"roles"`
Version int64 `json:"version"`
VectorClock *VectorClock `json:"vector_clock"`
DistributedBy string `json:"distributed_by"`
DistributedAt time.Time `json:"distributed_at"`
ReplicationFactor int `json:"replication_factor"`
Checksum string `json:"checksum"`
}
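For reference, the per-role packaging step that `DistributeContext` now performs can be read in isolation roughly as follows (sketch only; it reuses `EncryptForRole` and the storage types above, and assumes the package is named `distribution`):

```go
package distribution // assumed package name for this sketch

import (
	"encoding/json"
	"fmt"
	"time"

	"chorus/pkg/crypto"
	slurpContext "chorus/pkg/slurp/context"
)

// packageForRole marshals the context once, encrypts it for a single role, and
// wraps the ciphertext with its key fingerprint, mirroring DistributeContext.
func packageForRole(rc *crypto.RoleCrypto, node *slurpContext.ContextNode, meta *DistributionMetadata, role string) (*ContextStoragePackage, error) {
	raw, err := json.Marshal(node)
	if err != nil {
		return nil, fmt.Errorf("marshal context: %w", err)
	}
	cipher, fingerprint, err := rc.EncryptForRole(raw, role)
	if err != nil {
		return nil, fmt.Errorf("encrypt for role %s: %w", role, err)
	}
	return &ContextStoragePackage{
		EncryptedData:  cipher,
		KeyFingerprint: fingerprint,
		Metadata:       meta,
		Role:           role,
		StoredAt:       time.Now(),
	}, nil
}
```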
// DHTKeyGenerator implements KeyGenerator interface
@@ -532,65 +530,124 @@ func (kg *DHTKeyGenerator) GenerateReplicationKey(address string) string {
// Component constructors - these would be implemented in separate files
// NewReplicationManager creates a new replication manager
func NewReplicationManager(dht dht.DHT, config *config.Config) (ReplicationManager, error) {
impl, err := NewReplicationManagerImpl(dht, config)
if err != nil {
return nil, err
}
return impl, nil
}
// NewConflictResolver creates a new conflict resolver
func NewConflictResolver(dht dht.DHT, config *config.Config) (ConflictResolver, error) {
// Placeholder implementation until full resolver is wired
return &ConflictResolverImpl{}, nil
}
// NewGossipProtocol creates a new gossip protocol
func NewGossipProtocol(dht dht.DHT, config *config.Config) (GossipProtocol, error) {
impl, err := NewGossipProtocolImpl(dht, config)
if err != nil {
return nil, err
}
return impl, nil
}
// NewNetworkManager creates a new network manager
func NewNetworkManager(dht dht.DHT, config *config.Config) (NetworkManager, error) {
impl, err := NewNetworkManagerImpl(dht, config)
if err != nil {
return nil, err
}
return impl, nil
}
// NewVectorClockManager creates a new vector clock manager
func NewVectorClockManager(dht dht.DHT, nodeID string) (VectorClockManager, error) {
return &defaultVectorClockManager{
clocks: make(map[string]*VectorClock),
}, nil
}
// Placeholder structs for components - these would be properly implemented
// ConflictResolverImpl is a temporary stub until the full resolver is implemented
type ConflictResolverImpl struct{}
func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
return &ConflictResolution{
Address: local.UCXLAddress,
ResolutionType: ResolutionMerged,
MergedContext: local,
ResolutionTime: time.Millisecond,
ResolvedAt: time.Now(),
Confidence: 0.95,
}, nil
}
type NetworkManagerImpl struct{}
type VectorClockManagerImpl struct{}
func (vcm *VectorClockManagerImpl) GetClock(nodeID string) (*VectorClock, error) {
return &VectorClock{
Clock: map[string]int64{nodeID: time.Now().Unix()},
UpdatedAt: time.Now(),
}, nil
}
// defaultVectorClockManager provides a minimal vector clock store for SEC-SLURP scaffolding.
type defaultVectorClockManager struct {
mu sync.Mutex
clocks map[string]*VectorClock
}
func (vcm *defaultVectorClockManager) GetClock(nodeID string) (*VectorClock, error) {
vcm.mu.Lock()
defer vcm.mu.Unlock()
if clock, ok := vcm.clocks[nodeID]; ok {
return clock, nil
}
clock := &VectorClock{
Clock: map[string]int64{nodeID: time.Now().Unix()},
UpdatedAt: time.Now(),
}
vcm.clocks[nodeID] = clock
return clock, nil
}
func (vcm *defaultVectorClockManager) UpdateClock(nodeID string, clock *VectorClock) error {
vcm.mu.Lock()
defer vcm.mu.Unlock()
vcm.clocks[nodeID] = clock
return nil
}
func (vcm *defaultVectorClockManager) CompareClock(clock1, clock2 *VectorClock) ClockRelation {
if clock1 == nil || clock2 == nil {
return ClockConcurrent
}
if clock1.UpdatedAt.Before(clock2.UpdatedAt) {
return ClockBefore
}
if clock1.UpdatedAt.After(clock2.UpdatedAt) {
return ClockAfter
}
return ClockEqual
}
func (vcm *defaultVectorClockManager) MergeClock(clocks []*VectorClock) *VectorClock {
if len(clocks) == 0 {
return &VectorClock{
Clock: map[string]int64{},
UpdatedAt: time.Now(),
}
}
merged := &VectorClock{
Clock: make(map[string]int64),
UpdatedAt: clocks[0].UpdatedAt,
}
for _, clock := range clocks {
if clock == nil {
continue
}
if clock.UpdatedAt.After(merged.UpdatedAt) {
merged.UpdatedAt = clock.UpdatedAt
}
for node, value := range clock.Clock {
if existing, ok := merged.Clock[node]; !ok || value > existing {
merged.Clock[node] = value
}
}
}
return merged
}
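
The merge keeps the highest counter seen per node and the latest wall-clock timestamp; note that CompareClock orders by `UpdatedAt` rather than per-node counters, so it is a wall-clock approximation of causality. A minimal standalone sketch of the merge behaviour (with `VectorClock` redeclared locally for illustration):

```go
package main

import (
	"fmt"
	"time"
)

// VectorClock mirrors the fields used by defaultVectorClockManager (local copy for illustration).
type VectorClock struct {
	Clock     map[string]int64
	UpdatedAt time.Time
}

// mergeClocks keeps the highest counter per node, matching MergeClock above.
func mergeClocks(clocks ...*VectorClock) *VectorClock {
	merged := &VectorClock{Clock: map[string]int64{}}
	for _, c := range clocks {
		if c == nil {
			continue
		}
		if c.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = c.UpdatedAt
		}
		for node, v := range c.Clock {
			if v > merged.Clock[node] {
				merged.Clock[node] = v
			}
		}
	}
	return merged
}

func main() {
	a := &VectorClock{Clock: map[string]int64{"node-a": 3, "node-b": 1}, UpdatedAt: time.Now()}
	b := &VectorClock{Clock: map[string]int64{"node-b": 5}, UpdatedAt: time.Now().Add(time.Second)}
	m := mergeClocks(a, b)
	fmt.Println(m.Clock["node-a"], m.Clock["node-b"]) // 3 5
}
```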

View File

@@ -15,48 +15,48 @@ import (
// MonitoringSystem provides comprehensive monitoring for the distributed context system
type MonitoringSystem struct {
mu sync.RWMutex
config *config.Config
metrics *MetricsCollector
healthChecks *HealthCheckManager
alertManager *AlertManager
dashboard *DashboardServer
logManager *LogManager
traceManager *TraceManager
mu sync.RWMutex
config *config.Config
metrics *MetricsCollector
healthChecks *HealthCheckManager
alertManager *AlertManager
dashboard *DashboardServer
logManager *LogManager
traceManager *TraceManager
// State
running bool
monitoringPort int
updateInterval time.Duration
retentionPeriod time.Duration
running bool
monitoringPort int
updateInterval time.Duration
retentionPeriod time.Duration
}
// MetricsCollector collects and aggregates system metrics
type MetricsCollector struct {
mu sync.RWMutex
timeSeries map[string]*TimeSeries
counters map[string]*Counter
gauges map[string]*Gauge
histograms map[string]*Histogram
customMetrics map[string]*CustomMetric
aggregatedStats *AggregatedStatistics
exporters []MetricsExporter
lastCollection time.Time
mu sync.RWMutex
timeSeries map[string]*TimeSeries
counters map[string]*Counter
gauges map[string]*Gauge
histograms map[string]*Histogram
customMetrics map[string]*CustomMetric
aggregatedStats *AggregatedStatistics
exporters []MetricsExporter
lastCollection time.Time
}
// TimeSeries represents a time-series metric
type TimeSeries struct {
Name string `json:"name"`
Labels map[string]string `json:"labels"`
DataPoints []*TimeSeriesPoint `json:"data_points"`
Name string `json:"name"`
Labels map[string]string `json:"labels"`
DataPoints []*TimeSeriesPoint `json:"data_points"`
RetentionTTL time.Duration `json:"retention_ttl"`
LastUpdated time.Time `json:"last_updated"`
LastUpdated time.Time `json:"last_updated"`
}
// TimeSeriesPoint represents a single data point in a time series
type TimeSeriesPoint struct {
Timestamp time.Time `json:"timestamp"`
Value float64 `json:"value"`
Timestamp time.Time `json:"timestamp"`
Value float64 `json:"value"`
Labels map[string]string `json:"labels,omitempty"`
}
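
A hedged sketch of how `RetentionTTL` could bound a series when points are appended; `appendPoint` is a hypothetical helper assumed to live in the same package as the types above:

```go
// appendPoint adds a sample and drops points older than the retention window.
func appendPoint(ts *TimeSeries, value float64, now time.Time) {
	ts.DataPoints = append(ts.DataPoints, &TimeSeriesPoint{Timestamp: now, Value: value})
	cutoff := now.Add(-ts.RetentionTTL)
	i := 0
	for i < len(ts.DataPoints) && ts.DataPoints[i].Timestamp.Before(cutoff) {
		i++
	}
	ts.DataPoints = ts.DataPoints[i:] // keep only points inside the retention window
	ts.LastUpdated = now
}
```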
@@ -64,7 +64,7 @@ type TimeSeriesPoint struct {
type Counter struct {
Name string `json:"name"`
Value int64 `json:"value"`
Rate float64 `json:"rate"` // per second
Rate float64 `json:"rate"` // per second
Labels map[string]string `json:"labels"`
LastUpdated time.Time `json:"last_updated"`
}
@@ -82,13 +82,13 @@ type Gauge struct {
// Histogram represents distribution of values
type Histogram struct {
Name string `json:"name"`
Buckets map[float64]int64 `json:"buckets"`
Count int64 `json:"count"`
Sum float64 `json:"sum"`
Labels map[string]string `json:"labels"`
Name string `json:"name"`
Buckets map[float64]int64 `json:"buckets"`
Count int64 `json:"count"`
Sum float64 `json:"sum"`
Labels map[string]string `json:"labels"`
Percentiles map[float64]float64 `json:"percentiles"`
LastUpdated time.Time `json:"last_updated"`
LastUpdated time.Time `json:"last_updated"`
}
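
The bucket semantics are not spelled out here; assuming `Buckets` maps an upper bound to the (non-cumulative) count of observations in that bucket, a percentile could be approximated as in this sketch (`sort` and `math` imports assumed):

```go
// approxPercentile returns the smallest bucket bound covering the requested quantile q.
func approxPercentile(h *Histogram, q float64) float64 {
	if h.Count == 0 {
		return 0
	}
	bounds := make([]float64, 0, len(h.Buckets))
	for b := range h.Buckets {
		bounds = append(bounds, b)
	}
	sort.Float64s(bounds)
	target := int64(math.Ceil(q * float64(h.Count)))
	var seen int64
	for _, b := range bounds {
		seen += h.Buckets[b]
		if seen >= target {
			return b
		}
	}
	return bounds[len(bounds)-1] // fall back to the largest bound
}
```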
// CustomMetric represents application-specific metrics
@@ -114,81 +114,81 @@ const (
// AggregatedStatistics provides high-level system statistics
type AggregatedStatistics struct {
SystemOverview *SystemOverview `json:"system_overview"`
PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
HealthMetrics *HealthOverview `json:"health_metrics"`
ErrorMetrics *ErrorOverview `json:"error_metrics"`
ResourceMetrics *ResourceOverview `json:"resource_metrics"`
NetworkMetrics *NetworkOverview `json:"network_metrics"`
LastUpdated time.Time `json:"last_updated"`
SystemOverview *SystemOverview `json:"system_overview"`
PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
HealthMetrics *HealthOverview `json:"health_metrics"`
ErrorMetrics *ErrorOverview `json:"error_metrics"`
ResourceMetrics *ResourceOverview `json:"resource_metrics"`
NetworkMetrics *NetworkOverview `json:"network_metrics"`
LastUpdated time.Time `json:"last_updated"`
}
// SystemOverview provides system-wide overview metrics
type SystemOverview struct {
TotalNodes int `json:"total_nodes"`
HealthyNodes int `json:"healthy_nodes"`
TotalContexts int64 `json:"total_contexts"`
DistributedContexts int64 `json:"distributed_contexts"`
ReplicationFactor float64 `json:"average_replication_factor"`
SystemUptime time.Duration `json:"system_uptime"`
ClusterVersion string `json:"cluster_version"`
LastRestart time.Time `json:"last_restart"`
TotalNodes int `json:"total_nodes"`
HealthyNodes int `json:"healthy_nodes"`
TotalContexts int64 `json:"total_contexts"`
DistributedContexts int64 `json:"distributed_contexts"`
ReplicationFactor float64 `json:"average_replication_factor"`
SystemUptime time.Duration `json:"system_uptime"`
ClusterVersion string `json:"cluster_version"`
LastRestart time.Time `json:"last_restart"`
}
// PerformanceOverview provides performance metrics
type PerformanceOverview struct {
RequestsPerSecond float64 `json:"requests_per_second"`
AverageResponseTime time.Duration `json:"average_response_time"`
P95ResponseTime time.Duration `json:"p95_response_time"`
P99ResponseTime time.Duration `json:"p99_response_time"`
Throughput float64 `json:"throughput_mbps"`
CacheHitRate float64 `json:"cache_hit_rate"`
QueueDepth int `json:"queue_depth"`
ActiveConnections int `json:"active_connections"`
RequestsPerSecond float64 `json:"requests_per_second"`
AverageResponseTime time.Duration `json:"average_response_time"`
P95ResponseTime time.Duration `json:"p95_response_time"`
P99ResponseTime time.Duration `json:"p99_response_time"`
Throughput float64 `json:"throughput_mbps"`
CacheHitRate float64 `json:"cache_hit_rate"`
QueueDepth int `json:"queue_depth"`
ActiveConnections int `json:"active_connections"`
}
// HealthOverview provides health-related metrics
type HealthOverview struct {
OverallHealthScore float64 `json:"overall_health_score"`
ComponentHealth map[string]float64 `json:"component_health"`
FailedHealthChecks int `json:"failed_health_checks"`
LastHealthCheck time.Time `json:"last_health_check"`
HealthTrend string `json:"health_trend"` // improving, stable, degrading
CriticalAlerts int `json:"critical_alerts"`
WarningAlerts int `json:"warning_alerts"`
OverallHealthScore float64 `json:"overall_health_score"`
ComponentHealth map[string]float64 `json:"component_health"`
FailedHealthChecks int `json:"failed_health_checks"`
LastHealthCheck time.Time `json:"last_health_check"`
HealthTrend string `json:"health_trend"` // improving, stable, degrading
CriticalAlerts int `json:"critical_alerts"`
WarningAlerts int `json:"warning_alerts"`
}
// ErrorOverview provides error-related metrics
type ErrorOverview struct {
TotalErrors int64 `json:"total_errors"`
ErrorRate float64 `json:"error_rate"`
ErrorsByType map[string]int64 `json:"errors_by_type"`
ErrorsByComponent map[string]int64 `json:"errors_by_component"`
LastError *ErrorEvent `json:"last_error"`
ErrorTrend string `json:"error_trend"` // increasing, stable, decreasing
TotalErrors int64 `json:"total_errors"`
ErrorRate float64 `json:"error_rate"`
ErrorsByType map[string]int64 `json:"errors_by_type"`
ErrorsByComponent map[string]int64 `json:"errors_by_component"`
LastError *ErrorEvent `json:"last_error"`
ErrorTrend string `json:"error_trend"` // increasing, stable, decreasing
}
// ResourceOverview provides resource utilization metrics
type ResourceOverview struct {
CPUUtilization float64 `json:"cpu_utilization"`
MemoryUtilization float64 `json:"memory_utilization"`
DiskUtilization float64 `json:"disk_utilization"`
NetworkUtilization float64 `json:"network_utilization"`
StorageUsed int64 `json:"storage_used_bytes"`
StorageAvailable int64 `json:"storage_available_bytes"`
FileDescriptors int `json:"open_file_descriptors"`
Goroutines int `json:"goroutines"`
CPUUtilization float64 `json:"cpu_utilization"`
MemoryUtilization float64 `json:"memory_utilization"`
DiskUtilization float64 `json:"disk_utilization"`
NetworkUtilization float64 `json:"network_utilization"`
StorageUsed int64 `json:"storage_used_bytes"`
StorageAvailable int64 `json:"storage_available_bytes"`
FileDescriptors int `json:"open_file_descriptors"`
Goroutines int `json:"goroutines"`
}
// NetworkOverview provides network-related metrics
type NetworkOverview struct {
TotalConnections int `json:"total_connections"`
ActiveConnections int `json:"active_connections"`
BandwidthUtilization float64 `json:"bandwidth_utilization"`
PacketLossRate float64 `json:"packet_loss_rate"`
AverageLatency time.Duration `json:"average_latency"`
NetworkPartitions int `json:"network_partitions"`
DataTransferred int64 `json:"data_transferred_bytes"`
TotalConnections int `json:"total_connections"`
ActiveConnections int `json:"active_connections"`
BandwidthUtilization float64 `json:"bandwidth_utilization"`
PacketLossRate float64 `json:"packet_loss_rate"`
AverageLatency time.Duration `json:"average_latency"`
NetworkPartitions int `json:"network_partitions"`
DataTransferred int64 `json:"data_transferred_bytes"`
}
// MetricsExporter exports metrics to external systems
@@ -200,49 +200,49 @@ type MetricsExporter interface {
// HealthCheckManager manages system health checks
type HealthCheckManager struct {
mu sync.RWMutex
healthChecks map[string]*HealthCheck
checkResults map[string]*HealthCheckResult
schedules map[string]*HealthCheckSchedule
running bool
mu sync.RWMutex
healthChecks map[string]*HealthCheck
checkResults map[string]*HealthCheckResult
schedules map[string]*HealthCheckSchedule
running bool
}
// HealthCheck represents a single health check
type HealthCheck struct {
Name string `json:"name"`
Description string `json:"description"`
CheckType HealthCheckType `json:"check_type"`
Target string `json:"target"`
Timeout time.Duration `json:"timeout"`
Interval time.Duration `json:"interval"`
Retries int `json:"retries"`
Metadata map[string]interface{} `json:"metadata"`
Enabled bool `json:"enabled"`
CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
Name string `json:"name"`
Description string `json:"description"`
CheckType HealthCheckType `json:"check_type"`
Target string `json:"target"`
Timeout time.Duration `json:"timeout"`
Interval time.Duration `json:"interval"`
Retries int `json:"retries"`
Metadata map[string]interface{} `json:"metadata"`
Enabled bool `json:"enabled"`
CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
}
// HealthCheckType represents different types of health checks
type HealthCheckType string
const (
HealthCheckTypeHTTP HealthCheckType = "http"
HealthCheckTypeTCP HealthCheckType = "tcp"
HealthCheckTypeCustom HealthCheckType = "custom"
HealthCheckTypeComponent HealthCheckType = "component"
HealthCheckTypeDatabase HealthCheckType = "database"
HealthCheckTypeService HealthCheckType = "service"
HealthCheckTypeHTTP HealthCheckType = "http"
HealthCheckTypeTCP HealthCheckType = "tcp"
HealthCheckTypeCustom HealthCheckType = "custom"
HealthCheckTypeComponent HealthCheckType = "component"
HealthCheckTypeDatabase HealthCheckType = "database"
HealthCheckTypeService HealthCheckType = "service"
)
// HealthCheckResult represents the result of a health check
type HealthCheckResult struct {
CheckName string `json:"check_name"`
Status HealthCheckStatus `json:"status"`
ResponseTime time.Duration `json:"response_time"`
Message string `json:"message"`
Details map[string]interface{} `json:"details"`
Error string `json:"error,omitempty"`
Timestamp time.Time `json:"timestamp"`
Attempt int `json:"attempt"`
CheckName string `json:"check_name"`
Status HealthCheckStatus `json:"status"`
ResponseTime time.Duration `json:"response_time"`
Message string `json:"message"`
Details map[string]interface{} `json:"details"`
Error string `json:"error,omitempty"`
Timestamp time.Time `json:"timestamp"`
Attempt int `json:"attempt"`
}
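
A sketch of how `Timeout` and `Retries` could be honoured when executing a check; the `HealthCheckStatusHealthy`/`HealthCheckStatusUnhealthy` constant names are assumed, since the status constants are elided from this hunk:

```go
// runCheck runs a health check with per-attempt timeout and retry handling.
func runCheck(ctx context.Context, hc *HealthCheck) *HealthCheckResult {
	var last *HealthCheckResult
	for attempt := 1; attempt <= hc.Retries+1; attempt++ {
		cctx, cancel := context.WithTimeout(ctx, hc.Timeout)
		start := time.Now()
		result, err := hc.CheckFunction(cctx)
		cancel()
		if err != nil || result == nil {
			result = &HealthCheckResult{
				CheckName: hc.Name,
				Status:    HealthCheckStatusUnhealthy, // assumed constant name
				Message:   "check function failed",
			}
			if err != nil {
				result.Error = err.Error()
			}
		}
		result.ResponseTime = time.Since(start)
		result.Timestamp = time.Now()
		result.Attempt = attempt
		last = result
		if result.Status == HealthCheckStatusHealthy { // assumed constant name
			return result
		}
	}
	return last
}
```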
// HealthCheckStatus represents the status of a health check
@@ -258,45 +258,45 @@ const (
// HealthCheckSchedule defines when health checks should run
type HealthCheckSchedule struct {
CheckName string `json:"check_name"`
Interval time.Duration `json:"interval"`
NextRun time.Time `json:"next_run"`
LastRun time.Time `json:"last_run"`
Enabled bool `json:"enabled"`
FailureCount int `json:"failure_count"`
CheckName string `json:"check_name"`
Interval time.Duration `json:"interval"`
NextRun time.Time `json:"next_run"`
LastRun time.Time `json:"last_run"`
Enabled bool `json:"enabled"`
FailureCount int `json:"failure_count"`
}
// AlertManager manages system alerts and notifications
type AlertManager struct {
mu sync.RWMutex
alertRules map[string]*AlertRule
activeAlerts map[string]*Alert
alertHistory []*Alert
notifiers []AlertNotifier
silences map[string]*AlertSilence
running bool
mu sync.RWMutex
alertRules map[string]*AlertRule
activeAlerts map[string]*Alert
alertHistory []*Alert
notifiers []AlertNotifier
silences map[string]*AlertSilence
running bool
}
// AlertRule defines conditions for triggering alerts
type AlertRule struct {
Name string `json:"name"`
Description string `json:"description"`
Severity AlertSeverity `json:"severity"`
Conditions []*AlertCondition `json:"conditions"`
Duration time.Duration `json:"duration"` // How long condition must persist
Cooldown time.Duration `json:"cooldown"` // Minimum time between alerts
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
Enabled bool `json:"enabled"`
LastTriggered *time.Time `json:"last_triggered,omitempty"`
Name string `json:"name"`
Description string `json:"description"`
Severity AlertSeverity `json:"severity"`
Conditions []*AlertCondition `json:"conditions"`
Duration time.Duration `json:"duration"` // How long condition must persist
Cooldown time.Duration `json:"cooldown"` // Minimum time between alerts
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
Enabled bool `json:"enabled"`
LastTriggered *time.Time `json:"last_triggered,omitempty"`
}
// AlertCondition defines a single condition for an alert
type AlertCondition struct {
MetricName string `json:"metric_name"`
Operator ConditionOperator `json:"operator"`
Threshold float64 `json:"threshold"`
Duration time.Duration `json:"duration"`
MetricName string `json:"metric_name"`
Operator ConditionOperator `json:"operator"`
Threshold float64 `json:"threshold"`
Duration time.Duration `json:"duration"`
}
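
Evaluation of a condition against a sampled metric value might look like the sketch below; the `ConditionOperator` values are not shown in this hunk, so the string literals here are assumptions:

```go
// evaluate compares a metric sample against an AlertCondition threshold.
func evaluate(cond *AlertCondition, value float64) bool {
	switch cond.Operator {
	case "gt": // assumed operator spellings
		return value > cond.Threshold
	case "lt":
		return value < cond.Threshold
	case "eq":
		return value == cond.Threshold
	default:
		return false
	}
}
```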
// ConditionOperator represents comparison operators for alert conditions
@@ -313,39 +313,39 @@ const (
// Alert represents an active alert
type Alert struct {
ID string `json:"id"`
RuleName string `json:"rule_name"`
Severity AlertSeverity `json:"severity"`
Status AlertStatus `json:"status"`
Message string `json:"message"`
Details map[string]interface{} `json:"details"`
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
StartsAt time.Time `json:"starts_at"`
EndsAt *time.Time `json:"ends_at,omitempty"`
LastUpdated time.Time `json:"last_updated"`
AckBy string `json:"acknowledged_by,omitempty"`
AckAt *time.Time `json:"acknowledged_at,omitempty"`
ID string `json:"id"`
RuleName string `json:"rule_name"`
Severity AlertSeverity `json:"severity"`
Status AlertStatus `json:"status"`
Message string `json:"message"`
Details map[string]interface{} `json:"details"`
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
StartsAt time.Time `json:"starts_at"`
EndsAt *time.Time `json:"ends_at,omitempty"`
LastUpdated time.Time `json:"last_updated"`
AckBy string `json:"acknowledged_by,omitempty"`
AckAt *time.Time `json:"acknowledged_at,omitempty"`
}
// AlertSeverity represents the severity level of an alert
type AlertSeverity string
const (
SeverityInfo AlertSeverity = "info"
SeverityWarning AlertSeverity = "warning"
SeverityError AlertSeverity = "error"
SeverityCritical AlertSeverity = "critical"
AlertSeverityInfo AlertSeverity = "info"
AlertSeverityWarning AlertSeverity = "warning"
AlertSeverityError AlertSeverity = "error"
AlertSeverityCritical AlertSeverity = "critical"
)
// AlertStatus represents the current status of an alert
type AlertStatus string
const (
AlertStatusFiring AlertStatus = "firing"
AlertStatusResolved AlertStatus = "resolved"
AlertStatusFiring AlertStatus = "firing"
AlertStatusResolved AlertStatus = "resolved"
AlertStatusAcknowledged AlertStatus = "acknowledged"
AlertStatusSilenced AlertStatus = "silenced"
AlertStatusSilenced AlertStatus = "silenced"
)
// AlertNotifier sends alert notifications
@@ -357,64 +357,64 @@ type AlertNotifier interface {
// AlertSilence represents a silenced alert
type AlertSilence struct {
ID string `json:"id"`
Matchers map[string]string `json:"matchers"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
CreatedBy string `json:"created_by"`
Comment string `json:"comment"`
Active bool `json:"active"`
ID string `json:"id"`
Matchers map[string]string `json:"matchers"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
CreatedBy string `json:"created_by"`
Comment string `json:"comment"`
Active bool `json:"active"`
}
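
A hypothetical helper showing how the matchers and time window of a silence could be applied to an alert's labels:

```go
// isSilenced reports whether any active silence matches all of the alert's labels.
func isSilenced(alert *Alert, silences map[string]*AlertSilence, now time.Time) bool {
	for _, s := range silences {
		if !s.Active || now.Before(s.StartTime) || now.After(s.EndTime) {
			continue
		}
		matched := true
		for k, v := range s.Matchers {
			if alert.Labels[k] != v {
				matched = false
				break
			}
		}
		if matched {
			return true
		}
	}
	return false
}
```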
// DashboardServer provides web-based monitoring dashboard
type DashboardServer struct {
mu sync.RWMutex
server *http.Server
dashboards map[string]*Dashboard
widgets map[string]*Widget
customPages map[string]*CustomPage
running bool
port int
mu sync.RWMutex
server *http.Server
dashboards map[string]*Dashboard
widgets map[string]*Widget
customPages map[string]*CustomPage
running bool
port int
}
// Dashboard represents a monitoring dashboard
type Dashboard struct {
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
Widgets []*Widget `json:"widgets"`
Layout *DashboardLayout `json:"layout"`
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
Widgets []*Widget `json:"widgets"`
Layout *DashboardLayout `json:"layout"`
Settings *DashboardSettings `json:"settings"`
CreatedBy string `json:"created_by"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// Widget represents a dashboard widget
type Widget struct {
ID string `json:"id"`
Type WidgetType `json:"type"`
Title string `json:"title"`
DataSource string `json:"data_source"`
Query string `json:"query"`
Settings map[string]interface{} `json:"settings"`
Position *WidgetPosition `json:"position"`
RefreshRate time.Duration `json:"refresh_rate"`
LastUpdated time.Time `json:"last_updated"`
ID string `json:"id"`
Type WidgetType `json:"type"`
Title string `json:"title"`
DataSource string `json:"data_source"`
Query string `json:"query"`
Settings map[string]interface{} `json:"settings"`
Position *WidgetPosition `json:"position"`
RefreshRate time.Duration `json:"refresh_rate"`
LastUpdated time.Time `json:"last_updated"`
}
// WidgetType represents different types of dashboard widgets
type WidgetType string
const (
WidgetTypeMetric WidgetType = "metric"
WidgetTypeChart WidgetType = "chart"
WidgetTypeTable WidgetType = "table"
WidgetTypeAlert WidgetType = "alert"
WidgetTypeHealth WidgetType = "health"
WidgetTypeTopology WidgetType = "topology"
WidgetTypeLog WidgetType = "log"
WidgetTypeCustom WidgetType = "custom"
WidgetTypeMetric WidgetType = "metric"
WidgetTypeChart WidgetType = "chart"
WidgetTypeTable WidgetType = "table"
WidgetTypeAlert WidgetType = "alert"
WidgetTypeHealth WidgetType = "health"
WidgetTypeTopology WidgetType = "topology"
WidgetTypeLog WidgetType = "log"
WidgetTypeCustom WidgetType = "custom"
)
// WidgetPosition defines widget position and size
@@ -427,11 +427,11 @@ type WidgetPosition struct {
// DashboardLayout defines dashboard layout settings
type DashboardLayout struct {
Columns int `json:"columns"`
RowHeight int `json:"row_height"`
Margins [2]int `json:"margins"` // [x, y]
Spacing [2]int `json:"spacing"` // [x, y]
Breakpoints map[string]int `json:"breakpoints"`
Columns int `json:"columns"`
RowHeight int `json:"row_height"`
Margins [2]int `json:"margins"` // [x, y]
Spacing [2]int `json:"spacing"` // [x, y]
Breakpoints map[string]int `json:"breakpoints"`
}
// DashboardSettings contains dashboard configuration
@@ -446,43 +446,43 @@ type DashboardSettings struct {
// CustomPage represents a custom monitoring page
type CustomPage struct {
Path string `json:"path"`
Title string `json:"title"`
Content string `json:"content"`
ContentType string `json:"content_type"`
Handler http.HandlerFunc `json:"-"`
Path string `json:"path"`
Title string `json:"title"`
Content string `json:"content"`
ContentType string `json:"content_type"`
Handler http.HandlerFunc `json:"-"`
}
// LogManager manages system logs and log analysis
type LogManager struct {
mu sync.RWMutex
logSources map[string]*LogSource
logEntries []*LogEntry
logAnalyzers []LogAnalyzer
mu sync.RWMutex
logSources map[string]*LogSource
logEntries []*LogEntry
logAnalyzers []LogAnalyzer
retentionPolicy *LogRetentionPolicy
running bool
running bool
}
// LogSource represents a source of log data
type LogSource struct {
Name string `json:"name"`
Type LogSourceType `json:"type"`
Location string `json:"location"`
Format LogFormat `json:"format"`
Labels map[string]string `json:"labels"`
Enabled bool `json:"enabled"`
LastRead time.Time `json:"last_read"`
Name string `json:"name"`
Type LogSourceType `json:"type"`
Location string `json:"location"`
Format LogFormat `json:"format"`
Labels map[string]string `json:"labels"`
Enabled bool `json:"enabled"`
LastRead time.Time `json:"last_read"`
}
// LogSourceType represents different types of log sources
type LogSourceType string
const (
LogSourceTypeFile LogSourceType = "file"
LogSourceTypeHTTP LogSourceType = "http"
LogSourceTypeStream LogSourceType = "stream"
LogSourceTypeDatabase LogSourceType = "database"
LogSourceTypeCustom LogSourceType = "custom"
LogSourceTypeFile LogSourceType = "file"
LogSourceTypeHTTP LogSourceType = "http"
LogSourceTypeStream LogSourceType = "stream"
LogSourceTypeDatabase LogSourceType = "database"
LogSourceTypeCustom LogSourceType = "custom"
)
// LogFormat represents log entry format
@@ -497,14 +497,14 @@ const (
// LogEntry represents a single log entry
type LogEntry struct {
Timestamp time.Time `json:"timestamp"`
Level LogLevel `json:"level"`
Source string `json:"source"`
Message string `json:"message"`
Fields map[string]interface{} `json:"fields"`
Labels map[string]string `json:"labels"`
TraceID string `json:"trace_id,omitempty"`
SpanID string `json:"span_id,omitempty"`
Timestamp time.Time `json:"timestamp"`
Level LogLevel `json:"level"`
Source string `json:"source"`
Message string `json:"message"`
Fields map[string]interface{} `json:"fields"`
Labels map[string]string `json:"labels"`
TraceID string `json:"trace_id,omitempty"`
SpanID string `json:"span_id,omitempty"`
}
// LogLevel represents log entry severity
@@ -527,22 +527,22 @@ type LogAnalyzer interface {
// LogAnalysisResult represents the result of log analysis
type LogAnalysisResult struct {
AnalyzerName string `json:"analyzer_name"`
Anomalies []*LogAnomaly `json:"anomalies"`
Patterns []*LogPattern `json:"patterns"`
Statistics *LogStatistics `json:"statistics"`
Recommendations []string `json:"recommendations"`
AnalyzedAt time.Time `json:"analyzed_at"`
AnalyzerName string `json:"analyzer_name"`
Anomalies []*LogAnomaly `json:"anomalies"`
Patterns []*LogPattern `json:"patterns"`
Statistics *LogStatistics `json:"statistics"`
Recommendations []string `json:"recommendations"`
AnalyzedAt time.Time `json:"analyzed_at"`
}
// LogAnomaly represents detected log anomaly
type LogAnomaly struct {
Type AnomalyType `json:"type"`
Severity AlertSeverity `json:"severity"`
Description string `json:"description"`
Entries []*LogEntry `json:"entries"`
Confidence float64 `json:"confidence"`
DetectedAt time.Time `json:"detected_at"`
Type AnomalyType `json:"type"`
Severity AlertSeverity `json:"severity"`
Description string `json:"description"`
Entries []*LogEntry `json:"entries"`
Confidence float64 `json:"confidence"`
DetectedAt time.Time `json:"detected_at"`
}
// AnomalyType represents different types of log anomalies
@@ -558,38 +558,38 @@ const (
// LogPattern represents detected log pattern
type LogPattern struct {
Pattern string `json:"pattern"`
Frequency int `json:"frequency"`
LastSeen time.Time `json:"last_seen"`
Sources []string `json:"sources"`
Confidence float64 `json:"confidence"`
Pattern string `json:"pattern"`
Frequency int `json:"frequency"`
LastSeen time.Time `json:"last_seen"`
Sources []string `json:"sources"`
Confidence float64 `json:"confidence"`
}
// LogStatistics provides log statistics
type LogStatistics struct {
TotalEntries int64 `json:"total_entries"`
EntriesByLevel map[LogLevel]int64 `json:"entries_by_level"`
EntriesBySource map[string]int64 `json:"entries_by_source"`
ErrorRate float64 `json:"error_rate"`
AverageRate float64 `json:"average_rate"`
TimeRange [2]time.Time `json:"time_range"`
TotalEntries int64 `json:"total_entries"`
EntriesByLevel map[LogLevel]int64 `json:"entries_by_level"`
EntriesBySource map[string]int64 `json:"entries_by_source"`
ErrorRate float64 `json:"error_rate"`
AverageRate float64 `json:"average_rate"`
TimeRange [2]time.Time `json:"time_range"`
}
// LogRetentionPolicy defines log retention rules
type LogRetentionPolicy struct {
RetentionPeriod time.Duration `json:"retention_period"`
MaxEntries int64 `json:"max_entries"`
CompressionAge time.Duration `json:"compression_age"`
ArchiveAge time.Duration `json:"archive_age"`
Rules []*RetentionRule `json:"rules"`
RetentionPeriod time.Duration `json:"retention_period"`
MaxEntries int64 `json:"max_entries"`
CompressionAge time.Duration `json:"compression_age"`
ArchiveAge time.Duration `json:"archive_age"`
Rules []*RetentionRule `json:"rules"`
}
// RetentionRule defines specific retention rules
type RetentionRule struct {
Name string `json:"name"`
Condition string `json:"condition"` // Query expression
Retention time.Duration `json:"retention"`
Action RetentionAction `json:"action"`
Name string `json:"name"`
Condition string `json:"condition"` // Query expression
Retention time.Duration `json:"retention"`
Action RetentionAction `json:"action"`
}
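
A hedged sketch of a pruning pass over in-memory entries using the retention fields above; compression/archive actions and per-rule conditions are omitted for brevity:

```go
// applyRetention drops entries older than the retention period and caps the total count.
func applyRetention(entries []*LogEntry, policy *LogRetentionPolicy, now time.Time) []*LogEntry {
	cutoff := now.Add(-policy.RetentionPeriod)
	kept := entries[:0]
	for _, e := range entries {
		if e.Timestamp.After(cutoff) {
			kept = append(kept, e)
		}
	}
	if policy.MaxEntries > 0 && int64(len(kept)) > policy.MaxEntries {
		kept = kept[int64(len(kept))-policy.MaxEntries:] // keep the newest MaxEntries
	}
	return kept
}
```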
// RetentionAction represents retention actions
@@ -603,47 +603,47 @@ const (
// TraceManager manages distributed tracing
type TraceManager struct {
mu sync.RWMutex
traces map[string]*Trace
spans map[string]*Span
samplers []TraceSampler
exporters []TraceExporter
running bool
mu sync.RWMutex
traces map[string]*Trace
spans map[string]*Span
samplers []TraceSampler
exporters []TraceExporter
running bool
}
// Trace represents a distributed trace
type Trace struct {
TraceID string `json:"trace_id"`
Spans []*Span `json:"spans"`
Duration time.Duration `json:"duration"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Status TraceStatus `json:"status"`
Tags map[string]string `json:"tags"`
Operations []string `json:"operations"`
TraceID string `json:"trace_id"`
Spans []*Span `json:"spans"`
Duration time.Duration `json:"duration"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Status TraceStatus `json:"status"`
Tags map[string]string `json:"tags"`
Operations []string `json:"operations"`
}
// Span represents a single span in a trace
type Span struct {
SpanID string `json:"span_id"`
TraceID string `json:"trace_id"`
ParentID string `json:"parent_id,omitempty"`
Operation string `json:"operation"`
Service string `json:"service"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration time.Duration `json:"duration"`
Status SpanStatus `json:"status"`
Tags map[string]string `json:"tags"`
Logs []*SpanLog `json:"logs"`
SpanID string `json:"span_id"`
TraceID string `json:"trace_id"`
ParentID string `json:"parent_id,omitempty"`
Operation string `json:"operation"`
Service string `json:"service"`
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration time.Duration `json:"duration"`
Status SpanStatus `json:"status"`
Tags map[string]string `json:"tags"`
Logs []*SpanLog `json:"logs"`
}
// TraceStatus represents the status of a trace
type TraceStatus string
const (
TraceStatusOK TraceStatus = "ok"
TraceStatusError TraceStatus = "error"
TraceStatusOK TraceStatus = "ok"
TraceStatusError TraceStatus = "error"
TraceStatusTimeout TraceStatus = "timeout"
)
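
Trace timing and status are derivable from the spans; a sketch under the assumption that a `SpanStatusError` constant exists (the span status constants are not shown in this hunk):

```go
// summarizeTrace derives start/end, duration, and overall status from a trace's spans.
func summarizeTrace(t *Trace) {
	if len(t.Spans) == 0 {
		return
	}
	t.StartTime = t.Spans[0].StartTime
	t.EndTime = t.Spans[0].EndTime
	t.Status = TraceStatusOK
	for _, s := range t.Spans {
		if s.StartTime.Before(t.StartTime) {
			t.StartTime = s.StartTime
		}
		if s.EndTime.After(t.EndTime) {
			t.EndTime = s.EndTime
		}
		if s.Status == SpanStatusError { // assumed constant name
			t.Status = TraceStatusError
		}
	}
	t.Duration = t.EndTime.Sub(t.StartTime)
}
```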
@@ -675,18 +675,18 @@ type TraceExporter interface {
// ErrorEvent represents a system error event
type ErrorEvent struct {
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
Level LogLevel `json:"level"`
Component string `json:"component"`
Message string `json:"message"`
Error string `json:"error"`
Context map[string]interface{} `json:"context"`
TraceID string `json:"trace_id,omitempty"`
SpanID string `json:"span_id,omitempty"`
Count int `json:"count"`
FirstSeen time.Time `json:"first_seen"`
LastSeen time.Time `json:"last_seen"`
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
Level LogLevel `json:"level"`
Component string `json:"component"`
Message string `json:"message"`
Error string `json:"error"`
Context map[string]interface{} `json:"context"`
TraceID string `json:"trace_id,omitempty"`
SpanID string `json:"span_id,omitempty"`
Count int `json:"count"`
FirstSeen time.Time `json:"first_seen"`
LastSeen time.Time `json:"last_seen"`
}
// NewMonitoringSystem creates a comprehensive monitoring system
@@ -722,7 +722,7 @@ func (ms *MonitoringSystem) initializeComponents() error {
aggregatedStats: &AggregatedStatistics{
LastUpdated: time.Now(),
},
exporters: []MetricsExporter{},
exporters: []MetricsExporter{},
lastCollection: time.Now(),
}
@@ -1134,15 +1134,15 @@ func (ms *MonitoringSystem) createDefaultDashboards() {
func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
switch severity {
case SeverityCritical:
case AlertSeverityCritical:
return 4
case SeverityError:
case AlertSeverityError:
return 3
case SeverityWarning:
case AlertSeverityWarning:
return 2
case SeverityInfo:
case AlertSeverityInfo:
return 1
default:
return 0
}
}
}

View File

@@ -9,74 +9,74 @@ import (
"sync"
"time"
"chorus/pkg/dht"
"chorus/pkg/config"
"chorus/pkg/dht"
"github.com/libp2p/go-libp2p/core/peer"
)
// NetworkManagerImpl implements NetworkManager interface for network topology and partition management
type NetworkManagerImpl struct {
mu sync.RWMutex
dht *dht.DHT
config *config.Config
topology *NetworkTopology
partitionInfo *PartitionInfo
connectivity *ConnectivityMatrix
stats *NetworkStatistics
healthChecker *NetworkHealthChecker
partitionDetector *PartitionDetector
recoveryManager *RecoveryManager
mu sync.RWMutex
dht *dht.DHT
config *config.Config
topology *NetworkTopology
partitionInfo *PartitionInfo
connectivity *ConnectivityMatrix
stats *NetworkStatistics
healthChecker *NetworkHealthChecker
partitionDetector *PartitionDetector
recoveryManager *RecoveryManager
// Configuration
healthCheckInterval time.Duration
healthCheckInterval time.Duration
partitionCheckInterval time.Duration
connectivityTimeout time.Duration
maxPartitionDuration time.Duration
connectivityTimeout time.Duration
maxPartitionDuration time.Duration
// State
lastTopologyUpdate time.Time
lastPartitionCheck time.Time
running bool
recoveryInProgress bool
lastTopologyUpdate time.Time
lastPartitionCheck time.Time
running bool
recoveryInProgress bool
}
// ConnectivityMatrix tracks connectivity between all nodes
type ConnectivityMatrix struct {
Matrix map[string]map[string]*ConnectionInfo `json:"matrix"`
LastUpdated time.Time `json:"last_updated"`
LastUpdated time.Time `json:"last_updated"`
mu sync.RWMutex
}
// ConnectionInfo represents connectivity information between two nodes
type ConnectionInfo struct {
Connected bool `json:"connected"`
Latency time.Duration `json:"latency"`
PacketLoss float64 `json:"packet_loss"`
Bandwidth int64 `json:"bandwidth"`
LastChecked time.Time `json:"last_checked"`
ErrorCount int `json:"error_count"`
LastError string `json:"last_error,omitempty"`
Connected bool `json:"connected"`
Latency time.Duration `json:"latency"`
PacketLoss float64 `json:"packet_loss"`
Bandwidth int64 `json:"bandwidth"`
LastChecked time.Time `json:"last_checked"`
ErrorCount int `json:"error_count"`
LastError string `json:"last_error,omitempty"`
}
// NetworkHealthChecker performs network health checks
type NetworkHealthChecker struct {
mu sync.RWMutex
nodeHealth map[string]*NodeHealth
healthHistory map[string][]*HealthCheckResult
healthHistory map[string][]*NetworkHealthCheckResult
alertThresholds *NetworkAlertThresholds
}
// NodeHealth represents health status of a network node
type NodeHealth struct {
NodeID string `json:"node_id"`
Status NodeStatus `json:"status"`
HealthScore float64 `json:"health_score"`
LastSeen time.Time `json:"last_seen"`
ResponseTime time.Duration `json:"response_time"`
PacketLossRate float64 `json:"packet_loss_rate"`
BandwidthUtil float64 `json:"bandwidth_utilization"`
Uptime time.Duration `json:"uptime"`
ErrorRate float64 `json:"error_rate"`
NodeID string `json:"node_id"`
Status NodeStatus `json:"status"`
HealthScore float64 `json:"health_score"`
LastSeen time.Time `json:"last_seen"`
ResponseTime time.Duration `json:"response_time"`
PacketLossRate float64 `json:"packet_loss_rate"`
BandwidthUtil float64 `json:"bandwidth_utilization"`
Uptime time.Duration `json:"uptime"`
ErrorRate float64 `json:"error_rate"`
}
// NodeStatus represents the status of a network node
@@ -91,23 +91,23 @@ const (
)
// HealthCheckResult represents the result of a health check
type HealthCheckResult struct {
NodeID string `json:"node_id"`
Timestamp time.Time `json:"timestamp"`
Success bool `json:"success"`
ResponseTime time.Duration `json:"response_time"`
ErrorMessage string `json:"error_message,omitempty"`
type NetworkHealthCheckResult struct {
NodeID string `json:"node_id"`
Timestamp time.Time `json:"timestamp"`
Success bool `json:"success"`
ResponseTime time.Duration `json:"response_time"`
ErrorMessage string `json:"error_message,omitempty"`
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
}
// NetworkAlertThresholds defines thresholds for network alerts
type NetworkAlertThresholds struct {
LatencyWarning time.Duration `json:"latency_warning"`
LatencyCritical time.Duration `json:"latency_critical"`
PacketLossWarning float64 `json:"packet_loss_warning"`
PacketLossCritical float64 `json:"packet_loss_critical"`
HealthScoreWarning float64 `json:"health_score_warning"`
HealthScoreCritical float64 `json:"health_score_critical"`
LatencyWarning time.Duration `json:"latency_warning"`
LatencyCritical time.Duration `json:"latency_critical"`
PacketLossWarning float64 `json:"packet_loss_warning"`
PacketLossCritical float64 `json:"packet_loss_critical"`
HealthScoreWarning float64 `json:"health_score_warning"`
HealthScoreCritical float64 `json:"health_score_critical"`
}
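
A hypothetical mapping from a node's measurements to an alert severity using these thresholds, assuming the monitoring file's `AlertSeverity` constants are visible in the same package:

```go
// classifyNodeHealth maps NodeHealth measurements to a severity via NetworkAlertThresholds.
func classifyNodeHealth(h *NodeHealth, t *NetworkAlertThresholds) AlertSeverity {
	switch {
	case h.HealthScore <= t.HealthScoreCritical,
		h.ResponseTime >= t.LatencyCritical,
		h.PacketLossRate >= t.PacketLossCritical:
		return AlertSeverityCritical
	case h.HealthScore <= t.HealthScoreWarning,
		h.ResponseTime >= t.LatencyWarning,
		h.PacketLossRate >= t.PacketLossWarning:
		return AlertSeverityWarning
	default:
		return AlertSeverityInfo
	}
}
```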
// PartitionDetector detects network partitions
@@ -131,14 +131,14 @@ const (
// PartitionEvent represents a partition detection event
type PartitionEvent struct {
EventID string `json:"event_id"`
DetectedAt time.Time `json:"detected_at"`
EventID string `json:"event_id"`
DetectedAt time.Time `json:"detected_at"`
Algorithm PartitionDetectionAlgorithm `json:"algorithm"`
PartitionedNodes []string `json:"partitioned_nodes"`
Confidence float64 `json:"confidence"`
Duration time.Duration `json:"duration"`
Resolved bool `json:"resolved"`
ResolvedAt *time.Time `json:"resolved_at,omitempty"`
PartitionedNodes []string `json:"partitioned_nodes"`
Confidence float64 `json:"confidence"`
Duration time.Duration `json:"duration"`
Resolved bool `json:"resolved"`
ResolvedAt *time.Time `json:"resolved_at,omitempty"`
}
// FalsePositiveFilter helps reduce false partition detections
@@ -159,10 +159,10 @@ type PartitionDetectorConfig struct {
// RecoveryManager manages network partition recovery
type RecoveryManager struct {
mu sync.RWMutex
mu sync.RWMutex
recoveryStrategies map[RecoveryStrategy]*RecoveryStrategyConfig
activeRecoveries map[string]*RecoveryOperation
recoveryHistory []*RecoveryResult
activeRecoveries map[string]*RecoveryOperation
recoveryHistory []*RecoveryResult
}
// RecoveryStrategy represents different recovery strategies
@@ -177,25 +177,25 @@ const (
// RecoveryStrategyConfig configures a recovery strategy
type RecoveryStrategyConfig struct {
Strategy RecoveryStrategy `json:"strategy"`
Timeout time.Duration `json:"timeout"`
RetryAttempts int `json:"retry_attempts"`
RetryInterval time.Duration `json:"retry_interval"`
RequireConsensus bool `json:"require_consensus"`
ForcedThreshold time.Duration `json:"forced_threshold"`
Strategy RecoveryStrategy `json:"strategy"`
Timeout time.Duration `json:"timeout"`
RetryAttempts int `json:"retry_attempts"`
RetryInterval time.Duration `json:"retry_interval"`
RequireConsensus bool `json:"require_consensus"`
ForcedThreshold time.Duration `json:"forced_threshold"`
}
// RecoveryOperation represents an active recovery operation
type RecoveryOperation struct {
OperationID string `json:"operation_id"`
Strategy RecoveryStrategy `json:"strategy"`
StartedAt time.Time `json:"started_at"`
TargetNodes []string `json:"target_nodes"`
Status RecoveryStatus `json:"status"`
Progress float64 `json:"progress"`
CurrentPhase RecoveryPhase `json:"current_phase"`
Errors []string `json:"errors"`
LastUpdate time.Time `json:"last_update"`
OperationID string `json:"operation_id"`
Strategy RecoveryStrategy `json:"strategy"`
StartedAt time.Time `json:"started_at"`
TargetNodes []string `json:"target_nodes"`
Status RecoveryStatus `json:"status"`
Progress float64 `json:"progress"`
CurrentPhase RecoveryPhase `json:"current_phase"`
Errors []string `json:"errors"`
LastUpdate time.Time `json:"last_update"`
}
// RecoveryStatus represents the status of a recovery operation
@@ -213,12 +213,12 @@ const (
type RecoveryPhase string
const (
RecoveryPhaseAssessment RecoveryPhase = "assessment"
RecoveryPhasePreparation RecoveryPhase = "preparation"
RecoveryPhaseReconnection RecoveryPhase = "reconnection"
RecoveryPhaseAssessment RecoveryPhase = "assessment"
RecoveryPhasePreparation RecoveryPhase = "preparation"
RecoveryPhaseReconnection RecoveryPhase = "reconnection"
RecoveryPhaseSynchronization RecoveryPhase = "synchronization"
RecoveryPhaseValidation RecoveryPhase = "validation"
RecoveryPhaseCompletion RecoveryPhase = "completion"
RecoveryPhaseValidation RecoveryPhase = "validation"
RecoveryPhaseCompletion RecoveryPhase = "completion"
)
// NewNetworkManagerImpl creates a new network manager implementation
@@ -231,13 +231,13 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
}
nm := &NetworkManagerImpl{
dht: dht,
config: config,
healthCheckInterval: 30 * time.Second,
partitionCheckInterval: 60 * time.Second,
connectivityTimeout: 10 * time.Second,
maxPartitionDuration: 10 * time.Minute,
connectivity: &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
dht: dht,
config: config,
healthCheckInterval: 30 * time.Second,
partitionCheckInterval: 60 * time.Second,
connectivityTimeout: 10 * time.Second,
maxPartitionDuration: 10 * time.Minute,
connectivity: &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
stats: &NetworkStatistics{
LastUpdated: time.Now(),
},
@@ -255,33 +255,33 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
func (nm *NetworkManagerImpl) initializeComponents() error {
// Initialize topology
nm.topology = &NetworkTopology{
TotalNodes: 0,
Connections: make(map[string][]string),
Regions: make(map[string][]string),
TotalNodes: 0,
Connections: make(map[string][]string),
Regions: make(map[string][]string),
AvailabilityZones: make(map[string][]string),
UpdatedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Initialize partition info
nm.partitionInfo = &PartitionInfo{
PartitionDetected: false,
PartitionCount: 1,
IsolatedNodes: []string{},
PartitionDetected: false,
PartitionCount: 1,
IsolatedNodes: []string{},
ConnectivityMatrix: make(map[string]map[string]bool),
DetectedAt: time.Now(),
DetectedAt: time.Now(),
}
// Initialize health checker
nm.healthChecker = &NetworkHealthChecker{
nodeHealth: make(map[string]*NodeHealth),
healthHistory: make(map[string][]*HealthCheckResult),
healthHistory: make(map[string][]*NetworkHealthCheckResult),
alertThresholds: &NetworkAlertThresholds{
LatencyWarning: 500 * time.Millisecond,
LatencyCritical: 2 * time.Second,
PacketLossWarning: 0.05, // 5%
PacketLossCritical: 0.15, // 15%
HealthScoreWarning: 0.7,
HealthScoreCritical: 0.4,
LatencyWarning: 500 * time.Millisecond,
LatencyCritical: 2 * time.Second,
PacketLossWarning: 0.05, // 5%
PacketLossCritical: 0.15, // 15%
HealthScoreWarning: 0.7,
HealthScoreCritical: 0.4,
},
}
@@ -307,20 +307,20 @@ func (nm *NetworkManagerImpl) initializeComponents() error {
nm.recoveryManager = &RecoveryManager{
recoveryStrategies: map[RecoveryStrategy]*RecoveryStrategyConfig{
RecoveryStrategyAutomatic: {
Strategy: RecoveryStrategyAutomatic,
Timeout: 5 * time.Minute,
RetryAttempts: 3,
RetryInterval: 30 * time.Second,
Strategy: RecoveryStrategyAutomatic,
Timeout: 5 * time.Minute,
RetryAttempts: 3,
RetryInterval: 30 * time.Second,
RequireConsensus: false,
ForcedThreshold: 10 * time.Minute,
ForcedThreshold: 10 * time.Minute,
},
RecoveryStrategyGraceful: {
Strategy: RecoveryStrategyGraceful,
Timeout: 10 * time.Minute,
RetryAttempts: 5,
RetryInterval: 60 * time.Second,
Strategy: RecoveryStrategyGraceful,
Timeout: 10 * time.Minute,
RetryAttempts: 5,
RetryInterval: 60 * time.Second,
RequireConsensus: true,
ForcedThreshold: 20 * time.Minute,
ForcedThreshold: 20 * time.Minute,
},
},
activeRecoveries: make(map[string]*RecoveryOperation),
@@ -628,10 +628,10 @@ func (nm *NetworkManagerImpl) connectivityChecker(ctx context.Context) {
func (nm *NetworkManagerImpl) updateTopology() {
peers := nm.dht.GetConnectedPeers()
nm.topology.TotalNodes = len(peers) + 1 // +1 for current node
nm.topology.Connections = make(map[string][]string)
// Build connection map
currentNodeID := nm.config.Agent.ID
peerConnections := make([]string, len(peers))
@@ -639,21 +639,21 @@ func (nm *NetworkManagerImpl) updateTopology() {
peerConnections[i] = peer.String()
}
nm.topology.Connections[currentNodeID] = peerConnections
// Calculate network metrics
nm.topology.ClusterDiameter = nm.calculateClusterDiameter()
nm.topology.ClusteringCoefficient = nm.calculateClusteringCoefficient()
nm.topology.UpdatedAt = time.Now()
nm.lastTopologyUpdate = time.Now()
}
func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
peers := nm.dht.GetConnectedPeers()
for _, peer := range peers {
result := nm.performHealthCheck(ctx, peer.String())
// Update node health
nodeHealth := &NodeHealth{
NodeID: peer.String(),
@@ -664,7 +664,7 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
PacketLossRate: 0.0, // Would be measured in real implementation
ErrorRate: 0.0, // Would be calculated from history
}
if result.Success {
nodeHealth.Status = NodeStatusHealthy
nodeHealth.HealthScore = 1.0
@@ -672,21 +672,21 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
nodeHealth.Status = NodeStatusUnreachable
nodeHealth.HealthScore = 0.0
}
nm.healthChecker.nodeHealth[peer.String()] = nodeHealth
// Store health check history
if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
nm.healthChecker.healthHistory[peer.String()] = []*HealthCheckResult{}
nm.healthChecker.healthHistory[peer.String()] = []*NetworkHealthCheckResult{}
}
nm.healthChecker.healthHistory[peer.String()] = append(
nm.healthChecker.healthHistory[peer.String()],
nm.healthChecker.healthHistory[peer.String()],
result,
)
// Keep only recent history (last 100 checks)
if len(nm.healthChecker.healthHistory[peer.String()]) > 100 {
nm.healthChecker.healthHistory[peer.String()] =
nm.healthChecker.healthHistory[peer.String()] =
nm.healthChecker.healthHistory[peer.String()][1:]
}
}
@@ -694,31 +694,31 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
func (nm *NetworkManagerImpl) updateConnectivityMatrix(ctx context.Context) {
peers := nm.dht.GetConnectedPeers()
nm.connectivity.mu.Lock()
defer nm.connectivity.mu.Unlock()
// Initialize matrix if needed
if nm.connectivity.Matrix == nil {
nm.connectivity.Matrix = make(map[string]map[string]*ConnectionInfo)
}
currentNodeID := nm.config.Agent.ID
// Ensure current node exists in matrix
if nm.connectivity.Matrix[currentNodeID] == nil {
nm.connectivity.Matrix[currentNodeID] = make(map[string]*ConnectionInfo)
}
// Test connectivity to all peers
for _, peer := range peers {
peerID := peer.String()
// Test connection
connInfo := nm.testConnection(ctx, peerID)
nm.connectivity.Matrix[currentNodeID][peerID] = connInfo
}
nm.connectivity.LastUpdated = time.Now()
}
@@ -741,7 +741,7 @@ func (nm *NetworkManagerImpl) detectPartitionByConnectivity() (bool, []string, f
// Simplified connectivity-based detection
peers := nm.dht.GetConnectedPeers()
knownPeers := nm.dht.GetKnownPeers()
// If we know more peers than we're connected to, might be partitioned
if len(knownPeers) > len(peers)+2 { // Allow some tolerance
isolatedNodes := []string{}
@@ -759,7 +759,7 @@ func (nm *NetworkManagerImpl) detectPartitionByConnectivity() (bool, []string, f
}
return true, isolatedNodes, 0.8
}
return false, []string{}, 0.0
}
@@ -767,18 +767,18 @@ func (nm *NetworkManagerImpl) detectPartitionByHeartbeat() (bool, []string, floa
// Simplified heartbeat-based detection
nm.healthChecker.mu.RLock()
defer nm.healthChecker.mu.RUnlock()
isolatedNodes := []string{}
for nodeID, health := range nm.healthChecker.nodeHealth {
if health.Status == NodeStatusUnreachable {
isolatedNodes = append(isolatedNodes, nodeID)
}
}
if len(isolatedNodes) > 0 {
return true, isolatedNodes, 0.7
}
return false, []string{}, 0.0
}
@@ -791,7 +791,7 @@ func (nm *NetworkManagerImpl) detectPartitionHybrid() (bool, []string, float64)
// Combine multiple detection methods
partitioned1, nodes1, conf1 := nm.detectPartitionByConnectivity()
partitioned2, nodes2, conf2 := nm.detectPartitionByHeartbeat()
if partitioned1 && partitioned2 {
// Both methods agree
combinedNodes := nm.combineNodeLists(nodes1, nodes2)
@@ -805,7 +805,7 @@ func (nm *NetworkManagerImpl) detectPartitionHybrid() (bool, []string, float64)
return true, nodes2, conf2 * 0.7
}
}
return false, []string{}, 0.0
}
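
The exact weighting inside the elided hunk is not shown, but the shape of the hybrid strategy is: agreement between detectors boosts confidence, a single positive signal is discounted. A standalone illustration:

```go
// combineDetections merges two (partitioned, confidence) signals into one verdict.
func combineDetections(p1 bool, c1 float64, p2 bool, c2 float64) (bool, float64) {
	switch {
	case p1 && p2:
		conf := (c1 + c2) / 2 * 1.2 // boost when both methods agree
		if conf > 1.0 {
			conf = 1.0
		}
		return true, conf
	case p1:
		return true, c1 * 0.7 // single-signal detections are discounted
	case p2:
		return true, c2 * 0.7
	default:
		return false, 0.0
	}
}
```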
@@ -878,11 +878,11 @@ func (nm *NetworkManagerImpl) completeRecovery(ctx context.Context, operation *R
func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID string) *ConnectivityResult {
start := time.Now()
// In a real implementation, this would test actual network connectivity
// For now, we'll simulate based on DHT connectivity
peers := nm.dht.GetConnectedPeers()
for _, peer := range peers {
if peer.String() == peerID {
return &ConnectivityResult{
@@ -895,7 +895,7 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
}
}
}
return &ConnectivityResult{
PeerID: peerID,
Reachable: false,
@@ -907,13 +907,13 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
}
}
func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *HealthCheckResult {
func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *NetworkHealthCheckResult {
start := time.Now()
// In a real implementation, this would perform actual health checks
// For now, simulate based on connectivity
peers := nm.dht.GetConnectedPeers()
for _, peer := range peers {
if peer.String() == nodeID {
return &HealthCheckResult{
@@ -924,7 +924,7 @@ func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID str
}
}
}
return &HealthCheckResult{
NodeID: nodeID,
Timestamp: time.Now(),
@@ -938,7 +938,7 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
// Test connection to specific peer
connected := false
latency := time.Duration(0)
// Check if peer is in connected peers list
peers := nm.dht.GetConnectedPeers()
for _, peer := range peers {
@@ -948,28 +948,28 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
break
}
}
return &ConnectionInfo{
Connected: connected,
Latency: latency,
PacketLoss: 0.0,
Bandwidth: 1000000, // 1 Mbps placeholder
LastChecked: time.Now(),
ErrorCount: 0,
Connected: connected,
Latency: latency,
PacketLoss: 0.0,
Bandwidth: 1000000, // 1 Mbps placeholder
LastChecked: time.Now(),
ErrorCount: 0,
}
}
func (nm *NetworkManagerImpl) updateNetworkStatistics() {
peers := nm.dht.GetConnectedPeers()
nm.stats.TotalNodes = len(peers) + 1
nm.stats.ConnectedNodes = len(peers)
nm.stats.DisconnectedNodes = nm.stats.TotalNodes - nm.stats.ConnectedNodes
// Calculate average latency from connectivity matrix
totalLatency := time.Duration(0)
connectionCount := 0
nm.connectivity.mu.RLock()
for _, connections := range nm.connectivity.Matrix {
for _, conn := range connections {
@@ -980,11 +980,11 @@ func (nm *NetworkManagerImpl) updateNetworkStatistics() {
}
}
nm.connectivity.mu.RUnlock()
if connectionCount > 0 {
nm.stats.AverageLatency = totalLatency / time.Duration(connectionCount)
}
nm.stats.OverallHealth = nm.calculateOverallNetworkHealth()
nm.stats.LastUpdated = time.Now()
}
@@ -1024,14 +1024,14 @@ func (nm *NetworkManagerImpl) calculateOverallNetworkHealth() float64 {
return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
}
func (nm *NetworkManagerImpl) determineNodeStatus(result *HealthCheckResult) NodeStatus {
func (nm *NetworkManagerImpl) determineNodeStatus(result *NetworkHealthCheckResult) NodeStatus {
if result.Success {
return NodeStatusHealthy
}
return NodeStatusUnreachable
}
func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) float64 {
func (nm *NetworkManagerImpl) calculateHealthScore(result *NetworkHealthCheckResult) float64 {
if result.Success {
return 1.0
}
@@ -1040,19 +1040,19 @@ func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) fl
func (nm *NetworkManagerImpl) combineNodeLists(list1, list2 []string) []string {
nodeSet := make(map[string]bool)
for _, node := range list1 {
nodeSet[node] = true
}
for _, node := range list2 {
nodeSet[node] = true
}
result := make([]string, 0, len(nodeSet))
for node := range nodeSet {
result = append(result, node)
}
sort.Strings(result)
return result
}
@@ -1073,4 +1073,4 @@ func (nm *NetworkManagerImpl) generateEventID() string {
func (nm *NetworkManagerImpl) generateOperationID() string {
return fmt.Sprintf("op-%d", time.Now().UnixNano())
}
}

View File

@@ -7,39 +7,39 @@ import (
"sync"
"time"
"chorus/pkg/dht"
"chorus/pkg/config"
"chorus/pkg/dht"
"chorus/pkg/ucxl"
"github.com/libp2p/go-libp2p/core/peer"
)
// ReplicationManagerImpl implements ReplicationManager interface
type ReplicationManagerImpl struct {
mu sync.RWMutex
dht *dht.DHT
config *config.Config
replicationMap map[string]*ReplicationStatus
repairQueue chan *RepairRequest
rebalanceQueue chan *RebalanceRequest
consistentHash ConsistentHashing
policy *ReplicationPolicy
stats *ReplicationStatistics
running bool
mu sync.RWMutex
dht *dht.DHT
config *config.Config
replicationMap map[string]*ReplicationStatus
repairQueue chan *RepairRequest
rebalanceQueue chan *RebalanceRequest
consistentHash ConsistentHashing
policy *ReplicationPolicy
stats *ReplicationStatistics
running bool
}
// RepairRequest represents a repair request
type RepairRequest struct {
Address ucxl.Address
RequestedBy string
Priority Priority
RequestTime time.Time
Address ucxl.Address
RequestedBy string
Priority Priority
RequestTime time.Time
}
// RebalanceRequest represents a rebalance request
type RebalanceRequest struct {
Reason string
RequestedBy string
RequestTime time.Time
Reason string
RequestedBy string
RequestTime time.Time
}
// NewReplicationManagerImpl creates a new replication manager implementation
@@ -220,10 +220,10 @@ func (rm *ReplicationManagerImpl) BalanceReplicas(ctx context.Context) (*Rebalan
start := time.Now()
result := &RebalanceResult{
RebalanceTime: 0,
RebalanceTime: 0,
RebalanceSuccessful: false,
Errors: []string{},
RebalancedAt: time.Now(),
Errors: []string{},
RebalancedAt: time.Now(),
}
// Get current cluster topology
@@ -462,9 +462,9 @@ func (rm *ReplicationManagerImpl) discoverReplicas(ctx context.Context, address
// For now, we'll simulate some replicas
peers := rm.dht.GetConnectedPeers()
if len(peers) > 0 {
status.CurrentReplicas = min(len(peers), rm.policy.DefaultFactor)
status.CurrentReplicas = minInt(len(peers), rm.policy.DefaultFactor)
status.HealthyReplicas = status.CurrentReplicas
for i, peer := range peers {
if i >= status.CurrentReplicas {
break
@@ -478,9 +478,9 @@ func (rm *ReplicationManagerImpl) determineOverallHealth(status *ReplicationStat
if status.HealthyReplicas == 0 {
return HealthFailed
}
healthRatio := float64(status.HealthyReplicas) / float64(status.DesiredReplicas)
if healthRatio >= 1.0 {
return HealthHealthy
} else if healthRatio >= 0.7 {
@@ -579,7 +579,7 @@ func (rm *ReplicationManagerImpl) calculateIdealDistribution(peers []peer.ID) ma
func (rm *ReplicationManagerImpl) getCurrentDistribution(ctx context.Context) map[string]map[string]int {
// Returns current distribution: address -> node -> replica count
distribution := make(map[string]map[string]int)
rm.mu.RLock()
for addr, status := range rm.replicationMap {
distribution[addr] = make(map[string]int)
@@ -588,7 +588,7 @@ func (rm *ReplicationManagerImpl) getCurrentDistribution(ctx context.Context) ma
}
}
rm.mu.RUnlock()
return distribution
}
@@ -630,17 +630,17 @@ func (rm *ReplicationManagerImpl) isNodeOverloaded(nodeID string) bool {
// RebalanceMove represents a replica move operation
type RebalanceMove struct {
Address ucxl.Address `json:"address"`
FromNode string `json:"from_node"`
ToNode string `json:"to_node"`
Priority Priority `json:"priority"`
Reason string `json:"reason"`
Address ucxl.Address `json:"address"`
FromNode string `json:"from_node"`
ToNode string `json:"to_node"`
Priority Priority `json:"priority"`
Reason string `json:"reason"`
}
// Utility functions
func min(a, b int) int {
func minInt(a, b int) int {
if a < b {
return a
}
return b
}
}

View File

@@ -20,22 +20,22 @@ import (
// SecurityManager handles all security aspects of the distributed system
type SecurityManager struct {
mu sync.RWMutex
config *config.Config
tlsConfig *TLSConfig
authManager *AuthenticationManager
authzManager *AuthorizationManager
auditLogger *SecurityAuditLogger
nodeAuth *NodeAuthentication
encryption *DistributionEncryption
certificateAuth *CertificateAuthority
mu sync.RWMutex
config *config.Config
tlsConfig *TLSConfig
authManager *AuthenticationManager
authzManager *AuthorizationManager
auditLogger *SecurityAuditLogger
nodeAuth *NodeAuthentication
encryption *DistributionEncryption
certificateAuth *CertificateAuthority
// Security state
trustedNodes map[string]*TrustedNode
activeSessions map[string]*SecuritySession
securityPolicies map[string]*SecurityPolicy
threatDetector *ThreatDetector
trustedNodes map[string]*TrustedNode
activeSessions map[string]*SecuritySession
securityPolicies map[string]*SecurityPolicy
threatDetector *ThreatDetector
// Configuration
tlsEnabled bool
mutualTLSEnabled bool
@@ -45,28 +45,28 @@ type SecurityManager struct {
// TLSConfig manages TLS configuration for secure communications
type TLSConfig struct {
ServerConfig *tls.Config
ClientConfig *tls.Config
CertificatePath string
PrivateKeyPath string
CAPath string
MinTLSVersion uint16
CipherSuites []uint16
CurvePreferences []tls.CurveID
ClientAuth tls.ClientAuthType
VerifyConnection func(tls.ConnectionState) error
}
// AuthenticationManager handles node and user authentication
type AuthenticationManager struct {
mu sync.RWMutex
providers map[string]AuthProvider
tokenValidator TokenValidator
sessionManager *SessionManager
multiFactorAuth *MultiFactorAuth
credentialStore *CredentialStore
loginAttempts map[string]*LoginAttempts
authPolicies map[string]*AuthPolicy
}
// AuthProvider interface for different authentication methods
@@ -80,14 +80,14 @@ type AuthProvider interface {
// Credentials represents authentication credentials
type Credentials struct {
Type CredentialType `json:"type"`
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Token string `json:"token,omitempty"`
Certificate *x509.Certificate `json:"certificate,omitempty"`
Signature []byte `json:"signature,omitempty"`
Challenge string `json:"challenge,omitempty"`
Metadata map[string]interface{} `json:"metadata,omitempty"`
}
// CredentialType represents different types of credentials
@@ -104,15 +104,15 @@ const (
// AuthResult represents the result of authentication
type AuthResult struct {
Success bool `json:"success"`
UserID string `json:"user_id"`
Roles []string `json:"roles"`
Permissions []string `json:"permissions"`
TokenPair *TokenPair `json:"token_pair"`
SessionID string `json:"session_id"`
ExpiresAt time.Time `json:"expires_at"`
Metadata map[string]interface{} `json:"metadata"`
FailureReason string `json:"failure_reason,omitempty"`
}
// TokenPair represents access and refresh tokens
@@ -140,13 +140,13 @@ type TokenClaims struct {
// AuthorizationManager handles authorization and access control
type AuthorizationManager struct {
mu sync.RWMutex
policyEngine PolicyEngine
rbacManager *RBACManager
aclManager *ACLManager
resourceManager *ResourceManager
permissionCache *PermissionCache
authzPolicies map[string]*AuthorizationPolicy
}
// PolicyEngine interface for policy evaluation
@@ -168,13 +168,13 @@ type AuthorizationRequest struct {
// AuthorizationResult represents the result of authorization
type AuthorizationResult struct {
Decision AuthorizationDecision `json:"decision"`
Reason string `json:"reason"`
Policies []string `json:"applied_policies"`
Conditions []string `json:"conditions"`
TTL time.Duration `json:"ttl"`
Metadata map[string]interface{} `json:"metadata"`
EvaluationTime time.Duration `json:"evaluation_time"`
}
// AuthorizationDecision represents authorization decisions
@@ -188,13 +188,13 @@ const (
// SecurityAuditLogger handles security event logging
type SecurityAuditLogger struct {
mu sync.RWMutex
loggers []SecurityLogger
eventBuffer []*SecurityEvent
alertManager *SecurityAlertManager
compliance *ComplianceManager
retention *AuditRetentionPolicy
enabled bool
}
// SecurityLogger interface for security event logging
@@ -206,22 +206,22 @@ type SecurityLogger interface {
// SecurityEvent represents a security event
type SecurityEvent struct {
EventID string `json:"event_id"`
EventType SecurityEventType `json:"event_type"`
Severity SecuritySeverity `json:"severity"`
Timestamp time.Time `json:"timestamp"`
UserID string `json:"user_id,omitempty"`
NodeID string `json:"node_id,omitempty"`
Resource string `json:"resource,omitempty"`
Action string `json:"action,omitempty"`
Result string `json:"result"`
Message string `json:"message"`
Details map[string]interface{} `json:"details"`
IPAddress string `json:"ip_address,omitempty"`
UserAgent string `json:"user_agent,omitempty"`
SessionID string `json:"session_id,omitempty"`
RequestID string `json:"request_id,omitempty"`
Fingerprint string `json:"fingerprint"`
}
// SecurityEventType represents different types of security events
@@ -242,12 +242,12 @@ const (
type SecuritySeverity string
const (
SeverityDebug SecuritySeverity = "debug"
SeverityInfo SecuritySeverity = "info"
SeverityWarning SecuritySeverity = "warning"
SeverityError SecuritySeverity = "error"
SeverityCritical SecuritySeverity = "critical"
SeverityAlert SecuritySeverity = "alert"
SecuritySeverityDebug SecuritySeverity = "debug"
SecuritySeverityInfo SecuritySeverity = "info"
SecuritySeverityWarning SecuritySeverity = "warning"
SecuritySeverityError SecuritySeverity = "error"
SecuritySeverityCritical SecuritySeverity = "critical"
SecuritySeverityAlert SecuritySeverity = "alert"
)
// NodeAuthentication handles node-to-node authentication
@@ -262,16 +262,16 @@ type NodeAuthentication struct {
// TrustedNode represents a trusted node in the network
type TrustedNode struct {
NodeID string `json:"node_id"`
PublicKey []byte `json:"public_key"`
Certificate *x509.Certificate `json:"certificate"`
Roles []string `json:"roles"`
Capabilities []string `json:"capabilities"`
TrustLevel TrustLevel `json:"trust_level"`
LastSeen time.Time `json:"last_seen"`
VerifiedAt time.Time `json:"verified_at"`
Metadata map[string]interface{} `json:"metadata"`
Status NodeStatus `json:"status"`
}
// TrustLevel represents the trust level of a node
@@ -287,18 +287,18 @@ const (
// SecuritySession represents an active security session
type SecuritySession struct {
SessionID string `json:"session_id"`
UserID string `json:"user_id"`
NodeID string `json:"node_id"`
Roles []string `json:"roles"`
Permissions []string `json:"permissions"`
CreatedAt time.Time `json:"created_at"`
ExpiresAt time.Time `json:"expires_at"`
LastActivity time.Time `json:"last_activity"`
IPAddress string `json:"ip_address"`
UserAgent string `json:"user_agent"`
Metadata map[string]interface{} `json:"metadata"`
Status SessionStatus `json:"status"`
}
// SessionStatus represents session status
@@ -313,61 +313,61 @@ const (
// ThreatDetector detects security threats and anomalies
type ThreatDetector struct {
mu sync.RWMutex
detectionRules []*ThreatDetectionRule
behaviorAnalyzer *BehaviorAnalyzer
anomalyDetector *AnomalyDetector
threatIntelligence *ThreatIntelligence
activeThreats map[string]*ThreatEvent
mitigationStrategies map[ThreatType]*MitigationStrategy
}
// ThreatDetectionRule represents a threat detection rule
type ThreatDetectionRule struct {
RuleID string `json:"rule_id"`
Name string `json:"name"`
Description string `json:"description"`
ThreatType ThreatType `json:"threat_type"`
Severity SecuritySeverity `json:"severity"`
Conditions []*ThreatCondition `json:"conditions"`
Actions []*ThreatAction `json:"actions"`
Enabled bool `json:"enabled"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Metadata map[string]interface{} `json:"metadata"`
}
// ThreatType represents different types of threats
type ThreatType string
const (
ThreatTypeBruteForce ThreatType = "brute_force"
ThreatTypeUnauthorized ThreatType = "unauthorized_access"
ThreatTypeDataExfiltration ThreatType = "data_exfiltration"
ThreatTypeDoS ThreatType = "denial_of_service"
ThreatTypePrivilegeEscalation ThreatType = "privilege_escalation"
ThreatTypeAnomalous ThreatType = "anomalous_behavior"
ThreatTypeMaliciousCode ThreatType = "malicious_code"
ThreatTypeInsiderThreat ThreatType = "insider_threat"
)
// CertificateAuthority manages certificate generation and validation
type CertificateAuthority struct {
mu sync.RWMutex
rootCA *x509.Certificate
rootKey interface{}
intermediateCA *x509.Certificate
intermediateKey interface{}
certStore *CertificateStore
crlManager *CRLManager
ocspResponder *OCSPResponder
}
// DistributionEncryption handles encryption for distributed communications
type DistributionEncryption struct {
mu sync.RWMutex
keyManager *DistributionKeyManager
encryptionSuite *EncryptionSuite
keyRotationPolicy *KeyRotationPolicy
encryptionMetrics *EncryptionMetrics
}
@@ -379,13 +379,13 @@ func NewSecurityManager(config *config.Config) (*SecurityManager, error) {
}
sm := &SecurityManager{
config: config,
trustedNodes: make(map[string]*TrustedNode),
activeSessions: make(map[string]*SecuritySession),
securityPolicies: make(map[string]*SecurityPolicy),
tlsEnabled: true,
mutualTLSEnabled: true,
auditingEnabled: true,
encryptionEnabled: true,
}
@@ -508,12 +508,12 @@ func (sm *SecurityManager) Authenticate(ctx context.Context, credentials *Creden
// Log authentication attempt
sm.logSecurityEvent(ctx, &SecurityEvent{
EventType: EventTypeAuthentication,
Severity: SeverityInfo,
Severity: SecuritySeverityInfo,
Action: "authenticate",
Message: "Authentication attempt",
Details: map[string]interface{}{
"credential_type": credentials.Type,
"username": credentials.Username,
"username": credentials.Username,
},
})
@@ -525,7 +525,7 @@ func (sm *SecurityManager) Authorize(ctx context.Context, request *Authorization
// Log authorization attempt
sm.logSecurityEvent(ctx, &SecurityEvent{
EventType: EventTypeAuthorization,
Severity: SeverityInfo,
Severity: SecuritySeverityInfo,
UserID: request.UserID,
Resource: request.Resource,
Action: request.Action,
@@ -554,7 +554,7 @@ func (sm *SecurityManager) ValidateNodeIdentity(ctx context.Context, nodeID stri
// Log successful validation
sm.logSecurityEvent(ctx, &SecurityEvent{
EventType: EventTypeAuthentication,
Severity: SeverityInfo,
Severity: SecuritySeverityInfo,
NodeID: nodeID,
Action: "validate_node_identity",
Result: "success",
@@ -609,7 +609,7 @@ func (sm *SecurityManager) AddTrustedNode(ctx context.Context, node *TrustedNode
// Log node addition
sm.logSecurityEvent(ctx, &SecurityEvent{
EventType: EventTypeConfiguration,
Severity: SeverityInfo,
Severity: SecuritySeverityInfo,
NodeID: node.NodeID,
Action: "add_trusted_node",
Result: "success",
@@ -649,7 +649,7 @@ func (sm *SecurityManager) loadOrGenerateCertificate() (*tls.Certificate, error)
func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, error) {
// Generate a self-signed certificate for development/testing
// In production, use proper CA-signed certificates
template := x509.Certificate{
SerialNumber: big.NewInt(1),
Subject: pkix.Name{
@@ -660,11 +660,11 @@ func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, erro
StreetAddress: []string{""},
PostalCode: []string{""},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(365 * 24 * time.Hour),
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
}
// This is a simplified implementation
@@ -765,8 +765,8 @@ func NewDistributionEncryption(config *config.Config) (*DistributionEncryption,
func NewThreatDetector(config *config.Config) (*ThreatDetector, error) {
return &ThreatDetector{
detectionRules: []*ThreatDetectionRule{},
activeThreats: make(map[string]*ThreatEvent),
mitigationStrategies: make(map[ThreatType]*MitigationStrategy),
}, nil
}
@@ -831,4 +831,4 @@ type OCSPResponder struct{}
type DistributionKeyManager struct{}
type EncryptionSuite struct{}
type KeyRotationPolicy struct{}
type EncryptionMetrics struct{}

View File

@@ -11,8 +11,8 @@ import (
"strings"
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
@@ -268,11 +268,11 @@ func NewRelationshipAnalyzer() *RelationshipAnalyzer {
// AnalyzeStructure analyzes directory organization patterns
func (da *DefaultDirectoryAnalyzer) AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error) {
structure := &DirectoryStructure{
Path: dirPath,
FileTypes: make(map[string]int),
Languages: make(map[string]int),
Dependencies: []string{},
AnalyzedAt: time.Now(),
}
// Walk the directory tree
@@ -340,9 +340,9 @@ func (da *DefaultDirectoryAnalyzer) DetectConventions(ctx context.Context, dirPa
OrganizationalPatterns: []*OrganizationalPattern{},
Consistency: 0.0,
Violations: []*Violation{},
Recommendations: []*Recommendation{},
Recommendations: []*BasicRecommendation{},
AppliedStandards: []string{},
AnalyzedAt: time.Now(),
}
// Collect all files and directories
@@ -385,39 +385,39 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
purpose string
confidence float64
}{
"src": {"Source code repository", 0.9},
"source": {"Source code repository", 0.9},
"lib": {"Library code", 0.8},
"libs": {"Library code", 0.8},
"vendor": {"Third-party dependencies", 0.9},
"node_modules": {"Node.js dependencies", 0.95},
"build": {"Build artifacts", 0.9},
"dist": {"Distribution files", 0.9},
"bin": {"Binary executables", 0.9},
"test": {"Test code", 0.9},
"tests": {"Test code", 0.9},
"docs": {"Documentation", 0.9},
"doc": {"Documentation", 0.9},
"config": {"Configuration files", 0.9},
"configs": {"Configuration files", 0.9},
"scripts": {"Utility scripts", 0.8},
"tools": {"Development tools", 0.8},
"assets": {"Static assets", 0.8},
"public": {"Public web assets", 0.8},
"static": {"Static files", 0.8},
"templates": {"Template files", 0.8},
"migrations": {"Database migrations", 0.9},
"models": {"Data models", 0.8},
"views": {"View layer", 0.8},
"controllers": {"Controller layer", 0.8},
"services": {"Service layer", 0.8},
"components": {"Reusable components", 0.8},
"modules": {"Modular components", 0.8},
"packages": {"Package organization", 0.7},
"internal": {"Internal implementation", 0.8},
"cmd": {"Command-line applications", 0.9},
"api": {"API implementation", 0.8},
"pkg": {"Go package directory", 0.8},
"src": {"Source code repository", 0.9},
"source": {"Source code repository", 0.9},
"lib": {"Library code", 0.8},
"libs": {"Library code", 0.8},
"vendor": {"Third-party dependencies", 0.9},
"node_modules": {"Node.js dependencies", 0.95},
"build": {"Build artifacts", 0.9},
"dist": {"Distribution files", 0.9},
"bin": {"Binary executables", 0.9},
"test": {"Test code", 0.9},
"tests": {"Test code", 0.9},
"docs": {"Documentation", 0.9},
"doc": {"Documentation", 0.9},
"config": {"Configuration files", 0.9},
"configs": {"Configuration files", 0.9},
"scripts": {"Utility scripts", 0.8},
"tools": {"Development tools", 0.8},
"assets": {"Static assets", 0.8},
"public": {"Public web assets", 0.8},
"static": {"Static files", 0.8},
"templates": {"Template files", 0.8},
"migrations": {"Database migrations", 0.9},
"models": {"Data models", 0.8},
"views": {"View layer", 0.8},
"controllers": {"Controller layer", 0.8},
"services": {"Service layer", 0.8},
"components": {"Reusable components", 0.8},
"modules": {"Modular components", 0.8},
"packages": {"Package organization", 0.7},
"internal": {"Internal implementation", 0.8},
"cmd": {"Command-line applications", 0.9},
"api": {"API implementation", 0.8},
"pkg": {"Go package directory", 0.8},
}
if p, exists := purposes[dirName]; exists {
@@ -459,12 +459,12 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
// AnalyzeRelationships analyzes relationships between subdirectories
func (da *DefaultDirectoryAnalyzer) AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error) {
analysis := &RelationshipAnalysis{
Dependencies: []*DirectoryDependency{},
Relationships: []*DirectoryRelation{},
CouplingMetrics: &CouplingMetrics{},
ModularityScore: 0.0,
ArchitecturalStyle: "unknown",
AnalyzedAt: time.Now(),
}
// Find subdirectories
@@ -568,20 +568,20 @@ func (da *DefaultDirectoryAnalyzer) GenerateHierarchy(ctx context.Context, rootP
func (da *DefaultDirectoryAnalyzer) mapExtensionToLanguage(ext string) string {
langMap := map[string]string{
".go": "go",
".py": "python",
".js": "javascript",
".jsx": "javascript",
".ts": "typescript",
".tsx": "typescript",
".java": "java",
".c": "c",
".cpp": "cpp",
".cs": "csharp",
".php": "php",
".rb": "ruby",
".rs": "rust",
".kt": "kotlin",
".go": "go",
".py": "python",
".js": "javascript",
".jsx": "javascript",
".ts": "typescript",
".tsx": "typescript",
".java": "java",
".c": "c",
".cpp": "cpp",
".cs": "csharp",
".php": "php",
".rb": "ruby",
".rs": "rust",
".kt": "kotlin",
".swift": "swift",
}
@@ -604,7 +604,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeOrganization(dirPath string) (*Organi
// Detect organizational pattern
pattern := da.detectOrganizationalPattern(subdirs)
// Calculate metrics
fanOut := len(subdirs)
consistency := da.calculateOrganizationalConsistency(subdirs)
@@ -672,7 +672,7 @@ func (da *DefaultDirectoryAnalyzer) allAreDomainLike(subdirs []string) bool {
// Simple heuristic: if directories don't look like technical layers,
// they might be domain/feature based
technicalTerms := []string{"api", "service", "repository", "model", "dto", "util", "config", "test", "lib"}
for _, subdir := range subdirs {
lowerDir := strings.ToLower(subdir)
for _, term := range technicalTerms {
@@ -733,7 +733,7 @@ func (da *DefaultDirectoryAnalyzer) isSnakeCase(s string) bool {
func (da *DefaultDirectoryAnalyzer) calculateMaxDepth(dirPath string) int {
maxDepth := 0
filepath.Walk(dirPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
return nil
@@ -747,7 +747,7 @@ func (da *DefaultDirectoryAnalyzer) calculateMaxDepth(dirPath string) int {
}
return nil
})
return maxDepth
}
@@ -756,7 +756,7 @@ func (da *DefaultDirectoryAnalyzer) calculateModularity(subdirs []string) float6
if len(subdirs) == 0 {
return 0.0
}
// More subdirectories with clear separation indicates higher modularity
if len(subdirs) > 5 {
return 0.8
@@ -786,7 +786,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeConventions(ctx context.Context, dirP
// Detect dominant naming style
namingStyle := da.detectDominantNamingStyle(append(fileNames, dirNames...))
// Calculate consistency
consistency := da.calculateNamingConsistency(append(fileNames, dirNames...), namingStyle)
@@ -988,7 +988,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
// Detect the dominant convention
convention := da.detectDominantNamingStyle(names)
return &NamingPattern{
Pattern: Pattern{
ID: fmt.Sprintf("%s_naming", scope),
@@ -996,7 +996,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
Type: "naming",
Description: fmt.Sprintf("Naming convention for %ss", scope),
Confidence: da.calculateNamingConsistency(names, convention),
Examples: names[:min(5, len(names))],
Examples: names[:minInt(5, len(names))],
},
Convention: convention,
Scope: scope,
@@ -1100,12 +1100,12 @@ func (da *DefaultDirectoryAnalyzer) detectNamingStyle(name string) string {
return "unknown"
}
func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*Recommendation {
recommendations := []*Recommendation{}
func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*BasicRecommendation {
recommendations := []*BasicRecommendation{}
// Recommend consistency improvements
if analysis.Consistency < 0.8 {
recommendations = append(recommendations, &Recommendation{
recommendations = append(recommendations, &BasicRecommendation{
Type: "consistency",
Title: "Improve naming consistency",
Description: "Consider standardizing naming conventions across the project",
@@ -1118,7 +1118,7 @@ func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *
// Recommend architectural improvements
if len(analysis.OrganizationalPatterns) == 0 {
recommendations = append(recommendations, &Recommendation{
recommendations = append(recommendations, &BasicRecommendation{
Type: "architecture",
Title: "Consider architectural patterns",
Description: "Project structure could benefit from established architectural patterns",
@@ -1185,7 +1185,7 @@ func (da *DefaultDirectoryAnalyzer) findDirectoryDependencies(ctx context.Contex
if detector, exists := da.relationshipAnalyzer.dependencyDetectors[language]; exists {
imports := da.extractImports(string(content), detector.importPatterns)
// Check which imports refer to other directories
for _, imp := range imports {
for _, otherDir := range allDirs {
@@ -1210,7 +1210,7 @@ func (da *DefaultDirectoryAnalyzer) findDirectoryDependencies(ctx context.Contex
func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*regexp.Regexp) []string {
imports := []string{}
for _, pattern := range patterns {
matches := pattern.FindAllStringSubmatch(content, -1)
for _, match := range matches {
@@ -1225,12 +1225,11 @@ func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*r
func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
// Simple heuristic: check if import path references the target directory
fromBase := filepath.Base(fromDir)
toBase := filepath.Base(toDir)
return strings.Contains(importPath, toBase) ||
strings.Contains(importPath, "../"+toBase) ||
strings.Contains(importPath, "./"+toBase)
}
func (da *DefaultDirectoryAnalyzer) analyzeDirectoryRelationships(subdirs []string, dependencies []*DirectoryDependency) []*DirectoryRelation {
@@ -1399,7 +1398,7 @@ func (da *DefaultDirectoryAnalyzer) walkDirectoryHierarchy(rootPath string, curr
func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
cleanPath := filepath.Clean(path)
addr, err := ucxl.ParseAddress(fmt.Sprintf("dir://%s", cleanPath))
addr, err := ucxl.Parse(fmt.Sprintf("dir://%s", cleanPath))
if err != nil {
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
}
@@ -1407,7 +1406,7 @@ func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Addr
}
func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *DirectoryStructure) string {
summary := fmt.Sprintf("Directory with %d files and %d subdirectories",
summary := fmt.Sprintf("Directory with %d files and %d subdirectories",
structure.FileCount, structure.DirectoryCount)
// Add language information
@@ -1417,7 +1416,7 @@ func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *Director
langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
}
sort.Strings(langs)
summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:min(3, len(langs))], ", "))
summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:minInt(3, len(langs))], ", "))
}
return summary
@@ -1497,9 +1496,9 @@ func (da *DefaultDirectoryAnalyzer) calculateDirectorySpecificity(structure *Dir
return specificity
}
func min(a, b int) int {
func minInt(a, b int) int {
if a < b {
return a
}
return b
}

View File

@@ -2,9 +2,9 @@ package intelligence
import (
"context"
"sync"
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
)
@@ -17,38 +17,38 @@ type IntelligenceEngine interface {
// AnalyzeFile analyzes a single file and generates context
// Performs content analysis, language detection, and pattern recognition
AnalyzeFile(ctx context.Context, filePath string, role string) (*slurpContext.ContextNode, error)
// AnalyzeDirectory analyzes directory structure for hierarchical patterns
// Identifies organizational patterns, naming conventions, and structure insights
AnalyzeDirectory(ctx context.Context, dirPath string) ([]*slurpContext.ContextNode, error)
// GenerateRoleInsights generates role-specific insights for existing context
// Provides specialized analysis based on role requirements and perspectives
GenerateRoleInsights(ctx context.Context, baseContext *slurpContext.ContextNode, role string) ([]string, error)
// AssessGoalAlignment assesses how well context aligns with project goals
// Returns alignment score and specific alignment metrics
AssessGoalAlignment(ctx context.Context, node *slurpContext.ContextNode) (float64, error)
// AnalyzeBatch processes multiple files efficiently in parallel
// Optimized for bulk analysis operations with resource management
AnalyzeBatch(ctx context.Context, filePaths []string, role string) (map[string]*slurpContext.ContextNode, error)
// DetectPatterns identifies recurring patterns across multiple contexts
// Useful for template creation and standardization
DetectPatterns(ctx context.Context, contexts []*slurpContext.ContextNode) ([]*Pattern, error)
// EnhanceWithRAG enhances context using RAG system knowledge
// Integrates external knowledge for richer context understanding
EnhanceWithRAG(ctx context.Context, node *slurpContext.ContextNode) (*slurpContext.ContextNode, error)
// ValidateContext validates generated context quality and consistency
// Ensures context meets quality thresholds and consistency requirements
ValidateContext(ctx context.Context, node *slurpContext.ContextNode) (*ValidationResult, error)
// GetEngineStats returns engine performance and operational statistics
GetEngineStats() (*EngineStatistics, error)
// SetConfiguration updates engine configuration
SetConfiguration(config *EngineConfig) error
}
@@ -57,22 +57,22 @@ type IntelligenceEngine interface {
type FileAnalyzer interface {
// AnalyzeContent analyzes file content for context extraction
AnalyzeContent(ctx context.Context, filePath string, content []byte) (*FileAnalysis, error)
// DetectLanguage detects programming language from content
DetectLanguage(ctx context.Context, filePath string, content []byte) (string, float64, error)
// ExtractMetadata extracts file metadata and statistics
ExtractMetadata(ctx context.Context, filePath string) (*FileMetadata, error)
// AnalyzeStructure analyzes code structure and organization
AnalyzeStructure(ctx context.Context, filePath string, content []byte) (*StructureAnalysis, error)
// IdentifyPurpose identifies the primary purpose of the file
IdentifyPurpose(ctx context.Context, analysis *FileAnalysis) (string, float64, error)
// GenerateSummary generates a concise summary of file content
GenerateSummary(ctx context.Context, analysis *FileAnalysis) (string, error)
// ExtractTechnologies identifies technologies used in the file
ExtractTechnologies(ctx context.Context, analysis *FileAnalysis) ([]string, error)
}
@@ -81,16 +81,16 @@ type FileAnalyzer interface {
type DirectoryAnalyzer interface {
// AnalyzeStructure analyzes directory organization patterns
AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error)
// DetectConventions identifies naming and organizational conventions
DetectConventions(ctx context.Context, dirPath string) (*ConventionAnalysis, error)
// IdentifyPurpose determines the primary purpose of a directory
IdentifyPurpose(ctx context.Context, structure *DirectoryStructure) (string, float64, error)
// AnalyzeRelationships analyzes relationships between subdirectories
AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error)
// GenerateHierarchy generates context hierarchy for directory tree
GenerateHierarchy(ctx context.Context, rootPath string, maxDepth int) ([]*slurpContext.ContextNode, error)
}
@@ -99,16 +99,16 @@ type DirectoryAnalyzer interface {
type PatternDetector interface {
// DetectCodePatterns identifies code patterns and architectural styles
DetectCodePatterns(ctx context.Context, filePath string, content []byte) ([]*CodePattern, error)
// DetectNamingPatterns identifies naming conventions and patterns
DetectNamingPatterns(ctx context.Context, contexts []*slurpContext.ContextNode) ([]*NamingPattern, error)
// DetectOrganizationalPatterns identifies organizational patterns
DetectOrganizationalPatterns(ctx context.Context, rootPath string) ([]*OrganizationalPattern, error)
// MatchPatterns matches context against known patterns
MatchPatterns(ctx context.Context, node *slurpContext.ContextNode, patterns []*Pattern) ([]*PatternMatch, error)
// LearnPatterns learns new patterns from context examples
LearnPatterns(ctx context.Context, examples []*slurpContext.ContextNode) ([]*Pattern, error)
}
@@ -117,19 +117,19 @@ type PatternDetector interface {
type RAGIntegration interface {
// Query queries the RAG system for relevant information
Query(ctx context.Context, query string, context map[string]interface{}) (*RAGResponse, error)
// EnhanceContext enhances context using RAG knowledge
EnhanceContext(ctx context.Context, node *slurpContext.ContextNode) (*slurpContext.ContextNode, error)
// IndexContent indexes content for RAG retrieval
IndexContent(ctx context.Context, content string, metadata map[string]interface{}) error
// SearchSimilar searches for similar content in RAG system
SearchSimilar(ctx context.Context, content string, limit int) ([]*RAGResult, error)
// UpdateIndex updates RAG index with new content
UpdateIndex(ctx context.Context, updates []*RAGUpdate) error
// GetRAGStats returns RAG system statistics
GetRAGStats(ctx context.Context) (*RAGStatistics, error)
}
@@ -138,26 +138,26 @@ type RAGIntegration interface {
// ProjectGoal represents a high-level project objective
type ProjectGoal struct {
ID string `json:"id"` // Unique identifier
Name string `json:"name"` // Goal name
Description string `json:"description"` // Detailed description
Keywords []string `json:"keywords"` // Associated keywords
Priority int `json:"priority"` // Priority level (1=highest)
Phase string `json:"phase"` // Project phase
Metrics []string `json:"metrics"` // Success metrics
Owner string `json:"owner"` // Goal owner
Deadline *time.Time `json:"deadline,omitempty"` // Target deadline
}
// RoleProfile defines context requirements for different roles
type RoleProfile struct {
Role string `json:"role"` // Role identifier
AccessLevel slurpContext.RoleAccessLevel `json:"access_level"` // Required access level
RelevantTags []string `json:"relevant_tags"` // Relevant context tags
ContextScope []string `json:"context_scope"` // Scope of interest
InsightTypes []string `json:"insight_types"` // Types of insights needed
QualityThreshold float64 `json:"quality_threshold"` // Minimum quality threshold
Preferences map[string]interface{} `json:"preferences"` // Role-specific preferences
}
// EngineConfig represents configuration for the intelligence engine
@@ -166,61 +166,66 @@ type EngineConfig struct {
MaxConcurrentAnalysis int `json:"max_concurrent_analysis"` // Maximum concurrent analyses
AnalysisTimeout time.Duration `json:"analysis_timeout"` // Analysis timeout
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
// RAG integration settings
RAGEndpoint string `json:"rag_endpoint"` // RAG system endpoint
RAGTimeout time.Duration `json:"rag_timeout"` // RAG query timeout
RAGEnabled bool `json:"rag_enabled"` // Whether RAG is enabled
EnableRAG bool `json:"enable_rag"` // Legacy toggle for RAG enablement
// Feature toggles
EnableGoalAlignment bool `json:"enable_goal_alignment"`
EnablePatternDetection bool `json:"enable_pattern_detection"`
EnableRoleAware bool `json:"enable_role_aware"`
// Quality settings
MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
RequireValidation bool `json:"require_validation"` // Whether validation is required
// Performance settings
CacheEnabled bool `json:"cache_enabled"` // Whether caching is enabled
CacheTTL time.Duration `json:"cache_ttl"` // Cache TTL
// Role profiles
RoleProfiles map[string]*RoleProfile `json:"role_profiles"` // Role-specific profiles
// Project goals
ProjectGoals []*ProjectGoal `json:"project_goals"` // Active project goals
}
// EngineStatistics represents performance statistics for the engine
type EngineStatistics struct {
TotalAnalyses int64 `json:"total_analyses"` // Total analyses performed
SuccessfulAnalyses int64 `json:"successful_analyses"` // Successful analyses
FailedAnalyses int64 `json:"failed_analyses"` // Failed analyses
AverageAnalysisTime time.Duration `json:"average_analysis_time"` // Average analysis time
CacheHitRate float64 `json:"cache_hit_rate"` // Cache hit rate
RAGQueriesPerformed int64 `json:"rag_queries_performed"` // RAG queries made
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
FilesAnalyzed int64 `json:"files_analyzed"` // Total files analyzed
DirectoriesAnalyzed int64 `json:"directories_analyzed"` // Total directories analyzed
PatternsDetected int64 `json:"patterns_detected"` // Patterns detected
LastResetAt time.Time `json:"last_reset_at"` // When stats were last reset
}
// FileAnalysis represents the result of file analysis
type FileAnalysis struct {
FilePath string `json:"file_path"` // Path to analyzed file
Language string `json:"language"` // Detected language
LanguageConf float64 `json:"language_conf"` // Language detection confidence
FileType string `json:"file_type"` // File type classification
Size int64 `json:"size"` // File size in bytes
LineCount int `json:"line_count"` // Number of lines
Complexity float64 `json:"complexity"` // Code complexity score
Dependencies []string `json:"dependencies"` // Identified dependencies
Exports []string `json:"exports"` // Exported symbols/functions
Imports []string `json:"imports"` // Import statements
Functions []string `json:"functions"` // Function/method names
Classes []string `json:"classes"` // Class names
Variables []string `json:"variables"` // Variable names
Comments []string `json:"comments"` // Extracted comments
TODOs []string `json:"todos"` // TODO comments
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// DefaultIntelligenceEngine provides a complete implementation of the IntelligenceEngine interface
@@ -250,6 +255,10 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
config = DefaultEngineConfig()
}
if config.EnableRAG {
config.RAGEnabled = true
}
// Initialize file analyzer
fileAnalyzer := NewDefaultFileAnalyzer(config)
@@ -273,13 +282,22 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
directoryAnalyzer: dirAnalyzer,
patternDetector: patternDetector,
ragIntegration: ragIntegration,
stats: &EngineStatistics{
LastResetAt: time.Now(),
},
cache: &sync.Map{},
projectGoals: config.ProjectGoals,
roleProfiles: config.RoleProfiles,
}
return engine, nil
}
// NewIntelligenceEngine is a convenience wrapper expected by legacy callers.
func NewIntelligenceEngine(config *EngineConfig) *DefaultIntelligenceEngine {
engine, err := NewDefaultIntelligenceEngine(config)
if err != nil {
panic(err)
}
return engine
}

View File

@@ -4,14 +4,13 @@ import (
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"sync"
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// AnalyzeFile analyzes a single file and generates contextual understanding
@@ -136,8 +135,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeDirectory(ctx context.Context, dirPat
}()
// Analyze directory structure
structure, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath)
if err != nil {
if _, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath); err != nil {
e.updateStats("directory_analysis", time.Since(start), false)
return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
}
@@ -232,7 +230,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeBatch(ctx context.Context, filePaths
wg.Add(1)
go func(path string) {
defer wg.Done()
semaphore <- struct{}{} // Acquire semaphore
defer func() { <-semaphore }() // Release semaphore
ctxNode, err := e.AnalyzeFile(ctx, path, role)
@@ -317,7 +315,7 @@ func (e *DefaultIntelligenceEngine) EnhanceWithRAG(ctx context.Context, node *sl
if ragResponse.Confidence >= e.config.MinConfidenceThreshold {
enhanced.Insights = append(enhanced.Insights, fmt.Sprintf("RAG: %s", ragResponse.Answer))
enhanced.RAGConfidence = ragResponse.Confidence
// Add source information to metadata
if len(ragResponse.Sources) > 0 {
sources := make([]string, len(ragResponse.Sources))
@@ -430,7 +428,7 @@ func (e *DefaultIntelligenceEngine) readFileContent(filePath string) ([]byte, er
func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
// Simple implementation - in reality this would be more sophisticated
cleanPath := filepath.Clean(filePath)
addr, err := ucxl.ParseAddress(fmt.Sprintf("file://%s", cleanPath))
addr, err := ucxl.Parse(fmt.Sprintf("file://%s", cleanPath))
if err != nil {
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
}
@@ -640,6 +638,10 @@ func DefaultEngineConfig() *EngineConfig {
RAGEndpoint: "",
RAGTimeout: 10 * time.Second,
RAGEnabled: false,
EnableRAG: false,
EnableGoalAlignment: false,
EnablePatternDetection: false,
EnableRoleAware: false,
MinConfidenceThreshold: 0.6,
RequireValidation: true,
CacheEnabled: true,
@@ -647,4 +649,4 @@ func DefaultEngineConfig() *EngineConfig {
RoleProfiles: make(map[string]*RoleProfile),
ProjectGoals: []*ProjectGoal{},
}
}

View File

@@ -1,3 +1,6 @@
//go:build integration
// +build integration
package intelligence
import (
@@ -13,12 +16,12 @@ import (
func TestIntelligenceEngine_Integration(t *testing.T) {
// Create test configuration
config := &EngineConfig{
EnableRAG: false, // Disable RAG for testing
EnableGoalAlignment: true,
EnablePatternDetection: true,
EnableRoleAware: true,
MaxConcurrentAnalysis: 2,
AnalysisTimeout: 30 * time.Second,
CacheTTL: 5 * time.Minute,
MinConfidenceThreshold: 0.5,
}
@@ -29,13 +32,13 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
// Create test context node
testNode := &slurpContext.ContextNode{
Path: "/test/example.go",
Summary: "A Go service implementing user authentication",
Purpose: "Handles user login and authentication for the web application",
Path: "/test/example.go",
Summary: "A Go service implementing user authentication",
Purpose: "Handles user login and authentication for the web application",
Technologies: []string{"go", "jwt", "bcrypt"},
Tags: []string{"authentication", "security", "web"},
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
Tags: []string{"authentication", "security", "web"},
GeneratedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Create test project goal
@@ -47,7 +50,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
Priority: 1,
Phase: "development",
Deadline: nil,
CreatedAt: time.Now(),
GeneratedAt: time.Now(),
}
t.Run("AnalyzeFile", func(t *testing.T) {
@@ -220,9 +223,9 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
ctx := context.Background()
tests := []struct {
name string
filename string
content []byte
expectedPattern string
}{
{
@@ -244,7 +247,7 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
},
{
name: "Go Factory Pattern",
filename: "factory.go",
filename: "factory.go",
content: []byte(`
package main
func NewUser(name string) *User {
@@ -312,7 +315,7 @@ func TestGoalAlignment_DimensionCalculators(t *testing.T) {
testNode := &slurpContext.ContextNode{
Path: "/test/auth.go",
Summary: "User authentication service with JWT tokens",
Purpose: "Handles user login and token generation",
Purpose: "Handles user login and token generation",
Technologies: []string{"go", "jwt", "bcrypt"},
Tags: []string{"authentication", "security"},
}
@@ -470,7 +473,7 @@ func TestRoleAwareProcessor_AccessControl(t *testing.T) {
hasAccess := err == nil
if hasAccess != tc.expected {
t.Errorf("Expected access %v for role %s, action %s, resource %s, got %v",
t.Errorf("Expected access %v for role %s, action %s, resource %s, got %v",
tc.expected, tc.roleID, tc.action, tc.resource, hasAccess)
}
})
@@ -491,7 +494,7 @@ func TestDirectoryAnalyzer_StructureAnalysis(t *testing.T) {
// Create test structure
testDirs := []string{
"src/main",
"src/lib",
"src/lib",
"test/unit",
"test/integration",
"docs/api",
@@ -504,7 +507,7 @@ func TestDirectoryAnalyzer_StructureAnalysis(t *testing.T) {
if err := os.MkdirAll(fullPath, 0755); err != nil {
t.Fatalf("Failed to create directory %s: %v", fullPath, err)
}
// Create a dummy file in each directory
testFile := filepath.Join(fullPath, "test.txt")
if err := os.WriteFile(testFile, []byte("test content"), 0644); err != nil {
@@ -652,7 +655,7 @@ func createTestContextNode(path, summary, purpose string, technologies, tags []s
Purpose: purpose,
Technologies: technologies,
Tags: tags,
CreatedAt: time.Now(),
GeneratedAt: time.Now(),
UpdatedAt: time.Now(),
}
}
@@ -665,7 +668,7 @@ func createTestProjectGoal(id, name, description string, keywords []string, prio
Keywords: keywords,
Priority: priority,
Phase: phase,
CreatedAt: time.Now(),
GeneratedAt: time.Now(),
}
}
@@ -697,4 +700,4 @@ func assertValidDimensionScore(t *testing.T, score *DimensionScore) {
if score.Confidence <= 0 || score.Confidence > 1 {
t.Errorf("Invalid confidence: %f", score.Confidence)
}
}

View File

@@ -1,7 +1,6 @@
package intelligence
import (
"bufio"
"bytes"
"context"
"fmt"
@@ -33,12 +32,12 @@ type CodeStructureAnalyzer struct {
// LanguagePatterns contains regex patterns for different language constructs
type LanguagePatterns struct {
Functions []*regexp.Regexp
Classes []*regexp.Regexp
Variables []*regexp.Regexp
Imports []*regexp.Regexp
Comments []*regexp.Regexp
TODOs []*regexp.Regexp
}
// MetadataExtractor extracts file system metadata
@@ -65,66 +64,66 @@ func NewLanguageDetector() *LanguageDetector {
// Map file extensions to languages
extensions := map[string]string{
".go": "go",
".py": "python",
".js": "javascript",
".jsx": "javascript",
".ts": "typescript",
".tsx": "typescript",
".java": "java",
".c": "c",
".cpp": "cpp",
".cc": "cpp",
".cxx": "cpp",
".h": "c",
".hpp": "cpp",
".cs": "csharp",
".php": "php",
".rb": "ruby",
".rs": "rust",
".kt": "kotlin",
".swift": "swift",
".m": "objective-c",
".mm": "objective-c",
".scala": "scala",
".clj": "clojure",
".hs": "haskell",
".ex": "elixir",
".exs": "elixir",
".erl": "erlang",
".lua": "lua",
".pl": "perl",
".r": "r",
".sh": "shell",
".bash": "shell",
".zsh": "shell",
".fish": "shell",
".sql": "sql",
".html": "html",
".htm": "html",
".css": "css",
".scss": "scss",
".sass": "sass",
".less": "less",
".xml": "xml",
".json": "json",
".yaml": "yaml",
".yml": "yaml",
".toml": "toml",
".ini": "ini",
".cfg": "ini",
".conf": "config",
".md": "markdown",
".rst": "rst",
".tex": "latex",
".proto": "protobuf",
".tf": "terraform",
".hcl": "hcl",
".dockerfile": "dockerfile",
".go": "go",
".py": "python",
".js": "javascript",
".jsx": "javascript",
".ts": "typescript",
".tsx": "typescript",
".java": "java",
".c": "c",
".cpp": "cpp",
".cc": "cpp",
".cxx": "cpp",
".h": "c",
".hpp": "cpp",
".cs": "csharp",
".php": "php",
".rb": "ruby",
".rs": "rust",
".kt": "kotlin",
".swift": "swift",
".m": "objective-c",
".mm": "objective-c",
".scala": "scala",
".clj": "clojure",
".hs": "haskell",
".ex": "elixir",
".exs": "elixir",
".erl": "erlang",
".lua": "lua",
".pl": "perl",
".r": "r",
".sh": "shell",
".bash": "shell",
".zsh": "shell",
".fish": "shell",
".sql": "sql",
".html": "html",
".htm": "html",
".css": "css",
".scss": "scss",
".sass": "sass",
".less": "less",
".xml": "xml",
".json": "json",
".yaml": "yaml",
".yml": "yaml",
".toml": "toml",
".ini": "ini",
".cfg": "ini",
".conf": "config",
".md": "markdown",
".rst": "rst",
".tex": "latex",
".proto": "protobuf",
".tf": "terraform",
".hcl": "hcl",
".dockerfile": "dockerfile",
".dockerignore": "dockerignore",
".gitignore": "gitignore",
".vim": "vim",
".emacs": "emacs",
".gitignore": "gitignore",
".vim": "vim",
".emacs": "emacs",
}
for ext, lang := range extensions {
@@ -383,11 +382,11 @@ func (fa *DefaultFileAnalyzer) AnalyzeContent(ctx context.Context, filePath stri
// DetectLanguage detects programming language from content and file extension
func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath string, content []byte) (string, float64, error) {
ext := strings.ToLower(filepath.Ext(filePath))
// First try extension-based detection
if lang, exists := fa.languageDetector.extensionMap[ext]; exists {
confidence := 0.8 // High confidence for extension-based detection
// Verify with content signatures
if signatures, hasSignatures := fa.languageDetector.signatureRegexs[lang]; hasSignatures {
matches := 0
@@ -396,7 +395,7 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
matches++
}
}
// Adjust confidence based on signature matches
if matches > 0 {
confidence = 0.9 + float64(matches)/float64(len(signatures))*0.1
@@ -404,14 +403,14 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
confidence = 0.6 // Lower confidence if no signatures match
}
}
return lang, confidence, nil
}
// Fall back to content-based detection
bestLang := "unknown"
bestScore := 0
for lang, signatures := range fa.languageDetector.signatureRegexs {
score := 0
for _, regex := range signatures {
@@ -419,7 +418,7 @@ func (fa *DefaultFileAnalyzer) DetectLanguage(ctx context.Context, filePath stri
score++
}
}
if score > bestScore {
bestScore = score
bestLang = lang
@@ -499,9 +498,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
filenameUpper := strings.ToUpper(filename)
// Configuration files
if strings.Contains(filenameUpper, "CONFIG") ||
strings.Contains(filenameUpper, "CONF") ||
analysis.FileType == ".ini" || analysis.FileType == ".toml" {
if strings.Contains(filenameUpper, "CONFIG") ||
strings.Contains(filenameUpper, "CONF") ||
analysis.FileType == ".ini" || analysis.FileType == ".toml" {
purpose = "Configuration management"
confidence = 0.9
return purpose, confidence, nil
@@ -509,9 +508,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// Test files
if strings.Contains(filenameUpper, "TEST") ||
strings.Contains(filenameUpper, "SPEC") ||
strings.HasSuffix(filenameUpper, "_TEST.GO") ||
strings.HasSuffix(filenameUpper, "_TEST.PY") {
strings.Contains(filenameUpper, "SPEC") ||
strings.HasSuffix(filenameUpper, "_TEST.GO") ||
strings.HasSuffix(filenameUpper, "_TEST.PY") {
purpose = "Testing and quality assurance"
confidence = 0.9
return purpose, confidence, nil
@@ -519,8 +518,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// Documentation files
if analysis.FileType == ".md" || analysis.FileType == ".rst" ||
strings.Contains(filenameUpper, "README") ||
strings.Contains(filenameUpper, "DOC") {
strings.Contains(filenameUpper, "README") ||
strings.Contains(filenameUpper, "DOC") {
purpose = "Documentation and guidance"
confidence = 0.9
return purpose, confidence, nil
@@ -528,8 +527,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// API files
if strings.Contains(filenameUpper, "API") ||
strings.Contains(filenameUpper, "ROUTER") ||
strings.Contains(filenameUpper, "HANDLER") {
strings.Contains(filenameUpper, "ROUTER") ||
strings.Contains(filenameUpper, "HANDLER") {
purpose = "API endpoint management"
confidence = 0.8
return purpose, confidence, nil
@@ -537,9 +536,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// Database files
if strings.Contains(filenameUpper, "DB") ||
strings.Contains(filenameUpper, "DATABASE") ||
strings.Contains(filenameUpper, "MODEL") ||
strings.Contains(filenameUpper, "SCHEMA") {
strings.Contains(filenameUpper, "DATABASE") ||
strings.Contains(filenameUpper, "MODEL") ||
strings.Contains(filenameUpper, "SCHEMA") {
purpose = "Data storage and management"
confidence = 0.8
return purpose, confidence, nil
@@ -547,9 +546,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// UI/Frontend files
if analysis.Language == "javascript" || analysis.Language == "typescript" ||
strings.Contains(filenameUpper, "COMPONENT") ||
strings.Contains(filenameUpper, "VIEW") ||
strings.Contains(filenameUpper, "UI") {
strings.Contains(filenameUpper, "COMPONENT") ||
strings.Contains(filenameUpper, "VIEW") ||
strings.Contains(filenameUpper, "UI") {
purpose = "User interface component"
confidence = 0.7
return purpose, confidence, nil
@@ -557,8 +556,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// Service/Business logic
if strings.Contains(filenameUpper, "SERVICE") ||
strings.Contains(filenameUpper, "BUSINESS") ||
strings.Contains(filenameUpper, "LOGIC") {
strings.Contains(filenameUpper, "BUSINESS") ||
strings.Contains(filenameUpper, "LOGIC") {
purpose = "Business logic implementation"
confidence = 0.7
return purpose, confidence, nil
@@ -566,8 +565,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// Utility files
if strings.Contains(filenameUpper, "UTIL") ||
strings.Contains(filenameUpper, "HELPER") ||
strings.Contains(filenameUpper, "COMMON") {
strings.Contains(filenameUpper, "HELPER") ||
strings.Contains(filenameUpper, "COMMON") {
purpose = "Utility and helper functions"
confidence = 0.7
return purpose, confidence, nil
@@ -591,7 +590,7 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
// GenerateSummary generates a concise summary of file content
func (fa *DefaultFileAnalyzer) GenerateSummary(ctx context.Context, analysis *FileAnalysis) (string, error) {
summary := strings.Builder{}
// Language and type
if analysis.Language != "unknown" {
summary.WriteString(fmt.Sprintf("%s", strings.Title(analysis.Language)))
@@ -643,23 +642,23 @@ func (fa *DefaultFileAnalyzer) ExtractTechnologies(ctx context.Context, analysis
// Extract from file patterns
filename := strings.ToLower(filepath.Base(analysis.FilePath))
// Framework detection
frameworks := map[string]string{
"react": "React",
"vue": "Vue.js",
"angular": "Angular",
"express": "Express.js",
"django": "Django",
"flask": "Flask",
"spring": "Spring",
"gin": "Gin",
"echo": "Echo",
"fastapi": "FastAPI",
"bootstrap": "Bootstrap",
"tailwind": "Tailwind CSS",
"material": "Material UI",
"antd": "Ant Design",
"react": "React",
"vue": "Vue.js",
"angular": "Angular",
"express": "Express.js",
"django": "Django",
"flask": "Flask",
"spring": "Spring",
"gin": "Gin",
"echo": "Echo",
"fastapi": "FastAPI",
"bootstrap": "Bootstrap",
"tailwind": "Tailwind CSS",
"material": "Material UI",
"antd": "Ant Design",
}
for pattern, tech := range frameworks {
@@ -778,7 +777,7 @@ func (fa *DefaultFileAnalyzer) analyzeCodeStructure(analysis *FileAnalysis, cont
func (fa *DefaultFileAnalyzer) calculateComplexity(analysis *FileAnalysis) float64 {
complexity := 0.0
// Base complexity from structure
complexity += float64(len(analysis.Functions)) * 1.5
complexity += float64(len(analysis.Classes)) * 2.0
@@ -799,7 +798,7 @@ func (fa *DefaultFileAnalyzer) calculateComplexity(analysis *FileAnalysis) float
func (fa *DefaultFileAnalyzer) analyzeArchitecturalPatterns(analysis *StructureAnalysis, content []byte, patterns *LanguagePatterns, language string) {
contentStr := string(content)
// Detect common architectural patterns
if strings.Contains(contentStr, "interface") && language == "go" {
analysis.Patterns = append(analysis.Patterns, "Interface Segregation")
@@ -813,7 +812,7 @@ func (fa *DefaultFileAnalyzer) analyzeArchitecturalPatterns(analysis *StructureA
if strings.Contains(contentStr, "Observer") {
analysis.Patterns = append(analysis.Patterns, "Observer Pattern")
}
// Architectural style detection
if strings.Contains(contentStr, "http.") || strings.Contains(contentStr, "router") {
analysis.Architecture = "REST API"
@@ -832,13 +831,13 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
// Technology mapping based on common imports
techMap := map[string]string{
// Go
"gin-gonic/gin": "Gin",
"labstack/echo": "Echo",
"gorilla/mux": "Gorilla Mux",
"gorm.io/gorm": "GORM",
"github.com/redis": "Redis",
"go.mongodb.org": "MongoDB",
"gin-gonic/gin": "Gin",
"labstack/echo": "Echo",
"gorilla/mux": "Gorilla Mux",
"gorm.io/gorm": "GORM",
"github.com/redis": "Redis",
"go.mongodb.org": "MongoDB",
// Python
"django": "Django",
"flask": "Flask",
@@ -849,15 +848,15 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
"numpy": "NumPy",
"tensorflow": "TensorFlow",
"torch": "PyTorch",
// JavaScript/TypeScript
"react": "React",
"vue": "Vue.js",
"angular": "Angular",
"express": "Express.js",
"axios": "Axios",
"lodash": "Lodash",
"moment": "Moment.js",
"react": "React",
"vue": "Vue.js",
"angular": "Angular",
"express": "Express.js",
"axios": "Axios",
"lodash": "Lodash",
"moment": "Moment.js",
"socket.io": "Socket.IO",
}
@@ -868,4 +867,4 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
}
return ""
}
}
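
For reviewers tracing the heuristics in the hunks above, a minimal standalone sketch of the filename-based purpose classification may help. This is a hypothetical illustration only: the rule table, confidences, and the `identifyPurpose` helper below are assumptions for the example, not the actual `DefaultFileAnalyzer.IdentifyPurpose` implementation.

```go
// Hypothetical sketch of the filename-keyword heuristic used for purpose
// identification; the keyword table and confidence values are illustrative,
// not copied from DefaultFileAnalyzer.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

type purposeRule struct {
	keywords   []string
	purpose    string
	confidence float64
}

// identifyPurpose checks the upper-cased filename against an ordered list of
// keyword rules and returns the first match, mirroring the early-return style
// seen in the diff above.
func identifyPurpose(path string) (string, float64) {
	name := strings.ToUpper(filepath.Base(path))
	rules := []purposeRule{
		{[]string{"CONFIG", "CONF"}, "Configuration management", 0.9},
		{[]string{"TEST", "SPEC"}, "Testing and quality assurance", 0.9},
		{[]string{"README", "DOC"}, "Documentation and guidance", 0.9},
		{[]string{"API", "ROUTER", "HANDLER"}, "API endpoint management", 0.8},
		{[]string{"UTIL", "HELPER", "COMMON"}, "Utility and helper functions", 0.7},
	}
	for _, rule := range rules {
		for _, kw := range rule.keywords {
			if strings.Contains(name, kw) {
				return rule.purpose, rule.confidence
			}
		}
	}
	return "General implementation", 0.5
}

func main() {
	purpose, conf := identifyPurpose("internal/server/user_handler.go")
	fmt.Printf("%s (%.1f)\n", purpose, conf) // API endpoint management (0.8)
}
```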

View File

@@ -8,80 +8,79 @@ import (
"sync"
"time"
"chorus/pkg/crypto"
slurpContext "chorus/pkg/slurp/context"
)
// RoleAwareProcessor provides role-based context processing and insight generation
type RoleAwareProcessor struct {
mu sync.RWMutex
config *EngineConfig
roleManager *RoleManager
securityFilter *SecurityFilter
insightGenerator *InsightGenerator
accessController *AccessController
auditLogger *AuditLogger
permissions *PermissionMatrix
roleProfiles map[string]*RoleProfile
mu sync.RWMutex
config *EngineConfig
roleManager *RoleManager
securityFilter *SecurityFilter
insightGenerator *InsightGenerator
accessController *AccessController
auditLogger *AuditLogger
permissions *PermissionMatrix
roleProfiles map[string]*RoleBlueprint
}
// RoleManager manages role definitions and hierarchies
type RoleManager struct {
roles map[string]*Role
hierarchies map[string]*RoleHierarchy
capabilities map[string]*RoleCapabilities
restrictions map[string]*RoleRestrictions
roles map[string]*Role
hierarchies map[string]*RoleHierarchy
capabilities map[string]*RoleCapabilities
restrictions map[string]*RoleRestrictions
}
// Role represents an AI agent role with specific permissions and capabilities
type Role struct {
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
SecurityLevel int `json:"security_level"`
Capabilities []string `json:"capabilities"`
Restrictions []string `json:"restrictions"`
AccessPatterns []string `json:"access_patterns"`
ContextFilters []string `json:"context_filters"`
Priority int `json:"priority"`
ParentRoles []string `json:"parent_roles"`
ChildRoles []string `json:"child_roles"`
Metadata map[string]interface{} `json:"metadata"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
IsActive bool `json:"is_active"`
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
SecurityLevel int `json:"security_level"`
Capabilities []string `json:"capabilities"`
Restrictions []string `json:"restrictions"`
AccessPatterns []string `json:"access_patterns"`
ContextFilters []string `json:"context_filters"`
Priority int `json:"priority"`
ParentRoles []string `json:"parent_roles"`
ChildRoles []string `json:"child_roles"`
Metadata map[string]interface{} `json:"metadata"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
IsActive bool `json:"is_active"`
}
// RoleHierarchy defines role inheritance and relationships
type RoleHierarchy struct {
ParentRole string `json:"parent_role"`
ChildRoles []string `json:"child_roles"`
InheritLevel int `json:"inherit_level"`
OverrideRules []string `json:"override_rules"`
ParentRole string `json:"parent_role"`
ChildRoles []string `json:"child_roles"`
InheritLevel int `json:"inherit_level"`
OverrideRules []string `json:"override_rules"`
}
// RoleCapabilities defines what a role can do
type RoleCapabilities struct {
RoleID string `json:"role_id"`
ReadAccess []string `json:"read_access"`
WriteAccess []string `json:"write_access"`
ExecuteAccess []string `json:"execute_access"`
AnalysisTypes []string `json:"analysis_types"`
InsightLevels []string `json:"insight_levels"`
SecurityScopes []string `json:"security_scopes"`
RoleID string `json:"role_id"`
ReadAccess []string `json:"read_access"`
WriteAccess []string `json:"write_access"`
ExecuteAccess []string `json:"execute_access"`
AnalysisTypes []string `json:"analysis_types"`
InsightLevels []string `json:"insight_levels"`
SecurityScopes []string `json:"security_scopes"`
DataClassifications []string `json:"data_classifications"`
}
// RoleRestrictions defines what a role cannot do or access
type RoleRestrictions struct {
RoleID string `json:"role_id"`
ForbiddenPaths []string `json:"forbidden_paths"`
ForbiddenTypes []string `json:"forbidden_types"`
ForbiddenKeywords []string `json:"forbidden_keywords"`
TimeRestrictions []string `json:"time_restrictions"`
RateLimit *RateLimit `json:"rate_limit"`
MaxContextSize int `json:"max_context_size"`
MaxInsights int `json:"max_insights"`
RoleID string `json:"role_id"`
ForbiddenPaths []string `json:"forbidden_paths"`
ForbiddenTypes []string `json:"forbidden_types"`
ForbiddenKeywords []string `json:"forbidden_keywords"`
TimeRestrictions []string `json:"time_restrictions"`
RateLimit *RateLimit `json:"rate_limit"`
MaxContextSize int `json:"max_context_size"`
MaxInsights int `json:"max_insights"`
}
// RateLimit defines rate limiting for role operations
@@ -111,9 +110,9 @@ type ContentFilter struct {
// AccessMatrix defines access control rules
type AccessMatrix struct {
Rules map[string]*AccessRule `json:"rules"`
DefaultDeny bool `json:"default_deny"`
LastUpdated time.Time `json:"last_updated"`
Rules map[string]*AccessRule `json:"rules"`
DefaultDeny bool `json:"default_deny"`
LastUpdated time.Time `json:"last_updated"`
}
// AccessRule defines a specific access control rule
@@ -144,14 +143,14 @@ type RoleInsightGenerator interface {
// InsightTemplate defines templates for generating insights
type InsightTemplate struct {
TemplateID string `json:"template_id"`
Name string `json:"name"`
Template string `json:"template"`
Variables []string `json:"variables"`
Roles []string `json:"roles"`
Category string `json:"category"`
Priority int `json:"priority"`
Metadata map[string]interface{} `json:"metadata"`
TemplateID string `json:"template_id"`
Name string `json:"name"`
Template string `json:"template"`
Variables []string `json:"variables"`
Roles []string `json:"roles"`
Category string `json:"category"`
Priority int `json:"priority"`
Metadata map[string]interface{} `json:"metadata"`
}
// InsightFilter filters insights based on role permissions
@@ -179,39 +178,39 @@ type PermissionMatrix struct {
// RolePermissions defines permissions for a specific role
type RolePermissions struct {
RoleID string `json:"role_id"`
ContextAccess *ContextAccessRights `json:"context_access"`
AnalysisAccess *AnalysisAccessRights `json:"analysis_access"`
InsightAccess *InsightAccessRights `json:"insight_access"`
SystemAccess *SystemAccessRights `json:"system_access"`
CustomAccess map[string]interface{} `json:"custom_access"`
RoleID string `json:"role_id"`
ContextAccess *ContextAccessRights `json:"context_access"`
AnalysisAccess *AnalysisAccessRights `json:"analysis_access"`
InsightAccess *InsightAccessRights `json:"insight_access"`
SystemAccess *SystemAccessRights `json:"system_access"`
CustomAccess map[string]interface{} `json:"custom_access"`
}
// ContextAccessRights defines context-related access rights
type ContextAccessRights struct {
ReadLevel int `json:"read_level"`
WriteLevel int `json:"write_level"`
AllowedTypes []string `json:"allowed_types"`
ForbiddenTypes []string `json:"forbidden_types"`
ReadLevel int `json:"read_level"`
WriteLevel int `json:"write_level"`
AllowedTypes []string `json:"allowed_types"`
ForbiddenTypes []string `json:"forbidden_types"`
PathRestrictions []string `json:"path_restrictions"`
SizeLimit int `json:"size_limit"`
SizeLimit int `json:"size_limit"`
}
// AnalysisAccessRights defines analysis-related access rights
type AnalysisAccessRights struct {
AllowedAnalysisTypes []string `json:"allowed_analysis_types"`
MaxComplexity int `json:"max_complexity"`
AllowedAnalysisTypes []string `json:"allowed_analysis_types"`
MaxComplexity int `json:"max_complexity"`
TimeoutLimit time.Duration `json:"timeout_limit"`
ResourceLimit int `json:"resource_limit"`
ResourceLimit int `json:"resource_limit"`
}
// InsightAccessRights defines insight-related access rights
type InsightAccessRights struct {
GenerationLevel int `json:"generation_level"`
AccessLevel int `json:"access_level"`
CategoryFilters []string `json:"category_filters"`
ConfidenceThreshold float64 `json:"confidence_threshold"`
MaxInsights int `json:"max_insights"`
GenerationLevel int `json:"generation_level"`
AccessLevel int `json:"access_level"`
CategoryFilters []string `json:"category_filters"`
ConfidenceThreshold float64 `json:"confidence_threshold"`
MaxInsights int `json:"max_insights"`
}
// SystemAccessRights defines system-level access rights
@@ -254,15 +253,15 @@ type AuditLogger struct {
// AuditEntry represents an audit log entry
type AuditEntry struct {
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
RoleID string `json:"role_id"`
Action string `json:"action"`
Resource string `json:"resource"`
Result string `json:"result"` // success, denied, error
Details string `json:"details"`
Context map[string]interface{} `json:"context"`
SecurityLevel int `json:"security_level"`
ID string `json:"id"`
Timestamp time.Time `json:"timestamp"`
RoleID string `json:"role_id"`
Action string `json:"action"`
Resource string `json:"resource"`
Result string `json:"result"` // success, denied, error
Details string `json:"details"`
Context map[string]interface{} `json:"context"`
SecurityLevel int `json:"security_level"`
}
// AuditConfig defines audit logging configuration
@@ -276,49 +275,49 @@ type AuditConfig struct {
}
// RoleProfile contains comprehensive role configuration
type RoleProfile struct {
Role *Role `json:"role"`
Capabilities *RoleCapabilities `json:"capabilities"`
Restrictions *RoleRestrictions `json:"restrictions"`
Permissions *RolePermissions `json:"permissions"`
InsightConfig *RoleInsightConfig `json:"insight_config"`
SecurityConfig *RoleSecurityConfig `json:"security_config"`
type RoleBlueprint struct {
Role *Role `json:"role"`
Capabilities *RoleCapabilities `json:"capabilities"`
Restrictions *RoleRestrictions `json:"restrictions"`
Permissions *RolePermissions `json:"permissions"`
InsightConfig *RoleInsightConfig `json:"insight_config"`
SecurityConfig *RoleSecurityConfig `json:"security_config"`
}
// RoleInsightConfig defines insight generation configuration for a role
type RoleInsightConfig struct {
EnabledGenerators []string `json:"enabled_generators"`
MaxInsights int `json:"max_insights"`
ConfidenceThreshold float64 `json:"confidence_threshold"`
CategoryWeights map[string]float64 `json:"category_weights"`
CustomFilters []string `json:"custom_filters"`
EnabledGenerators []string `json:"enabled_generators"`
MaxInsights int `json:"max_insights"`
ConfidenceThreshold float64 `json:"confidence_threshold"`
CategoryWeights map[string]float64 `json:"category_weights"`
CustomFilters []string `json:"custom_filters"`
}
// RoleSecurityConfig defines security configuration for a role
type RoleSecurityConfig struct {
EncryptionRequired bool `json:"encryption_required"`
AccessLogging bool `json:"access_logging"`
EncryptionRequired bool `json:"encryption_required"`
AccessLogging bool `json:"access_logging"`
RateLimit *RateLimit `json:"rate_limit"`
IPWhitelist []string `json:"ip_whitelist"`
RequiredClaims []string `json:"required_claims"`
IPWhitelist []string `json:"ip_whitelist"`
RequiredClaims []string `json:"required_claims"`
}
// RoleSpecificInsight represents an insight tailored to a specific role
type RoleSpecificInsight struct {
ID string `json:"id"`
RoleID string `json:"role_id"`
Category string `json:"category"`
Title string `json:"title"`
Content string `json:"content"`
Confidence float64 `json:"confidence"`
Priority int `json:"priority"`
SecurityLevel int `json:"security_level"`
Tags []string `json:"tags"`
ActionItems []string `json:"action_items"`
References []string `json:"references"`
Metadata map[string]interface{} `json:"metadata"`
GeneratedAt time.Time `json:"generated_at"`
ExpiresAt *time.Time `json:"expires_at,omitempty"`
ID string `json:"id"`
RoleID string `json:"role_id"`
Category string `json:"category"`
Title string `json:"title"`
Content string `json:"content"`
Confidence float64 `json:"confidence"`
Priority int `json:"priority"`
SecurityLevel int `json:"security_level"`
Tags []string `json:"tags"`
ActionItems []string `json:"action_items"`
References []string `json:"references"`
Metadata map[string]interface{} `json:"metadata"`
GeneratedAt time.Time `json:"generated_at"`
ExpiresAt *time.Time `json:"expires_at,omitempty"`
}
// NewRoleAwareProcessor creates a new role-aware processor
@@ -331,7 +330,7 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
accessController: NewAccessController(),
auditLogger: NewAuditLogger(),
permissions: NewPermissionMatrix(),
roleProfiles: make(map[string]*RoleProfile),
roleProfiles: make(map[string]*RoleBlueprint),
}
// Initialize default roles
@@ -342,10 +341,10 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
// NewRoleManager creates a role manager with default roles
func NewRoleManager() *RoleManager {
rm := &RoleManager{
roles: make(map[string]*Role),
hierarchies: make(map[string]*RoleHierarchy),
capabilities: make(map[string]*RoleCapabilities),
restrictions: make(map[string]*RoleRestrictions),
roles: make(map[string]*Role),
hierarchies: make(map[string]*RoleHierarchy),
capabilities: make(map[string]*RoleCapabilities),
restrictions: make(map[string]*RoleRestrictions),
}
// Initialize with default roles
@@ -383,12 +382,15 @@ func (rap *RoleAwareProcessor) ProcessContextForRole(ctx context.Context, node *
// Apply insights to node
if len(insights) > 0 {
filteredNode.RoleSpecificInsights = insights
filteredNode.ProcessedForRole = roleID
if filteredNode.Metadata == nil {
filteredNode.Metadata = make(map[string]interface{})
}
filteredNode.Metadata["role_specific_insights"] = insights
filteredNode.Metadata["processed_for_role"] = roleID
}
// Log successful processing
rap.auditLogger.logAccess(roleID, "context:process", node.Path, "success",
rap.auditLogger.logAccess(roleID, "context:process", node.Path, "success",
fmt.Sprintf("processed with %d insights", len(insights)))
return filteredNode, nil
@@ -413,7 +415,7 @@ func (rap *RoleAwareProcessor) GenerateRoleSpecificInsights(ctx context.Context,
return nil, err
}
rap.auditLogger.logAccess(roleID, "insight:generate", node.Path, "success",
rap.auditLogger.logAccess(roleID, "insight:generate", node.Path, "success",
fmt.Sprintf("generated %d insights", len(insights)))
return insights, nil
@@ -448,69 +450,69 @@ func (rap *RoleAwareProcessor) GetRoleCapabilities(roleID string) (*RoleCapabili
func (rap *RoleAwareProcessor) initializeDefaultRoles() {
defaultRoles := []*Role{
{
ID: "architect",
Name: "System Architect",
Description: "High-level system design and architecture decisions",
SecurityLevel: 8,
Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"},
Restrictions: []string{"no_implementation_details", "no_low_level_code"},
ID: "architect",
Name: "System Architect",
Description: "High-level system design and architecture decisions",
SecurityLevel: 8,
Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"},
Restrictions: []string{"no_implementation_details", "no_low_level_code"},
AccessPatterns: []string{"architecture/**", "design/**", "docs/**"},
Priority: 1,
IsActive: true,
CreatedAt: time.Now(),
Priority: 1,
IsActive: true,
CreatedAt: time.Now(),
},
{
ID: "developer",
Name: "Software Developer",
Description: "Code implementation and development tasks",
SecurityLevel: 6,
Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"},
Restrictions: []string{"no_architecture_changes", "no_security_config"},
ID: "developer",
Name: "Software Developer",
Description: "Code implementation and development tasks",
SecurityLevel: 6,
Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"},
Restrictions: []string{"no_architecture_changes", "no_security_config"},
AccessPatterns: []string{"src/**", "lib/**", "test/**"},
Priority: 2,
IsActive: true,
CreatedAt: time.Now(),
Priority: 2,
IsActive: true,
CreatedAt: time.Now(),
},
{
ID: "security_analyst",
Name: "Security Analyst",
Description: "Security analysis and vulnerability assessment",
SecurityLevel: 9,
Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
Restrictions: []string{"no_code_modification"},
ID: "security_analyst",
Name: "Security Analyst",
Description: "Security analysis and vulnerability assessment",
SecurityLevel: 9,
Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
Restrictions: []string{"no_code_modification"},
AccessPatterns: []string{"**/*"},
Priority: 1,
IsActive: true,
CreatedAt: time.Now(),
Priority: 1,
IsActive: true,
CreatedAt: time.Now(),
},
{
ID: "devops_engineer",
Name: "DevOps Engineer",
Description: "Infrastructure and deployment operations",
SecurityLevel: 7,
Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
Restrictions: []string{"no_business_logic"},
ID: "devops_engineer",
Name: "DevOps Engineer",
Description: "Infrastructure and deployment operations",
SecurityLevel: 7,
Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
Restrictions: []string{"no_business_logic"},
AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"},
Priority: 2,
IsActive: true,
CreatedAt: time.Now(),
Priority: 2,
IsActive: true,
CreatedAt: time.Now(),
},
{
ID: "qa_engineer",
Name: "Quality Assurance Engineer",
Description: "Quality assurance and testing",
SecurityLevel: 5,
Capabilities: []string{"quality_analysis", "testing", "test_planning"},
Restrictions: []string{"no_production_access", "no_code_modification"},
ID: "qa_engineer",
Name: "Quality Assurance Engineer",
Description: "Quality assurance and testing",
SecurityLevel: 5,
Capabilities: []string{"quality_analysis", "testing", "test_planning"},
Restrictions: []string{"no_production_access", "no_code_modification"},
AccessPatterns: []string{"test/**", "spec/**", "qa/**"},
Priority: 3,
IsActive: true,
CreatedAt: time.Now(),
Priority: 3,
IsActive: true,
CreatedAt: time.Now(),
},
}
for _, role := range defaultRoles {
rap.roleProfiles[role.ID] = &RoleProfile{
rap.roleProfiles[role.ID] = &RoleBlueprint{
Role: role,
Capabilities: rap.createDefaultCapabilities(role),
Restrictions: rap.createDefaultRestrictions(role),
@@ -540,23 +542,23 @@ func (rap *RoleAwareProcessor) createDefaultCapabilities(role *Role) *RoleCapabi
baseCapabilities.ExecuteAccess = []string{"design_tools", "modeling"}
baseCapabilities.InsightLevels = []string{"strategic", "architectural", "high_level"}
baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential"}
case "developer":
baseCapabilities.WriteAccess = []string{"src/**", "test/**"}
baseCapabilities.ExecuteAccess = []string{"compile", "test", "debug"}
baseCapabilities.InsightLevels = []string{"implementation", "code_quality", "performance"}
case "security_analyst":
baseCapabilities.ReadAccess = []string{"**/*"}
baseCapabilities.InsightLevels = []string{"security", "vulnerability", "compliance"}
baseCapabilities.SecurityScopes = []string{"public", "internal", "confidential", "secret"}
baseCapabilities.DataClassifications = []string{"public", "internal", "confidential", "restricted"}
case "devops_engineer":
baseCapabilities.WriteAccess = []string{"infra/**", "deploy/**", "config/**"}
baseCapabilities.ExecuteAccess = []string{"deploy", "configure", "monitor"}
baseCapabilities.InsightLevels = []string{"infrastructure", "deployment", "monitoring"}
case "qa_engineer":
baseCapabilities.WriteAccess = []string{"test/**", "qa/**"}
baseCapabilities.ExecuteAccess = []string{"test", "validate"}
@@ -587,21 +589,21 @@ func (rap *RoleAwareProcessor) createDefaultRestrictions(role *Role) *RoleRestri
// Architects have fewer restrictions
baseRestrictions.MaxContextSize = 50000
baseRestrictions.MaxInsights = 100
case "developer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "architecture/**", "security/**")
baseRestrictions.ForbiddenTypes = []string{"security_config", "deployment_config"}
case "security_analyst":
// Security analysts have minimal path restrictions but keyword restrictions
baseRestrictions.ForbiddenPaths = []string{"temp/**"}
baseRestrictions.ForbiddenKeywords = []string{"password", "secret", "key"}
baseRestrictions.MaxContextSize = 100000
case "devops_engineer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**")
baseRestrictions.ForbiddenTypes = []string{"business_logic", "user_data"}
case "qa_engineer":
baseRestrictions.ForbiddenPaths = append(baseRestrictions.ForbiddenPaths, "src/**", "infra/**")
baseRestrictions.ForbiddenTypes = []string{"production_config", "security_config"}
@@ -615,10 +617,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
return &RolePermissions{
RoleID: role.ID,
ContextAccess: &ContextAccessRights{
ReadLevel: role.SecurityLevel,
WriteLevel: role.SecurityLevel - 2,
AllowedTypes: []string{"code", "documentation", "configuration"},
SizeLimit: 1000000,
ReadLevel: role.SecurityLevel,
WriteLevel: role.SecurityLevel - 2,
AllowedTypes: []string{"code", "documentation", "configuration"},
SizeLimit: 1000000,
},
AnalysisAccess: &AnalysisAccessRights{
AllowedAnalysisTypes: role.Capabilities,
@@ -627,10 +629,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
ResourceLimit: 100,
},
InsightAccess: &InsightAccessRights{
GenerationLevel: role.SecurityLevel,
AccessLevel: role.SecurityLevel,
ConfidenceThreshold: 0.5,
MaxInsights: 50,
GenerationLevel: role.SecurityLevel,
AccessLevel: role.SecurityLevel,
ConfidenceThreshold: 0.5,
MaxInsights: 50,
},
SystemAccess: &SystemAccessRights{
AdminAccess: role.SecurityLevel >= 8,
@@ -660,26 +662,26 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
"scalability": 0.9,
}
config.MaxInsights = 100
case "developer":
config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"}
config.CategoryWeights = map[string]float64{
"code_quality": 1.0,
"implementation": 0.9,
"bugs": 0.8,
"performance": 0.6,
"code_quality": 1.0,
"implementation": 0.9,
"bugs": 0.8,
"performance": 0.6,
}
case "security_analyst":
config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"}
config.CategoryWeights = map[string]float64{
"security": 1.0,
"security": 1.0,
"vulnerabilities": 1.0,
"compliance": 0.9,
"privacy": 0.8,
"compliance": 0.9,
"privacy": 0.8,
}
config.MaxInsights = 200
case "devops_engineer":
config.EnabledGenerators = []string{"infrastructure_insights", "deployment_analysis", "monitoring_suggestions"}
config.CategoryWeights = map[string]float64{
@@ -688,7 +690,7 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
"monitoring": 0.8,
"automation": 0.7,
}
case "qa_engineer":
config.EnabledGenerators = []string{"quality_insights", "test_suggestions", "validation_analysis"}
config.CategoryWeights = map[string]float64{
@@ -751,7 +753,7 @@ func NewSecurityFilter() *SecurityFilter {
"top_secret": 10,
},
contentFilters: make(map[string]*ContentFilter),
accessMatrix: &AccessMatrix{
accessMatrix: &AccessMatrix{
Rules: make(map[string]*AccessRule),
DefaultDeny: true,
LastUpdated: time.Now(),
@@ -765,7 +767,7 @@ func (sf *SecurityFilter) filterForRole(node *slurpContext.ContextNode, role *Ro
// Apply content filtering based on role security level
filtered.Summary = sf.filterContent(node.Summary, role)
filtered.Purpose = sf.filterContent(node.Purpose, role)
// Filter insights based on role access level
filteredInsights := []string{}
for _, insight := range node.Insights {
@@ -816,7 +818,7 @@ func (sf *SecurityFilter) filterContent(content string, role *Role) string {
func (sf *SecurityFilter) canAccessInsight(insight string, role *Role) bool {
// Check if role can access this type of insight
lowerInsight := strings.ToLower(insight)
// Security analysts can see all insights
if role.ID == "security_analyst" {
return true
@@ -849,20 +851,20 @@ func (sf *SecurityFilter) canAccessInsight(insight string, role *Role) bool {
func (sf *SecurityFilter) filterTechnologies(technologies []string, role *Role) []string {
filtered := []string{}
for _, tech := range technologies {
if sf.canAccessTechnology(tech, role) {
filtered = append(filtered, tech)
}
}
return filtered
}
func (sf *SecurityFilter) canAccessTechnology(technology string, role *Role) bool {
// Role-specific technology access rules
lowerTech := strings.ToLower(technology)
switch role.ID {
case "qa_engineer":
// QA engineers shouldn't see infrastructure technologies
@@ -881,26 +883,26 @@ func (sf *SecurityFilter) canAccessTechnology(technology string, role *Role) boo
}
}
}
return true
}
func (sf *SecurityFilter) filterTags(tags []string, role *Role) []string {
filtered := []string{}
for _, tag := range tags {
if sf.canAccessTag(tag, role) {
filtered = append(filtered, tag)
}
}
return filtered
}
func (sf *SecurityFilter) canAccessTag(tag string, role *Role) bool {
// Simple tag filtering based on role
lowerTag := strings.ToLower(tag)
// Security-related tags only for security analysts and architects
securityTags := []string{"security", "vulnerability", "encryption", "authentication"}
for _, secTag := range securityTags {
@@ -908,7 +910,7 @@ func (sf *SecurityFilter) canAccessTag(tag string, role *Role) bool {
return false
}
}
return true
}
@@ -968,7 +970,7 @@ func (ig *InsightGenerator) generateForRole(ctx context.Context, node *slurpCont
func (ig *InsightGenerator) applyRoleFilters(insights []*RoleSpecificInsight, role *Role) []*RoleSpecificInsight {
filtered := []*RoleSpecificInsight{}
for _, insight := range insights {
// Check security level
if insight.SecurityLevel > role.SecurityLevel {
@@ -1174,6 +1176,7 @@ func (al *AuditLogger) GetAuditLog(limit int) []*AuditEntry {
// These would be fully implemented with sophisticated logic in production
type ArchitectInsightGenerator struct{}
func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1191,10 +1194,15 @@ func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node
}, nil
}
func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string { return []string{"architecture", "design", "patterns"} }
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string {
return []string{"architecture", "design", "patterns"}
}
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type DeveloperInsightGenerator struct{}
func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1212,10 +1220,15 @@ func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node
}, nil
}
func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string { return []string{"code_quality", "implementation", "bugs"} }
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string {
return []string{"code_quality", "implementation", "bugs"}
}
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type SecurityInsightGenerator struct{}
func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1232,11 +1245,18 @@ func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node
},
}, nil
}
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string { return []string{"security_analyst"} }
func (sig *SecurityInsightGenerator) GetInsightTypes() []string { return []string{"security", "vulnerability", "compliance"} }
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string {
return []string{"security_analyst"}
}
func (sig *SecurityInsightGenerator) GetInsightTypes() []string {
return []string{"security", "vulnerability", "compliance"}
}
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type DevOpsInsightGenerator struct{}
func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1254,10 +1274,15 @@ func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *
}, nil
}
func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string { return []string{"infrastructure", "deployment", "monitoring"} }
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string {
return []string{"infrastructure", "deployment", "monitoring"}
}
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
type QAInsightGenerator struct{}
func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
return []*RoleSpecificInsight{
@@ -1275,5 +1300,9 @@ func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slur
}, nil
}
func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
func (qaig *QAInsightGenerator) GetInsightTypes() []string { return []string{"quality", "testing", "validation"} }
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
func (qaig *QAInsightGenerator) GetInsightTypes() []string {
return []string{"quality", "testing", "validation"}
}
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
return nil
}
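
A brief, self-contained sketch of the per-role generator fan-out that the stubs above (ArchitectInsightGenerator, DeveloperInsightGenerator, and peers) hint at. All types and the dispatch logic below are simplified assumptions for illustration; they are not the RoleAwareProcessor API, and RoleSpecificInsight is reduced to a few fields.

```go
// Hypothetical fan-out: pick generators that support a role, then drop any
// insight above the role's security level, as the filter/audit path suggests.
package main

import "fmt"

type insight struct {
	Category      string
	Title         string
	SecurityLevel int
}

type role struct {
	ID            string
	SecurityLevel int
}

// insightGenerator mirrors the shape of the role-specific generator stubs.
type insightGenerator interface {
	SupportedRoles() []string
	Generate(r role) []insight
}

type securityGen struct{}

func (securityGen) SupportedRoles() []string { return []string{"security_analyst"} }
func (securityGen) Generate(r role) []insight {
	return []insight{{Category: "security", Title: "Review authentication paths", SecurityLevel: 9}}
}

type qaGen struct{}

func (qaGen) SupportedRoles() []string { return []string{"qa_engineer"} }
func (qaGen) Generate(r role) []insight {
	return []insight{{Category: "testing", Title: "Add regression tests", SecurityLevel: 3}}
}

// insightsForRole runs only the generators registered for the role and keeps
// insights at or below the role's clearance.
func insightsForRole(r role, gens []insightGenerator) []insight {
	var out []insight
	for _, g := range gens {
		for _, id := range g.SupportedRoles() {
			if id != r.ID {
				continue
			}
			for _, ins := range g.Generate(r) {
				if ins.SecurityLevel <= r.SecurityLevel {
					out = append(out, ins)
				}
			}
		}
	}
	return out
}

func main() {
	qa := role{ID: "qa_engineer", SecurityLevel: 5}
	fmt.Println(insightsForRole(qa, []insightGenerator{securityGen{}, qaGen{}}))
}
```

Under these assumptions, a qa_engineer would receive only the testing insight; the security insight is filtered out by the level check rather than by the generator itself.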

View File

@@ -6,236 +6,236 @@ import (
// FileMetadata represents metadata extracted from file system
type FileMetadata struct {
Path string `json:"path"` // File path
Size int64 `json:"size"` // File size in bytes
ModTime time.Time `json:"mod_time"` // Last modification time
Mode uint32 `json:"mode"` // File mode
IsDir bool `json:"is_dir"` // Whether it's a directory
Extension string `json:"extension"` // File extension
MimeType string `json:"mime_type"` // MIME type
Hash string `json:"hash"` // Content hash
Permissions string `json:"permissions"` // File permissions
Path string `json:"path"` // File path
Size int64 `json:"size"` // File size in bytes
ModTime time.Time `json:"mod_time"` // Last modification time
Mode uint32 `json:"mode"` // File mode
IsDir bool `json:"is_dir"` // Whether it's a directory
Extension string `json:"extension"` // File extension
MimeType string `json:"mime_type"` // MIME type
Hash string `json:"hash"` // Content hash
Permissions string `json:"permissions"` // File permissions
}
// StructureAnalysis represents analysis of code structure
type StructureAnalysis struct {
Architecture string `json:"architecture"` // Architectural pattern
Patterns []string `json:"patterns"` // Design patterns used
Components []*Component `json:"components"` // Code components
Relationships []*Relationship `json:"relationships"` // Component relationships
Complexity *ComplexityMetrics `json:"complexity"` // Complexity metrics
QualityMetrics *QualityMetrics `json:"quality_metrics"` // Code quality metrics
TestCoverage float64 `json:"test_coverage"` // Test coverage percentage
Documentation *DocMetrics `json:"documentation"` // Documentation metrics
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
Architecture string `json:"architecture"` // Architectural pattern
Patterns []string `json:"patterns"` // Design patterns used
Components []*Component `json:"components"` // Code components
Relationships []*Relationship `json:"relationships"` // Component relationships
Complexity *ComplexityMetrics `json:"complexity"` // Complexity metrics
QualityMetrics *QualityMetrics `json:"quality_metrics"` // Code quality metrics
TestCoverage float64 `json:"test_coverage"` // Test coverage percentage
Documentation *DocMetrics `json:"documentation"` // Documentation metrics
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// Component represents a code component
type Component struct {
Name string `json:"name"` // Component name
Type string `json:"type"` // Component type (class, function, etc.)
Purpose string `json:"purpose"` // Component purpose
Visibility string `json:"visibility"` // Visibility (public, private, etc.)
Lines int `json:"lines"` // Lines of code
Complexity int `json:"complexity"` // Cyclomatic complexity
Dependencies []string `json:"dependencies"` // Dependencies
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
Name string `json:"name"` // Component name
Type string `json:"type"` // Component type (class, function, etc.)
Purpose string `json:"purpose"` // Component purpose
Visibility string `json:"visibility"` // Visibility (public, private, etc.)
Lines int `json:"lines"` // Lines of code
Complexity int `json:"complexity"` // Cyclomatic complexity
Dependencies []string `json:"dependencies"` // Dependencies
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// Relationship represents a relationship between components
type Relationship struct {
From string `json:"from"` // Source component
To string `json:"to"` // Target component
Type string `json:"type"` // Relationship type
Strength float64 `json:"strength"` // Relationship strength (0-1)
Direction string `json:"direction"` // Direction (unidirectional, bidirectional)
Description string `json:"description"` // Relationship description
From string `json:"from"` // Source component
To string `json:"to"` // Target component
Type string `json:"type"` // Relationship type
Strength float64 `json:"strength"` // Relationship strength (0-1)
Direction string `json:"direction"` // Direction (unidirectional, bidirectional)
Description string `json:"description"` // Relationship description
}
// ComplexityMetrics represents code complexity metrics
type ComplexityMetrics struct {
Cyclomatic float64 `json:"cyclomatic"` // Cyclomatic complexity
Cognitive float64 `json:"cognitive"` // Cognitive complexity
Halstead float64 `json:"halstead"` // Halstead complexity
Maintainability float64 `json:"maintainability"` // Maintainability index
TechnicalDebt float64 `json:"technical_debt"` // Technical debt estimate
Cyclomatic float64 `json:"cyclomatic"` // Cyclomatic complexity
Cognitive float64 `json:"cognitive"` // Cognitive complexity
Halstead float64 `json:"halstead"` // Halstead complexity
Maintainability float64 `json:"maintainability"` // Maintainability index
TechnicalDebt float64 `json:"technical_debt"` // Technical debt estimate
}
// QualityMetrics represents code quality metrics
type QualityMetrics struct {
Readability float64 `json:"readability"` // Readability score
Testability float64 `json:"testability"` // Testability score
Reusability float64 `json:"reusability"` // Reusability score
Reliability float64 `json:"reliability"` // Reliability score
Security float64 `json:"security"` // Security score
Performance float64 `json:"performance"` // Performance score
Duplication float64 `json:"duplication"` // Code duplication percentage
Consistency float64 `json:"consistency"` // Code consistency score
Readability float64 `json:"readability"` // Readability score
Testability float64 `json:"testability"` // Testability score
Reusability float64 `json:"reusability"` // Reusability score
Reliability float64 `json:"reliability"` // Reliability score
Security float64 `json:"security"` // Security score
Performance float64 `json:"performance"` // Performance score
Duplication float64 `json:"duplication"` // Code duplication percentage
Consistency float64 `json:"consistency"` // Code consistency score
}
// DocMetrics represents documentation metrics
type DocMetrics struct {
Coverage float64 `json:"coverage"` // Documentation coverage
Quality float64 `json:"quality"` // Documentation quality
CommentRatio float64 `json:"comment_ratio"` // Comment to code ratio
APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
ExampleCount int `json:"example_count"` // Number of examples
TODOCount int `json:"todo_count"` // Number of TODO comments
FIXMECount int `json:"fixme_count"` // Number of FIXME comments
Coverage float64 `json:"coverage"` // Documentation coverage
Quality float64 `json:"quality"` // Documentation quality
CommentRatio float64 `json:"comment_ratio"` // Comment to code ratio
APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
ExampleCount int `json:"example_count"` // Number of examples
TODOCount int `json:"todo_count"` // Number of TODO comments
FIXMECount int `json:"fixme_count"` // Number of FIXME comments
}
// DirectoryStructure represents analysis of directory organization
type DirectoryStructure struct {
Path string `json:"path"` // Directory path
FileCount int `json:"file_count"` // Number of files
DirectoryCount int `json:"directory_count"` // Number of subdirectories
TotalSize int64 `json:"total_size"` // Total size in bytes
FileTypes map[string]int `json:"file_types"` // File type distribution
Languages map[string]int `json:"languages"` // Language distribution
Organization *OrganizationInfo `json:"organization"` // Organization information
Conventions *ConventionInfo `json:"conventions"` // Convention information
Dependencies []string `json:"dependencies"` // Directory dependencies
Purpose string `json:"purpose"` // Directory purpose
Architecture string `json:"architecture"` // Architectural pattern
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
Path string `json:"path"` // Directory path
FileCount int `json:"file_count"` // Number of files
DirectoryCount int `json:"directory_count"` // Number of subdirectories
TotalSize int64 `json:"total_size"` // Total size in bytes
FileTypes map[string]int `json:"file_types"` // File type distribution
Languages map[string]int `json:"languages"` // Language distribution
Organization *OrganizationInfo `json:"organization"` // Organization information
Conventions *ConventionInfo `json:"conventions"` // Convention information
Dependencies []string `json:"dependencies"` // Directory dependencies
Purpose string `json:"purpose"` // Directory purpose
Architecture string `json:"architecture"` // Architectural pattern
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// OrganizationInfo represents directory organization information
type OrganizationInfo struct {
Pattern string `json:"pattern"` // Organization pattern
Consistency float64 `json:"consistency"` // Organization consistency
Depth int `json:"depth"` // Directory depth
FanOut int `json:"fan_out"` // Average fan-out
Modularity float64 `json:"modularity"` // Modularity score
Cohesion float64 `json:"cohesion"` // Cohesion score
Coupling float64 `json:"coupling"` // Coupling score
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
Pattern string `json:"pattern"` // Organization pattern
Consistency float64 `json:"consistency"` // Organization consistency
Depth int `json:"depth"` // Directory depth
FanOut int `json:"fan_out"` // Average fan-out
Modularity float64 `json:"modularity"` // Modularity score
Cohesion float64 `json:"cohesion"` // Cohesion score
Coupling float64 `json:"coupling"` // Coupling score
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// ConventionInfo represents naming and organizational conventions
type ConventionInfo struct {
NamingStyle string `json:"naming_style"` // Naming convention style
FileNaming string `json:"file_naming"` // File naming pattern
DirectoryNaming string `json:"directory_naming"` // Directory naming pattern
Consistency float64 `json:"consistency"` // Convention consistency
Violations []*Violation `json:"violations"` // Convention violations
Standards []string `json:"standards"` // Applied standards
NamingStyle string `json:"naming_style"` // Naming convention style
FileNaming string `json:"file_naming"` // File naming pattern
DirectoryNaming string `json:"directory_naming"` // Directory naming pattern
Consistency float64 `json:"consistency"` // Convention consistency
Violations []*Violation `json:"violations"` // Convention violations
Standards []string `json:"standards"` // Applied standards
}
// Violation represents a convention violation
type Violation struct {
Type string `json:"type"` // Violation type
Path string `json:"path"` // Violating path
Expected string `json:"expected"` // Expected format
Actual string `json:"actual"` // Actual format
Severity string `json:"severity"` // Violation severity
Suggestion string `json:"suggestion"` // Suggested fix
Type string `json:"type"` // Violation type
Path string `json:"path"` // Violating path
Expected string `json:"expected"` // Expected format
Actual string `json:"actual"` // Actual format
Severity string `json:"severity"` // Violation severity
Suggestion string `json:"suggestion"` // Suggested fix
}
// ConventionAnalysis represents analysis of naming and organizational conventions
type ConventionAnalysis struct {
NamingPatterns []*NamingPattern `json:"naming_patterns"` // Detected naming patterns
NamingPatterns []*NamingPattern `json:"naming_patterns"` // Detected naming patterns
OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
Consistency float64 `json:"consistency"` // Overall consistency score
Violations []*Violation `json:"violations"` // Convention violations
Recommendations []*Recommendation `json:"recommendations"` // Improvement recommendations
AppliedStandards []string `json:"applied_standards"` // Applied coding standards
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
Consistency float64 `json:"consistency"` // Overall consistency score
Violations []*Violation `json:"violations"` // Convention violations
Recommendations []*BasicRecommendation `json:"recommendations"` // Improvement recommendations
AppliedStandards []string `json:"applied_standards"` // Applied coding standards
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// RelationshipAnalysis represents analysis of directory relationships
type RelationshipAnalysis struct {
Dependencies []*DirectoryDependency `json:"dependencies"` // Directory dependencies
Relationships []*DirectoryRelation `json:"relationships"` // Directory relationships
CouplingMetrics *CouplingMetrics `json:"coupling_metrics"` // Coupling metrics
ModularityScore float64 `json:"modularity_score"` // Modularity score
ArchitecturalStyle string `json:"architectural_style"` // Architectural style
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
Dependencies []*DirectoryDependency `json:"dependencies"` // Directory dependencies
Relationships []*DirectoryRelation `json:"relationships"` // Directory relationships
CouplingMetrics *CouplingMetrics `json:"coupling_metrics"` // Coupling metrics
ModularityScore float64 `json:"modularity_score"` // Modularity score
ArchitecturalStyle string `json:"architectural_style"` // Architectural style
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// DirectoryDependency represents a dependency between directories
type DirectoryDependency struct {
From string `json:"from"` // Source directory
To string `json:"to"` // Target directory
Type string `json:"type"` // Dependency type
Strength float64 `json:"strength"` // Dependency strength
Reason string `json:"reason"` // Reason for dependency
FileCount int `json:"file_count"` // Number of files involved
From string `json:"from"` // Source directory
To string `json:"to"` // Target directory
Type string `json:"type"` // Dependency type
Strength float64 `json:"strength"` // Dependency strength
Reason string `json:"reason"` // Reason for dependency
FileCount int `json:"file_count"` // Number of files involved
}
// DirectoryRelation represents a relationship between directories
type DirectoryRelation struct {
Directory1 string `json:"directory1"` // First directory
Directory2 string `json:"directory2"` // Second directory
Type string `json:"type"` // Relation type
Strength float64 `json:"strength"` // Relation strength
Description string `json:"description"` // Relation description
Bidirectional bool `json:"bidirectional"` // Whether relation is bidirectional
Directory1 string `json:"directory1"` // First directory
Directory2 string `json:"directory2"` // Second directory
Type string `json:"type"` // Relation type
Strength float64 `json:"strength"` // Relation strength
Description string `json:"description"` // Relation description
Bidirectional bool `json:"bidirectional"` // Whether relation is bidirectional
}
// CouplingMetrics represents coupling metrics between directories
type CouplingMetrics struct {
AfferentCoupling float64 `json:"afferent_coupling"` // Afferent coupling
EfferentCoupling float64 `json:"efferent_coupling"` // Efferent coupling
Instability float64 `json:"instability"` // Instability metric
Abstractness float64 `json:"abstractness"` // Abstractness metric
DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
AfferentCoupling float64 `json:"afferent_coupling"` // Afferent coupling
EfferentCoupling float64 `json:"efferent_coupling"` // Efferent coupling
Instability float64 `json:"instability"` // Instability metric
Abstractness float64 `json:"abstractness"` // Abstractness metric
DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
}
// Pattern represents a detected pattern in code or organization
type Pattern struct {
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Pattern name
Type string `json:"type"` // Pattern type
Description string `json:"description"` // Pattern description
Confidence float64 `json:"confidence"` // Detection confidence
Frequency int `json:"frequency"` // Pattern frequency
Examples []string `json:"examples"` // Example instances
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
Benefits []string `json:"benefits"` // Pattern benefits
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Pattern name
Type string `json:"type"` // Pattern type
Description string `json:"description"` // Pattern description
Confidence float64 `json:"confidence"` // Detection confidence
Frequency int `json:"frequency"` // Pattern frequency
Examples []string `json:"examples"` // Example instances
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
Benefits []string `json:"benefits"` // Pattern benefits
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
}
// CodePattern represents a code-specific pattern
type CodePattern struct {
Pattern // Embedded base pattern
Language string `json:"language"` // Programming language
Framework string `json:"framework"` // Framework context
Complexity float64 `json:"complexity"` // Pattern complexity
Usage *UsagePattern `json:"usage"` // Usage pattern
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
Pattern // Embedded base pattern
Language string `json:"language"` // Programming language
Framework string `json:"framework"` // Framework context
Complexity float64 `json:"complexity"` // Pattern complexity
Usage *UsagePattern `json:"usage"` // Usage pattern
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
}
// NamingPattern represents a naming convention pattern
type NamingPattern struct {
Pattern // Embedded base pattern
Convention string `json:"convention"` // Naming convention
Scope string `json:"scope"` // Pattern scope
Regex string `json:"regex"` // Regex pattern
CaseStyle string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
Prefix string `json:"prefix"` // Common prefix
Suffix string `json:"suffix"` // Common suffix
Pattern // Embedded base pattern
Convention string `json:"convention"` // Naming convention
Scope string `json:"scope"` // Pattern scope
Regex string `json:"regex"` // Regex pattern
CaseStyle string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
Prefix string `json:"prefix"` // Common prefix
Suffix string `json:"suffix"` // Common suffix
}
// OrganizationalPattern represents an organizational pattern
type OrganizationalPattern struct {
Pattern // Embedded base pattern
Structure string `json:"structure"` // Organizational structure
Depth int `json:"depth"` // Typical depth
FanOut int `json:"fan_out"` // Typical fan-out
Modularity float64 `json:"modularity"` // Modularity characteristics
Scalability string `json:"scalability"` // Scalability characteristics
Pattern // Embedded base pattern
Structure string `json:"structure"` // Organizational structure
Depth int `json:"depth"` // Typical depth
FanOut int `json:"fan_out"` // Typical fan-out
Modularity float64 `json:"modularity"` // Modularity characteristics
Scalability string `json:"scalability"` // Scalability characteristics
}
// UsagePattern represents how a pattern is typically used
type UsagePattern struct {
Frequency string `json:"frequency"` // Usage frequency
Context []string `json:"context"` // Usage contexts
Prerequisites []string `json:"prerequisites"` // Prerequisites
Alternatives []string `json:"alternatives"` // Alternative patterns
Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
Frequency string `json:"frequency"` // Usage frequency
Context []string `json:"context"` // Usage contexts
Prerequisites []string `json:"prerequisites"` // Prerequisites
Alternatives []string `json:"alternatives"` // Alternative patterns
Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
}
// PerformanceInfo represents performance characteristics of a pattern
@@ -249,12 +249,12 @@ type PerformanceInfo struct {
// PatternMatch represents a match between context and a pattern
type PatternMatch struct {
PatternID string `json:"pattern_id"` // Pattern identifier
MatchScore float64 `json:"match_score"` // Match score (0-1)
Confidence float64 `json:"confidence"` // Match confidence
PatternID string `json:"pattern_id"` // Pattern identifier
MatchScore float64 `json:"match_score"` // Match score (0-1)
Confidence float64 `json:"confidence"` // Match confidence
MatchedFields []string `json:"matched_fields"` // Fields that matched
Explanation string `json:"explanation"` // Match explanation
Suggestions []string `json:"suggestions"` // Improvement suggestions
Explanation string `json:"explanation"` // Match explanation
Suggestions []string `json:"suggestions"` // Improvement suggestions
}
// ValidationResult represents context validation results
@@ -269,12 +269,12 @@ type ValidationResult struct {
// ValidationIssue represents a validation issue
type ValidationIssue struct {
Type string `json:"type"` // Issue type
Severity string `json:"severity"` // Issue severity
Message string `json:"message"` // Issue message
Field string `json:"field"` // Affected field
Suggestion string `json:"suggestion"` // Suggested fix
Impact float64 `json:"impact"` // Impact score
}
// Suggestion represents an improvement suggestion
@@ -289,61 +289,61 @@ type Suggestion struct {
}
// Recommendation represents an improvement recommendation
type Recommendation struct {
Type string `json:"type"` // Recommendation type
Title string `json:"title"` // Recommendation title
Description string `json:"description"` // Detailed description
Priority int `json:"priority"` // Priority level
Effort string `json:"effort"` // Effort required
Impact string `json:"impact"` // Expected impact
Steps []string `json:"steps"` // Implementation steps
Resources []string `json:"resources"` // Required resources
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
type BasicRecommendation struct {
Type string `json:"type"` // Recommendation type
Title string `json:"title"` // Recommendation title
Description string `json:"description"` // Detailed description
Priority int `json:"priority"` // Priority level
Effort string `json:"effort"` // Effort required
Impact string `json:"impact"` // Expected impact
Steps []string `json:"steps"` // Implementation steps
Resources []string `json:"resources"` // Required resources
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// RAGResponse represents a response from the RAG system
type RAGResponse struct {
Query string `json:"query"` // Original query
Answer string `json:"answer"` // Generated answer
Sources []*RAGSource `json:"sources"` // Source documents
Confidence float64 `json:"confidence"` // Response confidence
Context map[string]interface{} `json:"context"` // Additional context
ProcessedAt time.Time `json:"processed_at"` // When processed
}
// RAGSource represents a source document from RAG system
type RAGSource struct {
ID string `json:"id"` // Source identifier
Title string `json:"title"` // Source title
Content string `json:"content"` // Source content excerpt
Score float64 `json:"score"` // Relevance score
Metadata map[string]interface{} `json:"metadata"` // Source metadata
URL string `json:"url"` // Source URL if available
}
// RAGResult represents a result from RAG similarity search
type RAGResult struct {
ID string `json:"id"` // Result identifier
Content string `json:"content"` // Content
Score float64 `json:"score"` // Similarity score
Metadata map[string]interface{} `json:"metadata"` // Result metadata
Highlights []string `json:"highlights"` // Content highlights
}
// RAGUpdate represents an update to the RAG index
type RAGUpdate struct {
ID string `json:"id"` // Document identifier
Content string `json:"content"` // Document content
Metadata map[string]interface{} `json:"metadata"` // Document metadata
Operation string `json:"operation"` // Operation type (add, update, delete)
}
// RAGStatistics represents RAG system statistics
type RAGStatistics struct {
TotalDocuments int64 `json:"total_documents"` // Total indexed documents
TotalQueries int64 `json:"total_queries"` // Total queries processed
AverageQueryTime time.Duration `json:"average_query_time"` // Average query time
IndexSize int64 `json:"index_size"` // Index size in bytes
LastIndexUpdate time.Time `json:"last_index_update"` // When index was last updated
ErrorRate float64 `json:"error_rate"` // Error rate
}

View File

@@ -227,7 +227,7 @@ func (cau *ContentAnalysisUtils) extractGenericIdentifiers(content string) (func
// CalculateComplexity calculates code complexity based on various metrics
func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) float64 {
complexity := 0.0
// Lines of code (basic metric)
lines := strings.Split(content, "\n")
nonEmptyLines := 0
@@ -236,26 +236,26 @@ func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) f
nonEmptyLines++
}
}
// Base complexity from lines of code
complexity += float64(nonEmptyLines) * 0.1
// Control flow complexity (if, for, while, switch, etc.)
controlFlowPatterns := []*regexp.Regexp{
regexp.MustCompile(`\b(?:if|for|while|switch|case)\b`),
regexp.MustCompile(`\b(?:try|catch|finally)\b`),
regexp.MustCompile(`\?\s*.*\s*:`), // ternary operator
}
for _, pattern := range controlFlowPatterns {
matches := pattern.FindAllString(content, -1)
complexity += float64(len(matches)) * 0.5
}
// Function complexity
functions, _, _ := cau.ExtractIdentifiers(content, language)
complexity += float64(len(functions)) * 0.3
// Nesting level (simple approximation)
maxNesting := 0
currentNesting := 0
@@ -269,7 +269,7 @@ func (cau *ContentAnalysisUtils) CalculateComplexity(content, language string) f
}
}
complexity += float64(maxNesting) * 0.2
// Normalize to 0-10 scale
return math.Min(10.0, complexity/10.0)
}
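// Example (illustrative sketch; the caller below is hypothetical and not part of
// this package): the heuristic above weights lines of code (0.1), control flow
// (0.5), functions (0.3), and nesting (0.2), then normalises to a 0-10 scale.
func exampleComplexityScore(content string) float64 {
	cau := &ContentAnalysisUtils{}
	// The language hint only affects identifier extraction; "go" is assumed here.
	return cau.CalculateComplexity(content, "go")
}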
@@ -279,66 +279,66 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
technologies := []string{}
lowerContent := strings.ToLower(content)
ext := strings.ToLower(filepath.Ext(filename))
// Language detection
languageMap := map[string][]string{
".go": {"go", "golang"},
".py": {"python"},
".js": {"javascript", "node.js"},
".jsx": {"javascript", "react", "jsx"},
".ts": {"typescript"},
".tsx": {"typescript", "react", "jsx"},
".java": {"java"},
".kt": {"kotlin"},
".rs": {"rust"},
".cpp": {"c++"},
".c": {"c"},
".cs": {"c#", ".net"},
".php": {"php"},
".rb": {"ruby"},
".go": {"go", "golang"},
".py": {"python"},
".js": {"javascript", "node.js"},
".jsx": {"javascript", "react", "jsx"},
".ts": {"typescript"},
".tsx": {"typescript", "react", "jsx"},
".java": {"java"},
".kt": {"kotlin"},
".rs": {"rust"},
".cpp": {"c++"},
".c": {"c"},
".cs": {"c#", ".net"},
".php": {"php"},
".rb": {"ruby"},
".swift": {"swift"},
".scala": {"scala"},
".clj": {"clojure"},
".hs": {"haskell"},
".ml": {"ocaml"},
".clj": {"clojure"},
".hs": {"haskell"},
".ml": {"ocaml"},
}
if langs, exists := languageMap[ext]; exists {
technologies = append(technologies, langs...)
}
// Framework and library detection
frameworkPatterns := map[string][]string{
"react": {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
"vue": {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
"angular": {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
"express": {"import.*express", "require.*express", "app.get", "app.post"},
"django": {"from django", "import django", "django.db", "models.model"},
"flask": {"from flask", "import flask", "@app.route", "flask.request"},
"spring": {"@springboot", "@controller", "@service", "@repository"},
"hibernate": {"@entity", "@table", "@column", "hibernate"},
"jquery": {"$\\(", "jquery"},
"bootstrap": {"bootstrap", "btn-", "col-", "row"},
"docker": {"dockerfile", "docker-compose", "from.*:", "run.*"},
"kubernetes": {"apiversion:", "kind:", "metadata:", "spec:"},
"terraform": {"\\.tf$", "resource \"", "provider \"", "terraform"},
"ansible": {"\\.yml$", "hosts:", "tasks:", "playbook"},
"jenkins": {"jenkinsfile", "pipeline", "stage", "steps"},
"git": {"\\.git", "git add", "git commit", "git push"},
"mysql": {"mysql", "select.*from", "insert into", "create table"},
"postgresql": {"postgresql", "postgres", "psql"},
"mongodb": {"mongodb", "mongo", "find\\(", "insert\\("},
"redis": {"redis", "set.*", "get.*", "rpush"},
"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
"graphql": {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
"grpc": {"grpc", "proto", "service.*rpc", "\\.proto$"},
"websocket": {"websocket", "ws://", "wss://", "socket.io"},
"jwt": {"jwt", "jsonwebtoken", "bearer.*token"},
"oauth": {"oauth", "oauth2", "client_id", "client_secret"},
"ssl": {"ssl", "tls", "https", "certificate"},
"encryption": {"encrypt", "decrypt", "bcrypt", "sha256"},
"react": {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
"vue": {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
"angular": {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
"express": {"import.*express", "require.*express", "app.get", "app.post"},
"django": {"from django", "import django", "django.db", "models.model"},
"flask": {"from flask", "import flask", "@app.route", "flask.request"},
"spring": {"@springboot", "@controller", "@service", "@repository"},
"hibernate": {"@entity", "@table", "@column", "hibernate"},
"jquery": {"$\\(", "jquery"},
"bootstrap": {"bootstrap", "btn-", "col-", "row"},
"docker": {"dockerfile", "docker-compose", "from.*:", "run.*"},
"kubernetes": {"apiversion:", "kind:", "metadata:", "spec:"},
"terraform": {"\\.tf$", "resource \"", "provider \"", "terraform"},
"ansible": {"\\.yml$", "hosts:", "tasks:", "playbook"},
"jenkins": {"jenkinsfile", "pipeline", "stage", "steps"},
"git": {"\\.git", "git add", "git commit", "git push"},
"mysql": {"mysql", "select.*from", "insert into", "create table"},
"postgresql": {"postgresql", "postgres", "psql"},
"mongodb": {"mongodb", "mongo", "find\\(", "insert\\("},
"redis": {"redis", "set.*", "get.*", "rpush"},
"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
"graphql": {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
"grpc": {"grpc", "proto", "service.*rpc", "\\.proto$"},
"websocket": {"websocket", "ws://", "wss://", "socket.io"},
"jwt": {"jwt", "jsonwebtoken", "bearer.*token"},
"oauth": {"oauth", "oauth2", "client_id", "client_secret"},
"ssl": {"ssl", "tls", "https", "certificate"},
"encryption": {"encrypt", "decrypt", "bcrypt", "sha256"},
}
for tech, patterns := range frameworkPatterns {
for _, pattern := range patterns {
if matched, _ := regexp.MatchString(pattern, lowerContent); matched {
@@ -347,7 +347,7 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
}
}
}
return removeDuplicates(technologies)
}
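// Example (illustrative sketch; the file name and content are hypothetical):
// the extension drives language detection while the content drives framework
// detection, so a .jsx file importing React yields both.
func exampleDetectStack(cau *ContentAnalysisUtils) []string {
	content := `import React from "react"; fetch("https://api/...")`
	return cau.DetectTechnologies(content, "App.jsx")
}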
@@ -371,7 +371,7 @@ func (su *ScoreUtils) NormalizeScore(score, min, max float64) float64 {
func (su *ScoreUtils) CalculateWeightedScore(scores map[string]float64, weights map[string]float64) float64 {
totalWeight := 0.0
weightedSum := 0.0
for dimension, score := range scores {
weight := weights[dimension]
if weight == 0 {
@@ -380,11 +380,11 @@ func (su *ScoreUtils) CalculateWeightedScore(scores map[string]float64, weights
weightedSum += score * weight
totalWeight += weight
}
if totalWeight == 0 {
return 0.0
}
return weightedSum / totalWeight
}
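// Example (illustrative sketch): dimensions without a weight are skipped above,
// so only weighted scores contribute to the result.
func exampleWeightedScore(su *ScoreUtils) float64 {
	scores := map[string]float64{"accuracy": 0.9, "completeness": 0.6}
	weights := map[string]float64{"accuracy": 2.0, "completeness": 1.0}
	// (0.9*2.0 + 0.6*1.0) / 3.0 = 0.8
	return su.CalculateWeightedScore(scores, weights)
}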
@@ -393,31 +393,31 @@ func (su *ScoreUtils) CalculatePercentile(values []float64, percentile int) floa
if len(values) == 0 {
return 0.0
}
sorted := make([]float64, len(values))
copy(sorted, values)
sort.Float64s(sorted)
if percentile <= 0 {
return sorted[0]
}
if percentile >= 100 {
return sorted[len(sorted)-1]
}
index := float64(percentile) / 100.0 * float64(len(sorted)-1)
lower := int(math.Floor(index))
upper := int(math.Ceil(index))
if lower == upper {
return sorted[lower]
}
// Linear interpolation
lowerValue := sorted[lower]
upperValue := sorted[upper]
weight := index - float64(lower)
return lowerValue + weight*(upperValue-lowerValue)
}
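// Example (illustrative sketch): the 50th percentile of four values lands
// between the middle pair and is linearly interpolated as described above.
func examplePercentile(su *ScoreUtils) float64 {
	values := []float64{1, 2, 3, 4}
	// index = 0.5 * 3 = 1.5 -> 2 + 0.5*(3-2) = 2.5
	return su.CalculatePercentile(values, 50)
}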
@@ -426,14 +426,14 @@ func (su *ScoreUtils) CalculateStandardDeviation(values []float64) float64 {
if len(values) <= 1 {
return 0.0
}
// Calculate mean
sum := 0.0
for _, value := range values {
sum += value
}
mean := sum / float64(len(values))
// Calculate variance
variance := 0.0
for _, value := range values {
@@ -441,7 +441,7 @@ func (su *ScoreUtils) CalculateStandardDeviation(values []float64) float64 {
variance += diff * diff
}
variance /= float64(len(values) - 1)
return math.Sqrt(variance)
}
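// Example (illustrative sketch): because the variance uses the n-1 denominator,
// this returns the sample standard deviation (~2.138), not the population value (2.0).
func exampleStdDev(su *ScoreUtils) float64 {
	return su.CalculateStandardDeviation([]float64{2, 4, 4, 4, 5, 5, 7, 9})
}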
@@ -510,41 +510,41 @@ func (su *StringUtils) Similarity(s1, s2 string) float64 {
if s1 == s2 {
return 1.0
}
words1 := strings.Fields(strings.ToLower(s1))
words2 := strings.Fields(strings.ToLower(s2))
if len(words1) == 0 && len(words2) == 0 {
return 1.0
}
if len(words1) == 0 || len(words2) == 0 {
return 0.0
}
set1 := make(map[string]bool)
set2 := make(map[string]bool)
for _, word := range words1 {
set1[word] = true
}
for _, word := range words2 {
set2[word] = true
}
intersection := 0
for word := range set1 {
if set2[word] {
intersection++
}
}
union := len(set1) + len(set2) - intersection
if union == 0 {
return 1.0
}
return float64(intersection) / float64(union)
}
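// Example (illustrative sketch): the similarity above is Jaccard overlap on
// lower-cased word sets, so ordering and repetition do not matter.
func exampleSimilarity(su *StringUtils) float64 {
	// Sets: {handles,user,auth} vs {user,auth,token} -> 2 shared of 4 total = 0.5
	return su.Similarity("handles user auth", "user auth token")
}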
@@ -565,35 +565,35 @@ func (su *StringUtils) ExtractKeywords(text string, minLength int) []string {
"so": true, "than": true, "too": true, "very": true, "can": true, "could": true,
"should": true, "would": true, "use": true, "used": true, "using": true,
}
// Extract words
wordRegex := regexp.MustCompile(`\b[a-zA-Z]+\b`)
words := wordRegex.FindAllString(strings.ToLower(text), -1)
keywords := []string{}
wordFreq := make(map[string]int)
for _, word := range words {
if len(word) >= minLength && !stopWords[word] {
wordFreq[word]++
}
}
// Sort by frequency and return top keywords
type wordCount struct {
word string
count int
}
var sortedWords []wordCount
for word, count := range wordFreq {
sortedWords = append(sortedWords, wordCount{word, count})
}
sort.Slice(sortedWords, func(i, j int) bool {
return sortedWords[i].count > sortedWords[j].count
})
maxKeywords := 20
for i, wc := range sortedWords {
if i >= maxKeywords {
@@ -601,7 +601,7 @@ func (su *StringUtils) ExtractKeywords(text string, minLength int) []string {
}
keywords = append(keywords, wc.word)
}
return keywords
}
@@ -741,30 +741,58 @@ func CloneContextNode(node *slurpContext.ContextNode) *slurpContext.ContextNode
}
clone := &slurpContext.ContextNode{
Path: node.Path,
Summary: node.Summary,
Purpose: node.Purpose,
Technologies: make([]string, len(node.Technologies)),
Tags: make([]string, len(node.Tags)),
Insights: make([]string, len(node.Insights)),
CreatedAt: node.CreatedAt,
UpdatedAt: node.UpdatedAt,
ContextSpecificity: node.ContextSpecificity,
RAGConfidence: node.RAGConfidence,
ProcessedForRole: node.ProcessedForRole,
Path: node.Path,
UCXLAddress: node.UCXLAddress,
Summary: node.Summary,
Purpose: node.Purpose,
Technologies: make([]string, len(node.Technologies)),
Tags: make([]string, len(node.Tags)),
Insights: make([]string, len(node.Insights)),
OverridesParent: node.OverridesParent,
ContextSpecificity: node.ContextSpecificity,
AppliesToChildren: node.AppliesToChildren,
AppliesTo: node.AppliesTo,
GeneratedAt: node.GeneratedAt,
UpdatedAt: node.UpdatedAt,
CreatedBy: node.CreatedBy,
WhoUpdated: node.WhoUpdated,
RAGConfidence: node.RAGConfidence,
EncryptedFor: make([]string, len(node.EncryptedFor)),
AccessLevel: node.AccessLevel,
}
copy(clone.Technologies, node.Technologies)
copy(clone.Tags, node.Tags)
copy(clone.Insights, node.Insights)
copy(clone.EncryptedFor, node.EncryptedFor)
if node.RoleSpecificInsights != nil {
clone.RoleSpecificInsights = make([]*RoleSpecificInsight, len(node.RoleSpecificInsights))
copy(clone.RoleSpecificInsights, node.RoleSpecificInsights)
if node.Parent != nil {
parent := *node.Parent
clone.Parent = &parent
}
if len(node.Children) > 0 {
clone.Children = make([]string, len(node.Children))
copy(clone.Children, node.Children)
}
if node.Language != nil {
language := *node.Language
clone.Language = &language
}
if node.Size != nil {
sz := *node.Size
clone.Size = &sz
}
if node.LastModified != nil {
lm := *node.LastModified
clone.LastModified = &lm
}
if node.ContentHash != nil {
hash := *node.ContentHash
clone.ContentHash = &hash
}
if node.Metadata != nil {
clone.Metadata = make(map[string]interface{})
clone.Metadata = make(map[string]interface{}, len(node.Metadata))
for k, v := range node.Metadata {
clone.Metadata[k] = v
}
@@ -783,7 +811,7 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
}
merged := CloneContextNode(nodes[0])
for i := 1; i < len(nodes); i++ {
node := nodes[i]
if node == nil {
@@ -792,27 +820,29 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
// Merge technologies
merged.Technologies = mergeStringSlices(merged.Technologies, node.Technologies)
// Merge tags
merged.Tags = mergeStringSlices(merged.Tags, node.Tags)
// Merge insights
merged.Insights = mergeStringSlices(merged.Insights, node.Insights)
// Use most recent timestamps
if node.CreatedAt.Before(merged.CreatedAt) {
merged.CreatedAt = node.CreatedAt
// Use most relevant timestamps
if merged.GeneratedAt.IsZero() {
merged.GeneratedAt = node.GeneratedAt
} else if !node.GeneratedAt.IsZero() && node.GeneratedAt.Before(merged.GeneratedAt) {
merged.GeneratedAt = node.GeneratedAt
}
if node.UpdatedAt.After(merged.UpdatedAt) {
merged.UpdatedAt = node.UpdatedAt
}
// Average context specificity
merged.ContextSpecificity = (merged.ContextSpecificity + node.ContextSpecificity) / 2
// Average RAG confidence
merged.RAGConfidence = (merged.RAGConfidence + node.RAGConfidence) / 2
// Merge metadata
if node.Metadata != nil {
if merged.Metadata == nil {
@@ -844,7 +874,7 @@ func removeDuplicates(slice []string) []string {
func mergeStringSlices(slice1, slice2 []string) []string {
merged := make([]string, len(slice1))
copy(merged, slice1)
for _, item := range slice2 {
found := false
for _, existing := range merged {
@@ -857,7 +887,7 @@ func mergeStringSlices(slice1, slice2 []string) []string {
merged = append(merged, item)
}
}
return merged
}
@@ -1034,4 +1064,4 @@ func (bu *ByteUtils) ReadFileWithLimit(filename string, maxSize int64) ([]byte,
}
return io.ReadAll(file)
}
}

View File

@@ -2,6 +2,9 @@ package slurp
import (
"context"
"time"
"chorus/pkg/crypto"
)
// Core interfaces for the SLURP contextual intelligence system.
@@ -17,34 +20,34 @@ type ContextResolver interface {
// Resolve resolves context for a UCXL address using cascading inheritance.
// This is the primary method for context resolution with default depth limits.
Resolve(ctx context.Context, ucxlAddress string) (*ResolvedContext, error)
// ResolveWithDepth resolves context with bounded depth limit.
// Provides fine-grained control over hierarchy traversal depth for
// performance optimization and resource management.
ResolveWithDepth(ctx context.Context, ucxlAddress string, maxDepth int) (*ResolvedContext, error)
// BatchResolve efficiently resolves multiple UCXL addresses.
// Uses parallel processing, request deduplication, and shared caching
// for optimal performance with bulk operations.
BatchResolve(ctx context.Context, addresses []string) (map[string]*ResolvedContext, error)
// InvalidateCache invalidates cached resolution for an address.
// Used when underlying context changes to ensure fresh resolution.
InvalidateCache(ucxlAddress string) error
// InvalidatePattern invalidates cached resolutions matching a pattern.
// Useful for bulk cache invalidation when hierarchies change.
InvalidatePattern(pattern string) error
// GetStatistics returns resolver performance and operational statistics.
GetStatistics() *ResolverStatistics
// SetDepthLimit sets the default depth limit for resolution operations.
SetDepthLimit(maxDepth int) error
// GetDepthLimit returns the current default depth limit.
GetDepthLimit() int
// ClearCache clears all cached resolutions.
ClearCache() error
}
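// Example (illustrative sketch; the resolver value and address below are
// hypothetical): a typical read path combining bounded-depth resolution with
// explicit cache invalidation after an upstream change.
func exampleResolve(ctx context.Context, resolver ContextResolver) error {
	const addr = "ucxl://chorus/pkg/slurp/slurp.go"
	resolved, err := resolver.ResolveWithDepth(ctx, addr, 3)
	if err != nil {
		return err
	}
	_ = resolved
	// After the underlying context changes, drop the cached entry so the next
	// Resolve call re-traverses the hierarchy.
	return resolver.InvalidateCache(addr)
}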
@@ -57,46 +60,46 @@ type HierarchyManager interface {
// LoadHierarchy loads the context hierarchy from storage.
// Must be called before other operations to initialize the hierarchy.
LoadHierarchy(ctx context.Context) error
// AddNode adds a context node to the hierarchy.
// Validates hierarchy constraints and updates relationships.
AddNode(ctx context.Context, node *ContextNode) error
// UpdateNode updates an existing context node.
// Preserves hierarchy relationships while updating content.
UpdateNode(ctx context.Context, node *ContextNode) error
// RemoveNode removes a context node and handles children.
// Provides options for handling orphaned children (promote, delete, reassign).
RemoveNode(ctx context.Context, nodeID string) error
// GetNode retrieves a context node by ID.
GetNode(ctx context.Context, nodeID string) (*ContextNode, error)
// TraverseUp traverses up the hierarchy with bounded depth.
// Returns ancestor nodes within the specified depth limit.
TraverseUp(ctx context.Context, startPath string, maxDepth int) ([]*ContextNode, error)
// TraverseDown traverses down the hierarchy with bounded depth.
// Returns descendant nodes within the specified depth limit.
TraverseDown(ctx context.Context, startPath string, maxDepth int) ([]*ContextNode, error)
// GetChildren gets immediate children of a node.
GetChildren(ctx context.Context, nodeID string) ([]*ContextNode, error)
// GetParent gets the immediate parent of a node.
GetParent(ctx context.Context, nodeID string) (*ContextNode, error)
// GetPath gets the full path from root to a node.
GetPath(ctx context.Context, nodeID string) ([]*ContextNode, error)
// ValidateHierarchy validates hierarchy integrity and constraints.
// Checks for cycles, orphans, and consistency violations.
ValidateHierarchy(ctx context.Context) error
// RebuildIndex rebuilds internal indexes for hierarchy operations.
RebuildIndex(ctx context.Context) error
// GetHierarchyStats returns statistics about the hierarchy.
GetHierarchyStats(ctx context.Context) (*HierarchyStats, error)
}
@@ -110,27 +113,27 @@ type GlobalContextManager interface {
// AddGlobalContext adds a context that applies globally.
// Global contexts are merged into all resolution results.
AddGlobalContext(ctx context.Context, context *ContextNode) error
// RemoveGlobalContext removes a global context.
RemoveGlobalContext(ctx context.Context, contextID string) error
// UpdateGlobalContext updates an existing global context.
UpdateGlobalContext(ctx context.Context, context *ContextNode) error
// ListGlobalContexts lists all global contexts.
// Returns contexts ordered by priority/specificity.
ListGlobalContexts(ctx context.Context) ([]*ContextNode, error)
// GetGlobalContext retrieves a specific global context.
GetGlobalContext(ctx context.Context, contextID string) (*ContextNode, error)
// ApplyGlobalContexts applies global contexts to a resolution.
// Called automatically during resolution process.
ApplyGlobalContexts(ctx context.Context, resolved *ResolvedContext) error
// EnableGlobalContext enables/disables a global context.
EnableGlobalContext(ctx context.Context, contextID string, enabled bool) error
// SetGlobalContextPriority sets priority for global context application.
SetGlobalContextPriority(ctx context.Context, contextID string, priority int) error
}
@@ -143,54 +146,54 @@ type GlobalContextManager interface {
type TemporalGraph interface {
// CreateInitialContext creates the first version of context.
// Establishes the starting point for temporal evolution tracking.
CreateInitialContext(ctx context.Context, ucxlAddress string,
contextData *ContextNode, creator string) (*TemporalNode, error)
// EvolveContext creates a new temporal version due to a decision.
// Records the decision that caused the change and updates the graph.
EvolveContext(ctx context.Context, ucxlAddress string,
newContext *ContextNode, reason ChangeReason,
decision *DecisionMetadata) (*TemporalNode, error)
// GetLatestVersion gets the most recent temporal node.
GetLatestVersion(ctx context.Context, ucxlAddress string) (*TemporalNode, error)
// GetVersionAtDecision gets context as it was at a specific decision point.
// Navigation based on decision hops, not chronological time.
GetVersionAtDecision(ctx context.Context, ucxlAddress string,
decisionHop int) (*TemporalNode, error)
// GetEvolutionHistory gets complete evolution history.
// Returns all temporal versions ordered by decision sequence.
GetEvolutionHistory(ctx context.Context, ucxlAddress string) ([]*TemporalNode, error)
// AddInfluenceRelationship adds influence between contexts.
// Establishes that decisions in one context affect another.
AddInfluenceRelationship(ctx context.Context, influencer, influenced string) error
// RemoveInfluenceRelationship removes an influence relationship.
RemoveInfluenceRelationship(ctx context.Context, influencer, influenced string) error
// GetInfluenceRelationships gets all influence relationships for a context.
GetInfluenceRelationships(ctx context.Context, ucxlAddress string) ([]string, []string, error)
// FindRelatedDecisions finds decisions within N decision hops.
// Explores the decision graph by conceptual distance, not time.
FindRelatedDecisions(ctx context.Context, ucxlAddress string,
maxHops int) ([]*DecisionPath, error)
// FindDecisionPath finds shortest decision path between addresses.
// Returns the path of decisions connecting two contexts.
FindDecisionPath(ctx context.Context, from, to string) ([]*DecisionStep, error)
// AnalyzeDecisionPatterns analyzes decision-making patterns.
// Identifies patterns in how decisions are made and contexts evolve.
AnalyzeDecisionPatterns(ctx context.Context) (*DecisionAnalysis, error)
// ValidateTemporalIntegrity validates temporal graph integrity.
// Checks for inconsistencies and corruption in temporal data.
ValidateTemporalIntegrity(ctx context.Context) error
// CompactHistory compacts old temporal data to save space.
// Removes detailed history while preserving key decision points.
CompactHistory(ctx context.Context, beforeTime *time.Time) error
@@ -204,25 +207,25 @@ type TemporalGraph interface {
type DecisionNavigator interface {
// NavigateDecisionHops navigates by decision distance, not time.
// Moves through the decision graph by the specified number of hops.
NavigateDecisionHops(ctx context.Context, ucxlAddress string,
hops int, direction NavigationDirection) (*TemporalNode, error)
// GetDecisionTimeline gets timeline ordered by decision sequence.
// Returns decisions in the order they were made, not chronological order.
GetDecisionTimeline(ctx context.Context, ucxlAddress string,
includeRelated bool, maxHops int) (*DecisionTimeline, error)
// FindStaleContexts finds contexts that may be outdated.
// Identifies contexts that haven't been updated despite related changes.
FindStaleContexts(ctx context.Context, stalenessThreshold float64) ([]*StaleContext, error)
// ValidateDecisionPath validates a decision path is reachable.
// Verifies that a path exists and is traversable.
ValidateDecisionPath(ctx context.Context, path []*DecisionStep) error
// GetNavigationHistory gets navigation history for a session.
GetNavigationHistory(ctx context.Context, sessionID string) ([]*DecisionStep, error)
// ResetNavigation resets navigation state to latest versions.
ResetNavigation(ctx context.Context, ucxlAddress string) error
}
@@ -234,41 +237,41 @@ type DecisionNavigator interface {
type DistributedStorage interface {
// Store stores context data in the DHT with encryption.
// Data is encrypted based on access level and role requirements.
Store(ctx context.Context, key string, data interface{},
accessLevel crypto.AccessLevel) error
// Retrieve retrieves and decrypts context data.
// Automatically handles decryption based on current role permissions.
Retrieve(ctx context.Context, key string) (interface{}, error)
// Delete removes context data from storage.
// Handles distributed deletion and cleanup.
Delete(ctx context.Context, key string) error
// Exists checks if a key exists in storage.
Exists(ctx context.Context, key string) (bool, error)
// List lists keys matching a pattern.
List(ctx context.Context, pattern string) ([]string, error)
// Index creates searchable indexes for context data.
// Enables efficient searching and filtering operations.
Index(ctx context.Context, key string, metadata *IndexMetadata) error
// Search searches indexed context data.
// Supports complex queries with role-based filtering.
Search(ctx context.Context, query *SearchQuery) ([]*SearchResult, error)
// Sync synchronizes with other nodes.
// Ensures consistency across the distributed system.
Sync(ctx context.Context) error
// GetStorageStats returns storage statistics and health information.
GetStorageStats(ctx context.Context) (*StorageStats, error)
// Backup creates a backup of stored data.
Backup(ctx context.Context, destination string) error
// Restore restores data from backup.
Restore(ctx context.Context, source string) error
}
@@ -280,31 +283,31 @@ type DistributedStorage interface {
type EncryptedStorage interface {
// StoreEncrypted stores data encrypted for specific roles.
// Supports multi-role encryption for shared access.
StoreEncrypted(ctx context.Context, key string, data interface{},
roles []string) error
// RetrieveDecrypted retrieves and decrypts data using current role.
// Automatically selects appropriate decryption key.
RetrieveDecrypted(ctx context.Context, key string) (interface{}, error)
// CanAccess checks if current role can access data.
// Validates access without retrieving the actual data.
CanAccess(ctx context.Context, key string) (bool, error)
// ListAccessibleKeys lists keys accessible to current role.
// Filters keys based on current role permissions.
ListAccessibleKeys(ctx context.Context) ([]string, error)
// ReEncryptForRoles re-encrypts data for different roles.
// Useful for permission changes and access control updates.
ReEncryptForRoles(ctx context.Context, key string, newRoles []string) error
// GetAccessRoles gets roles that can access a specific key.
GetAccessRoles(ctx context.Context, key string) ([]string, error)
// RotateKeys rotates encryption keys for enhanced security.
RotateKeys(ctx context.Context, keyAge time.Duration) error
// ValidateEncryption validates encryption integrity.
ValidateEncryption(ctx context.Context, key string) error
}
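// Example (illustrative sketch; the key and role names are hypothetical):
// storing a value readable by two roles, then checking access before decrypting.
func exampleEncryptedStore(ctx context.Context, store EncryptedStorage) error {
	if err := store.StoreEncrypted(ctx, "beacon::example", map[string]string{"status": "ok"},
		[]string{"senior-architect", "security-auditor"}); err != nil {
		return err
	}
	ok, err := store.CanAccess(ctx, "beacon::example")
	if err != nil || !ok {
		return err
	}
	_, err = store.RetrieveDecrypted(ctx, "beacon::example")
	return err
}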
@@ -317,35 +320,35 @@ type EncryptedStorage interface {
type ContextGenerator interface {
// GenerateContext generates context for a path (requires admin role).
// Analyzes content, structure, and patterns to create comprehensive context.
GenerateContext(ctx context.Context, path string,
options *GenerationOptions) (*ContextNode, error)
// RegenerateHierarchy regenerates entire hierarchy (admin-only).
// Rebuilds context hierarchy from scratch with improved analysis.
RegenerateHierarchy(ctx context.Context, rootPath string,
options *GenerationOptions) (*HierarchyStats, error)
// ValidateGeneration validates generated context quality.
// Ensures generated context meets quality and consistency standards.
ValidateGeneration(ctx context.Context, context *ContextNode) (*ValidationResult, error)
// EstimateGenerationCost estimates resource cost of generation.
// Helps with resource planning and operation scheduling.
EstimateGenerationCost(ctx context.Context, scope string) (*CostEstimate, error)
// GenerateBatch generates context for multiple paths efficiently.
// Optimized for bulk generation operations.
GenerateBatch(ctx context.Context, paths []string,
options *GenerationOptions) (map[string]*ContextNode, error)
// ScheduleGeneration schedules background context generation.
// Queues generation tasks for processing during low-activity periods.
ScheduleGeneration(ctx context.Context, paths []string,
options *GenerationOptions, priority int) error
// GetGenerationStatus gets status of background generation tasks.
GetGenerationStatus(ctx context.Context) (*GenerationStatus, error)
// CancelGeneration cancels pending generation tasks.
CancelGeneration(ctx context.Context, taskID string) error
}
@@ -358,30 +361,30 @@ type ContextAnalyzer interface {
// AnalyzeContext analyzes context quality and consistency.
// Evaluates individual context nodes for quality and accuracy.
AnalyzeContext(ctx context.Context, context *ContextNode) (*AnalysisResult, error)
// DetectPatterns detects patterns across contexts.
// Identifies recurring patterns that can improve context generation.
DetectPatterns(ctx context.Context, contexts []*ContextNode) ([]*Pattern, error)
// SuggestImprovements suggests context improvements.
// Provides actionable recommendations for context enhancement.
SuggestImprovements(ctx context.Context, context *ContextNode) ([]*Suggestion, error)
// CalculateConfidence calculates confidence score.
// Assesses confidence in context accuracy and completeness.
CalculateConfidence(ctx context.Context, context *ContextNode) (float64, error)
// DetectInconsistencies detects inconsistencies in hierarchy.
// Identifies conflicts and inconsistencies across related contexts.
DetectInconsistencies(ctx context.Context) ([]*Inconsistency, error)
// AnalyzeTrends analyzes trends in context evolution.
// Identifies patterns in how contexts change over time.
AnalyzeTrends(ctx context.Context, timeRange time.Duration) (*TrendAnalysis, error)
// CompareContexts compares contexts for similarity and differences.
CompareContexts(ctx context.Context, context1, context2 *ContextNode) (*ComparisonResult, error)
// ValidateConsistency validates consistency across hierarchy.
ValidateConsistency(ctx context.Context, rootPath string) ([]*ConsistencyIssue, error)
}
@@ -394,31 +397,31 @@ type PatternMatcher interface {
// MatchPatterns matches context against known patterns.
// Identifies which patterns apply to a given context.
MatchPatterns(ctx context.Context, context *ContextNode) ([]*PatternMatch, error)
// RegisterPattern registers a new context pattern.
// Adds patterns that can be used for matching and generation.
RegisterPattern(ctx context.Context, pattern *ContextPattern) error
// UnregisterPattern removes a context pattern.
UnregisterPattern(ctx context.Context, patternID string) error
// UpdatePattern updates an existing pattern.
UpdatePattern(ctx context.Context, pattern *ContextPattern) error
// ListPatterns lists all registered patterns.
// Returns patterns ordered by priority and usage frequency.
ListPatterns(ctx context.Context) ([]*ContextPattern, error)
// GetPattern retrieves a specific pattern.
GetPattern(ctx context.Context, patternID string) (*ContextPattern, error)
// ApplyPattern applies a pattern to context.
// Updates context to match pattern template.
ApplyPattern(ctx context.Context, context *ContextNode, patternID string) (*ContextNode, error)
// ValidatePattern validates pattern definition.
ValidatePattern(ctx context.Context, pattern *ContextPattern) (*ValidationResult, error)
// GetPatternUsage gets usage statistics for patterns.
GetPatternUsage(ctx context.Context) (map[string]int, error)
}
@@ -431,41 +434,41 @@ type QueryEngine interface {
// Query performs a general context query.
// Supports complex queries with multiple criteria and filters.
Query(ctx context.Context, query *SearchQuery) ([]*SearchResult, error)
// SearchByTag finds contexts by tag.
// Optimized search for tag-based filtering.
SearchByTag(ctx context.Context, tags []string) ([]*SearchResult, error)
// SearchByTechnology finds contexts by technology.
// Finds contexts using specific technologies.
SearchByTechnology(ctx context.Context, technologies []string) ([]*SearchResult, error)
// SearchByPath finds contexts by path pattern.
// Supports glob patterns and regex for path matching.
SearchByPath(ctx context.Context, pathPattern string) ([]*SearchResult, error)
// TemporalQuery performs temporal-aware queries.
// Queries context as it existed at specific decision points.
TemporalQuery(ctx context.Context, query *SearchQuery,
temporal *TemporalFilter) ([]*SearchResult, error)
// FuzzySearch performs fuzzy text search.
// Handles typos and approximate matching.
FuzzySearch(ctx context.Context, text string, threshold float64) ([]*SearchResult, error)
// GetSuggestions gets search suggestions and auto-complete.
GetSuggestions(ctx context.Context, prefix string, limit int) ([]string, error)
// GetFacets gets faceted search information.
// Returns available filters and their counts.
GetFacets(ctx context.Context, query *SearchQuery) (map[string]map[string]int, error)
// BuildIndex builds search indexes for efficient querying.
BuildIndex(ctx context.Context, rebuild bool) error
// OptimizeIndex optimizes search indexes for performance.
OptimizeIndex(ctx context.Context) error
// GetQueryStats gets query performance statistics.
GetQueryStats(ctx context.Context) (*QueryStats, error)
}
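// Example (illustrative sketch; the tag and threshold are hypothetical):
// tag search with a fuzzy-match fallback when no exact hits are found.
func exampleQuery(ctx context.Context, qe QueryEngine) ([]*SearchResult, error) {
	results, err := qe.SearchByTag(ctx, []string{"sec-high"})
	if err != nil {
		return nil, err
	}
	if len(results) > 0 {
		return results, nil
	}
	return qe.FuzzySearch(ctx, "security high", 0.7)
}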
@@ -497,83 +500,81 @@ type HealthChecker interface {
// Additional types needed by interfaces
import "time"
type StorageStats struct {
TotalKeys int64 `json:"total_keys"`
TotalSize int64 `json:"total_size"`
IndexSize int64 `json:"index_size"`
CacheSize int64 `json:"cache_size"`
ReplicationStatus string `json:"replication_status"`
LastSync time.Time `json:"last_sync"`
SyncErrors int64 `json:"sync_errors"`
AvailableSpace int64 `json:"available_space"`
}
type GenerationStatus struct {
ActiveTasks int `json:"active_tasks"`
QueuedTasks int `json:"queued_tasks"`
CompletedTasks int `json:"completed_tasks"`
FailedTasks int `json:"failed_tasks"`
EstimatedCompletion time.Time `json:"estimated_completion"`
CurrentTask *GenerationTask `json:"current_task,omitempty"`
}
type GenerationTask struct {
ID string `json:"id"`
Path string `json:"path"`
Status string `json:"status"`
Progress float64 `json:"progress"`
StartedAt time.Time `json:"started_at"`
EstimatedCompletion time.Time `json:"estimated_completion"`
Error string `json:"error,omitempty"`
}
type TrendAnalysis struct {
TimeRange time.Duration `json:"time_range"`
TotalChanges int `json:"total_changes"`
ChangeVelocity float64 `json:"change_velocity"`
DominantReasons []ChangeReason `json:"dominant_reasons"`
QualityTrend string `json:"quality_trend"`
ConfidenceTrend string `json:"confidence_trend"`
MostActiveAreas []string `json:"most_active_areas"`
EmergingPatterns []*Pattern `json:"emerging_patterns"`
AnalyzedAt time.Time `json:"analyzed_at"`
}
type ComparisonResult struct {
SimilarityScore float64 `json:"similarity_score"`
Differences []*Difference `json:"differences"`
CommonElements []string `json:"common_elements"`
Recommendations []*Suggestion `json:"recommendations"`
ComparedAt time.Time `json:"compared_at"`
}
type Difference struct {
Field string `json:"field"`
Value1 interface{} `json:"value1"`
Value2 interface{} `json:"value2"`
DifferenceType string `json:"difference_type"`
Significance float64 `json:"significance"`
}
type ConsistencyIssue struct {
Type string `json:"type"`
Description string `json:"description"`
AffectedNodes []string `json:"affected_nodes"`
Severity string `json:"severity"`
Suggestion string `json:"suggestion"`
DetectedAt time.Time `json:"detected_at"`
}
type QueryStats struct {
TotalQueries int64 `json:"total_queries"`
AverageQueryTime time.Duration `json:"average_query_time"`
CacheHitRate float64 `json:"cache_hit_rate"`
IndexUsage map[string]int64 `json:"index_usage"`
PopularQueries []string `json:"popular_queries"`
SlowQueries []string `json:"slow_queries"`
ErrorRate float64 `json:"error_rate"`
}
type CacheStats struct {
@@ -588,17 +589,17 @@ type CacheStats struct {
}
type HealthStatus struct {
Overall string `json:"overall"`
Components map[string]*ComponentHealth `json:"components"`
CheckedAt time.Time `json:"checked_at"`
Version string `json:"version"`
Uptime time.Duration `json:"uptime"`
}
type ComponentHealth struct {
Status string `json:"status"`
Message string `json:"message,omitempty"`
LastCheck time.Time `json:"last_check"`
ResponseTime time.Duration `json:"response_time"`
Metadata map[string]interface{} `json:"metadata,omitempty"`
}

View File

@@ -631,7 +631,7 @@ func (s *SLURP) GetTemporalEvolution(ctx context.Context, ucxlAddress string) ([
return nil, fmt.Errorf("invalid UCXL address: %w", err)
}
return s.temporalGraph.GetEvolutionHistory(ctx, *parsed)
return s.temporalGraph.GetEvolutionHistory(ctx, parsed.String())
}
// NavigateDecisionHops navigates through the decision graph by hop distance.
@@ -654,7 +654,7 @@ func (s *SLURP) NavigateDecisionHops(ctx context.Context, ucxlAddress string, ho
}
if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
return navigator.NavigateDecisionHops(ctx, *parsed, hops, direction)
return navigator.NavigateDecisionHops(ctx, parsed.String(), hops, direction)
}
return nil, fmt.Errorf("decision navigation not supported by temporal graph")
@@ -1348,26 +1348,42 @@ func (s *SLURP) handleEvent(event *SLURPEvent) {
}
}
// validateSLURPConfig validates SLURP configuration for consistency and correctness
func validateSLURPConfig(config *SLURPConfig) error {
if config.ContextResolution.MaxHierarchyDepth < 1 {
return fmt.Errorf("max_hierarchy_depth must be at least 1")
// validateSLURPConfig normalises runtime tunables sourced from configuration.
func validateSLURPConfig(cfg *config.SlurpConfig) error {
if cfg == nil {
return fmt.Errorf("slurp config is nil")
}
if config.ContextResolution.MinConfidenceThreshold < 0 || config.ContextResolution.MinConfidenceThreshold > 1 {
return fmt.Errorf("min_confidence_threshold must be between 0 and 1")
if cfg.Timeout <= 0 {
cfg.Timeout = 15 * time.Second
}
if config.TemporalAnalysis.MaxDecisionHops < 1 {
return fmt.Errorf("max_decision_hops must be at least 1")
if cfg.RetryCount < 0 {
cfg.RetryCount = 0
}
if config.TemporalAnalysis.StalenessThreshold < 0 || config.TemporalAnalysis.StalenessThreshold > 1 {
return fmt.Errorf("staleness_threshold must be between 0 and 1")
if cfg.RetryDelay <= 0 && cfg.RetryCount > 0 {
cfg.RetryDelay = 2 * time.Second
}
if config.Performance.MaxConcurrentResolutions < 1 {
return fmt.Errorf("max_concurrent_resolutions must be at least 1")
if cfg.Performance.MaxConcurrentResolutions <= 0 {
cfg.Performance.MaxConcurrentResolutions = 1
}
if cfg.Performance.MetricsCollectionInterval <= 0 {
cfg.Performance.MetricsCollectionInterval = time.Minute
}
if cfg.TemporalAnalysis.MaxDecisionHops <= 0 {
cfg.TemporalAnalysis.MaxDecisionHops = 1
}
if cfg.TemporalAnalysis.StalenessCheckInterval <= 0 {
cfg.TemporalAnalysis.StalenessCheckInterval = 5 * time.Minute
}
if cfg.TemporalAnalysis.StalenessThreshold < 0 || cfg.TemporalAnalysis.StalenessThreshold > 1 {
cfg.TemporalAnalysis.StalenessThreshold = 0.2
}
return nil

View File

@@ -164,6 +164,8 @@ func (bm *BackupManagerImpl) CreateBackup(
Incremental: config.Incremental,
ParentBackupID: config.ParentBackupID,
Status: BackupStatusInProgress,
Progress: 0,
ErrorMessage: "",
CreatedAt: time.Now(),
RetentionUntil: time.Now().Add(config.Retention),
}
@@ -707,6 +709,7 @@ func (bm *BackupManagerImpl) validateFile(filePath string) error {
func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
bm.mu.Lock()
backupInfo.Status = BackupStatusFailed
backupInfo.Progress = 0
backupInfo.ErrorMessage = err.Error()
job.Error = err
bm.mu.Unlock()

View File

@@ -3,18 +3,19 @@ package storage
import (
"context"
"fmt"
"strings"
"sync"
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// BatchOperationsImpl provides efficient batch operations for context storage
type BatchOperationsImpl struct {
contextStore *ContextStoreImpl
batchSize int
maxConcurrency int
operationTimeout time.Duration
}
@@ -22,8 +23,8 @@ type BatchOperationsImpl struct {
func NewBatchOperations(contextStore *ContextStoreImpl, batchSize, maxConcurrency int, timeout time.Duration) *BatchOperationsImpl {
return &BatchOperationsImpl{
contextStore: contextStore,
batchSize: batchSize,
maxConcurrency: maxConcurrency,
operationTimeout: timeout,
}
}
@@ -89,7 +90,7 @@ func (cs *ContextStoreImpl) BatchStore(
result.ErrorCount++
key := workResult.Item.Context.UCXLAddress.String()
result.Errors[key] = workResult.Error
if batch.FailOnError {
// Cancel remaining operations
result.ProcessingTime = time.Since(start)
@@ -164,11 +165,11 @@ func (cs *ContextStoreImpl) BatchRetrieve(
// Process results
for workResult := range resultsCh {
addressStr := workResult.Address.String()
if workResult.Error != nil {
result.ErrorCount++
result.Errors[addressStr] = workResult.Error
if batch.FailOnError {
// Cancel remaining operations
result.ProcessingTime = time.Since(start)

View File

@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"regexp"
"sync"
"time"
@@ -13,13 +12,13 @@ import (
// CacheManagerImpl implements the CacheManager interface using Redis
type CacheManagerImpl struct {
mu sync.RWMutex
client *redis.Client
stats *CacheStatistics
policy *CachePolicy
prefix string
nodeID string
warmupKeys map[string]bool
}
// NewCacheManager creates a new cache manager with Redis backend
@@ -43,7 +42,7 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
// Test connection
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := client.Ping(ctx).Err(); err != nil {
return nil, fmt.Errorf("failed to connect to Redis: %w", err)
}
@@ -68,13 +67,13 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
// DefaultCachePolicy returns default caching policy
func DefaultCachePolicy() *CachePolicy {
return &CachePolicy{
TTL: 24 * time.Hour,
MaxSize: 1024 * 1024 * 1024, // 1GB
EvictionPolicy: "LRU",
RefreshThreshold: 0.8, // Refresh when 80% of TTL elapsed
WarmupEnabled: true,
CompressEntries: true,
MaxEntrySize: 10 * 1024 * 1024, // 10MB
}
}
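// Example (illustrative sketch): callers can start from the defaults above and
// tighten them for memory-constrained nodes before constructing the manager.
func exampleSmallCachePolicy() *CachePolicy {
	policy := DefaultCachePolicy()
	policy.TTL = 6 * time.Hour
	policy.MaxSize = 128 * 1024 * 1024 // 128MB
	policy.CompressEntries = true
	return policy
}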
@@ -203,7 +202,7 @@ func (cm *CacheManagerImpl) Set(
// Delete removes data from cache
func (cm *CacheManagerImpl) Delete(ctx context.Context, key string) error {
cacheKey := cm.buildCacheKey(key)
if err := cm.client.Del(ctx, cacheKey).Err(); err != nil {
return fmt.Errorf("cache delete error: %w", err)
}
@@ -215,37 +214,37 @@ func (cm *CacheManagerImpl) Delete(ctx context.Context, key string) error {
func (cm *CacheManagerImpl) DeletePattern(ctx context.Context, pattern string) error {
// Build full pattern with prefix
fullPattern := cm.buildCacheKey(pattern)
// Use Redis SCAN to find matching keys
var cursor uint64
var keys []string
for {
result, nextCursor, err := cm.client.Scan(ctx, cursor, fullPattern, 100).Result()
if err != nil {
return fmt.Errorf("cache scan error: %w", err)
}
keys = append(keys, result...)
cursor = nextCursor
if cursor == 0 {
break
}
}
// Delete found keys in batches
if len(keys) > 0 {
pipeline := cm.client.Pipeline()
for _, key := range keys {
pipeline.Del(ctx, key)
}
if _, err := pipeline.Exec(ctx); err != nil {
return fmt.Errorf("cache batch delete error: %w", err)
}
}
return nil
}
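// Example (illustrative sketch; the key prefix is hypothetical): bulk cache
// invalidation for one project using the SCAN + pipelined delete path above.
func exampleInvalidateProject(ctx context.Context, cm *CacheManagerImpl) error {
	// Matches every cached context key under the project prefix.
	return cm.DeletePattern(ctx, "context:project-alpha:*")
}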
@@ -282,7 +281,7 @@ func (cm *CacheManagerImpl) GetCacheStats() (*CacheStatistics, error) {
// Update Redis memory usage
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
info, err := cm.client.Info(ctx, "memory").Result()
if err == nil {
// Parse memory info to get actual usage
@@ -314,17 +313,17 @@ func (cm *CacheManagerImpl) SetCachePolicy(policy *CachePolicy) error {
// CacheEntry represents a cached data entry with metadata
type CacheEntry struct {
Key string `json:"key"`
Data []byte `json:"data"`
CreatedAt time.Time `json:"created_at"`
ExpiresAt time.Time `json:"expires_at"`
TTL time.Duration `json:"ttl"`
AccessCount int64 `json:"access_count"`
LastAccessedAt time.Time `json:"last_accessed_at"`
Compressed bool `json:"compressed"`
OriginalSize int64 `json:"original_size"`
CompressedSize int64 `json:"compressed_size"`
NodeID string `json:"node_id"`
}
// Helper methods
@@ -361,7 +360,7 @@ func (cm *CacheManagerImpl) recordMiss() {
func (cm *CacheManagerImpl) updateAccessStats(duration time.Duration) {
cm.mu.Lock()
defer cm.mu.Unlock()
if cm.stats.AverageLoadTime == 0 {
cm.stats.AverageLoadTime = duration
} else {

View File

@@ -3,20 +3,18 @@ package storage
import (
"bytes"
"context"
"os"
"strings"
"testing"
"time"
)
func TestLocalStorageCompression(t *testing.T) {
// Create temporary directory for test
tempDir := t.TempDir()
// Create storage with compression enabled
options := DefaultLocalStorageOptions()
options.Compression = true
storage, err := NewLocalStorage(tempDir, options)
if err != nil {
t.Fatalf("Failed to create storage: %v", err)
@@ -25,24 +23,24 @@ func TestLocalStorageCompression(t *testing.T) {
// Test data that should compress well
largeData := strings.Repeat("This is a test string that should compress well! ", 100)
// Store with compression enabled
storeOptions := &StoreOptions{
Compress: true,
}
ctx := context.Background()
err = storage.Store(ctx, "test-compress", largeData, storeOptions)
if err != nil {
t.Fatalf("Failed to store compressed data: %v", err)
}
// Retrieve and verify
retrieved, err := storage.Retrieve(ctx, "test-compress")
if err != nil {
t.Fatalf("Failed to retrieve compressed data: %v", err)
}
// Verify data integrity
if retrievedStr, ok := retrieved.(string); ok {
if retrievedStr != largeData {
@@ -51,21 +49,21 @@ func TestLocalStorageCompression(t *testing.T) {
} else {
t.Error("Retrieved data is not a string")
}
// Check compression stats
stats, err := storage.GetCompressionStats()
if err != nil {
t.Fatalf("Failed to get compression stats: %v", err)
}
if stats.CompressedEntries == 0 {
t.Error("Expected at least one compressed entry")
}
if stats.CompressionRatio == 0 {
t.Error("Expected non-zero compression ratio")
}
t.Logf("Compression stats: %d/%d entries compressed, ratio: %.2f",
stats.CompressedEntries, stats.TotalEntries, stats.CompressionRatio)
}
@@ -81,27 +79,27 @@ func TestCompressionMethods(t *testing.T) {
// Test data
originalData := []byte(strings.Repeat("Hello, World! ", 1000))
// Test compression
compressed, err := storage.compress(originalData)
if err != nil {
t.Fatalf("Compression failed: %v", err)
}
t.Logf("Original size: %d bytes", len(originalData))
t.Logf("Compressed size: %d bytes", len(compressed))
// Compressed data should be smaller for repetitive data
if len(compressed) >= len(originalData) {
t.Log("Compression didn't reduce size (may be expected for small or non-repetitive data)")
}
// Test decompression
decompressed, err := storage.decompress(compressed)
if err != nil {
t.Fatalf("Decompression failed: %v", err)
}
// Verify data integrity
if !bytes.Equal(originalData, decompressed) {
t.Error("Decompressed data doesn't match original")
@@ -111,7 +109,7 @@ func TestCompressionMethods(t *testing.T) {
func TestStorageOptimization(t *testing.T) {
// Create temporary directory for test
tempDir := t.TempDir()
storage, err := NewLocalStorage(tempDir, nil)
if err != nil {
t.Fatalf("Failed to create storage: %v", err)
@@ -119,7 +117,7 @@ func TestStorageOptimization(t *testing.T) {
defer storage.Close()
ctx := context.Background()
// Store multiple entries without compression
testData := []struct {
key string
@@ -130,50 +128,50 @@ func TestStorageOptimization(t *testing.T) {
{"large2", strings.Repeat("Another large repetitive dataset ", 100)},
{"medium", strings.Repeat("Medium data ", 50)},
}
for _, item := range testData {
err = storage.Store(ctx, item.key, item.data, &StoreOptions{Compress: false})
if err != nil {
t.Fatalf("Failed to store %s: %v", item.key, err)
}
}
// Check initial stats
initialStats, err := storage.GetCompressionStats()
if err != nil {
t.Fatalf("Failed to get initial stats: %v", err)
}
t.Logf("Initial: %d entries, %d compressed",
initialStats.TotalEntries, initialStats.CompressedEntries)
// Optimize storage with threshold (only compress entries larger than 100 bytes)
err = storage.OptimizeStorage(ctx, 100)
if err != nil {
t.Fatalf("Storage optimization failed: %v", err)
}
// Check final stats
finalStats, err := storage.GetCompressionStats()
if err != nil {
t.Fatalf("Failed to get final stats: %v", err)
}
t.Logf("Final: %d entries, %d compressed",
finalStats.TotalEntries, finalStats.CompressedEntries)
// Should have more compressed entries after optimization
if finalStats.CompressedEntries <= initialStats.CompressedEntries {
t.Log("Note: Optimization didn't increase compressed entries (may be expected)")
}
// Verify all data is still retrievable
for _, item := range testData {
retrieved, err := storage.Retrieve(ctx, item.key)
if err != nil {
t.Fatalf("Failed to retrieve %s after optimization: %v", item.key, err)
}
if retrievedStr, ok := retrieved.(string); ok {
if retrievedStr != item.data {
t.Errorf("Data mismatch for %s after optimization", item.key)
@@ -193,26 +191,26 @@ func TestCompressionFallback(t *testing.T) {
// Random-like data that won't compress well
randomData := []byte("a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z6")
// Test compression
compressed, err := storage.compress(randomData)
if err != nil {
t.Fatalf("Compression failed: %v", err)
}
// Should return original data if compression doesn't help
if len(compressed) >= len(randomData) {
t.Log("Compression correctly returned original data for incompressible input")
}
// Test decompression of uncompressed data
decompressed, err := storage.decompress(randomData)
if err != nil {
t.Fatalf("Decompression fallback failed: %v", err)
}
// Should return original data unchanged
if !bytes.Equal(randomData, decompressed) {
t.Error("Decompression fallback changed data")
}
}
}

View File

@@ -2,71 +2,68 @@ package storage
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"chorus/pkg/crypto"
"chorus/pkg/dht"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// ContextStoreImpl is the main implementation of the ContextStore interface
// It coordinates between local storage, distributed storage, encryption, caching, and indexing
type ContextStoreImpl struct {
mu sync.RWMutex
localStorage LocalStorage
mu sync.RWMutex
localStorage LocalStorage
distributedStorage DistributedStorage
encryptedStorage EncryptedStorage
cacheManager CacheManager
indexManager IndexManager
backupManager BackupManager
eventNotifier EventNotifier
encryptedStorage EncryptedStorage
cacheManager CacheManager
indexManager IndexManager
backupManager BackupManager
eventNotifier EventNotifier
// Configuration
nodeID string
options *ContextStoreOptions
nodeID string
options *ContextStoreOptions
// Statistics and monitoring
statistics *StorageStatistics
metricsCollector *MetricsCollector
statistics *StorageStatistics
metricsCollector *MetricsCollector
// Background processes
stopCh chan struct{}
syncTicker *time.Ticker
compactionTicker *time.Ticker
cleanupTicker *time.Ticker
stopCh chan struct{}
syncTicker *time.Ticker
compactionTicker *time.Ticker
cleanupTicker *time.Ticker
}
// ContextStoreOptions configures the context store behavior
type ContextStoreOptions struct {
// Storage configuration
PreferLocal bool `json:"prefer_local"`
AutoReplicate bool `json:"auto_replicate"`
DefaultReplicas int `json:"default_replicas"`
EncryptionEnabled bool `json:"encryption_enabled"`
CompressionEnabled bool `json:"compression_enabled"`
PreferLocal bool `json:"prefer_local"`
AutoReplicate bool `json:"auto_replicate"`
DefaultReplicas int `json:"default_replicas"`
EncryptionEnabled bool `json:"encryption_enabled"`
CompressionEnabled bool `json:"compression_enabled"`
// Caching configuration
CachingEnabled bool `json:"caching_enabled"`
CacheTTL time.Duration `json:"cache_ttl"`
CacheSize int64 `json:"cache_size"`
CachingEnabled bool `json:"caching_enabled"`
CacheTTL time.Duration `json:"cache_ttl"`
CacheSize int64 `json:"cache_size"`
// Indexing configuration
IndexingEnabled bool `json:"indexing_enabled"`
IndexingEnabled bool `json:"indexing_enabled"`
IndexRefreshInterval time.Duration `json:"index_refresh_interval"`
// Background processes
SyncInterval time.Duration `json:"sync_interval"`
CompactionInterval time.Duration `json:"compaction_interval"`
CleanupInterval time.Duration `json:"cleanup_interval"`
SyncInterval time.Duration `json:"sync_interval"`
CompactionInterval time.Duration `json:"compaction_interval"`
CleanupInterval time.Duration `json:"cleanup_interval"`
// Performance tuning
BatchSize int `json:"batch_size"`
MaxConcurrentOps int `json:"max_concurrent_ops"`
OperationTimeout time.Duration `json:"operation_timeout"`
BatchSize int `json:"batch_size"`
MaxConcurrentOps int `json:"max_concurrent_ops"`
OperationTimeout time.Duration `json:"operation_timeout"`
}
// MetricsCollector collects and aggregates storage metrics
@@ -87,16 +84,16 @@ func DefaultContextStoreOptions() *ContextStoreOptions {
EncryptionEnabled: true,
CompressionEnabled: true,
CachingEnabled: true,
CacheTTL: 24 * time.Hour,
CacheSize: 1024 * 1024 * 1024, // 1GB
IndexingEnabled: true,
CacheTTL: 24 * time.Hour,
CacheSize: 1024 * 1024 * 1024, // 1GB
IndexingEnabled: true,
IndexRefreshInterval: 5 * time.Minute,
SyncInterval: 10 * time.Minute,
CompactionInterval: 24 * time.Hour,
CleanupInterval: 1 * time.Hour,
BatchSize: 100,
MaxConcurrentOps: 10,
OperationTimeout: 30 * time.Second,
SyncInterval: 10 * time.Minute,
CompactionInterval: 24 * time.Hour,
CleanupInterval: 1 * time.Hour,
BatchSize: 100,
MaxConcurrentOps: 10,
OperationTimeout: 30 * time.Second,
}
}
@@ -124,8 +121,8 @@ func NewContextStore(
indexManager: indexManager,
backupManager: backupManager,
eventNotifier: eventNotifier,
nodeID: nodeID,
options: options,
nodeID: nodeID,
options: options,
statistics: &StorageStatistics{
LastSyncTime: time.Now(),
},
@@ -174,11 +171,11 @@ func (cs *ContextStoreImpl) StoreContext(
} else {
// Store unencrypted
storeOptions := &StoreOptions{
Encrypt: false,
Replicate: cs.options.AutoReplicate,
Index: cs.options.IndexingEnabled,
Cache: cs.options.CachingEnabled,
Compress: cs.options.CompressionEnabled,
Encrypt: false,
Replicate: cs.options.AutoReplicate,
Index: cs.options.IndexingEnabled,
Cache: cs.options.CachingEnabled,
Compress: cs.options.CompressionEnabled,
}
storeErr = cs.localStorage.Store(ctx, storageKey, node, storeOptions)
}
@@ -212,14 +209,14 @@ func (cs *ContextStoreImpl) StoreContext(
go func() {
replicateCtx, cancel := context.WithTimeout(context.Background(), cs.options.OperationTimeout)
defer cancel()
distOptions := &DistributedStoreOptions{
ReplicationFactor: cs.options.DefaultReplicas,
ConsistencyLevel: ConsistencyQuorum,
Timeout: cs.options.OperationTimeout,
SyncMode: SyncAsync,
Timeout: cs.options.OperationTimeout,
SyncMode: SyncAsync,
}
if err := cs.distributedStorage.Store(replicateCtx, storageKey, node, distOptions); err != nil {
cs.recordError("replicate", err)
}
@@ -523,11 +520,11 @@ func (cs *ContextStoreImpl) recordOperation(operation string) {
func (cs *ContextStoreImpl) recordLatency(operation string, latency time.Duration) {
cs.metricsCollector.mu.Lock()
defer cs.metricsCollector.mu.Unlock()
if cs.metricsCollector.latencyHistogram[operation] == nil {
cs.metricsCollector.latencyHistogram[operation] = make([]time.Duration, 0, 100)
}
// Keep only last 100 samples
histogram := cs.metricsCollector.latencyHistogram[operation]
if len(histogram) >= 100 {
@@ -541,7 +538,7 @@ func (cs *ContextStoreImpl) recordError(operation string, err error) {
cs.metricsCollector.mu.Lock()
defer cs.metricsCollector.mu.Unlock()
cs.metricsCollector.errorCount[operation]++
// Log the error (in production, use proper logging)
fmt.Printf("Storage error in %s: %v\n", operation, err)
}
@@ -614,7 +611,7 @@ func (cs *ContextStoreImpl) performCleanup(ctx context.Context) {
if err := cs.cacheManager.Clear(ctx); err != nil {
cs.recordError("cache_cleanup", err)
}
// Clean old metrics
cs.cleanupMetrics()
}
@@ -622,7 +619,7 @@ func (cs *ContextStoreImpl) performCleanup(ctx context.Context) {
func (cs *ContextStoreImpl) cleanupMetrics() {
cs.metricsCollector.mu.Lock()
defer cs.metricsCollector.mu.Unlock()
// Reset histograms that are too large
for operation, histogram := range cs.metricsCollector.latencyHistogram {
if len(histogram) > 1000 {
@@ -729,7 +726,7 @@ func (cs *ContextStoreImpl) Sync(ctx context.Context) error {
Type: EventSynced,
Timestamp: time.Now(),
Metadata: map[string]interface{}{
"node_id": cs.nodeID,
"node_id": cs.nodeID,
"sync_time": time.Since(start),
},
}

View File

@@ -8,69 +8,68 @@ import (
"time"
"chorus/pkg/dht"
"chorus/pkg/types"
)
// DistributedStorageImpl implements the DistributedStorage interface
type DistributedStorageImpl struct {
mu sync.RWMutex
dht dht.DHT
nodeID string
metrics *DistributedStorageStats
replicas map[string][]string // key -> replica node IDs
heartbeat *HeartbeatManager
consensus *ConsensusManager
options *DistributedStorageOptions
mu sync.RWMutex
dht dht.DHT
nodeID string
metrics *DistributedStorageStats
replicas map[string][]string // key -> replica node IDs
heartbeat *HeartbeatManager
consensus *ConsensusManager
options *DistributedStorageOptions
}
// HeartbeatManager manages node heartbeats and health
type HeartbeatManager struct {
mu sync.RWMutex
nodes map[string]*NodeHealth
mu sync.RWMutex
nodes map[string]*NodeHealth
heartbeatInterval time.Duration
timeoutThreshold time.Duration
stopCh chan struct{}
stopCh chan struct{}
}
// NodeHealth tracks the health of a distributed storage node
type NodeHealth struct {
NodeID string `json:"node_id"`
LastSeen time.Time `json:"last_seen"`
NodeID string `json:"node_id"`
LastSeen time.Time `json:"last_seen"`
Latency time.Duration `json:"latency"`
IsActive bool `json:"is_active"`
FailureCount int `json:"failure_count"`
Load float64 `json:"load"`
IsActive bool `json:"is_active"`
FailureCount int `json:"failure_count"`
Load float64 `json:"load"`
}
// ConsensusManager handles consensus operations for distributed storage
type ConsensusManager struct {
mu sync.RWMutex
pendingOps map[string]*ConsensusOperation
votingTimeout time.Duration
quorumSize int
mu sync.RWMutex
pendingOps map[string]*ConsensusOperation
votingTimeout time.Duration
quorumSize int
}
// ConsensusOperation represents a distributed operation requiring consensus
type ConsensusOperation struct {
ID string `json:"id"`
Type string `json:"type"`
Key string `json:"key"`
Data interface{} `json:"data"`
Initiator string `json:"initiator"`
Votes map[string]bool `json:"votes"`
CreatedAt time.Time `json:"created_at"`
Status ConsensusStatus `json:"status"`
Callback func(bool, error) `json:"-"`
ID string `json:"id"`
Type string `json:"type"`
Key string `json:"key"`
Data interface{} `json:"data"`
Initiator string `json:"initiator"`
Votes map[string]bool `json:"votes"`
CreatedAt time.Time `json:"created_at"`
Status ConsensusStatus `json:"status"`
Callback func(bool, error) `json:"-"`
}
// ConsensusStatus represents the status of a consensus operation
type ConsensusStatus string
const (
ConsensusPending ConsensusStatus = "pending"
ConsensusApproved ConsensusStatus = "approved"
ConsensusRejected ConsensusStatus = "rejected"
ConsensusTimeout ConsensusStatus = "timeout"
ConsensusPending ConsensusStatus = "pending"
ConsensusApproved ConsensusStatus = "approved"
ConsensusRejected ConsensusStatus = "rejected"
ConsensusTimeout ConsensusStatus = "timeout"
)
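
The consensus flow is only sketched by these types: each `ConsensusOperation` accumulates per-node votes and eventually resolves to approved, rejected, or timed out. The helper below is a minimal sketch of how those votes could be tallied against a quorum threshold; the threshold semantics and the idea of firing `Callback` on resolution are assumptions derived from the struct fields above, not the shipped `ConsensusManager` logic.

```go
// resolveConsensus is an illustrative sketch, not the actual ConsensusManager
// implementation: it tallies Votes against an assumed quorum threshold and
// invokes the operation callback once a verdict is reached.
func resolveConsensus(op *ConsensusOperation, quorumSize int) ConsensusStatus {
	approvals, rejections := 0, 0
	for _, approved := range op.Votes {
		if approved {
			approvals++
		} else {
			rejections++
		}
	}
	switch {
	case approvals >= quorumSize:
		op.Status = ConsensusApproved
	case rejections >= quorumSize:
		op.Status = ConsensusRejected
	default:
		// Not enough votes either way yet; the caller may time the operation out.
		return op.Status
	}
	if op.Callback != nil {
		op.Callback(op.Status == ConsensusApproved, nil)
	}
	return op.Status
}
```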
// NewDistributedStorage creates a new distributed storage implementation
@@ -83,9 +82,9 @@ func NewDistributedStorage(
options = &DistributedStoreOptions{
ReplicationFactor: 3,
ConsistencyLevel: ConsistencyQuorum,
Timeout: 30 * time.Second,
PreferLocal: true,
SyncMode: SyncAsync,
Timeout: 30 * time.Second,
PreferLocal: true,
SyncMode: SyncAsync,
}
}
@@ -98,10 +97,10 @@ func NewDistributedStorage(
LastRebalance: time.Now(),
},
heartbeat: &HeartbeatManager{
nodes: make(map[string]*NodeHealth),
nodes: make(map[string]*NodeHealth),
heartbeatInterval: 30 * time.Second,
timeoutThreshold: 90 * time.Second,
stopCh: make(chan struct{}),
stopCh: make(chan struct{}),
},
consensus: &ConsensusManager{
pendingOps: make(map[string]*ConsensusOperation),
@@ -125,8 +124,6 @@ func (ds *DistributedStorageImpl) Store(
data interface{},
options *DistributedStoreOptions,
) error {
start := time.Now()
if options == nil {
options = ds.options
}
@@ -179,7 +176,7 @@ func (ds *DistributedStorageImpl) Retrieve(
// Try local first if prefer local is enabled
if ds.options.PreferLocal {
if localData, err := ds.dht.Get(key); err == nil {
if localData, err := ds.dht.GetValue(ctx, key); err == nil {
return ds.deserializeEntry(localData)
}
}
@@ -226,25 +223,9 @@ func (ds *DistributedStorageImpl) Exists(
ctx context.Context,
key string,
) (bool, error) {
// Try local first
if ds.options.PreferLocal {
if exists, err := ds.dht.Exists(key); err == nil {
return exists, nil
}
if _, err := ds.dht.GetValue(ctx, key); err == nil {
return true, nil
}
// Check replicas
replicas, err := ds.getReplicationNodes(key)
if err != nil {
return false, fmt.Errorf("failed to get replication nodes: %w", err)
}
for _, nodeID := range replicas {
if exists, err := ds.checkExistsOnNode(ctx, nodeID, key); err == nil && exists {
return true, nil
}
}
return false, nil
}
@@ -306,10 +287,7 @@ func (ds *DistributedStorageImpl) FindReplicas(
// Sync synchronizes with other DHT nodes
func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
start := time.Now()
defer func() {
ds.metrics.LastRebalance = time.Now()
}()
ds.metrics.LastRebalance = time.Now()
// Get list of active nodes
activeNodes := ds.heartbeat.getActiveNodes()
@@ -346,7 +324,7 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
healthyReplicas := int64(0)
underReplicated := int64(0)
for key, replicas := range ds.replicas {
for _, replicas := range ds.replicas {
totalReplicas += int64(len(replicas))
healthy := 0
for _, nodeID := range replicas {
@@ -371,14 +349,14 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
// DistributedEntry represents a distributed storage entry
type DistributedEntry struct {
Key string `json:"key"`
Data []byte `json:"data"`
ReplicationFactor int `json:"replication_factor"`
Key string `json:"key"`
Data []byte `json:"data"`
ReplicationFactor int `json:"replication_factor"`
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Version int64 `json:"version"`
Checksum string `json:"checksum"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Version int64 `json:"version"`
Checksum string `json:"checksum"`
}
// Helper methods implementation
@@ -394,7 +372,7 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
// This is a simplified version - production would use proper consistent hashing
nodes := make([]string, 0, replicationFactor)
hash := ds.calculateKeyHash(key)
// Select nodes in a deterministic way based on key hash
for i := 0; i < replicationFactor && i < len(activeNodes); i++ {
nodeIndex := (int(hash) + i) % len(activeNodes)
@@ -405,13 +383,13 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
}
func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
// Store asynchronously on all nodes
// Store asynchronously on all nodes for SEC-SLURP-1.1a replication policy
errCh := make(chan error, len(nodes))
for _, nodeID := range nodes {
go func(node string) {
err := ds.storeOnNode(ctx, node, entry)
errorCh <- err
errCh <- err
}(nodeID)
}
@@ -429,7 +407,7 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
// If first failed, try to get at least one success
timer := time.NewTimer(10 * time.Second)
defer timer.Stop()
for i := 1; i < len(nodes); i++ {
select {
case err := <-errCh:
@@ -445,13 +423,13 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
}
func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
// Store synchronously on all nodes
// Store synchronously on all nodes per SEC-SLURP-1.1a durability target
errCh := make(chan error, len(nodes))
for _, nodeID := range nodes {
go func(node string) {
err := ds.storeOnNode(ctx, node, entry)
errorCh <- err
errCh <- err
}(nodeID)
}
@@ -476,21 +454,21 @@ func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *Distri
}
func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
// Store on quorum of nodes
// Store on quorum of nodes per SEC-SLURP-1.1a availability guardrail
quorumSize := (len(nodes) / 2) + 1
errCh := make(chan error, len(nodes))
for _, nodeID := range nodes {
go func(node string) {
err := ds.storeOnNode(ctx, node, entry)
errorCh <- err
errCh <- err
}(nodeID)
}
// Wait for quorum
successCount := 0
errorCount := 0
for i := 0; i < len(nodes); i++ {
select {
case err := <-errCh:
@@ -537,7 +515,7 @@ func (ds *DistributedStorageImpl) generateOperationID() string {
func (ds *DistributedStorageImpl) updateLatencyMetrics(latency time.Duration) {
ds.mu.Lock()
defer ds.mu.Unlock()
if ds.metrics.NetworkLatency == 0 {
ds.metrics.NetworkLatency = latency
} else {
@@ -553,11 +531,11 @@ func (ds *DistributedStorageImpl) updateLatencyMetrics(latency time.Duration) {
func (ds *DistributedStorageImpl) getReplicationNodes(key string) ([]string, error) {
ds.mu.RLock()
defer ds.mu.RUnlock()
if replicas, exists := ds.replicas[key]; exists {
return replicas, nil
}
// Fall back to consistent hashing
return ds.selectReplicationNodes(key, ds.options.ReplicationFactor)
}
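
For reference, the quorum used by `storeQuorum` is a simple majority of the selected replica set. The helper below restates that rule so the failure tolerance is easy to reason about; it assumes nothing beyond what the code above already computes (`quorum = len(nodes)/2 + 1`).

```go
// quorumFor restates the majority rule used by storeQuorum:
// 3 replicas -> 2 acks required, 5 replicas -> 3 acks required.
// tolerated is the number of replica failures a write can absorb.
func quorumFor(replicaCount int) (required int, tolerated int) {
	required = (replicaCount / 2) + 1
	tolerated = replicaCount - required
	return required, tolerated
}
```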

View File

@@ -9,7 +9,6 @@ import (
"time"
"chorus/pkg/crypto"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
)
@@ -19,25 +18,25 @@ type EncryptedStorageImpl struct {
crypto crypto.RoleCrypto
localStorage LocalStorage
keyManager crypto.KeyManager
accessControl crypto.AccessController
auditLogger crypto.AuditLogger
accessControl crypto.StorageAccessController
auditLogger crypto.StorageAuditLogger
metrics *EncryptionMetrics
}
// EncryptionMetrics tracks encryption-related metrics
type EncryptionMetrics struct {
mu sync.RWMutex
EncryptOperations int64
DecryptOperations int64
KeyRotations int64
AccessDenials int64
EncryptionErrors int64
DecryptionErrors int64
LastKeyRotation time.Time
AverageEncryptTime time.Duration
AverageDecryptTime time.Duration
ActiveEncryptionKeys int
ExpiredKeys int
mu sync.RWMutex
EncryptOperations int64
DecryptOperations int64
KeyRotations int64
AccessDenials int64
EncryptionErrors int64
DecryptionErrors int64
LastKeyRotation time.Time
AverageEncryptTime time.Duration
AverageDecryptTime time.Duration
ActiveEncryptionKeys int
ExpiredKeys int
}
// NewEncryptedStorage creates a new encrypted storage implementation
@@ -45,8 +44,8 @@ func NewEncryptedStorage(
crypto crypto.RoleCrypto,
localStorage LocalStorage,
keyManager crypto.KeyManager,
accessControl crypto.AccessController,
auditLogger crypto.AuditLogger,
accessControl crypto.StorageAccessController,
auditLogger crypto.StorageAuditLogger,
) *EncryptedStorageImpl {
return &EncryptedStorageImpl{
crypto: crypto,
@@ -286,12 +285,11 @@ func (es *EncryptedStorageImpl) GetAccessRoles(
return roles, nil
}
// RotateKeys rotates encryption keys
// RotateKeys rotates encryption keys in line with SEC-SLURP-1.1 retention constraints
func (es *EncryptedStorageImpl) RotateKeys(
ctx context.Context,
maxAge time.Duration,
) error {
start := time.Now()
defer func() {
es.metrics.mu.Lock()
es.metrics.KeyRotations++
@@ -334,7 +332,7 @@ func (es *EncryptedStorageImpl) ValidateEncryption(
// Validate each encrypted version
for _, role := range roles {
roleKey := es.generateRoleKey(key, role)
// Retrieve encrypted context
encryptedData, err := es.localStorage.Retrieve(ctx, roleKey)
if err != nil {

View File

@@ -9,22 +9,23 @@ import (
"sync"
"time"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
"github.com/blevesearch/bleve/v2"
"github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
"github.com/blevesearch/bleve/v2/analysis/lang/en"
"github.com/blevesearch/bleve/v2/mapping"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"github.com/blevesearch/bleve/v2/search/query"
)
// IndexManagerImpl implements the IndexManager interface using Bleve
type IndexManagerImpl struct {
mu sync.RWMutex
indexes map[string]bleve.Index
stats map[string]*IndexStatistics
basePath string
nodeID string
options *IndexManagerOptions
mu sync.RWMutex
indexes map[string]bleve.Index
stats map[string]*IndexStatistics
basePath string
nodeID string
options *IndexManagerOptions
}
// IndexManagerOptions configures index manager behavior
@@ -60,11 +61,11 @@ func NewIndexManager(basePath, nodeID string, options *IndexManagerOptions) (*In
}
im := &IndexManagerImpl{
indexes: make(map[string]bleve.Index),
stats: make(map[string]*IndexStatistics),
basePath: basePath,
nodeID: nodeID,
options: options,
indexes: make(map[string]bleve.Index),
stats: make(map[string]*IndexStatistics),
basePath: basePath,
nodeID: nodeID,
options: options,
}
// Start background optimization if enabled
@@ -356,11 +357,11 @@ func (im *IndexManagerImpl) createIndexMapping(config *IndexConfig) (mapping.Ind
fieldMapping.Analyzer = analyzer
fieldMapping.Store = true
fieldMapping.Index = true
if im.options.EnableHighlighting {
fieldMapping.IncludeTermVectors = true
}
docMapping.AddFieldMappingsAt(field, fieldMapping)
}
@@ -432,31 +433,31 @@ func (im *IndexManagerImpl) createIndexDocument(data interface{}) (map[string]in
return doc, nil
}
func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.SearchRequest, error) {
// Build Bleve search request from our search query
var bleveQuery bleve.Query
func (im *IndexManagerImpl) buildSearchRequest(searchQuery *SearchQuery) (*bleve.SearchRequest, error) {
// Build Bleve search request from our search query (SEC-SLURP-1.1 search path)
var bleveQuery query.Query
if query.Query == "" {
if searchQuery.Query == "" {
// Match all query
bleveQuery = bleve.NewMatchAllQuery()
} else {
// Text search query
if query.FuzzyMatch {
if searchQuery.FuzzyMatch {
// Use fuzzy query
bleveQuery = bleve.NewFuzzyQuery(query.Query)
bleveQuery = bleve.NewFuzzyQuery(searchQuery.Query)
} else {
// Use match query for better scoring
bleveQuery = bleve.NewMatchQuery(query.Query)
bleveQuery = bleve.NewMatchQuery(searchQuery.Query)
}
}
// Add filters
var conjuncts []bleve.Query
var conjuncts []query.Query
conjuncts = append(conjuncts, bleveQuery)
// Technology filters
if len(query.Technologies) > 0 {
for _, tech := range query.Technologies {
if len(searchQuery.Technologies) > 0 {
for _, tech := range searchQuery.Technologies {
techQuery := bleve.NewTermQuery(tech)
techQuery.SetField("technologies_facet")
conjuncts = append(conjuncts, techQuery)
@@ -464,8 +465,8 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
}
// Tag filters
if len(query.Tags) > 0 {
for _, tag := range query.Tags {
if len(searchQuery.Tags) > 0 {
for _, tag := range searchQuery.Tags {
tagQuery := bleve.NewTermQuery(tag)
tagQuery.SetField("tags_facet")
conjuncts = append(conjuncts, tagQuery)
@@ -479,20 +480,20 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
// Create search request
searchRequest := bleve.NewSearchRequest(bleveQuery)
// Set result options
if query.Limit > 0 && query.Limit <= im.options.MaxResults {
searchRequest.Size = query.Limit
if searchQuery.Limit > 0 && searchQuery.Limit <= im.options.MaxResults {
searchRequest.Size = searchQuery.Limit
} else {
searchRequest.Size = im.options.MaxResults
}
if query.Offset > 0 {
searchRequest.From = query.Offset
if searchQuery.Offset > 0 {
searchRequest.From = searchQuery.Offset
}
// Enable highlighting if requested
if query.HighlightTerms && im.options.EnableHighlighting {
if searchQuery.HighlightTerms && im.options.EnableHighlighting {
searchRequest.Highlight = bleve.NewHighlight()
searchRequest.Highlight.AddField("content")
searchRequest.Highlight.AddField("summary")
@@ -500,9 +501,9 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
}
// Add facets if requested
if len(query.Facets) > 0 && im.options.EnableFaceting {
if len(searchQuery.Facets) > 0 && im.options.EnableFaceting {
searchRequest.Facets = make(bleve.FacetsRequest)
for _, facet := range query.Facets {
for _, facet := range searchQuery.Facets {
switch facet {
case "technologies":
searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
@@ -535,7 +536,7 @@ func (im *IndexManagerImpl) convertSearchResults(
searchHit := &SearchResult{
MatchScore: hit.Score,
MatchedFields: make([]string, 0),
Highlights: make(map[string][]string),
Highlights: make(map[string][]string),
Rank: i + 1,
}
@@ -558,8 +559,8 @@ func (im *IndexManagerImpl) convertSearchResults(
// Parse UCXL address
if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
if addr, err := ucxl.ParseAddress(ucxlStr); err == nil {
contextNode.UCXLAddress = addr
if addr, err := ucxl.Parse(ucxlStr); err == nil {
contextNode.UCXLAddress = *addr
}
}
@@ -572,8 +573,10 @@ func (im *IndexManagerImpl) convertSearchResults(
results.Facets = make(map[string]map[string]int)
for facetName, facetResult := range searchResult.Facets {
facetCounts := make(map[string]int)
for _, term := range facetResult.Terms {
facetCounts[term.Term] = term.Count
if facetResult.Terms != nil {
for _, term := range facetResult.Terms.Terms() {
facetCounts[term.Term] = term.Count
}
}
results.Facets[facetName] = facetCounts
}

View File

@@ -4,9 +4,8 @@ import (
"context"
"time"
"chorus/pkg/ucxl"
"chorus/pkg/crypto"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// ContextStore provides the main interface for context storage and retrieval
@@ -17,40 +16,40 @@ import (
type ContextStore interface {
// StoreContext stores a context node with role-based encryption
StoreContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error
// RetrieveContext retrieves context for a UCXL address and role
RetrieveContext(ctx context.Context, address ucxl.Address, role string) (*slurpContext.ContextNode, error)
// UpdateContext updates an existing context node
UpdateContext(ctx context.Context, node *slurpContext.ContextNode, roles []string) error
// DeleteContext removes a context node from storage
DeleteContext(ctx context.Context, address ucxl.Address) error
// ExistsContext checks if context exists for an address
ExistsContext(ctx context.Context, address ucxl.Address) (bool, error)
// ListContexts lists contexts matching criteria
ListContexts(ctx context.Context, criteria *ListCriteria) ([]*slurpContext.ContextNode, error)
// SearchContexts searches contexts using query criteria
SearchContexts(ctx context.Context, query *SearchQuery) (*SearchResults, error)
// BatchStore stores multiple contexts efficiently
BatchStore(ctx context.Context, batch *BatchStoreRequest) (*BatchStoreResult, error)
// BatchRetrieve retrieves multiple contexts efficiently
BatchRetrieve(ctx context.Context, batch *BatchRetrieveRequest) (*BatchRetrieveResult, error)
// GetStorageStats returns storage statistics and health information
GetStorageStats(ctx context.Context) (*StorageStatistics, error)
// Sync synchronizes with distributed storage
Sync(ctx context.Context) error
// Backup creates a backup of stored contexts
Backup(ctx context.Context, destination string) error
// Restore restores contexts from backup
Restore(ctx context.Context, source string) error
}
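
A minimal caller-side sketch of the `ContextStore` contract above: store a context node for two roles, then read it back as one of those roles. The role names are placeholders, and the `ContextNode.UCXLAddress` field is assumed only because it is referenced elsewhere in this change.

```go
// storeAndFetch is an illustrative sketch of the ContextStore interface;
// role names are placeholders and error handling is abbreviated.
func storeAndFetch(ctx context.Context, store ContextStore, node *slurpContext.ContextNode) (*slurpContext.ContextNode, error) {
	// Encrypt/replicate for the roles that may read this context.
	if err := store.StoreContext(ctx, node, []string{"senior-architect", "backend-developer"}); err != nil {
		return nil, fmt.Errorf("store context: %w", err)
	}
	// Later, a reader with one of those roles resolves the same UCXL address.
	return store.RetrieveContext(ctx, node.UCXLAddress, "backend-developer")
}
```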
@@ -59,25 +58,25 @@ type ContextStore interface {
type LocalStorage interface {
// Store stores context data locally with optional encryption
Store(ctx context.Context, key string, data interface{}, options *StoreOptions) error
// Retrieve retrieves context data from local storage
Retrieve(ctx context.Context, key string) (interface{}, error)
// Delete removes data from local storage
Delete(ctx context.Context, key string) error
// Exists checks if data exists locally
Exists(ctx context.Context, key string) (bool, error)
// List lists all keys matching a pattern
List(ctx context.Context, pattern string) ([]string, error)
// Size returns the size of stored data
Size(ctx context.Context, key string) (int64, error)
// Compact compacts local storage to reclaim space
Compact(ctx context.Context) error
// GetLocalStats returns local storage statistics
GetLocalStats() (*LocalStorageStats, error)
}
@@ -86,25 +85,25 @@ type LocalStorage interface {
type DistributedStorage interface {
// Store stores data in the distributed DHT with replication
Store(ctx context.Context, key string, data interface{}, options *DistributedStoreOptions) error
// Retrieve retrieves data from the distributed DHT
Retrieve(ctx context.Context, key string) (interface{}, error)
// Delete removes data from the distributed DHT
Delete(ctx context.Context, key string) error
// Exists checks if data exists in the DHT
Exists(ctx context.Context, key string) (bool, error)
// Replicate ensures data is replicated across nodes
Replicate(ctx context.Context, key string, replicationFactor int) error
// FindReplicas finds all replicas of data
FindReplicas(ctx context.Context, key string) ([]string, error)
// Sync synchronizes with other DHT nodes
Sync(ctx context.Context) error
// GetDistributedStats returns distributed storage statistics
GetDistributedStats() (*DistributedStorageStats, error)
}
@@ -113,25 +112,25 @@ type DistributedStorage interface {
type EncryptedStorage interface {
// StoreEncrypted stores data encrypted for specific roles
StoreEncrypted(ctx context.Context, key string, data interface{}, roles []string) error
// RetrieveDecrypted retrieves and decrypts data for current role
RetrieveDecrypted(ctx context.Context, key string, role string) (interface{}, error)
// CanAccess checks if a role can access specific data
CanAccess(ctx context.Context, key string, role string) (bool, error)
// ListAccessibleKeys lists keys accessible to a role
ListAccessibleKeys(ctx context.Context, role string) ([]string, error)
// ReEncryptForRoles re-encrypts data for different roles
ReEncryptForRoles(ctx context.Context, key string, newRoles []string) error
// GetAccessRoles gets roles that can access specific data
GetAccessRoles(ctx context.Context, key string) ([]string, error)
// RotateKeys rotates encryption keys
RotateKeys(ctx context.Context, maxAge time.Duration) error
// ValidateEncryption validates encryption integrity
ValidateEncryption(ctx context.Context, key string) error
}
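
A short sketch against the `EncryptedStorage` interface: gate a read behind `CanAccess`, then decrypt for the caller's role. The role and key values are placeholders; only the method signatures declared above are assumed.

```go
// readForRole is an illustrative sketch; the payload type is whatever the
// caller originally stored via StoreEncrypted.
func readForRole(ctx context.Context, enc EncryptedStorage, key, role string) (interface{}, error) {
	allowed, err := enc.CanAccess(ctx, key, role)
	if err != nil {
		return nil, err
	}
	if !allowed {
		return nil, fmt.Errorf("role %s cannot access %s", role, key)
	}
	return enc.RetrieveDecrypted(ctx, key, role)
}
```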
@@ -140,25 +139,25 @@ type EncryptedStorage interface {
type CacheManager interface {
// Get retrieves data from cache
Get(ctx context.Context, key string) (interface{}, bool, error)
// Set stores data in cache with TTL
Set(ctx context.Context, key string, data interface{}, ttl time.Duration) error
// Delete removes data from cache
Delete(ctx context.Context, key string) error
// DeletePattern removes cache entries matching pattern
DeletePattern(ctx context.Context, pattern string) error
// Clear clears all cache entries
Clear(ctx context.Context) error
// Warm pre-loads cache with frequently accessed data
Warm(ctx context.Context, keys []string) error
// GetCacheStats returns cache performance statistics
GetCacheStats() (*CacheStatistics, error)
// SetCachePolicy sets caching policy
SetCachePolicy(policy *CachePolicy) error
}
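
The `CacheManager` pairs naturally with a read-through pattern: consult the cache, fall back to a caller-supplied loader on a miss, then populate with a TTL. The sketch below assumes only the `Get`/`Set` signatures defined above.

```go
// getOrLoad is an illustrative read-through helper over CacheManager.
func getOrLoad(ctx context.Context, cache CacheManager, key string, ttl time.Duration,
	load func(context.Context) (interface{}, error)) (interface{}, error) {
	if value, found, err := cache.Get(ctx, key); err == nil && found {
		return value, nil // cache hit
	}
	value, err := load(ctx) // cache miss: fall back to authoritative storage
	if err != nil {
		return nil, err
	}
	// Best-effort population; a failed Set should not fail the read.
	_ = cache.Set(ctx, key, value, ttl)
	return value, nil
}
```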
@@ -167,25 +166,25 @@ type CacheManager interface {
type IndexManager interface {
// CreateIndex creates a search index for contexts
CreateIndex(ctx context.Context, indexName string, config *IndexConfig) error
// UpdateIndex updates search index with new data
UpdateIndex(ctx context.Context, indexName string, key string, data interface{}) error
// DeleteFromIndex removes data from search index
DeleteFromIndex(ctx context.Context, indexName string, key string) error
// Search searches indexed data using query
Search(ctx context.Context, indexName string, query *SearchQuery) (*SearchResults, error)
// RebuildIndex rebuilds search index from stored data
RebuildIndex(ctx context.Context, indexName string) error
// OptimizeIndex optimizes search index for performance
OptimizeIndex(ctx context.Context, indexName string) error
// GetIndexStats returns index statistics
GetIndexStats(ctx context.Context, indexName string) (*IndexStatistics, error)
// ListIndexes lists all available indexes
ListIndexes(ctx context.Context) ([]string, error)
}
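
A sketch of driving the `IndexManager` search path with the `SearchQuery` fields the Bleve request builder in this change actually reads (`Query`, `Tags`, `Limit`, `HighlightTerms`). The index name is a placeholder, and no other query fields are assumed.

```go
// searchContexts is an illustrative sketch; "contexts" is a placeholder index
// name, not an agreed convention.
func searchContexts(ctx context.Context, im IndexManager, term string) (*SearchResults, error) {
	q := &SearchQuery{
		Query:          term,
		Tags:           []string{"storage"},
		Limit:          20,
		HighlightTerms: true,
	}
	return im.Search(ctx, "contexts", q)
}
```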
@@ -194,22 +193,22 @@ type IndexManager interface {
type BackupManager interface {
// CreateBackup creates a backup of stored data
CreateBackup(ctx context.Context, config *BackupConfig) (*BackupInfo, error)
// RestoreBackup restores data from backup
RestoreBackup(ctx context.Context, backupID string, config *RestoreConfig) error
// ListBackups lists available backups
ListBackups(ctx context.Context) ([]*BackupInfo, error)
// DeleteBackup removes a backup
DeleteBackup(ctx context.Context, backupID string) error
// ValidateBackup validates backup integrity
ValidateBackup(ctx context.Context, backupID string) (*BackupValidation, error)
// ScheduleBackup schedules automatic backups
ScheduleBackup(ctx context.Context, schedule *BackupSchedule) error
// GetBackupStats returns backup statistics
GetBackupStats(ctx context.Context) (*BackupStatistics, error)
}
@@ -218,13 +217,13 @@ type BackupManager interface {
type TransactionManager interface {
// BeginTransaction starts a new transaction
BeginTransaction(ctx context.Context) (*Transaction, error)
// CommitTransaction commits a transaction
CommitTransaction(ctx context.Context, tx *Transaction) error
// RollbackTransaction rolls back a transaction
RollbackTransaction(ctx context.Context, tx *Transaction) error
// GetActiveTransactions returns list of active transactions
GetActiveTransactions(ctx context.Context) ([]*Transaction, error)
}
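
The `TransactionManager` contract suggests the usual begin/commit/rollback envelope. A hedged sketch of that wrapper, assuming nothing beyond the three methods declared above:

```go
// withTransaction is an illustrative wrapper over TransactionManager.
func withTransaction(ctx context.Context, tm TransactionManager, work func(tx *Transaction) error) error {
	tx, err := tm.BeginTransaction(ctx)
	if err != nil {
		return err
	}
	if err := work(tx); err != nil {
		// Roll back on failure and surface the original error.
		_ = tm.RollbackTransaction(ctx, tx)
		return err
	}
	return tm.CommitTransaction(ctx, tx)
}
```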
@@ -233,19 +232,19 @@ type TransactionManager interface {
type EventNotifier interface {
// NotifyStored notifies when data is stored
NotifyStored(ctx context.Context, event *StorageEvent) error
// NotifyRetrieved notifies when data is retrieved
NotifyRetrieved(ctx context.Context, event *StorageEvent) error
// NotifyUpdated notifies when data is updated
NotifyUpdated(ctx context.Context, event *StorageEvent) error
// NotifyDeleted notifies when data is deleted
NotifyDeleted(ctx context.Context, event *StorageEvent) error
// Subscribe subscribes to storage events
Subscribe(ctx context.Context, eventType EventType, handler EventHandler) error
// Unsubscribe unsubscribes from storage events
Unsubscribe(ctx context.Context, eventType EventType, handler EventHandler) error
}
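
Subscribing to storage events follows the `EventHandler` type above. The sketch below registers a handler for sync events; `EventSynced` is the only event constant visible in this change, so other event types are not assumed.

```go
// logSyncEvents is an illustrative subscription; it relies only on the
// EventHandler type and the EventSynced constant used elsewhere in this change.
func logSyncEvents(ctx context.Context, notifier EventNotifier) error {
	handler := func(event *StorageEvent) error {
		fmt.Printf("storage sync event key=%s at=%s\n", event.Key, event.Timestamp.Format(time.RFC3339))
		return nil
	}
	return notifier.Subscribe(ctx, EventSynced, handler)
}
```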
@@ -270,35 +269,35 @@ type EventHandler func(event *StorageEvent) error
// StorageEvent represents a storage operation event
type StorageEvent struct {
Type EventType `json:"type"` // Event type
Key string `json:"key"` // Storage key
Data interface{} `json:"data"` // Event data
Timestamp time.Time `json:"timestamp"` // When event occurred
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
Type EventType `json:"type"` // Event type
Key string `json:"key"` // Storage key
Data interface{} `json:"data"` // Event data
Timestamp time.Time `json:"timestamp"` // When event occurred
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// Transaction represents a storage transaction
type Transaction struct {
ID string `json:"id"` // Transaction ID
StartTime time.Time `json:"start_time"` // When transaction started
ID string `json:"id"` // Transaction ID
StartTime time.Time `json:"start_time"` // When transaction started
Operations []*TransactionOperation `json:"operations"` // Transaction operations
Status TransactionStatus `json:"status"` // Transaction status
Status TransactionStatus `json:"status"` // Transaction status
}
// TransactionOperation represents a single operation in a transaction
type TransactionOperation struct {
Type string `json:"type"` // Operation type
Key string `json:"key"` // Storage key
Data interface{} `json:"data"` // Operation data
Metadata map[string]interface{} `json:"metadata"` // Operation metadata
Type string `json:"type"` // Operation type
Key string `json:"key"` // Storage key
Data interface{} `json:"data"` // Operation data
Metadata map[string]interface{} `json:"metadata"` // Operation metadata
}
// TransactionStatus represents transaction status
type TransactionStatus string
const (
TransactionActive TransactionStatus = "active"
TransactionCommitted TransactionStatus = "committed"
TransactionActive TransactionStatus = "active"
TransactionCommitted TransactionStatus = "committed"
TransactionRolledBack TransactionStatus = "rolled_back"
TransactionFailed TransactionStatus = "failed"
)
TransactionFailed TransactionStatus = "failed"
)

View File

@@ -33,12 +33,12 @@ type LocalStorageImpl struct {
// LocalStorageOptions configures local storage behavior
type LocalStorageOptions struct {
Compression bool `json:"compression"` // Enable compression
CacheSize int `json:"cache_size"` // Cache size in MB
WriteBuffer int `json:"write_buffer"` // Write buffer size in MB
MaxOpenFiles int `json:"max_open_files"` // Maximum open files
BlockSize int `json:"block_size"` // Block size in KB
SyncWrites bool `json:"sync_writes"` // Synchronous writes
Compression bool `json:"compression"` // Enable compression
CacheSize int `json:"cache_size"` // Cache size in MB
WriteBuffer int `json:"write_buffer"` // Write buffer size in MB
MaxOpenFiles int `json:"max_open_files"` // Maximum open files
BlockSize int `json:"block_size"` // Block size in KB
SyncWrites bool `json:"sync_writes"` // Synchronous writes
CompactionInterval time.Duration `json:"compaction_interval"` // Auto-compaction interval
}
@@ -46,11 +46,11 @@ type LocalStorageOptions struct {
func DefaultLocalStorageOptions() *LocalStorageOptions {
return &LocalStorageOptions{
Compression: true,
CacheSize: 64, // 64MB cache
WriteBuffer: 16, // 16MB write buffer
MaxOpenFiles: 1000,
BlockSize: 4, // 4KB blocks
SyncWrites: false,
CacheSize: 64, // 64MB cache
WriteBuffer: 16, // 16MB write buffer
MaxOpenFiles: 1000,
BlockSize: 4, // 4KB blocks
SyncWrites: false,
CompactionInterval: 24 * time.Hour,
}
}
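
Tuning these defaults for a durability-sensitive node is mostly a matter of overriding a few fields before constructing the store. A sketch, assuming the `NewLocalStorage` constructor as used in the compression tests above; the chosen values are illustrative, not recommendations.

```go
// newDurableLocalStorage is an illustrative configuration sketch.
func newDurableLocalStorage(basePath string) (LocalStorage, error) {
	opts := DefaultLocalStorageOptions()
	opts.SyncWrites = true                  // trade write latency for durability
	opts.CompactionInterval = 6 * time.Hour // reclaim space more aggressively
	opts.Compression = true                 // keep gzip for large contexts
	return NewLocalStorage(basePath, opts)
}
```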
@@ -135,13 +135,14 @@ func (ls *LocalStorageImpl) Store(
UpdatedAt: time.Now(),
Metadata: make(map[string]interface{}),
}
entry.Checksum = ls.computeChecksum(dataBytes)
// Apply options
if options != nil {
entry.TTL = options.TTL
entry.Compressed = options.Compress
entry.AccessLevel = string(options.AccessLevel)
// Copy metadata
for k, v := range options.Metadata {
entry.Metadata[k] = v
@@ -179,6 +180,7 @@ func (ls *LocalStorageImpl) Store(
if entry.Compressed {
ls.metrics.CompressedSize += entry.CompressedSize
}
ls.updateFileMetricsLocked()
return nil
}
@@ -231,6 +233,14 @@ func (ls *LocalStorageImpl) Retrieve(ctx context.Context, key string) (interface
dataBytes = decompressedData
}
// Verify integrity against stored checksum (SEC-SLURP-1.1a requirement)
if entry.Checksum != "" {
computed := ls.computeChecksum(dataBytes)
if computed != entry.Checksum {
return nil, fmt.Errorf("data integrity check failed for key %s", key)
}
}
// Deserialize data
var result interface{}
if err := json.Unmarshal(dataBytes, &result); err != nil {
@@ -260,6 +270,7 @@ func (ls *LocalStorageImpl) Delete(ctx context.Context, key string) error {
if entryBytes != nil {
ls.metrics.TotalSize -= int64(len(entryBytes))
}
ls.updateFileMetricsLocked()
return nil
}
@@ -350,7 +361,7 @@ func (ls *LocalStorageImpl) Compact(ctx context.Context) error {
// Update metrics
ls.metrics.LastCompaction = time.Now()
compactionTime := time.Since(start)
// Calculate new fragmentation ratio
ls.updateFragmentationRatio()
@@ -397,6 +408,7 @@ type StorageEntry struct {
Compressed bool `json:"compressed"`
OriginalSize int64 `json:"original_size"`
CompressedSize int64 `json:"compressed_size"`
Checksum string `json:"checksum"`
AccessLevel string `json:"access_level"`
Metadata map[string]interface{} `json:"metadata"`
}
@@ -406,34 +418,70 @@ type StorageEntry struct {
func (ls *LocalStorageImpl) compress(data []byte) ([]byte, error) {
// Use gzip compression for efficient data storage
var buf bytes.Buffer
// Create gzip writer (default compression level)
writer := gzip.NewWriter(&buf)
writer.Header.Name = "storage_data"
writer.Header.Comment = "CHORUS SLURP local storage compressed data"
// Write data to gzip writer
if _, err := writer.Write(data); err != nil {
writer.Close()
return nil, fmt.Errorf("failed to write compressed data: %w", err)
}
// Close writer to flush data
if err := writer.Close(); err != nil {
return nil, fmt.Errorf("failed to close gzip writer: %w", err)
}
compressed := buf.Bytes()
// Only return compressed data if it's actually smaller
if len(compressed) >= len(data) {
// Compression didn't help, return original data
return data, nil
}
return compressed, nil
}
func (ls *LocalStorageImpl) computeChecksum(data []byte) string {
// Compute SHA-256 checksum to satisfy SEC-SLURP-1.1a integrity tracking
digest := sha256.Sum256(data)
return fmt.Sprintf("%x", digest)
}
func (ls *LocalStorageImpl) updateFileMetricsLocked() {
// Refresh filesystem metrics using io/fs traversal (SEC-SLURP-1.1a durability telemetry)
var fileCount int64
var aggregateSize int64
walkErr := fs.WalkDir(os.DirFS(ls.basePath), ".", func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if d.IsDir() {
return nil
}
fileCount++
if info, infoErr := d.Info(); infoErr == nil {
aggregateSize += info.Size()
}
return nil
})
if walkErr != nil {
fmt.Printf("filesystem metrics refresh failed: %v\n", walkErr)
return
}
ls.metrics.TotalFiles = fileCount
if aggregateSize > 0 {
ls.metrics.TotalSize = aggregateSize
}
}
func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
// Create gzip reader
reader, err := gzip.NewReader(bytes.NewReader(data))
@@ -442,13 +490,13 @@ func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
return data, nil
}
defer reader.Close()
// Read decompressed data
var buf bytes.Buffer
if _, err := io.Copy(&buf, reader); err != nil {
return nil, fmt.Errorf("failed to decompress data: %w", err)
}
return buf.Bytes(), nil
}
@@ -462,7 +510,7 @@ func (ls *LocalStorageImpl) getAvailableSpace() (int64, error) {
// Calculate available space in bytes
// Available blocks * block size
availableBytes := int64(stat.Bavail) * int64(stat.Bsize)
return availableBytes, nil
}
@@ -498,11 +546,11 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
defer ls.mu.RUnlock()
stats := &CompressionStats{
TotalEntries: 0,
TotalEntries: 0,
CompressedEntries: 0,
TotalSize: ls.metrics.TotalSize,
CompressedSize: ls.metrics.CompressedSize,
CompressionRatio: 0.0,
TotalSize: ls.metrics.TotalSize,
CompressedSize: ls.metrics.CompressedSize,
CompressionRatio: 0.0,
}
// Iterate through all entries to get accurate stats
@@ -511,7 +559,7 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
for iter.Next() {
stats.TotalEntries++
// Try to parse entry to check if compressed
var entry StorageEntry
if err := json.Unmarshal(iter.Value(), &entry); err == nil {
@@ -549,7 +597,7 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho
}
key := string(iter.Key())
// Parse existing entry
var entry StorageEntry
if err := json.Unmarshal(iter.Value(), &entry); err != nil {
@@ -599,11 +647,11 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho
// CompressionStats holds compression statistics
type CompressionStats struct {
TotalEntries int64 `json:"total_entries"`
TotalEntries int64 `json:"total_entries"`
CompressedEntries int64 `json:"compressed_entries"`
TotalSize int64 `json:"total_size"`
CompressedSize int64 `json:"compressed_size"`
CompressionRatio float64 `json:"compression_ratio"`
TotalSize int64 `json:"total_size"`
CompressedSize int64 `json:"compressed_size"`
CompressionRatio float64 `json:"compression_ratio"`
}
// Close closes the local storage

View File

@@ -14,77 +14,77 @@ import (
// MonitoringSystem provides comprehensive monitoring for the storage system
type MonitoringSystem struct {
mu sync.RWMutex
nodeID string
metrics *StorageMetrics
alerts *AlertManager
healthChecker *HealthChecker
mu sync.RWMutex
nodeID string
metrics *StorageMetrics
alerts *AlertManager
healthChecker *HealthChecker
performanceProfiler *PerformanceProfiler
logger *StructuredLogger
notifications chan *MonitoringEvent
stopCh chan struct{}
logger *StructuredLogger
notifications chan *MonitoringEvent
stopCh chan struct{}
}
// StorageMetrics contains all Prometheus metrics for storage operations
type StorageMetrics struct {
// Operation counters
StoreOperations prometheus.Counter
RetrieveOperations prometheus.Counter
DeleteOperations prometheus.Counter
UpdateOperations prometheus.Counter
SearchOperations prometheus.Counter
BatchOperations prometheus.Counter
StoreOperations prometheus.Counter
RetrieveOperations prometheus.Counter
DeleteOperations prometheus.Counter
UpdateOperations prometheus.Counter
SearchOperations prometheus.Counter
BatchOperations prometheus.Counter
// Error counters
StoreErrors prometheus.Counter
RetrieveErrors prometheus.Counter
EncryptionErrors prometheus.Counter
DecryptionErrors prometheus.Counter
ReplicationErrors prometheus.Counter
CacheErrors prometheus.Counter
IndexErrors prometheus.Counter
StoreErrors prometheus.Counter
RetrieveErrors prometheus.Counter
EncryptionErrors prometheus.Counter
DecryptionErrors prometheus.Counter
ReplicationErrors prometheus.Counter
CacheErrors prometheus.Counter
IndexErrors prometheus.Counter
// Latency histograms
StoreLatency prometheus.Histogram
RetrieveLatency prometheus.Histogram
EncryptionLatency prometheus.Histogram
DecryptionLatency prometheus.Histogram
ReplicationLatency prometheus.Histogram
SearchLatency prometheus.Histogram
StoreLatency prometheus.Histogram
RetrieveLatency prometheus.Histogram
EncryptionLatency prometheus.Histogram
DecryptionLatency prometheus.Histogram
ReplicationLatency prometheus.Histogram
SearchLatency prometheus.Histogram
// Cache metrics
CacheHits prometheus.Counter
CacheMisses prometheus.Counter
CacheEvictions prometheus.Counter
CacheSize prometheus.Gauge
CacheHits prometheus.Counter
CacheMisses prometheus.Counter
CacheEvictions prometheus.Counter
CacheSize prometheus.Gauge
// Storage size metrics
LocalStorageSize prometheus.Gauge
LocalStorageSize prometheus.Gauge
DistributedStorageSize prometheus.Gauge
CompressedStorageSize prometheus.Gauge
IndexStorageSize prometheus.Gauge
// Replication metrics
ReplicationFactor prometheus.Gauge
HealthyReplicas prometheus.Gauge
UnderReplicated prometheus.Gauge
ReplicationLag prometheus.Histogram
ReplicationFactor prometheus.Gauge
HealthyReplicas prometheus.Gauge
UnderReplicated prometheus.Gauge
ReplicationLag prometheus.Histogram
// Encryption metrics
EncryptedContexts prometheus.Gauge
KeyRotations prometheus.Counter
AccessDenials prometheus.Counter
ActiveKeys prometheus.Gauge
EncryptedContexts prometheus.Gauge
KeyRotations prometheus.Counter
AccessDenials prometheus.Counter
ActiveKeys prometheus.Gauge
// Performance metrics
Throughput prometheus.Gauge
Throughput prometheus.Gauge
ConcurrentOperations prometheus.Gauge
QueueDepth prometheus.Gauge
QueueDepth prometheus.Gauge
// Health metrics
StorageHealth prometheus.Gauge
NodeConnectivity prometheus.Gauge
SyncLatency prometheus.Histogram
StorageHealth prometheus.Gauge
NodeConnectivity prometheus.Gauge
SyncLatency prometheus.Histogram
}
// AlertManager handles storage-related alerts and notifications
@@ -97,18 +97,96 @@ type AlertManager struct {
maxHistory int
}
func (am *AlertManager) severityRank(severity AlertSeverity) int {
switch severity {
case SeverityCritical:
return 4
case SeverityError:
return 3
case SeverityWarning:
return 2
case SeverityInfo:
return 1
default:
return 0
}
}
// GetActiveAlerts returns sorted active alerts (SEC-SLURP-1.1 monitoring path)
func (am *AlertManager) GetActiveAlerts() []*Alert {
am.mu.RLock()
defer am.mu.RUnlock()
if len(am.activealerts) == 0 {
return nil
}
alerts := make([]*Alert, 0, len(am.activealerts))
for _, alert := range am.activealerts {
alerts = append(alerts, alert)
}
sort.Slice(alerts, func(i, j int) bool {
iRank := am.severityRank(alerts[i].Severity)
jRank := am.severityRank(alerts[j].Severity)
if iRank == jRank {
return alerts[i].StartTime.After(alerts[j].StartTime)
}
return iRank > jRank
})
return alerts
}
// Snapshot marshals monitoring state for UCXL persistence (SEC-SLURP-1.1a telemetry)
func (ms *MonitoringSystem) Snapshot(ctx context.Context) (string, error) {
ms.mu.RLock()
defer ms.mu.RUnlock()
if ms.alerts == nil {
return "", fmt.Errorf("alert manager not initialised")
}
active := ms.alerts.GetActiveAlerts()
alertPayload := make([]map[string]interface{}, 0, len(active))
for _, alert := range active {
alertPayload = append(alertPayload, map[string]interface{}{
"id": alert.ID,
"name": alert.Name,
"severity": alert.Severity,
"message": fmt.Sprintf("%s (threshold %.2f)", alert.Description, alert.Threshold),
"labels": alert.Labels,
"started_at": alert.StartTime,
})
}
snapshot := map[string]interface{}{
"node_id": ms.nodeID,
"generated_at": time.Now().UTC(),
"alert_count": len(active),
"alerts": alertPayload,
}
encoded, err := json.MarshalIndent(snapshot, "", " ")
if err != nil {
return "", fmt.Errorf("failed to marshal monitoring snapshot: %w", err)
}
return string(encoded), nil
}
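
Because the snapshot is plain JSON, persisting it through `LocalStorage` is a one-liner once a key scheme is chosen. A sketch, with the key prefix and same-package wiring as placeholder assumptions:

```go
// persistMonitoringSnapshot is an illustrative sketch: capture the current
// alert state and write it through LocalStorage. The key prefix is a
// placeholder, not an agreed convention.
func persistMonitoringSnapshot(ctx context.Context, ms *MonitoringSystem, store LocalStorage) error {
	snapshot, err := ms.Snapshot(ctx)
	if err != nil {
		return err
	}
	key := fmt.Sprintf("monitoring::%s::%d", ms.nodeID, time.Now().Unix())
	return store.Store(ctx, key, snapshot, &StoreOptions{Compress: true})
}
```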
// AlertRule defines conditions for triggering alerts
type AlertRule struct {
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
Metric string `json:"metric"`
Condition string `json:"condition"` // >, <, ==, !=, etc.
Threshold float64 `json:"threshold"`
Duration time.Duration `json:"duration"`
Severity AlertSeverity `json:"severity"`
Labels map[string]string `json:"labels"`
Enabled bool `json:"enabled"`
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
Metric string `json:"metric"`
Condition string `json:"condition"` // >, <, ==, !=, etc.
Threshold float64 `json:"threshold"`
Duration time.Duration `json:"duration"`
Severity AlertSeverity `json:"severity"`
Labels map[string]string `json:"labels"`
Enabled bool `json:"enabled"`
}
// Alert represents an active or resolved alert
@@ -163,30 +241,30 @@ type HealthChecker struct {
// HealthCheck defines a single health check
type HealthCheck struct {
Name string `json:"name"`
Description string `json:"description"`
Name string `json:"name"`
Description string `json:"description"`
Checker func(ctx context.Context) HealthResult `json:"-"`
Interval time.Duration `json:"interval"`
Timeout time.Duration `json:"timeout"`
Enabled bool `json:"enabled"`
Interval time.Duration `json:"interval"`
Timeout time.Duration `json:"timeout"`
Enabled bool `json:"enabled"`
}
// HealthResult represents the result of a health check
type HealthResult struct {
Healthy bool `json:"healthy"`
Message string `json:"message"`
Latency time.Duration `json:"latency"`
Healthy bool `json:"healthy"`
Message string `json:"message"`
Latency time.Duration `json:"latency"`
Metadata map[string]interface{} `json:"metadata"`
Timestamp time.Time `json:"timestamp"`
Timestamp time.Time `json:"timestamp"`
}
// SystemHealth represents the overall health of the storage system
type SystemHealth struct {
OverallStatus HealthStatus `json:"overall_status"`
Components map[string]HealthResult `json:"components"`
LastUpdate time.Time `json:"last_update"`
Uptime time.Duration `json:"uptime"`
StartTime time.Time `json:"start_time"`
OverallStatus HealthStatus `json:"overall_status"`
Components map[string]HealthResult `json:"components"`
LastUpdate time.Time `json:"last_update"`
Uptime time.Duration `json:"uptime"`
StartTime time.Time `json:"start_time"`
}
// HealthStatus represents system health status
@@ -200,82 +278,82 @@ const (
// PerformanceProfiler analyzes storage performance patterns
type PerformanceProfiler struct {
mu sync.RWMutex
mu sync.RWMutex
operationProfiles map[string]*OperationProfile
resourceUsage *ResourceUsage
bottlenecks []*Bottleneck
recommendations []*PerformanceRecommendation
resourceUsage *ResourceUsage
bottlenecks []*Bottleneck
recommendations []*PerformanceRecommendation
}
// OperationProfile contains performance analysis for a specific operation type
type OperationProfile struct {
Operation string `json:"operation"`
TotalOperations int64 `json:"total_operations"`
AverageLatency time.Duration `json:"average_latency"`
P50Latency time.Duration `json:"p50_latency"`
P95Latency time.Duration `json:"p95_latency"`
P99Latency time.Duration `json:"p99_latency"`
Throughput float64 `json:"throughput"`
ErrorRate float64 `json:"error_rate"`
LatencyHistory []time.Duration `json:"-"`
LastUpdated time.Time `json:"last_updated"`
Operation string `json:"operation"`
TotalOperations int64 `json:"total_operations"`
AverageLatency time.Duration `json:"average_latency"`
P50Latency time.Duration `json:"p50_latency"`
P95Latency time.Duration `json:"p95_latency"`
P99Latency time.Duration `json:"p99_latency"`
Throughput float64 `json:"throughput"`
ErrorRate float64 `json:"error_rate"`
LatencyHistory []time.Duration `json:"-"`
LastUpdated time.Time `json:"last_updated"`
}
// ResourceUsage tracks resource consumption
type ResourceUsage struct {
CPUUsage float64 `json:"cpu_usage"`
MemoryUsage int64 `json:"memory_usage"`
DiskUsage int64 `json:"disk_usage"`
NetworkIn int64 `json:"network_in"`
NetworkOut int64 `json:"network_out"`
OpenFiles int `json:"open_files"`
Goroutines int `json:"goroutines"`
LastUpdated time.Time `json:"last_updated"`
CPUUsage float64 `json:"cpu_usage"`
MemoryUsage int64 `json:"memory_usage"`
DiskUsage int64 `json:"disk_usage"`
NetworkIn int64 `json:"network_in"`
NetworkOut int64 `json:"network_out"`
OpenFiles int `json:"open_files"`
Goroutines int `json:"goroutines"`
LastUpdated time.Time `json:"last_updated"`
}
// Bottleneck represents a performance bottleneck
type Bottleneck struct {
ID string `json:"id"`
Type string `json:"type"` // cpu, memory, disk, network, etc.
Component string `json:"component"`
Description string `json:"description"`
Severity AlertSeverity `json:"severity"`
Impact float64 `json:"impact"`
DetectedAt time.Time `json:"detected_at"`
ID string `json:"id"`
Type string `json:"type"` // cpu, memory, disk, network, etc.
Component string `json:"component"`
Description string `json:"description"`
Severity AlertSeverity `json:"severity"`
Impact float64 `json:"impact"`
DetectedAt time.Time `json:"detected_at"`
Metadata map[string]interface{} `json:"metadata"`
}
// PerformanceRecommendation suggests optimizations
type PerformanceRecommendation struct {
ID string `json:"id"`
Type string `json:"type"`
Title string `json:"title"`
Description string `json:"description"`
Priority int `json:"priority"`
Impact string `json:"impact"`
Effort string `json:"effort"`
GeneratedAt time.Time `json:"generated_at"`
ID string `json:"id"`
Type string `json:"type"`
Title string `json:"title"`
Description string `json:"description"`
Priority int `json:"priority"`
Impact string `json:"impact"`
Effort string `json:"effort"`
GeneratedAt time.Time `json:"generated_at"`
Metadata map[string]interface{} `json:"metadata"`
}
// MonitoringEvent represents a monitoring system event
type MonitoringEvent struct {
Type string `json:"type"`
Level string `json:"level"`
Message string `json:"message"`
Component string `json:"component"`
NodeID string `json:"node_id"`
Timestamp time.Time `json:"timestamp"`
Metadata map[string]interface{} `json:"metadata"`
Type string `json:"type"`
Level string `json:"level"`
Message string `json:"message"`
Component string `json:"component"`
NodeID string `json:"node_id"`
Timestamp time.Time `json:"timestamp"`
Metadata map[string]interface{} `json:"metadata"`
}
// StructuredLogger provides structured logging for storage operations
type StructuredLogger struct {
mu sync.RWMutex
level LogLevel
output LogOutput
mu sync.RWMutex
level LogLevel
output LogOutput
formatter LogFormatter
buffer []*LogEntry
buffer []*LogEntry
maxBuffer int
}
@@ -303,27 +381,27 @@ type LogFormatter interface {
// LogEntry represents a single log entry
type LogEntry struct {
    Level     LogLevel               `json:"level"`
    Message   string                 `json:"message"`
    Component string                 `json:"component"`
    Operation string                 `json:"operation"`
    NodeID    string                 `json:"node_id"`
    Timestamp time.Time              `json:"timestamp"`
    Fields    map[string]interface{} `json:"fields"`
    Error     error                  `json:"error,omitempty"`
}
// NewMonitoringSystem creates a new monitoring system
func NewMonitoringSystem(nodeID string) *MonitoringSystem {
ms := &MonitoringSystem{
        nodeID:              nodeID,
        metrics:             initializeMetrics(nodeID),
        alerts:              newAlertManager(),
        healthChecker:       newHealthChecker(),
        performanceProfiler: newPerformanceProfiler(),
        logger:              newStructuredLogger(),
        notifications:       make(chan *MonitoringEvent, 1000),
        stopCh:              make(chan struct{}),
}
// Start monitoring goroutines
@@ -571,7 +649,7 @@ func (ms *MonitoringSystem) executeHealthCheck(check HealthCheck) {
defer cancel()
result := check.Checker(ctx)
ms.healthChecker.mu.Lock()
ms.healthChecker.status.Components[check.Name] = result
ms.healthChecker.mu.Unlock()
@@ -592,21 +670,21 @@ func (ms *MonitoringSystem) analyzePerformance() {
func newAlertManager() *AlertManager {
return &AlertManager{
        rules:        make([]*AlertRule, 0),
        activealerts: make(map[string]*Alert),
        notifiers:    make([]AlertNotifier, 0),
        history:      make([]*Alert, 0),
        maxHistory:   1000,
}
}
func newHealthChecker() *HealthChecker {
return &HealthChecker{
        checks: make(map[string]HealthCheck),
        status: &SystemHealth{
            OverallStatus: HealthHealthy,
            Components:    make(map[string]HealthResult),
            StartTime:     time.Now(),
},
checkInterval: 1 * time.Minute,
timeout: 30 * time.Second,
@@ -664,8 +742,8 @@ func (ms *MonitoringSystem) GetMonitoringStats() (*MonitoringStats, error) {
defer ms.mu.RUnlock()
stats := &MonitoringStats{
        NodeID:       ms.nodeID,
        Timestamp:    time.Now(),
HealthStatus: ms.healthChecker.status.OverallStatus,
ActiveAlerts: len(ms.alerts.activealerts),
Bottlenecks: len(ms.performanceProfiler.bottlenecks),

View File

@@ -3,9 +3,8 @@ package storage
import (
"time"
"chorus/pkg/ucxl"
"chorus/pkg/crypto"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/ucxl"
)
// DatabaseSchema defines the complete schema for encrypted context storage
@@ -14,325 +13,325 @@ import (
// ContextRecord represents the main context storage record
type ContextRecord struct {
    // Primary identification
    ID          string       `json:"id" db:"id"`                     // Unique record ID
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
    Path        string       `json:"path" db:"path"`                 // File system path
    PathHash    string       `json:"path_hash" db:"path_hash"`       // Hash of path for indexing

    // Core context data
    Summary      string `json:"summary" db:"summary"`
    Purpose      string `json:"purpose" db:"purpose"`
    Technologies []byte `json:"technologies" db:"technologies"` // JSON array
    Tags         []byte `json:"tags" db:"tags"`                 // JSON array
    Insights     []byte `json:"insights" db:"insights"`         // JSON array

    // Hierarchy control
    OverridesParent    bool `json:"overrides_parent" db:"overrides_parent"`
    ContextSpecificity int  `json:"context_specificity" db:"context_specificity"`
    AppliesToChildren  bool `json:"applies_to_children" db:"applies_to_children"`

    // Quality metrics
    RAGConfidence   float64 `json:"rag_confidence" db:"rag_confidence"`
    StalenessScore  float64 `json:"staleness_score" db:"staleness_score"`
    ValidationScore float64 `json:"validation_score" db:"validation_score"`

    // Versioning
    Version       int64  `json:"version" db:"version"`
    ParentVersion *int64 `json:"parent_version" db:"parent_version"`
    ContextHash   string `json:"context_hash" db:"context_hash"`

    // Temporal metadata
    CreatedAt      time.Time  `json:"created_at" db:"created_at"`
    UpdatedAt      time.Time  `json:"updated_at" db:"updated_at"`
    GeneratedAt    time.Time  `json:"generated_at" db:"generated_at"`
    LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
    ExpiresAt      *time.Time `json:"expires_at" db:"expires_at"`

    // Storage metadata
    StorageType       string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
    CompressionType   string `json:"compression_type" db:"compression_type"`
    EncryptionLevel   int    `json:"encryption_level" db:"encryption_level"`
    ReplicationFactor int    `json:"replication_factor" db:"replication_factor"`
    Checksum          string `json:"checksum" db:"checksum"`
    DataSize          int64  `json:"data_size" db:"data_size"`
    CompressedSize    int64  `json:"compressed_size" db:"compressed_size"`
}
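A minimal sketch of populating the JSON-encoded columns above, assuming only encoding/json from the standard library; the buildContextRecord helper and its arguments are illustrative.
// buildContextRecord shows one plausible way to fill the []byte JSON columns
// of ContextRecord; the helper itself is hypothetical.
func buildContextRecord(id string, addr ucxl.Address, techs, tags []string) (*ContextRecord, error) {
    techJSON, err := json.Marshal(techs) // stored in the technologies column as a JSON array
    if err != nil {
        return nil, err
    }
    tagJSON, err := json.Marshal(tags)
    if err != nil {
        return nil, err
    }
    now := time.Now()
    return &ContextRecord{
        ID:           id,
        UCXLAddress:  addr,
        Technologies: techJSON,
        Tags:         tagJSON,
        CreatedAt:    now,
        UpdatedAt:    now,
    }, nil
}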
// EncryptedContextRecord represents role-based encrypted context storage
type EncryptedContextRecord struct {
    // Primary keys
    ID          string       `json:"id" db:"id"`
    ContextID   string       `json:"context_id" db:"context_id"` // FK to ContextRecord
    Role        string       `json:"role" db:"role"`
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`

    // Encryption details
    AccessLevel    slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
    EncryptedData  []byte                       `json:"encrypted_data" db:"encrypted_data"`
    KeyFingerprint string                       `json:"key_fingerprint" db:"key_fingerprint"`
    EncryptionAlgo string                       `json:"encryption_algo" db:"encryption_algo"`
    KeyVersion     int                          `json:"key_version" db:"key_version"`

    // Data integrity
    DataChecksum   string `json:"data_checksum" db:"data_checksum"`
    EncryptionHash string `json:"encryption_hash" db:"encryption_hash"`

    // Temporal data
    CreatedAt       time.Time  `json:"created_at" db:"created_at"`
    UpdatedAt       time.Time  `json:"updated_at" db:"updated_at"`
    LastDecryptedAt *time.Time `json:"last_decrypted_at" db:"last_decrypted_at"`
    ExpiresAt       *time.Time `json:"expires_at" db:"expires_at"`

    // Access tracking
    AccessCount    int64  `json:"access_count" db:"access_count"`
    LastAccessedBy string `json:"last_accessed_by" db:"last_accessed_by"`
    AccessHistory  []byte `json:"access_history" db:"access_history"` // JSON access log
}
// ContextHierarchyRecord represents hierarchical relationships between contexts
type ContextHierarchyRecord struct {
    ID            string       `json:"id" db:"id"`
    ParentAddress ucxl.Address `json:"parent_address" db:"parent_address"`
    ChildAddress  ucxl.Address `json:"child_address" db:"child_address"`
    ParentPath    string       `json:"parent_path" db:"parent_path"`
    ChildPath     string       `json:"child_path" db:"child_path"`

    // Relationship metadata
    RelationshipType  string  `json:"relationship_type" db:"relationship_type"` // parent, sibling, dependency
    InheritanceWeight float64 `json:"inheritance_weight" db:"inheritance_weight"`
    OverrideStrength  int     `json:"override_strength" db:"override_strength"`
    Distance          int     `json:"distance" db:"distance"` // Hierarchy depth distance

    // Temporal tracking
    CreatedAt      time.Time  `json:"created_at" db:"created_at"`
    ValidatedAt    time.Time  `json:"validated_at" db:"validated_at"`
    LastResolvedAt *time.Time `json:"last_resolved_at" db:"last_resolved_at"`

    // Resolution statistics
    ResolutionCount int64   `json:"resolution_count" db:"resolution_count"`
    ResolutionTime  float64 `json:"resolution_time" db:"resolution_time"` // Average ms
}
// DecisionHopRecord represents temporal decision analysis storage
type DecisionHopRecord struct {
    // Primary identification
    ID             string       `json:"id" db:"id"`
    DecisionID     string       `json:"decision_id" db:"decision_id"`
    UCXLAddress    ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
    ContextVersion int64        `json:"context_version" db:"context_version"`

    // Decision metadata
    ChangeReason      string  `json:"change_reason" db:"change_reason"`
    DecisionMaker     string  `json:"decision_maker" db:"decision_maker"`
    DecisionRationale string  `json:"decision_rationale" db:"decision_rationale"`
    ImpactScope       string  `json:"impact_scope" db:"impact_scope"`
    ConfidenceLevel   float64 `json:"confidence_level" db:"confidence_level"`

    // Context evolution
    PreviousHash   string  `json:"previous_hash" db:"previous_hash"`
    CurrentHash    string  `json:"current_hash" db:"current_hash"`
    ContextDelta   []byte  `json:"context_delta" db:"context_delta"` // JSON diff
    StalenessScore float64 `json:"staleness_score" db:"staleness_score"`

    // Temporal data
    Timestamp            time.Time  `json:"timestamp" db:"timestamp"`
    PreviousDecisionTime *time.Time `json:"previous_decision_time" db:"previous_decision_time"`
    ProcessingTime       float64    `json:"processing_time" db:"processing_time"` // ms

    // External references
    ExternalRefs []byte `json:"external_refs" db:"external_refs"` // JSON array
    CommitHash   string `json:"commit_hash" db:"commit_hash"`
    TicketID     string `json:"ticket_id" db:"ticket_id"`
}
// DecisionInfluenceRecord represents decision influence relationships
type DecisionInfluenceRecord struct {
    ID               string       `json:"id" db:"id"`
    SourceDecisionID string       `json:"source_decision_id" db:"source_decision_id"`
    TargetDecisionID string       `json:"target_decision_id" db:"target_decision_id"`
    SourceAddress    ucxl.Address `json:"source_address" db:"source_address"`
    TargetAddress    ucxl.Address `json:"target_address" db:"target_address"`

    // Influence metrics
    InfluenceStrength float64 `json:"influence_strength" db:"influence_strength"`
    InfluenceType     string  `json:"influence_type" db:"influence_type"` // direct, indirect, cascading
    PropagationDelay  float64 `json:"propagation_delay" db:"propagation_delay"` // hours
    HopDistance       int     `json:"hop_distance" db:"hop_distance"`

    // Path analysis
    ShortestPath   []byte  `json:"shortest_path" db:"shortest_path"`   // JSON path array
    AlternatePaths []byte  `json:"alternate_paths" db:"alternate_paths"` // JSON paths
    PathConfidence float64 `json:"path_confidence" db:"path_confidence"`

    // Temporal tracking
    CreatedAt      time.Time  `json:"created_at" db:"created_at"`
    LastAnalyzedAt time.Time  `json:"last_analyzed_at" db:"last_analyzed_at"`
    ValidatedAt    *time.Time `json:"validated_at" db:"validated_at"`
}
// AccessControlRecord represents role-based access control metadata
type AccessControlRecord struct {
    ID          string       `json:"id" db:"id"`
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
    Role        string       `json:"role" db:"role"`
    Permissions []byte       `json:"permissions" db:"permissions"` // JSON permissions array

    // Access levels
    ReadAccess   bool                         `json:"read_access" db:"read_access"`
    WriteAccess  bool                         `json:"write_access" db:"write_access"`
    DeleteAccess bool                         `json:"delete_access" db:"delete_access"`
    AdminAccess  bool                         `json:"admin_access" db:"admin_access"`
    AccessLevel  slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`

    // Constraints
    TimeConstraints []byte `json:"time_constraints" db:"time_constraints"` // JSON time rules
    IPConstraints   []byte `json:"ip_constraints" db:"ip_constraints"`     // JSON IP rules
    ContextFilters  []byte `json:"context_filters" db:"context_filters"`   // JSON filter rules

    // Audit trail
    CreatedAt time.Time  `json:"created_at" db:"created_at"`
    CreatedBy string     `json:"created_by" db:"created_by"`
    UpdatedAt time.Time  `json:"updated_at" db:"updated_at"`
    UpdatedBy string     `json:"updated_by" db:"updated_by"`
    ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
}
// ContextIndexRecord represents search index entries for contexts
type ContextIndexRecord struct {
    ID          string       `json:"id" db:"id"`
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
    IndexName   string       `json:"index_name" db:"index_name"`

    // Indexed content
    Tokens         []byte `json:"tokens" db:"tokens"`                 // JSON token array
    NGrams         []byte `json:"ngrams" db:"ngrams"`                 // JSON n-gram array
    SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector

    // Search metadata
    IndexWeight float64 `json:"index_weight" db:"index_weight"`
    BoostFactor float64 `json:"boost_factor" db:"boost_factor"`
    Language    string  `json:"language" db:"language"`
    ContentType string  `json:"content_type" db:"content_type"`

    // Quality metrics
    RelevanceScore  float64 `json:"relevance_score" db:"relevance_score"`
    FreshnessScore  float64 `json:"freshness_score" db:"freshness_score"`
    PopularityScore float64 `json:"popularity_score" db:"popularity_score"`

    // Temporal tracking
    CreatedAt     time.Time `json:"created_at" db:"created_at"`
    UpdatedAt     time.Time `json:"updated_at" db:"updated_at"`
    LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"`
}
// CacheEntryRecord represents cached context data
type CacheEntryRecord struct {
    ID          string       `json:"id" db:"id"`
    CacheKey    string       `json:"cache_key" db:"cache_key"`
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
    Role        string       `json:"role" db:"role"`

    // Cached data
    CachedData     []byte `json:"cached_data" db:"cached_data"`
    DataHash       string `json:"data_hash" db:"data_hash"`
    Compressed     bool   `json:"compressed" db:"compressed"`
    OriginalSize   int64  `json:"original_size" db:"original_size"`
    CompressedSize int64  `json:"compressed_size" db:"compressed_size"`

    // Cache metadata
    TTL         int64 `json:"ttl" db:"ttl"` // seconds
    Priority    int   `json:"priority" db:"priority"`
    AccessCount int64 `json:"access_count" db:"access_count"`
    HitCount    int64 `json:"hit_count" db:"hit_count"`

    // Temporal data
    CreatedAt      time.Time  `json:"created_at" db:"created_at"`
    LastAccessedAt time.Time  `json:"last_accessed_at" db:"last_accessed_at"`
    LastHitAt      *time.Time `json:"last_hit_at" db:"last_hit_at"`
    ExpiresAt      time.Time  `json:"expires_at" db:"expires_at"`
}
// BackupRecord represents backup metadata
type BackupRecord struct {
    ID          string `json:"id" db:"id"`
    BackupID    string `json:"backup_id" db:"backup_id"`
    Name        string `json:"name" db:"name"`
    Destination string `json:"destination" db:"destination"`

    // Backup content
    ContextCount   int64  `json:"context_count" db:"context_count"`
    DataSize       int64  `json:"data_size" db:"data_size"`
    CompressedSize int64  `json:"compressed_size" db:"compressed_size"`
    Checksum       string `json:"checksum" db:"checksum"`

    // Backup metadata
    IncludesIndexes bool   `json:"includes_indexes" db:"includes_indexes"`
    IncludesCache   bool   `json:"includes_cache" db:"includes_cache"`
    Encrypted       bool   `json:"encrypted" db:"encrypted"`
    Incremental     bool   `json:"incremental" db:"incremental"`
    ParentBackupID  string `json:"parent_backup_id" db:"parent_backup_id"`

    // Status tracking
    Status       BackupStatus `json:"status" db:"status"`
    Progress     float64      `json:"progress" db:"progress"`
    ErrorMessage string       `json:"error_message" db:"error_message"`

    // Temporal data
    CreatedAt      time.Time  `json:"created_at" db:"created_at"`
    StartedAt      *time.Time `json:"started_at" db:"started_at"`
    CompletedAt    *time.Time `json:"completed_at" db:"completed_at"`
    RetentionUntil time.Time  `json:"retention_until" db:"retention_until"`
}
// MetricsRecord represents storage performance metrics
type MetricsRecord struct {
    ID         string `json:"id" db:"id"`
    MetricType string `json:"metric_type" db:"metric_type"` // storage, encryption, cache, etc.
    NodeID     string `json:"node_id" db:"node_id"`

    // Metric data
    MetricName  string  `json:"metric_name" db:"metric_name"`
    MetricValue float64 `json:"metric_value" db:"metric_value"`
    MetricUnit  string  `json:"metric_unit" db:"metric_unit"`
    Tags        []byte  `json:"tags" db:"tags"` // JSON tag object

    // Aggregation data
    AggregationType string `json:"aggregation_type" db:"aggregation_type"` // avg, sum, count, etc.
    TimeWindow      int64  `json:"time_window" db:"time_window"`           // seconds
    SampleCount     int64  `json:"sample_count" db:"sample_count"`

    // Temporal tracking
    Timestamp time.Time `json:"timestamp" db:"timestamp"`
    CreatedAt time.Time `json:"created_at" db:"created_at"`
}
// ContextEvolutionRecord tracks how contexts evolve over time
type ContextEvolutionRecord struct {
    ID          string       `json:"id" db:"id"`
    UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
    FromVersion int64        `json:"from_version" db:"from_version"`
    ToVersion   int64        `json:"to_version" db:"to_version"`

    // Evolution analysis
    EvolutionType    string  `json:"evolution_type" db:"evolution_type"` // enhancement, refactor, fix, etc.
    SimilarityScore  float64 `json:"similarity_score" db:"similarity_score"`
    ChangesMagnitude float64 `json:"changes_magnitude" db:"changes_magnitude"`
    SemanticDrift    float64 `json:"semantic_drift" db:"semantic_drift"`

    // Change details
    ChangedFields  []byte `json:"changed_fields" db:"changed_fields"`   // JSON array
    FieldDeltas    []byte `json:"field_deltas" db:"field_deltas"`       // JSON delta object
    ImpactAnalysis []byte `json:"impact_analysis" db:"impact_analysis"` // JSON analysis

    // Quality assessment
    QualityImprovement float64 `json:"quality_improvement" db:"quality_improvement"`
    ConfidenceChange   float64 `json:"confidence_change" db:"confidence_change"`
    ValidationPassed   bool    `json:"validation_passed" db:"validation_passed"`

    // Temporal tracking
    EvolutionTime  time.Time `json:"evolution_time" db:"evolution_time"`
    AnalyzedAt     time.Time `json:"analyzed_at" db:"analyzed_at"`
    ProcessingTime float64   `json:"processing_time" db:"processing_time"` // ms
}
// Schema validation and creation functions
@@ -365,44 +364,44 @@ func CreateIndexStatements() []string {
"CREATE INDEX IF NOT EXISTS idx_context_version ON contexts(version)",
"CREATE INDEX IF NOT EXISTS idx_context_staleness ON contexts(staleness_score)",
"CREATE INDEX IF NOT EXISTS idx_context_confidence ON contexts(rag_confidence)",
// Encrypted context indexes
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_role ON encrypted_contexts(role)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_ucxl ON encrypted_contexts(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_access_level ON encrypted_contexts(access_level)",
"CREATE INDEX IF NOT EXISTS idx_encrypted_context_key_fp ON encrypted_contexts(key_fingerprint)",
// Hierarchy indexes
"CREATE INDEX IF NOT EXISTS idx_hierarchy_parent ON context_hierarchy(parent_address)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_child ON context_hierarchy(child_address)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_distance ON context_hierarchy(distance)",
"CREATE INDEX IF NOT EXISTS idx_hierarchy_weight ON context_hierarchy(inheritance_weight)",
// Decision hop indexes
"CREATE INDEX IF NOT EXISTS idx_decision_ucxl ON decision_hops(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_decision_timestamp ON decision_hops(timestamp)",
"CREATE INDEX IF NOT EXISTS idx_decision_reason ON decision_hops(change_reason)",
"CREATE INDEX IF NOT EXISTS idx_decision_maker ON decision_hops(decision_maker)",
"CREATE INDEX IF NOT EXISTS idx_decision_version ON decision_hops(context_version)",
// Decision influence indexes
"CREATE INDEX IF NOT EXISTS idx_influence_source ON decision_influence(source_decision_id)",
"CREATE INDEX IF NOT EXISTS idx_influence_target ON decision_influence(target_decision_id)",
"CREATE INDEX IF NOT EXISTS idx_influence_strength ON decision_influence(influence_strength)",
"CREATE INDEX IF NOT EXISTS idx_influence_hop_distance ON decision_influence(hop_distance)",
// Access control indexes
"CREATE INDEX IF NOT EXISTS idx_access_role ON access_control(role)",
"CREATE INDEX IF NOT EXISTS idx_access_ucxl ON access_control(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_access_level ON access_control(access_level)",
"CREATE INDEX IF NOT EXISTS idx_access_expires ON access_control(expires_at)",
// Search index indexes
"CREATE INDEX IF NOT EXISTS idx_context_index_name ON context_indexes(index_name)",
"CREATE INDEX IF NOT EXISTS idx_context_index_ucxl ON context_indexes(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_context_index_relevance ON context_indexes(relevance_score)",
"CREATE INDEX IF NOT EXISTS idx_context_index_freshness ON context_indexes(freshness_score)",
// Cache indexes
"CREATE INDEX IF NOT EXISTS idx_cache_key ON cache_entries(cache_key)",
"CREATE INDEX IF NOT EXISTS idx_cache_ucxl ON cache_entries(ucxl_address)",
@@ -410,13 +409,13 @@ func CreateIndexStatements() []string {
"CREATE INDEX IF NOT EXISTS idx_cache_expires ON cache_entries(expires_at)",
"CREATE INDEX IF NOT EXISTS idx_cache_priority ON cache_entries(priority)",
"CREATE INDEX IF NOT EXISTS idx_cache_access_count ON cache_entries(access_count)",
// Metrics indexes
"CREATE INDEX IF NOT EXISTS idx_metrics_type ON metrics(metric_type)",
"CREATE INDEX IF NOT EXISTS idx_metrics_name ON metrics(metric_name)",
"CREATE INDEX IF NOT EXISTS idx_metrics_node ON metrics(node_id)",
"CREATE INDEX IF NOT EXISTS idx_metrics_timestamp ON metrics(timestamp)",
// Evolution indexes
"CREATE INDEX IF NOT EXISTS idx_evolution_ucxl ON context_evolution(ucxl_address)",
"CREATE INDEX IF NOT EXISTS idx_evolution_from_version ON context_evolution(from_version)",

View File

@@ -283,32 +283,42 @@ type IndexStatistics struct {
// BackupConfig represents backup configuration
type BackupConfig struct {
    Name           string                 `json:"name"`             // Backup name
    Destination    string                 `json:"destination"`      // Backup destination
    IncludeIndexes bool                   `json:"include_indexes"`  // Include search indexes
    IncludeCache   bool                   `json:"include_cache"`    // Include cache data
    Compression    bool                   `json:"compression"`      // Enable compression
    Encryption     bool                   `json:"encryption"`       // Enable encryption
    EncryptionKey  string                 `json:"encryption_key"`   // Encryption key
    Incremental    bool                   `json:"incremental"`      // Incremental backup
    ParentBackupID string                 `json:"parent_backup_id"` // Parent backup reference
    Retention      time.Duration          `json:"retention"`        // Backup retention period
    Metadata       map[string]interface{} `json:"metadata"`         // Additional metadata
}
// BackupInfo represents information about a backup
type BackupInfo struct {
    ID              string                 `json:"id"`               // Backup ID
    BackupID        string                 `json:"backup_id"`        // Legacy identifier
    Name            string                 `json:"name"`             // Backup name
    Destination     string                 `json:"destination"`      // Destination path
    CreatedAt       time.Time              `json:"created_at"`       // Creation time
    Size            int64                  `json:"size"`             // Backup size
    CompressedSize  int64                  `json:"compressed_size"`  // Compressed size
    DataSize        int64                  `json:"data_size"`        // Total data size
    ContextCount    int64                  `json:"context_count"`    // Number of contexts
    Encrypted       bool                   `json:"encrypted"`        // Whether encrypted
    Incremental     bool                   `json:"incremental"`      // Whether incremental
    ParentBackupID  string                 `json:"parent_backup_id"` // Parent backup for incremental
    IncludesIndexes bool                   `json:"includes_indexes"` // Include indexes
    IncludesCache   bool                   `json:"includes_cache"`   // Include cache data
    Checksum        string                 `json:"checksum"`         // Backup checksum
    Status          BackupStatus           `json:"status"`           // Backup status
    Progress        float64                `json:"progress"`         // Completion progress 0-1
    ErrorMessage    string                 `json:"error_message"`    // Last error message
    RetentionUntil  time.Time              `json:"retention_until"`  // Retention deadline
    CompletedAt     *time.Time             `json:"completed_at"`     // Completion time
    Metadata        map[string]interface{} `json:"metadata"`         // Additional metadata
}
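An illustrative BackupConfig for a nightly incremental backup chained to a prior full backup via the new ParentBackupID field; all values are placeholders.
cfg := &BackupConfig{
    Name:           "contexts-nightly",
    Destination:    "/var/backups/slurp",
    IncludeIndexes: true,
    Compression:    true,
    Encryption:     true,
    Incremental:    true,
    ParentBackupID: "backup-full-previous", // links this increment to its base backup
    Retention:      30 * 24 * time.Hour,
}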
// BackupStatus represents backup status

View File

@@ -5,7 +5,9 @@ import (
"fmt"
"time"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/slurp/storage"
"chorus/pkg/ucxl"
)
// TemporalGraphFactory creates and configures temporal graph components
@@ -17,44 +19,44 @@ type TemporalGraphFactory struct {
// TemporalConfig represents configuration for the temporal graph system
type TemporalConfig struct {
    // Core graph settings
    MaxDepth         int               `json:"max_depth"`
    StalenessWeights *StalenessWeights `json:"staleness_weights"`
    CacheTimeout     time.Duration     `json:"cache_timeout"`

    // Analysis settings
    InfluenceAnalysisConfig *InfluenceAnalysisConfig `json:"influence_analysis_config"`
    NavigationConfig        *NavigationConfig        `json:"navigation_config"`
    QueryConfig             *QueryConfig             `json:"query_config"`

    // Persistence settings
    PersistenceConfig *PersistenceConfig `json:"persistence_config"`

    // Performance settings
    EnableCaching     bool `json:"enable_caching"`
    EnableCompression bool `json:"enable_compression"`
    EnableMetrics     bool `json:"enable_metrics"`

    // Debug settings
    EnableDebugLogging bool `json:"enable_debug_logging"`
    EnableValidation   bool `json:"enable_validation"`
}
// InfluenceAnalysisConfig represents configuration for influence analysis
type InfluenceAnalysisConfig struct {
    DampingFactor            float64       `json:"damping_factor"`
    MaxIterations            int           `json:"max_iterations"`
    ConvergenceThreshold     float64       `json:"convergence_threshold"`
    CacheValidDuration       time.Duration `json:"cache_valid_duration"`
    EnableCentralityMetrics  bool          `json:"enable_centrality_metrics"`
    EnableCommunityDetection bool          `json:"enable_community_detection"`
}
// NavigationConfig represents configuration for decision navigation
type NavigationConfig struct {
    MaxNavigationHistory int           `json:"max_navigation_history"`
    BookmarkRetention    time.Duration `json:"bookmark_retention"`
    SessionTimeout       time.Duration `json:"session_timeout"`
    EnablePathCaching    bool          `json:"enable_path_caching"`
}
// QueryConfig represents configuration for decision-hop queries
@@ -68,17 +70,17 @@ type QueryConfig struct {
// TemporalGraphSystem represents the complete temporal graph system
type TemporalGraphSystem struct {
    Graph              TemporalGraph
    Navigator          DecisionNavigator
    InfluenceAnalyzer  InfluenceAnalyzer
    StalenessDetector  StalenessDetector
    ConflictDetector   ConflictDetector
    PatternAnalyzer    PatternAnalyzer
    VersionManager     VersionManager
    HistoryManager     HistoryManager
    MetricsCollector   MetricsCollector
    QuerySystem        *querySystemImpl
    PersistenceManager *persistenceManagerImpl
}
// NewTemporalGraphFactory creates a new temporal graph factory
@@ -86,7 +88,7 @@ func NewTemporalGraphFactory(storage storage.ContextStore, config *TemporalConfi
if config == nil {
config = DefaultTemporalConfig()
}
return &TemporalGraphFactory{
storage: storage,
config: config,
@@ -100,22 +102,22 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
encryptedStorage storage.EncryptedStorage,
backupManager storage.BackupManager,
) (*TemporalGraphSystem, error) {
// Create core temporal graph
graph := NewTemporalGraph(tgf.storage).(*temporalGraphImpl)
// Create navigator
navigator := NewDecisionNavigator(graph)
// Create influence analyzer
analyzer := NewInfluenceAnalyzer(graph)
// Create staleness detector
detector := NewStalenessDetector(graph)
// Create query system
querySystem := NewQuerySystem(graph, navigator, analyzer, detector)
// Create persistence manager
persistenceManager := NewPersistenceManager(
tgf.storage,
@@ -126,28 +128,28 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
graph,
tgf.config.PersistenceConfig,
)
// Create additional components
conflictDetector := NewConflictDetector(graph)
patternAnalyzer := NewPatternAnalyzer(graph)
versionManager := NewVersionManager(graph, persistenceManager)
historyManager := NewHistoryManager(graph, persistenceManager)
metricsCollector := NewMetricsCollector(graph)
system := &TemporalGraphSystem{
        Graph:              graph,
        Navigator:          navigator,
        InfluenceAnalyzer:  analyzer,
        StalenessDetector:  detector,
        ConflictDetector:   conflictDetector,
        PatternAnalyzer:    patternAnalyzer,
        VersionManager:     versionManager,
        HistoryManager:     historyManager,
        MetricsCollector:   metricsCollector,
        QuerySystem:        querySystem,
        PersistenceManager: persistenceManager,
}
return system, nil
}
@@ -159,19 +161,19 @@ func (tgf *TemporalGraphFactory) LoadExistingSystem(
encryptedStorage storage.EncryptedStorage,
backupManager storage.BackupManager,
) (*TemporalGraphSystem, error) {
// Create system
system, err := tgf.CreateTemporalGraphSystem(localStorage, distributedStorage, encryptedStorage, backupManager)
if err != nil {
return nil, fmt.Errorf("failed to create system: %w", err)
}
// Load graph data
err = system.PersistenceManager.LoadTemporalGraph(ctx)
if err != nil {
return nil, fmt.Errorf("failed to load temporal graph: %w", err)
}
return system, nil
}
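A sketch of wiring the factory and loading an existing graph; the variable names are placeholders for concrete storage instances, and the exact parameter order of LoadExistingSystem is assumed from the calls visible in this hunk.
// Illustrative bootstrap, not part of the diff.
factory := NewTemporalGraphFactory(contextStore, DefaultTemporalConfig())
system, err := factory.LoadExistingSystem(ctx, localStorage, distributedStorage, encryptedStorage, backupManager)
if err != nil {
    return fmt.Errorf("temporal graph bootstrap failed: %w", err)
}
_ = system.Graph // Graph, Navigator, analyzers, etc. are now ready to use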
@@ -188,23 +190,23 @@ func DefaultTemporalConfig() *TemporalConfig {
DependencyWeight: 0.3,
},
CacheTimeout: time.Minute * 15,
InfluenceAnalysisConfig: &InfluenceAnalysisConfig{
            DampingFactor:            0.85,
            MaxIterations:            100,
            ConvergenceThreshold:     1e-6,
            CacheValidDuration:       time.Minute * 30,
            EnableCentralityMetrics:  true,
EnableCommunityDetection: true,
},
NavigationConfig: &NavigationConfig{
MaxNavigationHistory: 100,
BookmarkRetention: time.Hour * 24 * 30, // 30 days
SessionTimeout: time.Hour * 2,
EnablePathCaching: true,
},
QueryConfig: &QueryConfig{
DefaultMaxHops: 10,
MaxQueryResults: 1000,
@@ -212,28 +214,28 @@ func DefaultTemporalConfig() *TemporalConfig {
CacheQueryResults: true,
EnableQueryOptimization: true,
},
PersistenceConfig: &PersistenceConfig{
            EnableLocalStorage:         true,
            EnableDistributedStorage:   true,
            EnableEncryption:           true,
            EncryptionRoles:            []string{"analyst", "architect", "developer"},
            SyncInterval:               time.Minute * 15,
            ConflictResolutionStrategy: "latest_wins",
            EnableAutoSync:             true,
            MaxSyncRetries:             3,
            BatchSize:                  50,
            FlushInterval:              time.Second * 30,
            EnableWriteBuffer:          true,
            EnableAutoBackup:           true,
            BackupInterval:             time.Hour * 6,
            RetainBackupCount:          10,
            KeyPrefix:                  "temporal_graph",
            NodeKeyPattern:             "temporal_graph/nodes/%s",
            GraphKeyPattern:            "temporal_graph/graph/%s",
            MetadataKeyPattern:         "temporal_graph/metadata/%s",
},
EnableCaching: true,
EnableCompression: false,
EnableMetrics: true,
@@ -308,11 +310,11 @@ func (cd *conflictDetectorImpl) ValidateDecisionSequence(ctx context.Context, ad
func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, conflict *TemporalConflict) (*ConflictResolution, error) {
// Implementation would resolve specific temporal conflicts
return &ConflictResolution{
        ConflictID:       conflict.ID,
        ResolutionMethod: "auto_resolved",
        ResolvedAt:       time.Now(),
        ResolvedBy:       "system",
        Confidence:       0.8,
}, nil
}
@@ -373,7 +375,7 @@ type versionManagerImpl struct {
persistence *persistenceManagerImpl
}
func (vm *versionManagerImpl) CreateVersion(ctx context.Context, address ucxl.Address,
contextNode *slurpContext.ContextNode, metadata *VersionMetadata) (*TemporalNode, error) {
// Implementation would create a new temporal version
return vm.graph.EvolveContext(ctx, address, contextNode, metadata.Reason, metadata.Decision)
@@ -390,7 +392,7 @@ func (vm *versionManagerImpl) ListVersions(ctx context.Context, address ucxl.Add
if err != nil {
return nil, err
}
versions := make([]*VersionInfo, len(history))
for i, node := range history {
versions[i] = &VersionInfo{
@@ -402,11 +404,11 @@ func (vm *versionManagerImpl) ListVersions(ctx context.Context, address ucxl.Add
DecisionID: node.DecisionID,
}
}
return versions, nil
}
func (vm *versionManagerImpl) CompareVersions(ctx context.Context, address ucxl.Address,
version1, version2 int) (*VersionComparison, error) {
// Implementation would compare two temporal versions
return &VersionComparison{
@@ -420,7 +422,7 @@ func (vm *versionManagerImpl) CompareVersions(ctx context.Context, address ucxl.
}, nil
}
func (vm *versionManagerImpl) MergeVersions(ctx context.Context, address ucxl.Address,
versions []int, strategy MergeStrategy) (*TemporalNode, error) {
// Implementation would merge multiple versions
return vm.graph.GetLatestVersion(ctx, address)
@@ -447,7 +449,7 @@ func (hm *historyManagerImpl) GetFullHistory(ctx context.Context, address ucxl.A
if err != nil {
return nil, err
}
return &ContextHistory{
Address: address,
Versions: history,
@@ -455,7 +457,7 @@ func (hm *historyManagerImpl) GetFullHistory(ctx context.Context, address ucxl.A
}, nil
}
func (hm *historyManagerImpl) GetHistoryRange(ctx context.Context, address ucxl.Address,
startHop, endHop int) (*ContextHistory, error) {
// Implementation would get history within a specific range
return hm.GetFullHistory(ctx, address)
@@ -539,13 +541,13 @@ func (mc *metricsCollectorImpl) GetInfluenceMetrics(ctx context.Context) (*Influ
func (mc *metricsCollectorImpl) GetQualityMetrics(ctx context.Context) (*QualityMetrics, error) {
// Implementation would get temporal data quality metrics
return &QualityMetrics{
        DataCompleteness:  1.0,
        DataConsistency:   1.0,
        DataAccuracy:      1.0,
        AverageConfidence: 0.8,
        ConflictsDetected: 0,
        ConflictsResolved: 0,
        LastQualityCheck:  time.Now(),
}, nil
}
@@ -560,4 +562,4 @@ func (mc *metricsCollectorImpl) calculateInfluenceConnections() int {
total += len(influences)
}
return total
}

View File

@@ -9,36 +9,36 @@ import (
"sync"
"time"
"chorus/pkg/ucxl"
slurpContext "chorus/pkg/slurp/context"
"chorus/pkg/slurp/storage"
"chorus/pkg/ucxl"
)
// temporalGraphImpl implements the TemporalGraph interface
type temporalGraphImpl struct {
    mu sync.RWMutex

    // Core storage
    storage storage.ContextStore

    // In-memory graph structures for fast access
    nodes          map[string]*TemporalNode   // nodeID -> TemporalNode
    addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
    influences     map[string][]string        // nodeID -> list of influenced nodeIDs
    influencedBy   map[string][]string        // nodeID -> list of influencer nodeIDs

    // Decision tracking
    decisions       map[string]*DecisionMetadata // decisionID -> DecisionMetadata
    decisionToNodes map[string][]*TemporalNode   // decisionID -> list of affected nodes

    // Performance optimization
    pathCache      map[string][]*DecisionStep // cache for decision paths
    metricsCache   map[string]interface{}     // cache for expensive metrics
    cacheTimeout   time.Duration
    lastCacheClean time.Time

    // Configuration
    maxDepth        int // Maximum depth for path finding
    stalenessWeight *StalenessWeights
}
@@ -69,113 +69,113 @@ func NewTemporalGraph(storage storage.ContextStore) TemporalGraph {
}
// CreateInitialContext creates the first temporal version of context
func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address ucxl.Address,
contextData *slurpContext.ContextNode, creator string) (*TemporalNode, error) {
tg.mu.Lock()
defer tg.mu.Unlock()
// Generate node ID
nodeID := tg.generateNodeID(address, 1)
// Create temporal node
temporalNode := &TemporalNode{
        ID:            nodeID,
        UCXLAddress:   address,
        Version:       1,
        Context:       contextData,
        Timestamp:     time.Now(),
        DecisionID:    fmt.Sprintf("initial-%s", creator),
        ChangeReason:  ReasonInitialCreation,
        ParentNode:    nil,
        ContextHash:   tg.calculateContextHash(contextData),
        Confidence:    contextData.RAGConfidence,
        Staleness:     0.0,
        Influences:    make([]ucxl.Address, 0),
        InfluencedBy:  make([]ucxl.Address, 0),
        ValidatedBy:   []string{creator},
        LastValidated: time.Now(),
        ImpactScope:   ImpactLocal,
        PropagatedTo:  make([]ucxl.Address, 0),
        Metadata:      make(map[string]interface{}),
}
// Store in memory structures
tg.nodes[nodeID] = temporalNode
addressKey := address.String()
tg.addressToNodes[addressKey] = []*TemporalNode{temporalNode}
// Initialize influence maps
tg.influences[nodeID] = make([]string, 0)
tg.influencedBy[nodeID] = make([]string, 0)
// Store decision metadata
decisionMeta := &DecisionMetadata{
        ID:                   temporalNode.DecisionID,
        Maker:                creator,
        Rationale:            "Initial context creation",
        Scope:                ImpactLocal,
        ConfidenceLevel:      contextData.RAGConfidence,
        ExternalRefs:         make([]string, 0),
        CreatedAt:            time.Now(),
        ImplementationStatus: "complete",
        Metadata:             make(map[string]interface{}),
}
tg.decisions[temporalNode.DecisionID] = decisionMeta
tg.decisionToNodes[temporalNode.DecisionID] = []*TemporalNode{temporalNode}
// Persist to storage
if err := tg.persistTemporalNode(ctx, temporalNode); err != nil {
return nil, fmt.Errorf("failed to persist initial temporal node: %w", err)
}
return temporalNode, nil
}
// EvolveContext creates a new temporal version due to a decision
func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Address,
    newContext *slurpContext.ContextNode, reason ChangeReason,
decision *DecisionMetadata) (*TemporalNode, error) {
tg.mu.Lock()
defer tg.mu.Unlock()
// Get latest version
addressKey := address.String()
nodes, exists := tg.addressToNodes[addressKey]
if !exists || len(nodes) == 0 {
return nil, fmt.Errorf("no existing context found for address %s", address.String())
}
// Find latest version
latestNode := nodes[len(nodes)-1]
newVersion := latestNode.Version + 1
// Generate new node ID
nodeID := tg.generateNodeID(address, newVersion)
// Create new temporal node
temporalNode := &TemporalNode{
        ID:            nodeID,
        UCXLAddress:   address,
        Version:       newVersion,
        Context:       newContext,
        Timestamp:     time.Now(),
        DecisionID:    decision.ID,
        ChangeReason:  reason,
        ParentNode:    &latestNode.ID,
        ContextHash:   tg.calculateContextHash(newContext),
        Confidence:    newContext.RAGConfidence,
        Staleness:     0.0, // New version, not stale
        Influences:    make([]ucxl.Address, 0),
        InfluencedBy:  make([]ucxl.Address, 0),
        ValidatedBy:   []string{decision.Maker},
        LastValidated: time.Now(),
        ImpactScope:   decision.Scope,
        PropagatedTo:  make([]ucxl.Address, 0),
        Metadata:      make(map[string]interface{}),
}
// Copy influence relationships from parent
if latestNodeInfluences, exists := tg.influences[latestNode.ID]; exists {
tg.influences[nodeID] = make([]string, len(latestNodeInfluences))
@@ -183,18 +183,18 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
} else {
tg.influences[nodeID] = make([]string, 0)
}
if latestNodeInfluencedBy, exists := tg.influencedBy[latestNode.ID]; exists {
tg.influencedBy[nodeID] = make([]string, len(latestNodeInfluencedBy))
copy(tg.influencedBy[nodeID], latestNodeInfluencedBy)
} else {
tg.influencedBy[nodeID] = make([]string, 0)
}
// Store in memory structures
tg.nodes[nodeID] = temporalNode
tg.addressToNodes[addressKey] = append(tg.addressToNodes[addressKey], temporalNode)
// Store decision metadata
tg.decisions[decision.ID] = decision
if existing, exists := tg.decisionToNodes[decision.ID]; exists {
@@ -202,18 +202,18 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
} else {
tg.decisionToNodes[decision.ID] = []*TemporalNode{temporalNode}
}
// Update staleness for related contexts
tg.updateStalenessAfterChange(temporalNode)
// Clear relevant caches
tg.clearCacheForAddress(address)
// Persist to storage
if err := tg.persistTemporalNode(ctx, temporalNode); err != nil {
return nil, fmt.Errorf("failed to persist evolved temporal node: %w", err)
}
return temporalNode, nil
}
@@ -221,38 +221,38 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
func (tg *temporalGraphImpl) GetLatestVersion(ctx context.Context, address ucxl.Address) (*TemporalNode, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
addressKey := address.String()
nodes, exists := tg.addressToNodes[addressKey]
if !exists || len(nodes) == 0 {
return nil, fmt.Errorf("no temporal nodes found for address %s", address.String())
}
// Return the latest version (last in slice)
return nodes[len(nodes)-1], nil
}
// GetVersionAtDecision gets context as it was at a specific decision hop
func (tg *temporalGraphImpl) GetVersionAtDecision(ctx context.Context, address ucxl.Address,
decisionHop int) (*TemporalNode, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
addressKey := address.String()
nodes, exists := tg.addressToNodes[addressKey]
if !exists || len(nodes) == 0 {
return nil, fmt.Errorf("no temporal nodes found for address %s", address.String())
}
// Find node at specific decision hop (version)
for _, node := range nodes {
if node.Version == decisionHop {
return node, nil
}
}
return nil, fmt.Errorf("no temporal node found at decision hop %d for address %s",
return nil, fmt.Errorf("no temporal node found at decision hop %d for address %s",
decisionHop, address.String())
}
@@ -260,20 +260,20 @@ func (tg *temporalGraphImpl) GetVersionAtDecision(ctx context.Context, address u
func (tg *temporalGraphImpl) GetEvolutionHistory(ctx context.Context, address ucxl.Address) ([]*TemporalNode, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
addressKey := address.String()
nodes, exists := tg.addressToNodes[addressKey]
if !exists || len(nodes) == 0 {
return []*TemporalNode{}, nil
}
// Sort by version to ensure proper order
sortedNodes := make([]*TemporalNode, len(nodes))
copy(sortedNodes, nodes)
sort.Slice(sortedNodes, func(i, j int) bool {
return sortedNodes[i].Version < sortedNodes[j].Version
})
return sortedNodes, nil
}
@@ -281,22 +281,22 @@ func (tg *temporalGraphImpl) GetEvolutionHistory(ctx context.Context, address uc
func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influencer, influenced ucxl.Address) error {
tg.mu.Lock()
defer tg.mu.Unlock()
// Get latest nodes for both addresses
influencerNode, err := tg.getLatestNodeUnsafe(influencer)
if err != nil {
return fmt.Errorf("influencer node not found: %w", err)
}
influencedNode, err := tg.getLatestNodeUnsafe(influenced)
if err != nil {
return fmt.Errorf("influenced node not found: %w", err)
}
// Add to influence mappings
influencerNodeID := influencerNode.ID
influencedNodeID := influencedNode.ID
// Add to influences map (influencer -> influenced)
if influences, exists := tg.influences[influencerNodeID]; exists {
// Check if relationship already exists
@@ -309,7 +309,7 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
} else {
tg.influences[influencerNodeID] = []string{influencedNodeID}
}
// Add to influencedBy map (influenced <- influencer)
if influencedBy, exists := tg.influencedBy[influencedNodeID]; exists {
// Check if relationship already exists
@@ -322,14 +322,14 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
} else {
tg.influencedBy[influencedNodeID] = []string{influencerNodeID}
}
// Update temporal nodes with the influence relationship
influencerNode.Influences = append(influencerNode.Influences, influenced)
influencedNode.InfluencedBy = append(influencedNode.InfluencedBy, influencer)
// Clear path cache as influence graph has changed
tg.pathCache = make(map[string][]*DecisionStep)
// Persist changes
if err := tg.persistTemporalNode(ctx, influencerNode); err != nil {
return fmt.Errorf("failed to persist influencer node: %w", err)
@@ -337,7 +337,7 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
if err := tg.persistTemporalNode(ctx, influencedNode); err != nil {
return fmt.Errorf("failed to persist influenced node: %w", err)
}
return nil
}
@@ -345,39 +345,39 @@ func (tg *temporalGraphImpl) AddInfluenceRelationship(ctx context.Context, influ
func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, influencer, influenced ucxl.Address) error {
tg.mu.Lock()
defer tg.mu.Unlock()
// Get latest nodes for both addresses
influencerNode, err := tg.getLatestNodeUnsafe(influencer)
if err != nil {
return fmt.Errorf("influencer node not found: %w", err)
}
influencedNode, err := tg.getLatestNodeUnsafe(influenced)
if err != nil {
return fmt.Errorf("influenced node not found: %w", err)
}
// Remove from influence mappings
influencerNodeID := influencerNode.ID
influencedNodeID := influencedNode.ID
// Remove from influences map
if influences, exists := tg.influences[influencerNodeID]; exists {
tg.influences[influencerNodeID] = tg.removeFromSlice(influences, influencedNodeID)
}
// Remove from influencedBy map
if influencedBy, exists := tg.influencedBy[influencedNodeID]; exists {
tg.influencedBy[influencedNodeID] = tg.removeFromSlice(influencedBy, influencerNodeID)
}
// Update temporal nodes
influencerNode.Influences = tg.removeAddressFromSlice(influencerNode.Influences, influenced)
influencedNode.InfluencedBy = tg.removeAddressFromSlice(influencedNode.InfluencedBy, influencer)
// Clear path cache
tg.pathCache = make(map[string][]*DecisionStep)
// Persist changes
if err := tg.persistTemporalNode(ctx, influencerNode); err != nil {
return fmt.Errorf("failed to persist influencer node: %w", err)
@@ -385,7 +385,7 @@ func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, in
if err := tg.persistTemporalNode(ctx, influencedNode); err != nil {
return fmt.Errorf("failed to persist influenced node: %w", err)
}
return nil
}
@@ -393,28 +393,28 @@ func (tg *temporalGraphImpl) RemoveInfluenceRelationship(ctx context.Context, in
func (tg *temporalGraphImpl) GetInfluenceRelationships(ctx context.Context, address ucxl.Address) ([]ucxl.Address, []ucxl.Address, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
node, err := tg.getLatestNodeUnsafe(address)
if err != nil {
return nil, nil, fmt.Errorf("node not found: %w", err)
}
influences := make([]ucxl.Address, len(node.Influences))
copy(influences, node.Influences)
influencedBy := make([]ucxl.Address, len(node.InfluencedBy))
copy(influencedBy, node.InfluencedBy)
return influences, influencedBy, nil
}
// FindRelatedDecisions finds decisions within N decision hops
func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address ucxl.Address,
maxHops int) ([]*DecisionPath, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
// Check cache first
cacheKey := fmt.Sprintf("related-%s-%d", address.String(), maxHops)
if cached, exists := tg.pathCache[cacheKey]; exists {
@@ -430,27 +430,27 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
}
return paths, nil
}
startNode, err := tg.getLatestNodeUnsafe(address)
if err != nil {
return nil, fmt.Errorf("start node not found: %w", err)
}
// Use BFS to find all nodes within maxHops
visited := make(map[string]bool)
queue := []*bfsItem{{node: startNode, distance: 0, path: []*DecisionStep{}}}
relatedPaths := make([]*DecisionPath, 0)
for len(queue) > 0 {
current := queue[0]
queue = queue[1:]
nodeID := current.node.ID
if visited[nodeID] || current.distance > maxHops {
continue
}
visited[nodeID] = true
// If this is not the starting node, add it to results
if current.distance > 0 {
step := &DecisionStep{
@@ -459,7 +459,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
HopDistance: current.distance,
Relationship: "influence",
}
path := &DecisionPath{
From: address,
To: current.node.UCXLAddress,
@@ -469,7 +469,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
}
relatedPaths = append(relatedPaths, path)
}
// Add influenced nodes to queue
if influences, exists := tg.influences[nodeID]; exists {
for _, influencedID := range influences {
@@ -491,7 +491,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
}
}
}
// Add influencer nodes to queue
if influencedBy, exists := tg.influencedBy[nodeID]; exists {
for _, influencerID := range influencedBy {
@@ -514,7 +514,7 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
}
}
}
return relatedPaths, nil
}
@@ -522,44 +522,44 @@ func (tg *temporalGraphImpl) FindRelatedDecisions(ctx context.Context, address u
func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl.Address) ([]*DecisionStep, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
// Check cache first
cacheKey := fmt.Sprintf("path-%s-%s", from.String(), to.String())
if cached, exists := tg.pathCache[cacheKey]; exists {
return cached, nil
}
fromNode, err := tg.getLatestNodeUnsafe(from)
if err != nil {
return nil, fmt.Errorf("from node not found: %w", err)
}
_, err = tg.getLatestNodeUnsafe(to)
if err != nil {
return nil, fmt.Errorf("to node not found: %w", err)
}
// Use BFS to find shortest path
visited := make(map[string]bool)
queue := []*pathItem{{node: fromNode, path: []*DecisionStep{}}}
for len(queue) > 0 {
current := queue[0]
queue = queue[1:]
nodeID := current.node.ID
if visited[nodeID] {
continue
}
visited[nodeID] = true
// Check if we reached the target
if current.node.UCXLAddress.String() == to.String() {
// Cache the result
tg.pathCache[cacheKey] = current.path
return current.path, nil
}
// Explore influenced nodes
if influences, exists := tg.influences[nodeID]; exists {
for _, influencedID := range influences {
@@ -580,7 +580,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
}
}
}
// Explore influencer nodes
if influencedBy, exists := tg.influencedBy[nodeID]; exists {
for _, influencerID := range influencedBy {
@@ -602,7 +602,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
}
}
}
return nil, fmt.Errorf("no path found from %s to %s", from.String(), to.String())
}
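// Illustrative sketch (assumed caller-side usage): resolving the shortest decision
// chain between two known addresses and walking each hop. `from` and `to` are
// assumed to be ucxl.Address values already present in the graph.
//
//	steps, err := tg.FindDecisionPath(ctx, from, to)
//	if err != nil {
//		return err
//	}
//	for _, step := range steps {
//		fmt.Printf("hop %d via %q\n", step.HopDistance, step.Relationship)
//	}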
@@ -610,7 +610,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*DecisionAnalysis, error) {
tg.mu.RLock()
defer tg.mu.RUnlock()
analysis := &DecisionAnalysis{
TimeRange: 24 * time.Hour, // Analyze last 24 hours by default
TotalDecisions: len(tg.decisions),
@@ -620,10 +620,10 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
MostInfluentialDecisions: make([]*InfluentialDecision, 0),
DecisionClusters: make([]*DecisionCluster, 0),
Patterns: make([]*DecisionPattern, 0),
Anomalies: make([]*AnomalousDecision, 0),
AnalyzedAt: time.Now(),
}
// Calculate decision velocity
cutoff := time.Now().Add(-analysis.TimeRange)
recentDecisions := 0
@@ -633,7 +633,7 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
}
}
analysis.DecisionVelocity = float64(recentDecisions) / analysis.TimeRange.Hours()
// Calculate average influence distance
totalDistance := 0.0
connections := 0
@@ -648,37 +648,37 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
if connections > 0 {
analysis.AverageInfluenceDistance = totalDistance / float64(connections)
}
// Find most influential decisions (simplified)
influenceScores := make(map[string]float64)
for nodeID, node := range tg.nodes {
score := float64(len(tg.influences[nodeID])) * 1.0 // Direct influences
score += float64(len(tg.influencedBy[nodeID])) * 0.5 // Being influenced
influenceScores[nodeID] = score
if score > 3.0 { // Threshold for "influential"
influential := &InfluentialDecision{
Address: node.UCXLAddress,
DecisionHop: node.Version,
InfluenceScore: score,
AffectedContexts: node.Influences,
DecisionMetadata: tg.decisions[node.DecisionID],
InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
}
analysis.MostInfluentialDecisions = append(analysis.MostInfluentialDecisions, influential)
}
}
// Sort influential decisions by score
sort.Slice(analysis.MostInfluentialDecisions, func(i, j int) bool {
return analysis.MostInfluentialDecisions[i].InfluenceScore > analysis.MostInfluentialDecisions[j].InfluenceScore
})
// Limit to top 10
if len(analysis.MostInfluentialDecisions) > 10 {
analysis.MostInfluentialDecisions = analysis.MostInfluentialDecisions[:10]
}
return analysis, nil
}
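// Illustrative sketch (assumed caller-side usage): surfacing the headline numbers
// from a pattern analysis run.
//
//	analysis, err := tg.AnalyzeDecisionPatterns(ctx)
//	if err != nil {
//		return err
//	}
//	fmt.Printf("%d decisions, %.2f decisions/hour, %d influential\n",
//		analysis.TotalDecisions, analysis.DecisionVelocity, len(analysis.MostInfluentialDecisions))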
@@ -686,19 +686,19 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) error {
tg.mu.RLock()
defer tg.mu.RUnlock()
errors := make([]string, 0)
// Check for orphaned nodes
for nodeID, node := range tg.nodes {
if node.ParentNode != nil {
if _, exists := tg.nodes[*node.ParentNode]; !exists {
errors = append(errors, fmt.Sprintf("orphaned node %s has non-existent parent %s",
errors = append(errors, fmt.Sprintf("orphaned node %s has non-existent parent %s",
nodeID, *node.ParentNode))
}
}
}
// Check influence consistency
for nodeID := range tg.influences {
if influences, exists := tg.influences[nodeID]; exists {
@@ -713,33 +713,33 @@ func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) erro
}
}
if !found {
errors = append(errors, fmt.Sprintf("influence inconsistency: %s -> %s not reflected in influencedBy",
errors = append(errors, fmt.Sprintf("influence inconsistency: %s -> %s not reflected in influencedBy",
nodeID, influencedID))
}
}
}
}
}
// Check version sequence integrity
for address, nodes := range tg.addressToNodes {
sort.Slice(nodes, func(i, j int) bool {
return nodes[i].Version < nodes[j].Version
})
for i, node := range nodes {
expectedVersion := i + 1
if node.Version != expectedVersion {
errors = append(errors, fmt.Sprintf("version sequence error for address %s: expected %d, got %d",
errors = append(errors, fmt.Sprintf("version sequence error for address %s: expected %d, got %d",
address, expectedVersion, node.Version))
}
}
}
if len(errors) > 0 {
return fmt.Errorf("temporal integrity violations: %v", errors)
}
return nil
}
@@ -747,21 +747,21 @@ func (tg *temporalGraphImpl) ValidateTemporalIntegrity(ctx context.Context) erro
func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time.Time) error {
tg.mu.Lock()
defer tg.mu.Unlock()
compacted := 0
// For each address, keep only the latest version and major milestones before the cutoff
for address, nodes := range tg.addressToNodes {
toKeep := make([]*TemporalNode, 0)
toRemove := make([]*TemporalNode, 0)
for _, node := range nodes {
// Always keep nodes after the cutoff time
if node.Timestamp.After(beforeTime) {
toKeep = append(toKeep, node)
continue
}
// Keep major changes and influential decisions
if tg.isMajorChange(node) || tg.isInfluentialDecision(node) {
toKeep = append(toKeep, node)
@@ -769,10 +769,10 @@ func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time
toRemove = append(toRemove, node)
}
}
// Update the address mapping
tg.addressToNodes[address] = toKeep
// Remove old nodes from main maps
for _, node := range toRemove {
delete(tg.nodes, node.ID)
@@ -781,11 +781,11 @@ func (tg *temporalGraphImpl) CompactHistory(ctx context.Context, beforeTime time
compacted++
}
}
// Clear caches after compaction
tg.pathCache = make(map[string][]*DecisionStep)
tg.metricsCache = make(map[string]interface{})
return nil
}
@@ -847,13 +847,13 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode
// Simple staleness calculation based on time since last update and influence strength
timeSinceUpdate := time.Since(node.Timestamp)
timeWeight := math.Min(timeSinceUpdate.Hours()/168.0, 1.0) // Max staleness from time: 1 week
// Influence weight based on connection strength
influenceWeight := 0.0
if len(node.InfluencedBy) > 0 {
influenceWeight = 1.0 / float64(len(node.InfluencedBy)) // Stronger if fewer influencers
}
// Impact scope weight
impactWeight := 0.0
switch changedNode.ImpactScope {
@@ -866,23 +866,23 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode
case ImpactLocal:
impactWeight = 0.4
}
return math.Min(
tg.stalenessWeight.TimeWeight*timeWeight+
tg.stalenessWeight.InfluenceWeight*influenceWeight+
tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
}
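// Worked example (hypothetical weights, for illustration only): with staleness
// weights TimeWeight=0.5, InfluenceWeight=0.3, ImportanceWeight=0.2, a node last
// updated 84h ago gives timeWeight = 84/168 = 0.5; two influencers give
// influenceWeight = 1/2 = 0.5; an ImpactLocal change gives impactWeight = 0.4.
// Staleness = min(0.5*0.5 + 0.3*0.5 + 0.2*0.4, 1.0) = min(0.48, 1.0) = 0.48.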
func (tg *temporalGraphImpl) clearCacheForAddress(address ucxl.Address) {
addressStr := address.String()
keysToDelete := make([]string, 0)
for key := range tg.pathCache {
if contains(key, addressStr) {
keysToDelete = append(keysToDelete, key)
}
}
for _, key := range keysToDelete {
delete(tg.pathCache, key)
}
@@ -908,7 +908,7 @@ func (tg *temporalGraphImpl) persistTemporalNode(ctx context.Context, node *Temp
}
// contains reports whether substr matches s exactly, or as a prefix or suffix of s.
func contains(s, substr string) bool {
return len(s) >= len(substr) && (s == substr ||
(len(s) > len(substr) && (s[:len(substr)] == substr || s[len(s)-len(substr):] == substr)))
}
@@ -923,4 +923,4 @@ type bfsItem struct {
type pathItem struct {
node *TemporalNode
path []*DecisionStep
}


@@ -13,36 +13,36 @@ import (
// decisionNavigatorImpl implements the DecisionNavigator interface
type decisionNavigatorImpl struct {
mu sync.RWMutex
// Reference to the temporal graph
graph *temporalGraphImpl
// Navigation state
navigationSessions map[string]*NavigationSession
bookmarks map[string]*DecisionBookmark
// Configuration
maxNavigationHistory int
}
// NavigationSession represents a navigation session
type NavigationSession struct {
ID string `json:"id"`
UserID string `json:"user_id"`
StartedAt time.Time `json:"started_at"`
LastActivity time.Time `json:"last_activity"`
CurrentPosition ucxl.Address `json:"current_position"`
History []*DecisionStep `json:"history"`
Bookmarks []string `json:"bookmarks"`
Preferences *NavPreferences `json:"preferences"`
}
// NavPreferences represents navigation preferences
type NavPreferences struct {
MaxHops int `json:"max_hops"`
PreferRecentDecisions bool `json:"prefer_recent_decisions"`
FilterByConfidence float64 `json:"filter_by_confidence"`
IncludeStaleContexts bool `json:"include_stale_contexts"`
}
// NewDecisionNavigator creates a new decision navigator
@@ -50,24 +50,24 @@ func NewDecisionNavigator(graph *temporalGraphImpl) DecisionNavigator {
return &decisionNavigatorImpl{
graph: graph,
navigationSessions: make(map[string]*NavigationSession),
bookmarks: make(map[string]*DecisionBookmark),
maxNavigationHistory: 100,
}
}
// NavigateDecisionHops navigates by decision distance, not time
func (dn *decisionNavigatorImpl) NavigateDecisionHops(ctx context.Context, address ucxl.Address,
hops int, direction NavigationDirection) (*TemporalNode, error) {
dn.mu.RLock()
defer dn.mu.RUnlock()
// Get starting node
startNode, err := dn.graph.getLatestNodeUnsafe(address)
if err != nil {
return nil, fmt.Errorf("failed to get starting node: %w", err)
}
// Navigate by hops
currentNode := startNode
for i := 0; i < hops; i++ {
@@ -77,23 +77,23 @@ func (dn *decisionNavigatorImpl) NavigateDecisionHops(ctx context.Context, addre
}
currentNode = nextNode
}
return currentNode, nil
}
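// Illustrative sketch (assumed caller-side usage): stepping three decision hops
// from an address. `direction` stands in for one of the NavigationDirection
// values, which are defined alongside the interface and not shown in this hunk.
//
//	node, err := nav.NavigateDecisionHops(ctx, addr, 3, direction)
//	if err != nil {
//		return err
//	}
//	fmt.Printf("landed on version %d of %s\n", node.Version, node.UCXLAddress.String())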
// GetDecisionTimeline gets timeline ordered by decision sequence
func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, address ucxl.Address,
includeRelated bool, maxHops int) (*DecisionTimeline, error) {
dn.mu.RLock()
defer dn.mu.RUnlock()
// Get evolution history for the primary address
history, err := dn.graph.GetEvolutionHistory(ctx, address)
if err != nil {
return nil, fmt.Errorf("failed to get evolution history: %w", err)
}
// Build decision timeline entries
decisionSequence := make([]*DecisionTimelineEntry, len(history))
for i, node := range history {
@@ -112,7 +112,7 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
}
decisionSequence[i] = entry
}
// Get related decisions if requested
relatedDecisions := make([]*RelatedDecision, 0)
if includeRelated && maxHops > 0 {
@@ -136,16 +136,16 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
}
}
}
// Calculate timeline analysis
analysis := dn.analyzeTimeline(decisionSequence, relatedDecisions)
// Calculate time span
var timeSpan time.Duration
if len(history) > 1 {
timeSpan = history[len(history)-1].Timestamp.Sub(history[0].Timestamp)
}
timeline := &DecisionTimeline{
PrimaryAddress: address,
DecisionSequence: decisionSequence,
@@ -154,7 +154,7 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
TimeSpan: timeSpan,
AnalysisMetadata: analysis,
}
return timeline, nil
}
@@ -162,31 +162,31 @@ func (dn *decisionNavigatorImpl) GetDecisionTimeline(ctx context.Context, addres
func (dn *decisionNavigatorImpl) FindStaleContexts(ctx context.Context, stalenessThreshold float64) ([]*StaleContext, error) {
dn.mu.RLock()
defer dn.mu.RUnlock()
staleContexts := make([]*StaleContext, 0)
// Check all nodes for staleness
for _, node := range dn.graph.nodes {
if node.Staleness >= stalenessThreshold {
staleness := &StaleContext{
UCXLAddress: node.UCXLAddress,
TemporalNode: node,
StalenessScore: node.Staleness,
LastUpdated: node.Timestamp,
Reasons: dn.getStalenessReasons(node),
SuggestedActions: dn.getSuggestedActions(node),
RelatedChanges: dn.getRelatedChanges(node),
Priority: dn.calculateStalePriority(node),
}
staleContexts = append(staleContexts, staleness)
}
}
// Sort by staleness score (highest first)
sort.Slice(staleContexts, func(i, j int) bool {
return staleContexts[i].StalenessScore > staleContexts[j].StalenessScore
})
return staleContexts, nil
}
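// Illustrative sketch (assumed caller-side usage): listing contexts whose
// staleness score crosses 0.7 and printing the suggested follow-up actions.
//
//	stale, err := nav.FindStaleContexts(ctx, 0.7)
//	if err != nil {
//		return err
//	}
//	for _, sc := range stale {
//		fmt.Printf("%s score=%.2f actions=%v\n", sc.UCXLAddress.String(), sc.StalenessScore, sc.SuggestedActions)
//	}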
@@ -195,28 +195,28 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
if len(path) == 0 {
return fmt.Errorf("empty decision path")
}
dn.mu.RLock()
defer dn.mu.RUnlock()
// Validate each step in the path
for i, step := range path {
// Check if the temporal node exists
if step.TemporalNode == nil {
return fmt.Errorf("step %d has nil temporal node", i)
}
nodeID := step.TemporalNode.ID
if _, exists := dn.graph.nodes[nodeID]; !exists {
return fmt.Errorf("step %d references non-existent node %s", i, nodeID)
}
// Validate hop distance
if step.HopDistance != i {
return fmt.Errorf("step %d has incorrect hop distance: expected %d, got %d",
return fmt.Errorf("step %d has incorrect hop distance: expected %d, got %d",
i, i, step.HopDistance)
}
// Validate relationship to next step
if i < len(path)-1 {
nextStep := path[i+1]
@@ -225,7 +225,7 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
}
}
}
return nil
}
@@ -233,16 +233,16 @@ func (dn *decisionNavigatorImpl) ValidateDecisionPath(ctx context.Context, path
func (dn *decisionNavigatorImpl) GetNavigationHistory(ctx context.Context, sessionID string) ([]*DecisionStep, error) {
dn.mu.RLock()
defer dn.mu.RUnlock()
session, exists := dn.navigationSessions[sessionID]
if !exists {
return nil, fmt.Errorf("navigation session %s not found", sessionID)
}
// Return a copy of the history
history := make([]*DecisionStep, len(session.History))
copy(history, session.History)
return history, nil
}
@@ -250,22 +250,22 @@ func (dn *decisionNavigatorImpl) GetNavigationHistory(ctx context.Context, sessi
func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address ucxl.Address) error {
dn.mu.Lock()
defer dn.mu.Unlock()
// Clear any navigation sessions for this address
for _, session := range dn.navigationSessions {
if session.CurrentPosition.String() == address.String() {
// Reset to latest version; confirm the address still resolves before clearing session state
if _, err := dn.graph.getLatestNodeUnsafe(address); err != nil {
return fmt.Errorf("failed to get latest node: %w", err)
}
session.CurrentPosition = address
session.History = []*DecisionStep{}
session.LastActivity = time.Now()
}
}
return nil
}
@@ -273,13 +273,13 @@ func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address uc
func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address ucxl.Address, hop int, name string) error {
dn.mu.Lock()
defer dn.mu.Unlock()
// Validate the decision point exists
node, err := dn.graph.GetVersionAtDecision(ctx, address, hop)
if err != nil {
return fmt.Errorf("decision point not found: %w", err)
}
// Create bookmark
bookmarkID := fmt.Sprintf("%s-%d-%d", address.String(), hop, time.Now().Unix())
bookmark := &DecisionBookmark{
@@ -293,14 +293,14 @@ func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address u
Tags: []string{},
Metadata: make(map[string]interface{}),
}
// Add context information to metadata
bookmark.Metadata["change_reason"] = node.ChangeReason
bookmark.Metadata["decision_id"] = node.DecisionID
bookmark.Metadata["confidence"] = node.Confidence
dn.bookmarks[bookmarkID] = bookmark
return nil
}
@@ -308,17 +308,17 @@ func (dn *decisionNavigatorImpl) BookmarkDecision(ctx context.Context, address u
func (dn *decisionNavigatorImpl) ListBookmarks(ctx context.Context) ([]*DecisionBookmark, error) {
dn.mu.RLock()
defer dn.mu.RUnlock()
bookmarks := make([]*DecisionBookmark, 0, len(dn.bookmarks))
for _, bookmark := range dn.bookmarks {
bookmarks = append(bookmarks, bookmark)
}
// Sort by creation time (newest first)
sort.Slice(bookmarks, func(i, j int) bool {
return bookmarks[i].CreatedAt.After(bookmarks[j].CreatedAt)
})
return bookmarks, nil
}
@@ -342,14 +342,14 @@ func (dn *decisionNavigatorImpl) navigateForward(currentNode *TemporalNode) (*Te
if !exists {
return nil, fmt.Errorf("no nodes found for address")
}
// Find current node in the list and get the next one
for i, node := range nodes {
if node.ID == currentNode.ID && i < len(nodes)-1 {
return nodes[i+1], nil
}
}
return nil, fmt.Errorf("no forward navigation possible")
}
@@ -358,12 +358,12 @@ func (dn *decisionNavigatorImpl) navigateBackward(currentNode *TemporalNode) (*T
if currentNode.ParentNode == nil {
return nil, fmt.Errorf("no backward navigation possible: no parent node")
}
parentNode, exists := dn.graph.nodes[*currentNode.ParentNode]
if !exists {
return nil, fmt.Errorf("parent node not found: %s", *currentNode.ParentNode)
}
return parentNode, nil
}
@@ -387,7 +387,7 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
AnalyzedAt: time.Now(),
}
}
// Calculate change velocity
var changeVelocity float64
if len(sequence) > 1 {
@@ -398,27 +398,27 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
changeVelocity = float64(len(sequence)-1) / duration.Hours()
}
}
// Analyze confidence trend
confidenceTrend := "stable"
if len(sequence) > 1 {
firstConfidence := sequence[0].ConfidenceEvolution
lastConfidence := sequence[len(sequence)-1].ConfidenceEvolution
diff := lastConfidence - firstConfidence
if diff > 0.1 {
confidenceTrend = "increasing"
} else if diff < -0.1 {
confidenceTrend = "decreasing"
}
}
// Count change reasons
reasonCounts := make(map[ChangeReason]int)
for _, entry := range sequence {
reasonCounts[entry.ChangeReason]++
}
// Find dominant reasons
dominantReasons := make([]ChangeReason, 0)
maxCount := 0
@@ -430,19 +430,19 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
dominantReasons = append(dominantReasons, reason)
}
}
// Count decision makers
makerCounts := make(map[string]int)
for _, entry := range sequence {
makerCounts[entry.DecisionMaker]++
}
// Count impact scope distribution
scopeCounts := make(map[ImpactScope]int)
for _, entry := range sequence {
scopeCounts[entry.ImpactScope]++
}
return &TimelineAnalysis{
ChangeVelocity: changeVelocity,
ConfidenceTrend: confidenceTrend,
@@ -456,47 +456,47 @@ func (dn *decisionNavigatorImpl) analyzeTimeline(sequence []*DecisionTimelineEnt
func (dn *decisionNavigatorImpl) getStalenessReasons(node *TemporalNode) []string {
reasons := make([]string, 0)
// Time-based staleness
timeSinceUpdate := time.Since(node.Timestamp)
if timeSinceUpdate > 7*24*time.Hour {
reasons = append(reasons, "not updated in over a week")
}
// Influence-based staleness
if len(node.InfluencedBy) > 0 {
reasons = append(reasons, "influenced by other contexts that may have changed")
}
// Confidence-based staleness
if node.Confidence < 0.7 {
reasons = append(reasons, "low confidence score")
}
return reasons
}
func (dn *decisionNavigatorImpl) getSuggestedActions(node *TemporalNode) []string {
actions := make([]string, 0)
actions = append(actions, "review context for accuracy")
actions = append(actions, "check related decisions for impact")
if node.Confidence < 0.7 {
actions = append(actions, "improve context confidence through additional analysis")
}
if len(node.InfluencedBy) > 3 {
actions = append(actions, "validate dependencies are still accurate")
}
return actions
}
func (dn *decisionNavigatorImpl) getRelatedChanges(node *TemporalNode) []ucxl.Address {
// Find contexts that have changed recently and might affect this one
relatedChanges := make([]ucxl.Address, 0)
cutoff := time.Now().Add(-24 * time.Hour)
for _, otherNode := range dn.graph.nodes {
if otherNode.Timestamp.After(cutoff) && otherNode.ID != node.ID {
@@ -509,18 +509,18 @@ func (dn *decisionNavigatorImpl) getRelatedChanges(node *TemporalNode) []ucxl.Ad
}
}
}
return relatedChanges
}
func (dn *decisionNavigatorImpl) calculateStalePriority(node *TemporalNode) StalePriority {
score := node.Staleness
// Adjust based on influence
if len(node.Influences) > 5 {
score += 0.2 // Higher priority if it influences many others
}
// Adjust based on impact scope
switch node.ImpactScope {
case ImpactSystem:
@@ -530,7 +530,7 @@ func (dn *decisionNavigatorImpl) calculateStalePriority(node *TemporalNode) Stal
case ImpactModule:
score += 0.1
}
if score >= 0.9 {
return PriorityCritical
} else if score >= 0.7 {
@@ -545,7 +545,7 @@ func (dn *decisionNavigatorImpl) validateStepRelationship(step, nextStep *Decisi
// Check if there's a valid relationship between the steps
currentNodeID := step.TemporalNode.ID
nextNodeID := nextStep.TemporalNode.ID
switch step.Relationship {
case "influences":
if influences, exists := dn.graph.influences[currentNodeID]; exists {
@@ -564,6 +564,6 @@ func (dn *decisionNavigatorImpl) validateStepRelationship(step, nextStep *Decisi
}
}
}
return false
}


@@ -7,93 +7,93 @@ import (
"sync"
"time"
"chorus/pkg/ucxl"
"chorus/pkg/slurp/storage"
"chorus/pkg/ucxl"
)
// persistenceManagerImpl handles persistence and synchronization of temporal graph data
type persistenceManagerImpl struct {
mu sync.RWMutex
// Storage interfaces
contextStore storage.ContextStore
localStorage storage.LocalStorage
distributedStore storage.DistributedStorage
encryptedStore storage.EncryptedStorage
backupManager storage.BackupManager
// Reference to temporal graph
graph *temporalGraphImpl
// Persistence configuration
config *PersistenceConfig
// Synchronization state
lastSyncTime time.Time
syncInProgress bool
pendingChanges map[string]*PendingChange
conflictResolver ConflictResolver
// Performance optimization
batchSize int
writeBuffer []*TemporalNode
bufferMutex sync.Mutex
flushInterval time.Duration
lastFlush time.Time
}
// PersistenceConfig represents configuration for temporal graph persistence
type PersistenceConfig struct {
// Storage settings
EnableLocalStorage bool `json:"enable_local_storage"`
EnableDistributedStorage bool `json:"enable_distributed_storage"`
EnableEncryption bool `json:"enable_encryption"`
EncryptionRoles []string `json:"encryption_roles"`
// Synchronization settings
SyncInterval time.Duration `json:"sync_interval"`
ConflictResolutionStrategy string `json:"conflict_resolution_strategy"`
EnableAutoSync bool `json:"enable_auto_sync"`
MaxSyncRetries int `json:"max_sync_retries"`
// Performance settings
BatchSize int `json:"batch_size"`
FlushInterval time.Duration `json:"flush_interval"`
EnableWriteBuffer bool `json:"enable_write_buffer"`
// Backup settings
EnableAutoBackup bool `json:"enable_auto_backup"`
BackupInterval time.Duration `json:"backup_interval"`
RetainBackupCount int `json:"retain_backup_count"`
// Storage keys and patterns
KeyPrefix string `json:"key_prefix"`
NodeKeyPattern string `json:"node_key_pattern"`
GraphKeyPattern string `json:"graph_key_pattern"`
MetadataKeyPattern string `json:"metadata_key_pattern"`
}
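// Illustrative configuration sketch (values are assumptions, not shipped defaults):
// a write-buffered, encrypted setup with hourly sync and daily backups.
//
//	cfg := &PersistenceConfig{
//		EnableLocalStorage:       true,
//		EnableDistributedStorage: true,
//		EnableEncryption:         true,
//		EncryptionRoles:          []string{"slurp-admin"}, // hypothetical role name
//		SyncInterval:             time.Hour,
//		EnableAutoSync:           true,
//		MaxSyncRetries:           3,
//		BatchSize:                64,
//		FlushInterval:            30 * time.Second,
//		EnableWriteBuffer:        true,
//		EnableAutoBackup:         true,
//		BackupInterval:           24 * time.Hour,
//		RetainBackupCount:        7,
//		KeyPrefix:                "temporal_graph",
//	}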
// PendingChange represents a change waiting to be synchronized
type PendingChange struct {
ID string `json:"id"`
Type ChangeType `json:"type"`
NodeID string `json:"node_id"`
Data interface{} `json:"data"`
Timestamp time.Time `json:"timestamp"`
Retries int `json:"retries"`
LastError string `json:"last_error"`
Metadata map[string]interface{} `json:"metadata"`
}
// ChangeType represents the type of change to be synchronized
type ChangeType string
const (
ChangeTypeNodeCreated ChangeType = "node_created"
ChangeTypeNodeUpdated ChangeType = "node_updated"
ChangeTypeNodeDeleted ChangeType = "node_deleted"
ChangeTypeGraphUpdated ChangeType = "graph_updated"
ChangeTypeInfluenceAdded ChangeType = "influence_added"
ChangeTypeInfluenceRemoved ChangeType = "influence_removed"
)
@@ -105,39 +105,39 @@ type ConflictResolver interface {
// GraphSnapshot represents a snapshot of the temporal graph for synchronization
type GraphSnapshot struct {
Timestamp time.Time `json:"timestamp"`
Nodes map[string]*TemporalNode `json:"nodes"`
Influences map[string][]string `json:"influences"`
InfluencedBy map[string][]string `json:"influenced_by"`
Decisions map[string]*DecisionMetadata `json:"decisions"`
Metadata *GraphMetadata `json:"metadata"`
Checksum string `json:"checksum"`
}
// GraphMetadata represents metadata about the temporal graph
type GraphMetadata struct {
Version int `json:"version"`
LastModified time.Time `json:"last_modified"`
NodeCount int `json:"node_count"`
EdgeCount int `json:"edge_count"`
DecisionCount int `json:"decision_count"`
CreatedBy string `json:"created_by"`
CreatedAt time.Time `json:"created_at"`
}
// SyncResult represents the result of a synchronization operation
type SyncResult struct {
StartTime time.Time `json:"start_time"`
EndTime time.Time `json:"end_time"`
Duration time.Duration `json:"duration"`
NodesProcessed int `json:"nodes_processed"`
NodesCreated int `json:"nodes_created"`
NodesUpdated int `json:"nodes_updated"`
NodesDeleted int `json:"nodes_deleted"`
ConflictsFound int `json:"conflicts_found"`
ConflictsResolved int `json:"conflicts_resolved"`
Errors []string `json:"errors"`
Success bool `json:"success"`
}
// NewPersistenceManager creates a new persistence manager
@@ -150,7 +150,7 @@ func NewPersistenceManager(
graph *temporalGraphImpl,
config *PersistenceConfig,
) *persistenceManagerImpl {
pm := &persistenceManagerImpl{
contextStore: contextStore,
localStorage: localStorage,
@@ -165,20 +165,20 @@ func NewPersistenceManager(
writeBuffer: make([]*TemporalNode, 0, config.BatchSize),
flushInterval: config.FlushInterval,
}
// Start background processes
if config.EnableAutoSync {
go pm.syncWorker()
}
if config.EnableWriteBuffer {
go pm.flushWorker()
}
if config.EnableAutoBackup {
go pm.backupWorker()
}
return pm
}
@@ -186,12 +186,12 @@ func NewPersistenceManager(
func (pm *persistenceManagerImpl) PersistTemporalNode(ctx context.Context, node *TemporalNode) error {
pm.mu.Lock()
defer pm.mu.Unlock()
// Add to write buffer if enabled
if pm.config.EnableWriteBuffer {
return pm.addToWriteBuffer(node)
}
// Direct persistence
return pm.persistNodeDirect(ctx, node)
}
@@ -200,20 +200,20 @@ func (pm *persistenceManagerImpl) PersistTemporalNode(ctx context.Context, node
func (pm *persistenceManagerImpl) LoadTemporalGraph(ctx context.Context) error {
pm.mu.Lock()
defer pm.mu.Unlock()
// Load from different storage layers
if pm.config.EnableLocalStorage {
if err := pm.loadFromLocalStorage(ctx); err != nil {
return fmt.Errorf("failed to load from local storage: %w", err)
}
}
if pm.config.EnableDistributedStorage {
if err := pm.loadFromDistributedStorage(ctx); err != nil {
return fmt.Errorf("failed to load from distributed storage: %w", err)
}
}
return nil
}
@@ -226,19 +226,19 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
}
pm.syncInProgress = true
pm.mu.Unlock()
defer func() {
pm.mu.Lock()
pm.syncInProgress = false
pm.lastSyncTime = time.Now()
pm.mu.Unlock()
}()
result := &SyncResult{
StartTime: time.Now(),
Errors: make([]string, 0),
}
// Create local snapshot
localSnapshot, err := pm.createGraphSnapshot()
if err != nil {
@@ -246,31 +246,31 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
result.Success = false
return result, err
}
// Get remote snapshot
remoteSnapshot, err := pm.getRemoteSnapshot(ctx)
if err != nil {
// Remote might not exist yet, continue with local
remoteSnapshot = nil
}
// Perform synchronization
if remoteSnapshot != nil {
err = pm.performBidirectionalSync(ctx, localSnapshot, remoteSnapshot, result)
} else {
err = pm.performInitialSync(ctx, localSnapshot, result)
}
if err != nil {
result.Errors = append(result.Errors, fmt.Sprintf("sync failed: %v", err))
result.Success = false
} else {
result.Success = true
}
result.EndTime = time.Now()
result.Duration = result.EndTime.Sub(result.StartTime)
return result, err
}
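// Illustrative sketch (assumed caller-side usage): triggering a manual sync and
// reporting the outcome using the SyncResult fields defined above.
//
//	result, err := pm.SynchronizeGraph(ctx)
//	if err != nil {
//		return err
//	}
//	log.Printf("sync ok in %s: %d processed, %d created, %d updated, %d conflicts resolved",
//		result.Duration, result.NodesProcessed, result.NodesCreated, result.NodesUpdated, result.ConflictsResolved)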
@@ -278,35 +278,27 @@ func (pm *persistenceManagerImpl) SynchronizeGraph(ctx context.Context) (*SyncRe
func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
pm.mu.RLock()
defer pm.mu.RUnlock()
if !pm.config.EnableAutoBackup {
return fmt.Errorf("backup not enabled")
}
// Create graph snapshot
snapshot, err := pm.createGraphSnapshot()
if err != nil {
return fmt.Errorf("failed to create snapshot: %w", err)
}
// Serialize snapshot
data, err := json.Marshal(snapshot)
if err != nil {
return fmt.Errorf("failed to serialize snapshot: %w", err)
}
// Create backup configuration
backupConfig := &storage.BackupConfig{
Type: "temporal_graph",
Description: "Temporal graph backup",
Tags: []string{"temporal", "graph", "decision"},
Name: "temporal_graph",
Metadata: map[string]interface{}{
"node_count": snapshot.Metadata.NodeCount,
"edge_count": snapshot.Metadata.EdgeCount,
"decision_count": snapshot.Metadata.DecisionCount,
},
}
// Create backup
_, err = pm.backupManager.CreateBackup(ctx, backupConfig)
return err
@@ -316,19 +308,19 @@ func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
func (pm *persistenceManagerImpl) RestoreGraph(ctx context.Context, backupID string) error {
pm.mu.Lock()
defer pm.mu.Unlock()
// Create restore configuration
restoreConfig := &storage.RestoreConfig{
OverwriteExisting: true,
ValidateIntegrity: true,
}
// Restore from backup
err := pm.backupManager.RestoreBackup(ctx, backupID, restoreConfig)
if err != nil {
return fmt.Errorf("failed to restore backup: %w", err)
}
// Reload graph from storage
return pm.LoadTemporalGraph(ctx)
}
@@ -338,14 +330,14 @@ func (pm *persistenceManagerImpl) RestoreGraph(ctx context.Context, backupID str
func (pm *persistenceManagerImpl) addToWriteBuffer(node *TemporalNode) error {
pm.bufferMutex.Lock()
defer pm.bufferMutex.Unlock()
pm.writeBuffer = append(pm.writeBuffer, node)
// Check if buffer is full
if len(pm.writeBuffer) >= pm.batchSize {
return pm.flushWriteBuffer()
}
return nil
}
@@ -353,59 +345,57 @@ func (pm *persistenceManagerImpl) flushWriteBuffer() error {
if len(pm.writeBuffer) == 0 {
return nil
}
// Create batch store request
batch := &storage.BatchStoreRequest{
Contexts: make([]*storage.ContextStoreItem, len(pm.writeBuffer)),
Roles: pm.config.EncryptionRoles,
FailOnError: true,
}
for i, node := range pm.writeBuffer {
batch.Contexts[i] = &storage.ContextStoreItem{
Context: node,
Roles: pm.config.EncryptionRoles,
}
}
// Execute batch store
ctx := context.Background()
_, err := pm.contextStore.BatchStore(ctx, batch)
if err != nil {
return fmt.Errorf("failed to flush write buffer: %w", err)
}
// Clear buffer
pm.writeBuffer = pm.writeBuffer[:0]
pm.lastFlush = time.Now()
return nil
}
func (pm *persistenceManagerImpl) persistNodeDirect(ctx context.Context, node *TemporalNode) error {
key := pm.generateNodeKey(node)
// Store in different layers
if pm.config.EnableLocalStorage {
if err := pm.localStorage.Store(ctx, key, node, nil); err != nil {
return fmt.Errorf("failed to store in local storage: %w", err)
}
}
if pm.config.EnableDistributedStorage {
if err := pm.distributedStore.Store(ctx, key, node, nil); err != nil {
return fmt.Errorf("failed to store in distributed storage: %w", err)
}
}
if pm.config.EnableEncryption {
if err := pm.encryptedStore.StoreEncrypted(ctx, key, node, pm.config.EncryptionRoles); err != nil {
return fmt.Errorf("failed to store encrypted: %w", err)
}
}
// Add to pending changes for synchronization
change := &PendingChange{
ID: fmt.Sprintf("%s-%d", node.ID, time.Now().UnixNano()),
@@ -415,9 +405,9 @@ func (pm *persistenceManagerImpl) persistNodeDirect(ctx context.Context, node *T
Timestamp: time.Now(),
Metadata: make(map[string]interface{}),
}
pm.pendingChanges[change.ID] = change
return nil
}
@@ -428,51 +418,51 @@ func (pm *persistenceManagerImpl) loadFromLocalStorage(ctx context.Context) erro
if err != nil {
return fmt.Errorf("failed to load metadata: %w", err)
}
var metadata *GraphMetadata
if err := json.Unmarshal(metadataData.([]byte), &metadata); err != nil {
return fmt.Errorf("failed to unmarshal metadata: %w", err)
}
// Load all nodes
pattern := pm.generateNodeKeyPattern()
nodeKeys, err := pm.localStorage.List(ctx, pattern)
if err != nil {
return fmt.Errorf("failed to list nodes: %w", err)
}
// Load nodes in batches
batchReq := &storage.BatchRetrieveRequest{
Keys: nodeKeys,
}
batchResult, err := pm.contextStore.BatchRetrieve(ctx, batchReq)
if err != nil {
return fmt.Errorf("failed to batch retrieve nodes: %w", err)
}
// Reconstruct graph
pm.graph.mu.Lock()
defer pm.graph.mu.Unlock()
pm.graph.nodes = make(map[string]*TemporalNode)
pm.graph.addressToNodes = make(map[string][]*TemporalNode)
pm.graph.influences = make(map[string][]string)
pm.graph.influencedBy = make(map[string][]string)
for key, result := range batchResult.Results {
if result.Error != nil {
continue // Skip failed retrievals
}
var node *TemporalNode
if err := json.Unmarshal(result.Data.([]byte), &node); err != nil {
continue // Skip invalid nodes
}
pm.reconstructGraphNode(node)
}
return nil
}
@@ -485,7 +475,7 @@ func (pm *persistenceManagerImpl) loadFromDistributedStorage(ctx context.Context
func (pm *persistenceManagerImpl) createGraphSnapshot() (*GraphSnapshot, error) {
pm.graph.mu.RLock()
defer pm.graph.mu.RUnlock()
snapshot := &GraphSnapshot{
Timestamp: time.Now(),
Nodes: make(map[string]*TemporalNode),
@@ -502,48 +492,48 @@ func (pm *persistenceManagerImpl) createGraphSnapshot() (*GraphSnapshot, error)
CreatedAt: time.Now(),
},
}
// Copy nodes
for id, node := range pm.graph.nodes {
snapshot.Nodes[id] = node
}
// Copy influences
for id, influences := range pm.graph.influences {
snapshot.Influences[id] = make([]string, len(influences))
copy(snapshot.Influences[id], influences)
}
// Copy influenced by
for id, influencedBy := range pm.graph.influencedBy {
snapshot.InfluencedBy[id] = make([]string, len(influencedBy))
copy(snapshot.InfluencedBy[id], influencedBy)
}
// Copy decisions
for id, decision := range pm.graph.decisions {
snapshot.Decisions[id] = decision
}
// Calculate checksum
snapshot.Checksum = pm.calculateSnapshotChecksum(snapshot)
return snapshot, nil
}
func (pm *persistenceManagerImpl) getRemoteSnapshot(ctx context.Context) (*GraphSnapshot, error) {
key := pm.generateGraphKey()
data, err := pm.distributedStore.Retrieve(ctx, key)
if err != nil {
return nil, err
}
var snapshot *GraphSnapshot
if err := json.Unmarshal(data.([]byte), &snapshot); err != nil {
return nil, fmt.Errorf("failed to unmarshal remote snapshot: %w", err)
}
return snapshot, nil
}
@@ -551,7 +541,7 @@ func (pm *persistenceManagerImpl) performBidirectionalSync(ctx context.Context,
// Compare snapshots and identify differences
conflicts := pm.identifyConflicts(local, remote)
result.ConflictsFound = len(conflicts)
// Resolve conflicts
for _, conflict := range conflicts {
resolved, err := pm.resolveConflict(ctx, conflict)
@@ -559,48 +549,48 @@ func (pm *persistenceManagerImpl) performBidirectionalSync(ctx context.Context,
result.Errors = append(result.Errors, fmt.Sprintf("failed to resolve conflict %s: %v", conflict.NodeID, err))
continue
}
// Apply resolution
if err := pm.applyConflictResolution(ctx, resolved); err != nil {
result.Errors = append(result.Errors, fmt.Sprintf("failed to apply resolution for %s: %v", conflict.NodeID, err))
continue
}
result.ConflictsResolved++
}
// Sync local changes to remote
err := pm.syncLocalToRemote(ctx, local, remote, result)
if err != nil {
return fmt.Errorf("failed to sync local to remote: %w", err)
}
// Sync remote changes to local
err = pm.syncRemoteToLocal(ctx, remote, local, result)
if err != nil {
return fmt.Errorf("failed to sync remote to local: %w", err)
}
return nil
}
func (pm *persistenceManagerImpl) performInitialSync(ctx context.Context, local *GraphSnapshot, result *SyncResult) error {
// Store entire local snapshot to remote
key := pm.generateGraphKey()
data, err := json.Marshal(local)
if err != nil {
return fmt.Errorf("failed to marshal snapshot: %w", err)
}
err = pm.distributedStore.Store(ctx, key, data, nil)
if err != nil {
return fmt.Errorf("failed to store snapshot: %w", err)
}
result.NodesProcessed = len(local.Nodes)
result.NodesCreated = len(local.Nodes)
return nil
}
@@ -609,7 +599,7 @@ func (pm *persistenceManagerImpl) performInitialSync(ctx context.Context, local
func (pm *persistenceManagerImpl) syncWorker() {
ticker := time.NewTicker(pm.config.SyncInterval)
defer ticker.Stop()
for range ticker.C {
ctx := context.Background()
if _, err := pm.SynchronizeGraph(ctx); err != nil {
@@ -622,7 +612,7 @@ func (pm *persistenceManagerImpl) syncWorker() {
func (pm *persistenceManagerImpl) flushWorker() {
ticker := time.NewTicker(pm.flushInterval)
defer ticker.Stop()
for range ticker.C {
pm.bufferMutex.Lock()
if time.Since(pm.lastFlush) >= pm.flushInterval && len(pm.writeBuffer) > 0 {
@@ -637,7 +627,7 @@ func (pm *persistenceManagerImpl) flushWorker() {
func (pm *persistenceManagerImpl) backupWorker() {
ticker := time.NewTicker(pm.config.BackupInterval)
defer ticker.Stop()
for range ticker.C {
ctx := context.Background()
if err := pm.BackupGraph(ctx); err != nil {
@@ -681,7 +671,7 @@ func (pm *persistenceManagerImpl) calculateSnapshotChecksum(snapshot *GraphSnaps
func (pm *persistenceManagerImpl) reconstructGraphNode(node *TemporalNode) {
// Add node to graph
pm.graph.nodes[node.ID] = node
// Update address mapping
addressKey := node.UCXLAddress.String()
if existing, exists := pm.graph.addressToNodes[addressKey]; exists {
@@ -689,17 +679,17 @@ func (pm *persistenceManagerImpl) reconstructGraphNode(node *TemporalNode) {
} else {
pm.graph.addressToNodes[addressKey] = []*TemporalNode{node}
}
// Reconstruct influence relationships
pm.graph.influences[node.ID] = make([]string, 0)
pm.graph.influencedBy[node.ID] = make([]string, 0)
// These would be rebuilt from the influence data in the snapshot
}
func (pm *persistenceManagerImpl) identifyConflicts(local, remote *GraphSnapshot) []*SyncConflict {
conflicts := make([]*SyncConflict, 0)
// Compare nodes
for nodeID, localNode := range local.Nodes {
if remoteNode, exists := remote.Nodes[nodeID]; exists {
@@ -714,7 +704,7 @@ func (pm *persistenceManagerImpl) identifyConflicts(local, remote *GraphSnapshot
}
}
}
return conflicts
}
@@ -727,28 +717,28 @@ func (pm *persistenceManagerImpl) resolveConflict(ctx context.Context, conflict
// Use conflict resolver to resolve the conflict
localNode := conflict.LocalData.(*TemporalNode)
remoteNode := conflict.RemoteData.(*TemporalNode)
resolvedNode, err := pm.conflictResolver.ResolveConflict(ctx, localNode, remoteNode)
if err != nil {
return nil, err
}
return &ConflictResolution{
ConflictID: conflict.NodeID,
Resolution: "merged",
ResolvedData: resolvedNode,
ResolvedAt: time.Now(),
}, nil
}
func (pm *persistenceManagerImpl) applyConflictResolution(ctx context.Context, resolution *ConflictResolution) error {
// Apply the resolved node back to the graph
resolvedNode := resolution.ResolvedData.(*TemporalNode)
pm.graph.mu.Lock()
pm.graph.nodes[resolvedNode.ID] = resolvedNode
pm.graph.mu.Unlock()
// Persist the resolved node
return pm.persistNodeDirect(ctx, resolvedNode)
}
@@ -757,7 +747,7 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
// Sync nodes that exist locally but not remotely, or are newer locally
for nodeID, localNode := range local.Nodes {
shouldSync := false
if remoteNode, exists := remote.Nodes[nodeID]; exists {
// Check if local is newer
if localNode.Timestamp.After(remoteNode.Timestamp) {
@@ -768,7 +758,7 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
shouldSync = true
result.NodesCreated++
}
if shouldSync {
key := pm.generateNodeKey(localNode)
data, err := json.Marshal(localNode)
@@ -776,19 +766,19 @@ func (pm *persistenceManagerImpl) syncLocalToRemote(ctx context.Context, local,
result.Errors = append(result.Errors, fmt.Sprintf("failed to marshal node %s: %v", nodeID, err))
continue
}
err = pm.distributedStore.Store(ctx, key, data, nil)
if err != nil {
result.Errors = append(result.Errors, fmt.Sprintf("failed to sync node %s to remote: %v", nodeID, err))
continue
}
if remoteNode, exists := remote.Nodes[nodeID]; exists && localNode.Timestamp.After(remoteNode.Timestamp) {
result.NodesUpdated++
}
}
}
return nil
}
@@ -796,7 +786,7 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
// Sync nodes that exist remotely but not locally, or are newer remotely
for nodeID, remoteNode := range remote.Nodes {
shouldSync := false
if localNode, exists := local.Nodes[nodeID]; exists {
// Check if remote is newer
if remoteNode.Timestamp.After(localNode.Timestamp) {
@@ -807,55 +797,41 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
shouldSync = true
result.NodesCreated++
}
if shouldSync {
// Add to local graph
pm.graph.mu.Lock()
pm.graph.nodes[remoteNode.ID] = remoteNode
pm.reconstructGraphNode(remoteNode)
pm.graph.mu.Unlock()
// Persist locally
err := pm.persistNodeDirect(ctx, remoteNode)
if err != nil {
result.Errors = append(result.Errors, fmt.Sprintf("failed to sync node %s to local: %v", nodeID, err))
continue
}
if localNode, exists := local.Nodes[nodeID]; exists && remoteNode.Timestamp.After(localNode.Timestamp) {
result.NodesUpdated++
}
}
}
return nil
}
// Supporting types for conflict resolution
type SyncConflict struct {
Type ConflictType `json:"type"`
NodeID string `json:"node_id"`
LocalData interface{} `json:"local_data"`
RemoteData interface{} `json:"remote_data"`
Severity string `json:"severity"`
}
type ConflictType string
const (
ConflictTypeNodeMismatch ConflictType = "node_mismatch"
ConflictTypeInfluenceMismatch ConflictType = "influence_mismatch"
ConflictTypeMetadataMismatch ConflictType = "metadata_mismatch"
)
type ConflictResolution struct {
ConflictID string `json:"conflict_id"`
Resolution string `json:"resolution"`
ResolvedData interface{} `json:"resolved_data"`
ResolvedAt time.Time `json:"resolved_at"`
ResolvedBy string `json:"resolved_by"`
}
// Default conflict resolver implementation
@@ -886,4 +862,4 @@ func (dcr *defaultConflictResolver) ResolveGraphConflict(ctx context.Context, lo
return localGraph, nil
}
return remoteGraph, nil
}


@@ -17,45 +17,46 @@ import (
// cascading context resolution with bounded depth traversal.
type ContextNode struct {
// Identity and addressing
ID string `json:"id"` // Unique identifier
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
Path string `json:"path"` // Filesystem path
// Core context information
Summary string `json:"summary"` // Brief description
Purpose string `json:"purpose"` // What this component does
Technologies []string `json:"technologies"` // Technologies used
Tags []string `json:"tags"` // Categorization tags
Insights []string `json:"insights"` // Analytical insights
// Hierarchy relationships
Parent *string `json:"parent,omitempty"` // Parent context ID
Children []string `json:"children"` // Child context IDs
Specificity int `json:"specificity"` // Specificity level (higher = more specific)
// File metadata
FileType string `json:"file_type"` // File extension or type
Language *string `json:"language,omitempty"` // Programming language
Size *int64 `json:"size,omitempty"` // File size in bytes
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification time
ContentHash *string `json:"content_hash,omitempty"` // Content hash for change detection
// Resolution metadata
CreatedBy string `json:"created_by"` // Who/what created this context
CreatedAt time.Time `json:"created_at"` // When created
UpdatedAt time.Time `json:"updated_at"` // When last updated
UpdatedBy string `json:"updated_by"` // Who performed the last update
Confidence float64 `json:"confidence"` // Confidence in accuracy (0-1)
// Cascading behavior rules
AppliesTo ContextScope `json:"applies_to"` // Scope of application
Overrides bool `json:"overrides"` // Whether this overrides parent context
// Security and access control
EncryptedFor []string `json:"encrypted_for"` // Roles that can access
AccessLevel crypto.AccessLevel `json:"access_level"` // Access level required
// Custom metadata
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
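// Example (illustrative sketch): a minimal ContextNode as SLURP might emit it
// for a single Go source file. All concrete values below, including the UCXL
// address and confidence, are placeholders rather than required conventions.
func exampleContextNode(now time.Time) *ContextNode {
	lang := "go"
	return &ContextNode{
		ID:           "ctx-pkg-slurp-storage",
		UCXLAddress:  "ucxl://chorus/slurp/pkg/slurp/storage.go",
		Path:         "pkg/slurp/storage.go",
		Summary:      "LevelDB-backed persistence for SLURP context graphs",
		Purpose:      "Persist and restore context nodes across restarts",
		Technologies: []string{"go", "leveldb"},
		Tags:         []string{"persistence", "slurp"},
		Insights:     []string{"hot path for beacon manifests"},
		Children:     []string{},
		Specificity:  3,
		FileType:     "go",
		Language:     &lang,
		CreatedBy:    "slurp-context-generator",
		CreatedAt:    now,
		UpdatedAt:    now,
		UpdatedBy:    "slurp-context-generator",
		Confidence:   0.8,
		EncryptedFor: []string{"sec-high"},
	}
}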
// ResolvedContext represents the final resolved context for a UCXL address.
@@ -64,41 +65,41 @@ type ContextNode struct {
// information from multiple hierarchy levels and applying global contexts.
type ResolvedContext struct {
// Resolved context data
UCXLAddress string `json:"ucxl_address"` // Original UCXL address
Summary string `json:"summary"` // Resolved summary
Purpose string `json:"purpose"` // Resolved purpose
Technologies []string `json:"technologies"` // Merged technologies
Tags []string `json:"tags"` // Merged tags
Insights []string `json:"insights"` // Merged insights
// File information
FileType string `json:"file_type"` // File type
Language *string `json:"language,omitempty"` // Programming language
Size *int64 `json:"size,omitempty"` // File size
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification
ContentHash *string `json:"content_hash,omitempty"` // Content hash
// Resolution metadata
SourcePath string `json:"source_path"` // Primary source context path
InheritanceChain []string `json:"inheritance_chain"` // Context inheritance chain
Confidence float64 `json:"confidence"` // Overall confidence (0-1)
BoundedDepth int `json:"bounded_depth"` // Actual traversal depth used
GlobalApplied bool `json:"global_applied"` // Whether global contexts were applied
ResolvedAt time.Time `json:"resolved_at"` // When resolution occurred
// Temporal information
Version int `json:"version"` // Current version number
LastUpdated time.Time `json:"last_updated"` // When context was last updated
EvolutionHistory []string `json:"evolution_history"` // Brief evolution history
// Access control
AccessibleBy []string `json:"accessible_by"` // Roles that can access this
EncryptionKeys []string `json:"encryption_keys"` // Keys used for encryption
// Performance metadata
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
CacheHit bool `json:"cache_hit"` // Whether result was cached
NodesTraversed int `json:"nodes_traversed"` // Number of hierarchy nodes traversed
}
// ContextScope defines the scope of a context node's application
@@ -117,38 +118,38 @@ const (
// simple chronological progression.
type TemporalNode struct {
// Node identity
ID string `json:"id"` // Unique temporal node ID
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
Version int `json:"version"` // Version number (monotonic)
// Context snapshot
Context ContextNode `json:"context"` // Context data at this point
// Temporal metadata
Timestamp time.Time `json:"timestamp"` // When this version was created
DecisionID string `json:"decision_id"` // Associated decision identifier
ChangeReason ChangeReason `json:"change_reason"` // Why context changed
ParentNode *string `json:"parent_node,omitempty"` // Previous version ID
// Evolution tracking
ContextHash string `json:"context_hash"` // Hash of context content
Confidence float64 `json:"confidence"` // Confidence in this version (0-1)
Staleness float64 `json:"staleness"` // Staleness indicator (0-1)
// Decision graph relationships
Influences []string `json:"influences"` // UCXL addresses this influences
InfluencedBy []string `json:"influenced_by"` // UCXL addresses that influence this
// Validation metadata
ValidatedBy []string `json:"validated_by"` // Who/what validated this
LastValidated time.Time `json:"last_validated"` // When last validated
// Change impact analysis
ImpactScope ImpactScope `json:"impact_scope"` // Scope of change impact
PropagatedTo []string `json:"propagated_to"` // Addresses that received impact
// Custom temporal metadata
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
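// Example (illustrative sketch): deriving the next TemporalNode version when a
// context changes. The caller supplies the new node ID; influence lists start
// empty and are expected to be filled in by the decision graph afterwards.
func nextTemporalVersion(newID string, prev *TemporalNode, updated ContextNode, decisionID string, reason ChangeReason) *TemporalNode {
	return &TemporalNode{
		ID:           newID,
		UCXLAddress:  prev.UCXLAddress,
		Version:      prev.Version + 1,
		Context:      updated,
		Timestamp:    time.Now(),
		DecisionID:   decisionID,
		ChangeReason: reason,
		ParentNode:   &prev.ID,
		Confidence:   updated.Confidence,
		Influences:   []string{},
		InfluencedBy: []string{},
	}
}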
// DecisionMetadata represents metadata about a decision that changed context.
@@ -157,56 +158,56 @@ type TemporalNode struct {
// representing why and how context evolved rather than just when.
type DecisionMetadata struct {
// Decision identity
ID string `json:"id"` // Unique decision identifier
Maker string `json:"maker"` // Who/what made the decision
Rationale string `json:"rationale"` // Why the decision was made
// Impact and scope
Scope ImpactScope `json:"scope"` // Scope of impact
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in decision (0-1)
// External references
ExternalRefs []string `json:"external_refs"` // External references (URLs, docs)
GitCommit *string `json:"git_commit,omitempty"` // Associated git commit
IssueNumber *int `json:"issue_number,omitempty"` // Associated issue number
PullRequestNumber *int `json:"pull_request,omitempty"` // Associated PR number
// Timing information
CreatedAt time.Time `json:"created_at"` // When decision was made
EffectiveAt *time.Time `json:"effective_at,omitempty"` // When decision takes effect
ExpiresAt *time.Time `json:"expires_at,omitempty"` // When decision expires
// Decision quality
ReviewedBy []string `json:"reviewed_by,omitempty"` // Who reviewed this decision
ApprovedBy []string `json:"approved_by,omitempty"` // Who approved this decision
// Implementation tracking
ImplementationStatus string `json:"implementation_status"` // Status: planned, active, complete, cancelled
ImplementationNotes string `json:"implementation_notes"` // Implementation details
// Custom metadata
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
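// Example (illustrative sketch): DecisionMetadata for a refactoring decision
// tied to a git commit. The commit hash, agent names, and ADR URL are
// placeholders; the Scope field is omitted here because the ImpactScope
// constants are defined elsewhere in this package.
func exampleDecisionMetadata(now time.Time) *DecisionMetadata {
	commit := "abc1234"
	return &DecisionMetadata{
		ID:                   "decision-2025-09-27-001",
		Maker:                "agent:arch-01",
		Rationale:            "Split persistence manager from resolver to simplify testing",
		ConfidenceLevel:      0.9,
		ExternalRefs:         []string{"https://example.invalid/adr/0042"},
		GitCommit:            &commit,
		CreatedAt:            now,
		ReviewedBy:           []string{"agent:qa-02"},
		ImplementationStatus: "planned",
		ImplementationNotes:  "Tracked alongside SEC-SLURP 1.1a follow-ups",
	}
}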
// ChangeReason represents why context changed
type ChangeReason string
const (
ReasonInitialCreation ChangeReason = "initial_creation" // First time context creation
ReasonCodeChange ChangeReason = "code_change" // Code modification
ReasonDesignDecision ChangeReason = "design_decision" // Design/architecture decision
ReasonRefactoring ChangeReason = "refactoring" // Code refactoring
ReasonArchitectureChange ChangeReason = "architecture_change" // Major architecture change
ReasonRequirementsChange ChangeReason = "requirements_change" // Requirements modification
ReasonLearningEvolution ChangeReason = "learning_evolution" // Improved understanding
ReasonRAGEnhancement ChangeReason = "rag_enhancement" // RAG system enhancement
ReasonTeamInput ChangeReason = "team_input" // Team member input
ReasonBugDiscovery ChangeReason = "bug_discovery" // Bug found that changes understanding
ReasonPerformanceInsight ChangeReason = "performance_insight" // Performance analysis insight
ReasonSecurityReview ChangeReason = "security_review" // Security analysis
ReasonDependencyChange ChangeReason = "dependency_change" // Dependency update
ReasonEnvironmentChange ChangeReason = "environment_change" // Environment configuration change
ReasonToolingUpdate ChangeReason = "tooling_update" // Development tooling update
ReasonDocumentationUpdate ChangeReason = "documentation_update" // Documentation improvement
)
@@ -222,11 +223,11 @@ const (
// DecisionPath represents a path between two decision points in the temporal graph
type DecisionPath struct {
From string `json:"from"` // Starting UCXL address
To string `json:"to"` // Ending UCXL address
Steps []*DecisionStep `json:"steps"` // Path steps
TotalHops int `json:"total_hops"` // Total decision hops
PathType string `json:"path_type"` // Type of path (direct, influence, etc.)
}
// DecisionStep represents a single step in a decision path
@@ -239,7 +240,7 @@ type DecisionStep struct {
// DecisionTimeline represents the decision evolution timeline for a context
type DecisionTimeline struct {
PrimaryAddress string `json:"primary_address"` // Main UCXL address
DecisionSequence []*DecisionTimelineEntry `json:"decision_sequence"` // Ordered by decision hops
RelatedDecisions []*RelatedDecision `json:"related_decisions"` // Related decisions within hop limit
TotalDecisions int `json:"total_decisions"` // Total decisions in timeline
@@ -249,40 +250,40 @@ type DecisionTimeline struct {
// DecisionTimelineEntry represents an entry in the decision timeline
type DecisionTimelineEntry struct {
Version int `json:"version"` // Version number
DecisionHop int `json:"decision_hop"` // Decision distance from initial
ChangeReason ChangeReason `json:"change_reason"` // Why it changed
DecisionMaker string `json:"decision_maker"` // Who made the decision
DecisionRationale string `json:"decision_rationale"` // Rationale for decision
ConfidenceEvolution float64 `json:"confidence_evolution"` // Confidence at this point
Timestamp time.Time `json:"timestamp"` // When decision occurred
InfluencesCount int `json:"influences_count"` // Number of influenced addresses
InfluencedByCount int `json:"influenced_by_count"` // Number of influencing addresses
ImpactScope ImpactScope `json:"impact_scope"` // Scope of this decision
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
// RelatedDecision represents a decision related through the influence graph
type RelatedDecision struct {
Address string `json:"address"` // UCXL address
DecisionHops int `json:"decision_hops"` // Hops from primary address
LatestVersion int `json:"latest_version"` // Latest version number
ChangeReason ChangeReason `json:"change_reason"` // Latest change reason
DecisionMaker string `json:"decision_maker"` // Latest decision maker
Confidence float64 `json:"confidence"` // Current confidence
LastDecisionTimestamp time.Time `json:"last_decision_timestamp"` // When last decision occurred
RelationshipType string `json:"relationship_type"` // Type of relationship (influences, influenced_by)
}
// TimelineAnalysis contains analysis metadata for decision timelines
type TimelineAnalysis struct {
ChangeVelocity float64 `json:"change_velocity"` // Changes per unit time
ConfidenceTrend string `json:"confidence_trend"` // increasing, decreasing, stable
DominantChangeReasons []ChangeReason `json:"dominant_change_reasons"` // Most common reasons
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
ImpactScopeDistribution map[ImpactScope]int `json:"impact_scope_distribution"` // Distribution of impact scopes
InfluenceNetworkSize int `json:"influence_network_size"` // Size of influence network
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
}
// NavigationDirection represents direction for temporal navigation
@@ -295,77 +296,77 @@ const (
// StaleContext represents a potentially outdated context
type StaleContext struct {
UCXLAddress string `json:"ucxl_address"` // Address of stale context
TemporalNode *TemporalNode `json:"temporal_node"` // Latest temporal node
StalenessScore float64 `json:"staleness_score"` // Staleness score (0-1)
LastUpdated time.Time `json:"last_updated"` // When last updated
Reasons []string `json:"reasons"` // Reasons why considered stale
SuggestedActions []string `json:"suggested_actions"` // Suggested remediation actions
}
// GenerationOptions configures context generation behavior
type GenerationOptions struct {
// Analysis options
AnalyzeContent bool `json:"analyze_content"` // Analyze file content
AnalyzeStructure bool `json:"analyze_structure"` // Analyze directory structure
AnalyzeHistory bool `json:"analyze_history"` // Analyze git history
AnalyzeDependencies bool `json:"analyze_dependencies"` // Analyze dependencies
// Generation scope
MaxDepth int `json:"max_depth"` // Maximum directory depth
IncludePatterns []string `json:"include_patterns"` // File patterns to include
ExcludePatterns []string `json:"exclude_patterns"` // File patterns to exclude
// Quality settings
MinConfidence float64 `json:"min_confidence"` // Minimum confidence threshold
RequireValidation bool `json:"require_validation"` // Require human validation
// External integration
UseRAG bool `json:"use_rag"` // Use RAG for enhancement
RAGEndpoint string `json:"rag_endpoint"` // RAG service endpoint
// Output options
EncryptForRoles []string `json:"encrypt_for_roles"` // Roles to encrypt for
// Performance limits
Timeout time.Duration `json:"timeout"` // Generation timeout
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
// Custom options
CustomOptions map[string]interface{} `json:"custom_options,omitempty"` // Additional options
}
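// Example (illustrative sketch): conservative GenerationOptions for a bounded
// local scan with RAG disabled. All thresholds, patterns, and limits are
// example values, not recommended defaults.
func exampleGenerationOptions() *GenerationOptions {
	return &GenerationOptions{
		AnalyzeContent:    true,
		AnalyzeStructure:  true,
		AnalyzeHistory:    false,
		MaxDepth:          5,
		IncludePatterns:   []string{"*.go", "*.md"},
		ExcludePatterns:   []string{"vendor/*", "node_modules/*"},
		MinConfidence:     0.6,
		RequireValidation: false,
		UseRAG:            false,
		EncryptForRoles:   []string{"sec-high"},
		Timeout:           2 * time.Minute,
		MaxFileSize:       1 << 20, // 1 MiB
	}
}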
// HierarchyStats represents statistics about hierarchy generation
type HierarchyStats struct {
NodesCreated int `json:"nodes_created"` // Number of nodes created
NodesUpdated int `json:"nodes_updated"` // Number of nodes updated
FilesAnalyzed int `json:"files_analyzed"` // Number of files analyzed
DirectoriesScanned int `json:"directories_scanned"` // Number of directories scanned
GenerationTime time.Duration `json:"generation_time"` // Time taken for generation
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
TotalSize int64 `json:"total_size"` // Total size of analyzed content
SkippedFiles int `json:"skipped_files"` // Number of files skipped
Errors []string `json:"errors"` // Generation errors
}
// ValidationResult represents the result of context validation
type ValidationResult struct {
Valid bool `json:"valid"` // Whether context is valid
ConfidenceScore float64 `json:"confidence_score"` // Overall confidence (0-1)
QualityScore float64 `json:"quality_score"` // Quality assessment (0-1)
Issues []*ValidationIssue `json:"issues"` // Validation issues found
Suggestions []*ValidationSuggestion `json:"suggestions"` // Improvement suggestions
ValidatedAt time.Time `json:"validated_at"` // When validation occurred
ValidatedBy string `json:"validated_by"` // Who/what performed validation
}
// ValidationIssue represents an issue found during validation
type ValidationIssue struct {
Severity string `json:"severity"` // error, warning, info
Message string `json:"message"` // Issue description
Field string `json:"field"` // Affected field
Suggestion string `json:"suggestion"` // How to fix
}
// ValidationSuggestion represents a suggestion for context improvement
@@ -378,24 +379,24 @@ type ValidationSuggestion struct {
// CostEstimate represents estimated resource cost for operations
type CostEstimate struct {
CPUCost float64 `json:"cpu_cost"` // Estimated CPU cost
MemoryCost float64 `json:"memory_cost"` // Estimated memory cost
StorageCost float64 `json:"storage_cost"` // Estimated storage cost
TimeCost time.Duration `json:"time_cost"` // Estimated time cost
TotalCost float64 `json:"total_cost"` // Total normalized cost
CostBreakdown map[string]float64 `json:"cost_breakdown"` // Detailed cost breakdown
}
// AnalysisResult represents the result of context analysis
type AnalysisResult struct {
QualityScore float64 `json:"quality_score"` // Overall quality (0-1)
ConsistencyScore float64 `json:"consistency_score"` // Consistency with hierarchy
CompletenessScore float64 `json:"completeness_score"` // Completeness assessment
AccuracyScore float64 `json:"accuracy_score"` // Accuracy assessment
Issues []*AnalysisIssue `json:"issues"` // Issues found
Strengths []string `json:"strengths"` // Context strengths
Improvements []*Suggestion `json:"improvements"` // Improvement suggestions
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis occurred
}
// AnalysisIssue represents an issue found during analysis
@@ -418,86 +419,86 @@ type Suggestion struct {
// Pattern represents a detected context pattern
type Pattern struct {
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Pattern name
Description string `json:"description"` // Pattern description
MatchCriteria map[string]interface{} `json:"match_criteria"` // Criteria for matching
Confidence float64 `json:"confidence"` // Pattern confidence (0-1)
Frequency int `json:"frequency"` // How often pattern appears
Examples []string `json:"examples"` // Example contexts that match
CreatedAt time.Time `json:"created_at"` // When pattern was detected
}
// PatternMatch represents a match between context and pattern
type PatternMatch struct {
PatternID string `json:"pattern_id"` // ID of matched pattern
MatchScore float64 `json:"match_score"` // How well it matches (0-1)
MatchedFields []string `json:"matched_fields"` // Which fields matched
Confidence float64 `json:"confidence"` // Confidence in match
}
// ContextPattern represents a registered context pattern template
type ContextPattern struct {
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Human-readable name
Description string `json:"description"` // Pattern description
Template *ContextNode `json:"template"` // Template for matching
Criteria map[string]interface{} `json:"criteria"` // Matching criteria
Priority int `json:"priority"` // Pattern priority
CreatedBy string `json:"created_by"` // Who created pattern
CreatedAt time.Time `json:"created_at"` // When created
UpdatedAt time.Time `json:"updated_at"` // When last updated
UsageCount int `json:"usage_count"` // How often used
}
// Inconsistency represents a detected inconsistency in the context hierarchy
type Inconsistency struct {
Type string `json:"type"` // Type of inconsistency
Description string `json:"description"` // Description of the issue
AffectedNodes []string `json:"affected_nodes"` // Nodes involved
Severity string `json:"severity"` // Severity level
Suggestion string `json:"suggestion"` // How to resolve
DetectedAt time.Time `json:"detected_at"` // When detected
}
// SearchQuery represents a context search query
type SearchQuery struct {
// Query terms
Query string `json:"query"` // Main search query
Tags []string `json:"tags"` // Required tags
Technologies []string `json:"technologies"` // Required technologies
FileTypes []string `json:"file_types"` // File types to include
// Filters
MinConfidence float64 `json:"min_confidence"` // Minimum confidence
MaxAge *time.Duration `json:"max_age"` // Maximum age
Roles []string `json:"roles"` // Required access roles
// Scope
Scope []string `json:"scope"` // Paths to search within
ExcludeScope []string `json:"exclude_scope"` // Paths to exclude
// Result options
Limit int `json:"limit"` // Maximum results
Offset int `json:"offset"` // Result offset
SortBy string `json:"sort_by"` // Sort field
SortOrder string `json:"sort_order"` // asc, desc
// Advanced options
FuzzyMatch bool `json:"fuzzy_match"` // Enable fuzzy matching
IncludeStale bool `json:"include_stale"` // Include stale contexts
TemporalFilter *TemporalFilter `json:"temporal_filter"` // Temporal filtering
}
// TemporalFilter represents temporal filtering options
type TemporalFilter struct {
FromTime *time.Time `json:"from_time"` // Start time
ToTime *time.Time `json:"to_time"` // End time
VersionRange *VersionRange `json:"version_range"` // Version range
ChangeReasons []ChangeReason `json:"change_reasons"` // Specific change reasons
DecisionMakers []string `json:"decision_makers"` // Specific decision makers
MinDecisionHops int `json:"min_decision_hops"` // Minimum decision hops
MaxDecisionHops int `json:"max_decision_hops"` // Maximum decision hops
}
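// Example (illustrative sketch): a SearchQuery combining tag and confidence
// filters with a TemporalFilter limited to recent security-review decisions.
// The sort field name and hop limit are assumptions for illustration.
func exampleSearchQuery(since time.Time) *SearchQuery {
	maxAge := 30 * 24 * time.Hour
	return &SearchQuery{
		Query:         "encryption rotation",
		Tags:          []string{"persistence"},
		Technologies:  []string{"go"},
		MinConfidence: 0.5,
		MaxAge:        &maxAge,
		Limit:         20,
		SortBy:        "confidence",
		SortOrder:     "desc",
		TemporalFilter: &TemporalFilter{
			FromTime:        &since,
			ChangeReasons:   []ChangeReason{ReasonSecurityReview},
			MaxDecisionHops: 3,
		},
	}
}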
// VersionRange represents a range of versions
@@ -509,58 +510,58 @@ type VersionRange struct {
// SearchResult represents a single search result
type SearchResult struct {
Context *ResolvedContext `json:"context"` // Resolved context
TemporalNode *TemporalNode `json:"temporal_node"` // Associated temporal node
MatchScore float64 `json:"match_score"` // How well it matches query (0-1)
MatchedFields []string `json:"matched_fields"` // Which fields matched
Snippet string `json:"snippet"` // Text snippet showing match
Rank int `json:"rank"` // Result rank
}
// IndexMetadata represents metadata for context indexing
type IndexMetadata struct {
IndexType string `json:"index_type"` // Type of index
IndexedFields []string `json:"indexed_fields"` // Fields that are indexed
IndexedAt time.Time `json:"indexed_at"` // When indexed
IndexVersion string `json:"index_version"` // Index version
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}
// DecisionAnalysis represents analysis of decision patterns
type DecisionAnalysis struct {
TotalDecisions int `json:"total_decisions"` // Total decisions analyzed
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
ChangeReasons map[ChangeReason]int `json:"change_reasons"` // Change reason frequency
ImpactScopes map[ImpactScope]int `json:"impact_scopes"` // Impact scope distribution
ConfidenceTrends map[string]float64 `json:"confidence_trends"` // Confidence trends over time
DecisionFrequency map[string]int `json:"decision_frequency"` // Decisions per time period
InfluenceNetworkStats *InfluenceNetworkStats `json:"influence_network_stats"` // Network statistics
Patterns []*DecisionPattern `json:"patterns"` // Detected decision patterns
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
AnalysisTimeSpan time.Duration `json:"analysis_time_span"` // Time span analyzed
}
// InfluenceNetworkStats represents statistics about the influence network
type InfluenceNetworkStats struct {
TotalNodes int `json:"total_nodes"` // Total nodes in network
TotalEdges int `json:"total_edges"` // Total influence relationships
AverageConnections float64 `json:"average_connections"` // Average connections per node
MaxConnections int `json:"max_connections"` // Maximum connections for any node
NetworkDensity float64 `json:"network_density"` // Network density (0-1)
ClusteringCoeff float64 `json:"clustering_coeff"` // Clustering coefficient
MaxPathLength int `json:"max_path_length"` // Maximum path length in network
CentralNodes []string `json:"central_nodes"` // Most central nodes
}
// DecisionPattern represents a detected pattern in decision-making
type DecisionPattern struct {
ID string `json:"id"` // Pattern identifier
Name string `json:"name"` // Pattern name
Description string `json:"description"` // Pattern description
Frequency int `json:"frequency"` // How often this pattern occurs
Confidence float64 `json:"confidence"` // Confidence in pattern (0-1)
ExampleDecisions []string `json:"example_decisions"` // Example decisions that match
Characteristics map[string]interface{} `json:"characteristics"` // Pattern characteristics
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
}
// ResolverStatistics represents statistics about context resolution operations
@@ -577,4 +578,4 @@ type ResolverStatistics struct {
MaxCacheSize int64 `json:"max_cache_size"` // Maximum cache size
CacheEvictions int64 `json:"cache_evictions"` // Number of cache evictions
LastResetAt time.Time `json:"last_reset_at"` // When statistics were last reset
}